Give your local LLM the ability to search the web!


This project gives local LLMs the ability to search the web by outputting a specific command. Once the command has been found in the model output using a regular expression, duckduckgo-search is used to search the web and return a number of result pages. Finally, an ensemble of a dense embedding model and Okapi BM25 (or, alternatively, SPLADE) is used to extract the relevant parts (if any) of each web page in the search results, and the extracted results are appended to the model's output.
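To make the mechanism concrete, here is a minimal sketch of the idea in Python. This is an illustrative simplification, not the extension's actual code; the regex is the documented default (see "Using a custom regular expression" below), and `duckduckgo_search` is the library named above:

```python
import re
from duckduckgo_search import DDGS

model_output = 'I will look that up. Search_web("latest SearXNG release")'

# Detect the search command in the model's output (default regex)
match = re.search(r'Search_web\("(.*)"\)', model_output)
if match:
    query = match.group(1)
    # Fetch a handful of result pages via duckduckgo-search
    results = DDGS().text(query, max_results=5)
    # In the real extension, relevant chunks are extracted from each page
    # (dense embeddings + BM25/SPLADE ensemble) and appended to the output.
    for result in results:
        print(result["title"], result["href"])
```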

Installation

  1. Go to the "Session" tab of the web UI and use "Install or update an extension" to download the latest code for this extension.
  2. To install the extension's dependencies, you have two options:
    1. The easy way: Run the appropriate update_wizard script inside the text-generation-webui folder and choose Install/update extensions requirements. This installs everything using pip, which means using the unofficial faiss-cpu package. Therefore, it is not guaranteed to work with your system (see the official disclaimer).
    2. The safe way: Manually update the conda environment in which you installed the dependencies of oobabooga's text-generation-webui. Open the subfolder text-generation-webui/extensions/LLM_Web_search in a terminal or conda shell. If you used the one-click install method, run conda env update -p <path_to_your_environment> --file environment.yml, replacing <path_to_your_environment> with the path to the /installer_files/env subfolder within the text-generation-webui folder. If you created your own environment instead, use conda env update -n <name_of_your_environment> --file environment.yml.
      (NB: Solving the environment can take a while)
  3. Launch the web UI by running the appropriate start script and enable the extension under the "Session" tab.
    Alternatively, you can start the server directly using the following command (assuming you have activated your conda/virtual environment):
    python server.py --extension LLM_Web_search

If the installation was successful and the extension was loaded, a new tab with the title "LLM Web Search" should be visible in the web UI.

See https://github.com/oobabooga/text-generation-webui/wiki/07-%E2%80%90-Extensions for more information about extensions.

Usage

Search queries are extracted from the model's output using a regular expression. This is made easier by prompting the model to use a fixed search command (see system_prompts/ for example prompts).
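The exact wording matters less than consistency with the regex; the files in system_prompts/ are the authoritative examples. For illustration only, a minimal system message along these lines could work:

```
You have access to a web search tool. To search the web, write a line of the
form Search_web("your query here"), then stop and wait for the search results.
```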

An example workflow of using this extension could be:

  1. Load a model
  2. Head over to the "LLM Web search" tab
  3. Load a custom system message/prompt
  4. Ensure that the query part of the command mentioned in the system message can be matched using the current "Search command regex string" (see "Using a custom regular expression" below)
  5. Pick a generation parameter preset that works well for you. You can read more about generation parameters here
  6. Choose "chat-instruct" or "instruct" mode and start chatting

Using a custom regular expression

The default regular expression is:

Search_web\("(.*)"\)

Here, Search_web is the search command, and everything between the quotation marks inside the parentheses will be used as the search query. Every custom regular expression must use a capture group to extract the search query. I recommend https://www.debuggex.com/ for trying out custom regular expressions. If a regex fulfills the requirement above, the search query will be matched by "Group 1" in Debuggex.

Here is an example of a more flexible, but more complex, regex that works for several different models:

[Ss]earch_web\((?:["'])(.*)(?:["'])\)
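The character classes make it tolerant of casing and quote style. A quick illustrative check:

```python
import re

pattern = r"""[Ss]earch_web\((?:["'])(.*)(?:["'])\)"""
for output in ['Search_web("rust borrow checker")',
               "search_web('rust borrow checker')"]:
    # Both variants yield the same captured query in group 1
    print(re.search(pattern, output).group(1))
```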

Reading web pages

Experimental support exists for extracting the full text content from a webpage. The default regex to use this functionality is:

Open_url\("(.*)"\)

Note: The full content of a web page is likely to exceed the maximum context length of your average local LLM.

Search backends

DuckDuckGo

This is the default web search backend.

SearXNG

To use a local or remote SearXNG instance instead of DuckDuckGo, simply paste the URL into the "SearXNG URL" text field of the "LLM Web Search" settings tab (be sure to include http:// or https://). The instance must support returning results in JSON format.
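Under the hood, a JSON-capable instance behaves roughly like this (an illustrative sketch; the URL is a placeholder, and the json format must be enabled in the instance's settings):

```python
import requests

searxng_url = "http://localhost:8888"  # placeholder: your instance's URL
response = requests.get(
    f"{searxng_url}/search",
    params={"q": "text-generation-webui extensions", "format": "json"},
)
response.raise_for_status()
for result in response.json()["results"][:5]:
    print(result["title"], result["url"])
```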

Search parameters

To modify the categories, engines, languages, etc. that should be used for a specific query, the query must follow the SearXNG search syntax. Currently, automatic redirect and Special Queries are not supported.
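For example, assuming the instance has the corresponding engine enabled, a command like the following restricts the language to German and queries only the Wikipedia engine (prefixes per the SearXNG search syntax):

```
Search_web(":de !wikipedia Katzen")
```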

Search types

Simple search

Quickly finds answers using just the highlighted snippets from websites returned by the search engine. If you simply want results fast, choose this search type.
Note: Some advanced options in the UI will be hidden when simple search is enabled, as they have no effect in this case.
Note 2: The snippets returned by SearXNG are often much more useful than those returned by DuckDuckGo, so consider using SearXNG as the search backend if you use simple search.

Full search

Scans entire websites in the results for a more comprehensive search. Ideally, this search type should be able to find "needle in a haystack" information hidden somewhere in the website text. Hence, choose this option if you are willing to trade a more resource-intensive search process for generally more relevant search results.
For the best possible search results, also enable semantic chunking and use SPLADE as the keyword retriever.

Keyword retrievers

Okapi BM25

This extension comes out of the box with Okapi BM25 enabled, which is widely used and very popular for keyword-based document retrieval. It runs on the CPU and, for the purposes of this extension, it is fast.
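To illustrate the idea (this is not necessarily the implementation the extension uses), the rank_bm25 package exposes the same scoring scheme:

```python
from rank_bm25 import BM25Okapi

# Toy corpus standing in for chunks extracted from web pages
chunks = ["The Eiffel Tower is located in Paris.",
          "BM25 is a ranking function used by search engines.",
          "Paris is the capital of France."]
bm25 = BM25Okapi([chunk.lower().split() for chunk in chunks])
scores = bm25.get_scores("where is the eiffel tower".split())
print(scores)  # higher score = better keyword match
```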

SPLADE

If you don't run the extension in "CPU only" mode and have some VRAM to spare, you can also select SPLADE in the "Advanced settings" section as an alternative. It has been shown to outperform BM25 in multiple benchmarks and uses a technique called "query expansion" to add additional contextually relevant words to the original query. However, it is slower than BM25. You can read more about it here.

To improve performance, documents are embedded in batches and in parallel. Increasing the "SPLADE batch size" parameter setting improves performance up to a certain point, but VRAM usage ramps up quickly with increasing batch size. A batch size of 8 appears to be a good trade-off, but the default value is 2 to avoid running out of memory on smaller GPUs.
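For intuition, batched SPLADE document embedding looks roughly like this (a sketch using Hugging Face transformers; the checkpoint is one of the public SPLADE models, not necessarily the one this extension loads):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "naver/splade-cocondenser-ensembledistil"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def splade_embed(texts, batch_size=2):
    """Embed documents in batches; larger batches are faster but use more VRAM."""
    embeddings = []
    for i in range(0, len(texts), batch_size):
        batch = tokenizer(texts[i:i + batch_size], padding=True,
                          truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = model(**batch).logits
        # SPLADE activation: log(1 + ReLU(logits)), masked and max-pooled over
        # the sequence dimension -> one sparse vocabulary-sized vector per doc
        weights = torch.log1p(torch.relu(logits))
        weights = weights * batch["attention_mask"].unsqueeze(-1)
        embeddings.append(weights.max(dim=1).values)
    return torch.cat(embeddings)
```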

Chunking Methods

Character-based Chunking

Naively partitions a website's text into fixed-size chunks without any regard for the text content. This is the default, since it is fast and requires no GPU.
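In essence (an illustrative sketch; the chunk size and overlap values are made up, not the extension's defaults):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50):
    # Slide a fixed-size window over the raw text, ignoring sentence boundaries
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```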

Semantic Chunking

Tries to partition a website's text into chunks based on semantics. If the embeddings of two consecutive sentences are very different (as measured by cosine distance), a new chunk is started. How different two consecutive sentences must be to end up in different chunks can be tuned using the sentence split threshold parameter in the UI.
For natural language, this method generally produces much better results than character-based chunking. However, it is noticeably slower, even when using the GPU.
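A minimal sketch of the approach, assuming sentence-transformers for the embeddings (the model name and threshold value are placeholders, not the extension's actual choices):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

def semantic_chunks(sentences, split_threshold=0.4):
    # Normalized embeddings make the dot product equal cosine similarity
    emb = model.encode(sentences, normalize_embeddings=True)
    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        cosine_distance = 1.0 - float(np.dot(emb[i - 1], emb[i]))
        if cosine_distance > split_threshold:
            # Consecutive sentences differ too much: start a new chunk
            chunks.append(" ".join(current))
            current = []
        current.append(sentences[i])
    chunks.append(" ".join(current))
    return chunks
```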

Recommended models

If you (like me) have ≤ 12 GB VRAM, I recommend using Llama-3.1-8B-instruct, gemma-2-9b-it or Mistral-Nemo-Instruct-2407.
For Llama-3.1 and Gemma-2 I recommend using the "Divine Intellect" generation parameter preset and the "copilot_prompt" custom system message. Mistral Nemo requires lower temperatures, so I recommend creating a new generation parameter preset based on "Divine Intellect", but with a lower temperature of 0.3.
