
Adding your own self-hosted AI bot to Social Stream Ninja

In this guide we will do a VERY basic setup of the Llama3 LLM, using Ollama, on Windows. It should work well on a variety of systems, including modern Nvidia GPUs and newer macOS systems.

Installing Ollama

https://ollama.com
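The Windows build is a normal download-and-run installer from the page above. If you are on macOS and already use Homebrew, installing from the terminal is another option (this assumes Homebrew is already set up; the regular installer download works just as well):

```
brew install ollama
```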


Installing an LLM

There are many models to choose from; see https://ollama.com/library for the full list of options.

Social Stream Ninja targets Llama3.1 by default, but you can specify a different model in the Social Stream Ninja menu. For now, though, let's just use llama3.1.

To install the model, let's open Command Prompt (or Terminal) and run ollama pull llama3.1
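If you want to sanity-check the install before wiring it up to Social Stream Ninja, you can list the installed models and chat with the model straight from the terminal (type /bye to leave the interactive session):

```
ollama list
ollama run llama3.1
```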


If you need to remove it later, you can run ollama rm llama3.1.


It will be available for API access by default at http://localhost:11434; however, there are still CORS issues we need to deal with if using the Chrome extension.
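As a quick test that the API itself is working (not required for Social Stream Ninja, just a sanity check), you can send a one-off prompt to the generate endpoint. The model name below assumes you pulled llama3.1 as above, and the quoting shown is for Windows Command Prompt:

```
curl http://localhost:11434/api/generate -d "{\"model\": \"llama3.1\", \"prompt\": \"Say hello in five words.\", \"stream\": false}"
```

The reply comes back as JSON, with the generated text in the response field.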

On Windows, you can try closing Ollama.exe from the taskbar and then running the following:

taskkill /F /IM ollama.exe
set OLLAMA_ORIGINS=chrome-extension://*
ollama serve
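With the server back up, a quick request to the root endpoint confirms it is listening; Ollama simply answers with a short "Ollama is running" message:

```
curl http://localhost:11434
```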

To make this CORS permission permanent on Windows, you need to add OLLAMA_ORIGINS=chrome-extension://* to the Windows user environment variables. Then start/restart Ollama with ollama serve.
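If you'd rather not click through the System Properties dialog, the same user-level variable can be set from Command Prompt with setx (setx only affects newly started processes, so restart Ollama afterwards):

```
setx OLLAMA_ORIGINS "chrome-extension://*"
```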


This allows us to access Ollama from our Social Stream Ninja extension. If you want to use it via the dock.html page, with custom.js commands, you may need to host Ollama behind a reverse proxy service. Refer to their documentation for info on this.

Using the bot

Just make sure the toggle is on, and that you have Ollama with Llama3.1 installed and running locally, and you should be good to go.


The bot will respond automatically to chat messages when it thinks a reply is a good idea. There is a 5-second timeout per source site.


  • Steve ps. BLARGH!