forked from ollama/ollama

Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models.

melroy89/ollama


Ollama

Create, run, and share large language models (LLMs). Ollama bundles a model's weights, configuration, prompts, and more into a single self-contained package that runs anywhere.

Note: Ollama is in early preview. Please report any issues you find.

Download

  • Download for macOS on Apple Silicon (Intel coming soon)
  • Download for Windows and Linux (coming soon)
  • Build from source

Examples

Quickstart

ollama run llama2
>>> hi
Hello! How can I help you today?

Creating a custom model

Create a Modelfile:

FROM llama2
PROMPT """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.

User: {{ .Prompt }}
Mario:
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
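The PROMPT block above uses Go's text/template syntax, with {{ .Prompt }} standing in for whatever the user types. This standalone sketch shows how that substitution behaves; it is illustrative, not Ollama's actual code path:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// promptTemplate is the body of the PROMPT block from the Modelfile,
// expressed as a Go text/template.
const promptTemplate = `You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.

User: {{ .Prompt }}
Mario:`

// renderPrompt fills the template with a single user message.
func renderPrompt(userInput string) (string, error) {
	tmpl, err := template.New("prompt").Parse(promptTemplate)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	data := struct{ Prompt string }{Prompt: userInput}
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderPrompt("hi")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Running this prints the fully rendered prompt, ending in "User: hi" followed by "Mario:", which is the text the underlying model actually sees.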

Model library

Ollama includes a library of open-source, pre-trained models. More models are coming soon.

Model        Parameters  Size    Download
Llama2       7B          3.8GB   ollama pull llama2
Orca Mini    3B          1.9GB   ollama pull orca
Vicuna       7B          3.8GB   ollama pull vicuna
Nous-Hermes  13B         7.3GB   ollama pull nous-hermes

Building

go build .

To run it, start the server:

./ollama serve &

Finally, run a model!

./ollama run llama2
