Text classification with LLMs via Ollama.
The application relies on Ollama to provide LLMs. You can either run Ollama locally on your laptop, or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
First, make sure you have Ollama installed on your laptop. Then, use Ollama to pull the Mistral NeMo large language model.
ollama pull mistral-nemo
Finally, run the Spring Boot application.
./gradlew bootRun
Alternatively, the application can rely on the native Testcontainers support in Spring Boot to spin up an Ollama service with a Mistral NeMo model at startup time. In that case, run:
./gradlew bootTestRun
You can now call the application, which will use Ollama and Mistral NeMo to classify your text. This example uses httpie to send HTTP requests.
Each endpoint is backed by a progressively more refined prompt that improves the quality of the LLM's text classification.
Class Names:
http --raw "Basketball fans can now watch the game on the brand-new NBA app for Apple Vision Pro." :8080/classify/class-names
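The class-names strategy can be sketched as a prompt template that simply lists the allowed labels. The label set and wording below are illustrative assumptions, not the application's actual prompt.

```java
import java.util.List;

public class ClassNamesPrompt {

    // Hypothetical label set; the real application may use different classes.
    static final List<String> CLASSES = List.of("BUSINESS", "SPORT", "TECHNOLOGY", "OTHER");

    // Build a prompt that constrains the model to one of the listed class names.
    static String build(String text) {
        return """
                Classify the provided text into one of these classes: %s.
                Respond with the class name only.

                Text: %s
                Class:""".formatted(String.join(", ", CLASSES), text);
    }

    public static void main(String[] args) {
        System.out.println(build("Basketball fans can now watch the game on the brand-new NBA app."));
    }
}
```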
Class Descriptions:
http --raw "Basketball fans can now watch the game on the brand-new NBA app for Apple Vision Pro." :8080/classify/class-descriptions
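The class-descriptions strategy extends the prompt with a short description of each class, giving the model more to match against. The classes and descriptions below are assumptions for illustration.

```java
import java.util.Map;
import java.util.stream.Collectors;

public class ClassDescriptionsPrompt {

    // Hypothetical class descriptions; the real application's wording may differ.
    static final Map<String, String> CLASSES = Map.of(
            "SPORT", "News about sports events, teams, and athletes",
            "TECHNOLOGY", "News about software, hardware, and tech companies",
            "OTHER", "Anything that does not fit the other classes");

    // Build a prompt that pairs each class name with its description.
    static String build(String text) {
        String descriptions = CLASSES.entrySet().stream()
                .map(e -> e.getKey() + ": " + e.getValue())
                .collect(Collectors.joining("\n"));
        return """
                Classify the provided text into one of these classes.

                %s

                Text: %s
                Class:""".formatted(descriptions, text);
    }

    public static void main(String[] args) {
        System.out.println(build("NBA app for Apple Vision Pro."));
    }
}
```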
Few Shots Prompt:
http --raw "Basketball fans can now watch the game on the brand-new NBA app for Apple Vision Pro." :8080/classify/few-shots-prompt
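The few-shots strategy adds labeled examples directly into the prompt so the model can imitate them. The examples here are made up for the sketch.

```java
public class FewShotsPrompt {

    // Hypothetical labeled examples embedded in the prompt text.
    static final String EXAMPLES = """
            Text: The new laptop ships with a faster chip.
            Class: TECHNOLOGY

            Text: The striker scored twice in the final.
            Class: SPORT
            """;

    // Build a prompt that shows the examples before the text to classify.
    static String build(String text) {
        return """
                Classify the text into one of these classes: SPORT, TECHNOLOGY, OTHER.
                Here are some examples:

                %s
                Text: %s
                Class:""".formatted(EXAMPLES, text);
    }

    public static void main(String[] args) {
        System.out.println(build("Basketball fans can watch the game on the new NBA app."));
    }
}
```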
Few Shots History:
http --raw "Basketball fans can now watch the game on the brand-new NBA app for Apple Vision Pro." :8080/classify/few-shots-history
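The few-shots-history variant supplies the same examples as prior conversation turns (user message in, assistant label out) instead of inlining them in one prompt. The minimal message record below stands in for a chat API's message type; roles and examples are assumptions.

```java
import java.util.List;

public class FewShotsHistory {

    // Minimal stand-in for a chat message with a role and content (assumption).
    record Message(String role, String content) {}

    // Build a conversation where each example is a user/assistant exchange,
    // ending with the text to classify as the final user message.
    static List<Message> history(String text) {
        return List.of(
                new Message("system", "Classify the text into one of: SPORT, TECHNOLOGY, OTHER. Respond with the class name only."),
                new Message("user", "The striker scored twice in the final."),
                new Message("assistant", "SPORT"),
                new Message("user", "The new laptop ships with a faster chip."),
                new Message("assistant", "TECHNOLOGY"),
                new Message("user", text));
    }

    public static void main(String[] args) {
        history("Basketball fans can watch the game on the new NBA app.")
                .forEach(m -> System.out.println(m.role() + ": " + m.content()));
    }
}
```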
Structured Output:
http --raw "Basketball fans can now watch the game on the brand-new NBA app for Apple Vision Pro." :8080/classify/structured-output
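The structured-output strategy asks the model to reply with JSON that maps onto a typed result. A minimal sketch, assuming a hypothetical `Classification` record and a naive regex extraction; a real application would use a JSON library or a framework's structured-output converters instead.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StructuredOutput {

    // Hypothetical typed result the JSON reply is mapped onto.
    record Classification(String label) {}

    // Build a prompt that constrains the reply to a small JSON object.
    static String buildPrompt(String text) {
        return """
                Classify the provided text and respond only with a JSON object
                in the form {"label": "<CLASS>"} where <CLASS> is one of
                SPORT, TECHNOLOGY, OTHER.

                Text: %s""".formatted(text);
    }

    // Naive extraction of the label from the JSON reply (illustration only).
    static Classification parse(String json) {
        Matcher m = Pattern.compile("\"label\"\\s*:\\s*\"(\\w+)\"").matcher(json);
        if (!m.find()) {
            throw new IllegalArgumentException("No label found in: " + json);
        }
        return new Classification(m.group(1));
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt("NBA app for Apple Vision Pro."));
        System.out.println(parse("{\"label\": \"SPORT\"}").label());
    }
}
```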