LLM that answers questions about the movie Animal (2023)
- The platform is now live; if the service is unavailable, the instance needs to be deployed
- The model is hosted on Hugging Face (link)
- The current prediction pipeline is plain text generation directly from the model for any user query (alpha version)
- Add a text-processing layer to handle illegitimate and out-of-context queries
- Use an embeddings model during inference, along with prompt engineering for text generation, to better understand user queries (beta version)
- Add logging of user queries in the production app
- Clean the datasets for better contextual understanding of the information (manual work)
- Try out Q&A models like BERT and other GPT-family models like InstructGPT
- Create benchmarks for model-performance comparison
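A minimal sketch of the out-of-context filter in the text-processing layer. The context sentences, threshold, and the bag-of-words "embedding" are all illustrative assumptions; the real layer would swap in a learned sentence-embedding model.

```python
import math
import re
from collections import Counter

# Reference sentences describing the app's domain (illustrative only).
CONTEXT = [
    "Animal is a 2023 film",
    "questions about the cast plot and release of Animal",
]

def embed(text):
    # Toy "embedding": a bag-of-words count vector. A production system
    # would call a sentence-embedding model here instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_in_context(query, threshold=0.3):
    # Reject queries whose best similarity to the domain is below threshold.
    return max(cosine(embed(query), embed(c)) for c in CONTEXT) >= threshold
```

Queries that score below the threshold against every context sentence would be answered with a canned refusal instead of being passed to the model.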
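For the production query logging, one line-delimited JSON record per query keeps the logs easy to parse later. The file name and field names below are assumptions, not the app's actual schema.

```python
import json
import logging
import time

# Assumption: destination file and record fields are illustrative.
logger = logging.getLogger("query_log")
handler = logging.FileHandler("queries.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_query(user_query, model_response):
    # One JSON object per line (JSONL) with a timestamp for later analysis.
    record = {"ts": time.time(), "query": user_query, "response": model_response}
    logger.info(json.dumps(record))
```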
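The benchmarking item could start as a small harness like the one below: each candidate model (BERT-style Q&A, InstructGPT, etc.) is wrapped as a callable and scored on a shared eval set. The eval example and exact-match metric are placeholders; a real benchmark would use a larger set and softer metrics (F1, semantic similarity).

```python
# Hypothetical eval set; a real benchmark would hold many more examples.
EVAL_SET = [
    {"question": "What year was Animal released?", "answer": "2023"},
]

def exact_match(prediction, answer):
    return prediction.strip().lower() == answer.strip().lower()

def benchmark(models):
    # models: mapping of model name -> callable(question) -> predicted answer
    scores = {}
    for name, answer_fn in models.items():
        correct = sum(
            exact_match(answer_fn(ex["question"]), ex["answer"]) for ex in EVAL_SET
        )
        scores[name] = correct / len(EVAL_SET)
    return scores
```

Each candidate from the model-comparison item plugs in as one entry of the `models` mapping, so all candidates are scored on identical inputs.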