This project is a Context-Based Question Answering Tool built on the T5 (Text-To-Text Transfer Transformer) model fine-tuned on SQuAD (the Stanford Question Answering Dataset). The tool accepts a context passage and a question as input and returns an answer drawn from the given context.
- Contextual Understanding: The model has been trained to extract the relevant information from the provided context in order to answer user questions.
- Fine-tuned on SQuAD: The T5 model has been fine-tuned on the SQuAD dataset, a widely used benchmark for question answering tasks.
- Easy-to-Use Interface: The tool provides a simple interface for entering a context passage and a question and receiving an answer in return (a minimal usage sketch follows this list).
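
The sketch below shows one way to query such a model with the Hugging Face Transformers library. The checkpoint path `./t5-squad-checkpoint` and the `question: ... context: ...` prompt format are assumptions for illustration, not taken from this repository; adjust them to match the model produced by `fine_tuningllm.ipynb`.

```python
# Minimal inference sketch (assumed checkpoint path and prompt format).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_dir = "./t5-squad-checkpoint"  # hypothetical path to the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)

def answer(question: str, context: str) -> str:
    # T5 is a text-to-text model, so the question and context are packed
    # into a single prompt string.
    prompt = f"question: {question} context: {context}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

context = "The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset."
print(answer("What does SQuAD stand for?", context))
```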
The model was fine-tuned on the SQuAD dataset using the Hugging Face Transformers library. For details on the fine-tuning process, please refer to the fine_tuningllm.ipynb notebook in this repository.
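
For a rough idea of what that process looks like, the sketch below outlines one way to fine-tune T5 on SQuAD with the `datasets` and `transformers` libraries. The base checkpoint, preprocessing format, and hyperparameters here are illustrative assumptions and may differ from what the notebook actually uses.

```python
# Illustrative fine-tuning outline (assumed base checkpoint and hyperparameters).
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "t5-small"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

squad = load_dataset("squad")

def preprocess(batch):
    # Pack each (question, context) pair into a single text-to-text prompt
    # and use the first gold answer as the target string.
    inputs = [f"question: {q} context: {c}"
              for q, c in zip(batch["question"], batch["context"])]
    targets = [a["text"][0] for a in batch["answers"]]
    model_inputs = tokenizer(inputs, max_length=512, truncation=True)
    labels = tokenizer(text_target=targets, max_length=64, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = squad.map(preprocess, batched=True,
                      remove_columns=squad["train"].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="./t5-squad-checkpoint",  # matches the hypothetical path above
        per_device_train_batch_size=8,
        num_train_epochs=1,
        learning_rate=3e-4,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```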