This project implements a method to stream responses from AI chatbots in real time using LangChain and Large Language Models (LLMs). By utilizing Docker, the project simplifies environment setup and ensures consistent execution results across different environments.
- Create a `.env` file in the project's root directory.
- Set your OpenAI API key as `OPENAI_API_KEY`:

```
OPENAI_API_KEY=your_openai_api_key_here
```
Build the Docker image. Please execute this command in the project's root directory.

```bash
docker build -t langchain-streaming-chain-test .
```
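The build step assumes a `Dockerfile` in the project root. If you are recreating the setup from scratch, a minimal sketch along the following lines should work; the actual file in this repository may differ (base image, dependency installation, and so on), and `requirements.txt` is an assumed file name.

```dockerfile
# Minimal sketch of a Dockerfile for this project (assumption; the real one may differ).
FROM python:3.11-slim

WORKDIR /usr/src/app

# Install Python dependencies first so Docker can cache this layer.
# `requirements.txt` is an assumed dependency file name.
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the project into the image.
COPY . .

# Drop into a bash shell by default, matching the run instructions below.
CMD ["bash"]
```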
Next, mount the current directory on the host machine (your local environment) to `/usr/src/app` in the container and start the container. This allows changes made locally to be reflected inside the container.

```bash
docker run --env-file .env -v $(pwd):/usr/src/app -it langchain-streaming-chain-test
```
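Note that `$(pwd)` works in Unix-like shells (bash, zsh). On Windows, PowerShell's `${PWD}` should work as a substitute:

```powershell
docker run --env-file .env -v ${PWD}:/usr/src/app -it langchain-streaming-chain-test
```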
Once the bash shell in the container is up, run the script with the following command.

```bash
python main.py
```
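For reference, the streaming behavior described above can be sketched as a LangChain Expression Language chain whose output is consumed incrementally via `.stream()`. This is a minimal illustration, not the repository's actual `main.py`; the prompt text, the model name (`gpt-4o-mini`), and the example question are placeholder assumptions.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Build a simple prompt -> model -> output parser chain (LCEL).
prompt = ChatPromptTemplate.from_template("Answer the question: {question}")
model = ChatOpenAI(model="gpt-4o-mini")  # reads OPENAI_API_KEY from the environment
chain = prompt | model | StrOutputParser()

# chain.stream() yields chunks as the LLM generates them, so the answer
# appears in the terminal in real time instead of all at once.
for chunk in chain.stream({"question": "What is streaming in LangChain?"}):
    print(chunk, end="", flush=True)
print()
```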