Vision integrated with an LLM 🌌🤖 Introducing HAL4500: Bridging the Gap Between Sci-Fi and Reality! 🚀
Greetings, fellow space enthusiasts and tech aficionados! 🚀✨ Remember HAL9000 from the iconic "2001: A Space Odyssey"? Its legacy lives on as we embark on a journey to bring the future closer to the present with HAL4500! 🤖🌠
Imagine a world where machines understand us, collaborate with us, and assist us in real-time. Well, HAL4500 is here to take us one step closer to that vision. 🌐🔮
🔍 Object Detection Magic: Our journey starts with YOLOv8, a state-of-the-art object detection model trained on the extensive MS COCO dataset. HAL4500, like a digital detective, can effortlessly detect and recognize objects held in your hands. 📦🔍
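For a feel of what that detection step looks like, here is a minimal sketch using the `ultralytics` package. The weights file, camera index, and loop structure are illustrative assumptions, not necessarily how `vision.py` is written:

```python
# Minimal sketch: YOLOv8 detection on a single webcam frame (illustrative only).
import cv2
from ultralytics import YOLO  # assumes the `ultralytics` package is installed

model = YOLO("yolov8n.pt")   # COCO-pretrained checkpoint; the repo may use a different one

cap = cv2.VideoCapture(0)    # default webcam; the camera index is an assumption
ret, frame = cap.read()
if ret:
    results = model(frame)   # run inference on the captured frame
    for box in results[0].boxes:
        cls_id = int(box.cls[0])
        # print the detected class name and its confidence score
        print(model.names[cls_id], float(box.conf[0]))
cap.release()
```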
🧠 LangChain Logic: But HAL4500 doesn't stop there. It's powered by LangChain, a dynamic autonomous agent capable of logic-based decision-making. HAL4500 can understand your voice commands, engage in conversations, and decide when to deploy its digital tools. It's like having a knowledgeable companion at your fingertips. 💬🤯
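To illustrate the idea, here is a minimal sketch of a tool-using LangChain agent built with the classic `initialize_agent` API. The `vision` tool and its canned reply are placeholders for the real vision service, not the repo's actual wiring:

```python
# Minimal sketch of a tool-using LangChain agent (illustrative; HAL4500's setup may differ).
import os
from langchain.agents import initialize_agent, AgentType, Tool
from langchain.llms import OpenAI

def look(_: str) -> str:
    """Hypothetical tool: in the real system this would query the vision script."""
    return "a coffee mug"  # placeholder result

# Pass the key explicitly since this repo stores it under OPEN_AI_API in .env
llm = OpenAI(temperature=0, openai_api_key=os.getenv("OPEN_AI_API"))

tools = [
    Tool(
        name="vision",
        func=look,
        description="Describe the object the user is currently holding",
    )
]

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
print(agent.run("What am I holding right now?"))
```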
Create a `.env` file in the project root and add:
- `HEARING_PORT = ****`
- `OPEN_AI_API = ********************************`
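The scripts can then pick these values up at runtime, for example with `python-dotenv` (an assumption; the repo may load them differently):

```python
# Minimal sketch: loading the .env values, assuming python-dotenv is installed.
import os
from dotenv import load_dotenv

load_dotenv()  # reads HEARING_PORT and OPEN_AI_API from the .env file in the project root

hearing_port = int(os.environ["HEARING_PORT"])  # port for the hearing service
openai_key = os.environ["OPEN_AI_API"]          # OpenAI API key
```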
- Install the dependencies using `environment.yaml` or `environments.txt`
- Run the `vision.py` script
- Run the `hearing.py` script
- Run the `main.py` script
Demo video: `hal_demonstration.mp4`
- Aditya Agarwal (@adi611)
- Akash Parua (@AkashParua)