Stars
Official repository for our work on micro-budget training of large-scale diffusion models.
Materials for the Ultimate Hybrid Search Workshop
High-accuracy RAG for answering questions from scientific documents with citations
Examples and guides for using the Gemini API
A massively parallel, high-level programming language
Implementation of random Fourier features for kernel methods, such as support vector machines and Gaussian process models
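As a sketch of the underlying idea (not the repository's own code), random Fourier features replace an RBF kernel with an explicit finite-dimensional feature map; the bandwidth, dimensions, and data below are arbitrary placeholders.

```python
import numpy as np

# Random Fourier features approximating an RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
rng = np.random.default_rng(0)
sigma, D, d = 1.0, 2000, 5                     # bandwidth, number of random features, input dimension
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))  # frequencies drawn from the kernel's spectral density
b = rng.uniform(0.0, 2 * np.pi, size=D)        # random phases

def feature_map(X):
    # z(x) = sqrt(2 / D) * cos(W x + b), so that z(x) . z(y) approximates k(x, y)
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

x, y = rng.normal(size=(1, d)), rng.normal(size=(1, d))
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
approx = float(feature_map(x) @ feature_map(y).T)
print(exact, approx)                           # the two values should be close for large D
```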
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
Graph-structured Indices for Scalable, Fast, Fresh and Filtered Approximate Nearest Neighbor Search
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
How to stream GPT-4, ChatGPT & GPT-3.5 model responses (gpt-4, gpt-3.5-turbo & text-davinci-003)?
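A minimal sketch of the streaming pattern, assuming the current openai Python SDK (v1.x) rather than the legacy client the notebook's model list suggests; the model name and prompt are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# With stream=True the API yields chunks as they are generated instead of one final message.
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain streaming in one sentence."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```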
Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization.
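The algorithm itself is short: repeatedly find the most frequent adjacent pair of token ids and merge it into a new id. An illustrative sketch (not the repository's code), starting from raw UTF-8 bytes:

```python
def get_stats(ids):
    # Count how often each adjacent pair of token ids occurs.
    counts = {}
    for pair in zip(ids, ids[1:]):
        counts[pair] = counts.get(pair, 0) + 1
    return counts

def merge(ids, pair, new_id):
    # Replace every occurrence of `pair` with the single token `new_id`.
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

text = "aaabdaaabac"                 # toy corpus; real training uses far more text
ids = list(text.encode("utf-8"))     # start from raw bytes (ids 0..255)
merges = {}
for step in range(3):                # the number of merges sets the final vocabulary size
    stats = get_stats(ids)
    pair = max(stats, key=stats.get)
    new_id = 256 + step              # new token ids start after the 256 byte values
    ids = merge(ids, pair, new_id)
    merges[pair] = new_id
print(ids, merges)
```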
ChatData 🔍 📖 brings RAG to real applications with FREE✨ knowledge bases. Now enjoy your chat with 6 million Wikipedia pages and 2 million arXiv papers.
Tigramite is a Python package for causal inference with a focus on time series data.
KErnel OPerationS, on CPUs and GPUs, with autodiff and without memory overflows
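To illustrate the "without memory overflows" claim, a sketch assuming pykeops is installed: LazyTensor builds a symbolic pairwise matrix and reduces it on the fly rather than storing it; the array sizes are placeholders.

```python
import numpy as np
from pykeops.numpy import LazyTensor

rng = np.random.default_rng(0)
x = rng.random((10_000, 3))          # query points
y = rng.random((20_000, 3))          # reference points

x_i = LazyTensor(x[:, None, :])      # symbolic shape (10k, 1, 3)
y_j = LazyTensor(y[None, :, :])      # symbolic shape (1, 20k, 3)
D_ij = ((x_i - y_j) ** 2).sum(-1)    # symbolic (10k, 20k) squared-distance matrix, never materialized

nn_idx = D_ij.argmin(axis=1)         # nearest neighbor of each x_i via a streamed reduction
print(nn_idx.shape)
```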
Visualization and debugging tool for LangChain workflows
A curated list of causal inference libraries, resources, and applications.
An extension of XGBoost to probabilistic modelling
The AI-native open-source embedding database
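A minimal usage sketch, assuming the chromadb Python client; the collection name, documents, and query are illustrative placeholders.

```python
import chromadb

client = chromadb.Client()                           # in-memory client; persistent clients are also available
collection = client.create_collection(name="docs")

# Documents are embedded automatically with the collection's default embedding function.
collection.add(
    documents=[
        "Kernels can be approximated with random features.",
        "BPE merges frequent byte pairs into new tokens.",
    ],
    ids=["doc1", "doc2"],
)

results = collection.query(query_texts=["How does BPE tokenization work?"], n_results=1)
print(results["ids"], results["documents"])
```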
Causal Discovery in Python. It also includes (conditional) independence tests and score functions.
A library of extension and helper modules for Python's data analysis and machine learning libraries.
A guidance language for controlling large language models.
catch22: CAnonical Time-series CHaracteristics
Build high-quality LLM apps, from prototyping and testing to production deployment and monitoring.
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Get 100% uptime and reliability from OpenAI. Handles rate-limit, timeout, API, and key errors.