Stars
Training Sparse Autoencoders on Language Models
nannyml: post-deployment data science in Python
Everything about the SmolLM & SmolLM2 family of models
A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
Python package implementing transformers for preprocessing steps in machine learning.
Extra blocks for scikit-learn pipelines.
A Python module to perform exploratory & confirmatory factor analyses.
RAG that intelligently adapts to your use case, data, and queries
Create sparse and accurate risk scoring systems!
A Python package with explanation methods for extraction of feature interactions from predictive models
A library for mechanistic interpretability of GPT-style language models
Faker is a Python package that generates fake data for you.
Pretrain and finetune ANY AI model of ANY size on multiple GPUs or TPUs with zero code changes.
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
Access large language models from the command-line
DSPy: The framework for programming—not prompting—language models
Build and query dynamic, temporally-aware Knowledge Graphs
🌳 React component to create interactive D3 tree graphs
Model interpretability and understanding for PyTorch
Helps you build better AI agents through debuggable unit testing
Explore and interpret large embeddings in your browser with interactive visualization! 📍