Stars
Python for Algorithmic Trading Cookbook, published by Packt
A fast, clean, responsive Hugo theme.
Machine learning metrics for distributed, scalable PyTorch applications.
A lightweight library designed to accelerate the process of training PyTorch models by providing a minimal, but extensible training loop which is flexible enough to handle the majority of use cases…
Official implementation for "iTransformer: Inverted Transformers Are Effective for Time Series Forecasting" (ICLR 2024 Spotlight), https://openreview.net/forum?id=JePfAI8fah
A privacy-first, open-source platform for knowledge management and collaboration. Download: http://github.com/logseq/logseq/releases. Roadmap: http://trello.com/b/8txSM12G/roadmap
Detectron2 is a platform for object detection, segmentation, and other visual recognition tasks.
The official source code repository for the calibre ebook manager.
Split a single, monolithic mp3 audiobook file into chapters using Machine Learning and ffmpeg.
An open-source, lightweight note-taking solution. The painless way to create meaningful notes. Your Notes, Your Way.
Benchmarks for classification of genomic sequences.
GENA-LM is a transformer masked language model trained on human DNA sequences.
ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs).
Neural Networks and the Chomsky Hierarchy
An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers"
Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research).
Using sparse coding to find distributed representations used by neural networks.
Measuring the situational awareness of language models
An interactive exploration of Transformer programming.
A library for mechanistic interpretability of GPT-style language models
Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, B…
Training Sparse Autoencoders on Language Models
A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF).
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilizatio…