Stars
Template machine learning project using wandb, hydra-zen, and submitit on Slurm with Apptainer
MiniMol is a 10M-parameter molecular fingerprinting model pre-trained on >3300 biological and quantum tasks
[NeurIPS 2023 AI4Science] "A Transformer Model for Symbolic Regression towards Scientific Discovery"
Code for the paper "DiGress: Discrete Denoising diffusion for graph generation"
A concise but complete full-attention transformer with a set of promising experimental features from various papers (usage sketch after this list)
arXiv LaTeX Cleaner: Easily clean the LaTeX code of your paper to submit to arXiv
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
[NeurIPS'22] Tokenized Graph Transformer (TokenGT), in PyTorch
A collection of AWESOME things about Graph-Related LLMs.
NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs
Graphium: Scaling molecular GNNs to infinity.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Edge representation learning library
List of resources for learning Category Theory
List of resources coming out of the Normconf Slack
An implementation of local windowed attention for language modeling (see the sketch after this list)
Source code for "From Stars to Subgraphs" (ICLR 2022)
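
The full-attention transformer entry above matches lucidrains' x-transformers library. A minimal decoder-only sketch in the spirit of its README (keyword arguments and defaults may differ between versions; treat this as illustrative, not canonical):

```python
import torch
from x_transformers import TransformerWrapper, Decoder

# Wrap a decoder stack with a token embedding and logits head.
model = TransformerWrapper(
    num_tokens=20000,   # vocabulary size
    max_seq_len=1024,
    attn_layers=Decoder(
        dim=512,        # model width
        depth=6,        # number of layers
        heads=8,        # attention heads
    ),
)

tokens = torch.randint(0, 20000, (1, 1024))
logits = model(tokens)  # shape: (1, 1024, 20000)
```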
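
The local windowed attention entry refers to restricting each token to a fixed-size causal neighborhood instead of attending over the full sequence. Below is a self-contained PyTorch sketch of the idea, a naive masked implementation for clarity rather than the repository's memory-efficient one; the function name and shapes are hypothetical:

```python
import torch
import torch.nn.functional as F

def local_windowed_attention(q, k, v, window_size):
    """Causal local attention: position i attends only to positions
    max(0, i - window_size + 1) .. i. Inputs are (batch, seq, dim).
    O(seq^2) masking version, for illustration only."""
    b, n, d = q.shape
    scores = (q @ k.transpose(-2, -1)) / d ** 0.5        # (b, n, n)
    idx = torch.arange(n, device=q.device)
    causal = idx[None, :] <= idx[:, None]                # no attending ahead
    local = (idx[:, None] - idx[None, :]) < window_size  # stay in the window
    scores = scores.masked_fill(~(causal & local), float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Usage: batch of 2, sequence length 16, dim 8, window of 4 tokens.
q = k = v = torch.randn(2, 16, 8)
out = local_windowed_attention(q, k, v, window_size=4)
print(out.shape)  # torch.Size([2, 16, 8])
```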