Staff SWE @ Lightning AI
- New York, United States (UTC -05:00)
- in/dmitsf
Highlights
- Pro
Stars
Lightning-fast serving engine for any AI model of any size. Flexible. Easy. Enterprise-scale. (See the sketch after this list.)
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
🔥 Streamline your web application's authentication with Jackson, an SSO service supporting SAML and OpenID Connect protocols. Beyond enterprise-grade Single Sign-On, it also supports Directory Sync…
Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors at once, across one or thousands of GPUs. (See the sketch after this list.)
Transform datasets at scale. Optimize datasets for fast AI model training.
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.
Stockfish NNUE (chess evaluation) trainer in PyTorch.
Machine learning metrics for distributed, scalable PyTorch applications. (See the sketch after this list.)
Accelerated pose estimation and tracking using semi-supervised convolutional networks.
Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes. (See the sketch after this list.)
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. (See the sketch after this list.)
Prometheus Operator creates, configures, and manages Prometheus clusters atop Kubernetes.
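
For the serving-engine entry (LitServe, by its tagline), a minimal sketch of the LitAPI/LitServer pattern; the doubling "model", the EchoAPI name, and the port are illustrative assumptions, not project code.

    # Minimal LitServe sketch: one API class, one server.
    import litserve as ls

    class EchoAPI(ls.LitAPI):
        def setup(self, device):
            # Load or build the model once per worker; a trivial stand-in here.
            self.model = lambda x: x * 2

        def decode_request(self, request):
            # Pull the payload out of the incoming JSON body.
            return request["input"]

        def predict(self, x):
            return self.model(x)

        def encode_response(self, output):
            return {"output": output}

    if __name__ == "__main__":
        server = ls.LitServer(EchoAPI(), accelerator="auto")
        server.run(port=8000)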
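For the Thunder entry, a minimal sketch assuming the thunder.jit entry point described in the project's README; the tiny Sequential model and tensor shapes are placeholders.

    # Compile a PyTorch module with Thunder and run it.
    import torch
    import thunder

    model = torch.nn.Sequential(
        torch.nn.Linear(64, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, 8),
    )

    # thunder.jit traces the module and routes it through Thunder's executors.
    compiled_model = thunder.jit(model)

    x = torch.randn(16, 64)
    out = compiled_model(x)  # same results as model(x), potentially faster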
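For the distributed-metrics entry (torchmetrics, by its tagline), a minimal sketch of the update/compute accumulation pattern; the random predictions and the 10-class setup are assumptions for illustration.

    # Accumulate a metric over batches, then compute the aggregate.
    import torch
    import torchmetrics

    accuracy = torchmetrics.Accuracy(task="multiclass", num_classes=10)

    for _ in range(4):  # pretend these are batches from a dataloader
        preds = torch.randn(32, 10).softmax(dim=-1)
        target = torch.randint(0, 10, (32,))
        accuracy.update(preds, target)  # state is synced across processes under DDP

    print(accuracy.compute())  # aggregate accuracy over all batches
    accuracy.reset()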
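For the multi-GPU/TPU training entry (PyTorch Lightning, by its tagline), a minimal sketch of the LightningModule/Trainer split; the synthetic dataset and TinyAutoEncoder are illustrative assumptions.

    # Define the model logic once; the Trainer handles devices and scaling.
    import torch
    from torch.utils.data import DataLoader, TensorDataset
    import lightning as L

    class TinyAutoEncoder(L.LightningModule):
        def __init__(self):
            super().__init__()
            self.encoder = torch.nn.Linear(28 * 28, 8)
            self.decoder = torch.nn.Linear(8, 28 * 28)

        def training_step(self, batch, batch_idx):
            (x,) = batch
            loss = torch.nn.functional.mse_loss(self.decoder(self.encoder(x)), x)
            self.log("train_loss", loss)
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)

    if __name__ == "__main__":
        data = DataLoader(TensorDataset(torch.randn(256, 28 * 28)), batch_size=32)
        # The same script scales out via Trainer flags, e.g. devices=4.
        trainer = L.Trainer(max_epochs=1, accelerator="auto")
        trainer.fit(TinyAutoEncoder(), data)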
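For the 🤗 Transformers entry, a minimal sketch of the pipeline API; the example sentence is arbitrary, and the default sentiment model is downloaded on first use.

    # One-line inference with a pretrained model.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    print(classifier("Lightning-fast serving engines make deployment pleasant."))
    # -> [{'label': 'POSITIVE', 'score': ...}]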