Stars
A collection of 3D reconstruction papers in the deep learning era.
An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT)
Train transformer language models with reinforcement learning.
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
A DeepLabV3+ model with a choice of encoder for binary segmentation. Implemented with TensorFlow.
PyTorch implementation of DeepLabV3, trained on the Cityscapes dataset.
PyTorch implementation of DeepLabv3
Implementation of the DeepLabV3+ model in PyTorch for semantic segmentation, trained on DeepFashion2 dataset
Must-read Papers on Physics-Informed Neural Networks.
Collection of AWESOME vision-language models for vision tasks
Famous Vision Language Models and Their Architectures
Tutorial on normalizing flows.
Multi-Agent Reinforcement Learning (MARL) papers
Code for Machine Learning for Algorithmic Trading, 2nd edition.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Try out deep learning models online on Google Colab
Transformer Explained Visually: Learn How LLM Transformer Models Work with Interactive Visualization
Amazon ML Challenge is a two stage competition where students from all engineering campuses across India will get a unique opportunity to work on Amazon’s dataset to bring in fresh ideas and build …
A solution to the Amazon ML Engineer Hiring Challenge 2020.
Chatbot for documentation that allows you to chat with your data. Privately deployable, provides AI knowledge sharing, and integrates knowledge into your AI workflow.
jarvis008 / PyTorch-VAE
Forked from AntixK/PyTorch-VAE. A collection of Variational Autoencoders (VAE) in PyTorch.
Open-Sora: Democratizing Efficient Video Production for All
Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI
Accessible large language models via k-bit quantization for PyTorch.
QLoRA: Efficient Finetuning of Quantized LLMs