Stars
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Python library for YOLO small object detection and instance segmentation
[CVPR 2024] Official RT-DETR (RTDETR paddle pytorch), Real-Time DEtection TRansformer, DETRs Beat YOLOs on Real-time Object Detection. 🔥 🔥 🔥
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
CoreNet: A library for training deep neural networks
Official repository of Evolutionary Optimization of Model Merging Recipes
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Official codebase for I-JEPA, the Image-based Joint-Embedding Predictive Architecture. First outlined in the CVPR paper, "Self-supervised learning from images with a joint-embedding predictive architecture."
Images to inference with no labeling (use foundation models to train supervised models).
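The "no labeling" workflow above can be sketched as a two-step loop: a large foundation model produces pseudo-labels for raw images, and a small supervised model is then trained on those pseudo-labels. The sketch below uses only illustrative stub functions (`foundation_label`, `train_supervised` are hypothetical stand-ins, not any library's real API):

```python
# Minimal sketch of the "foundation model labels, small model trains" loop.
# All names here are illustrative stubs, not a real library API.

def foundation_label(image):
    # Stand-in for a large zero-shot model producing a pseudo-label;
    # here we just derive a fake class id (0 or 1) from the filename.
    return hash(image) % 2

def train_supervised(dataset):
    # Stand-in for fitting a small target model on the pseudo-labeled
    # data; returns a trivial "model" that predicts the majority class.
    labels = [lbl for _, lbl in dataset]
    majority = max(set(labels), key=labels.count)
    return lambda image: majority

images = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]
dataset = [(img, foundation_label(img)) for img in images]  # auto-label step
model = train_supervised(dataset)                           # distillation step
```

The point of the pattern is that the expensive model is only run once per image at labeling time; the cheap distilled model handles deployment.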
"Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics" by Wuyang Chen, Xinyu Gong, Yunchao Wei, Humphrey Shi, Zhicheng Yan, Yi Yang, and Zhangyan…
Code for Towards Less Constrained Macro-Neural Architecture Search
This repo includes ChatGPT prompt curation to use ChatGPT and other LLM tools better.
The first collection of surrogate benchmarks for Joint Architecture and Hyperparameter Search.
Python-based research interface for blackbox and hyperparameter optimization, based on the internal Google Vizier Service.
NAS Benchmark in "Prioritized Architecture Sampling with Monte-Carlo Tree Search", CVPR2021
[ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark
A playbook for systematically maximizing the performance of deep learning models.
Diagrams for visualizing neural network architecture (Created with diagrams.net)
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
SparseMask: Differentiable Connectivity Learning for Dense Image Prediction.
Seamless analysis of your PyTorch models (RAM usage, FLOPs, MACs, receptive field, etc.)
[ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing
Convert Machine Learning Code Between Frameworks