- Aperto / Genotic
- Warsaw, Poland
- ntoxeg.github.io
- @ntoxeg.bsky.social
🧰 my stack
Rational OpenCog Controlled Agent (ROCCA). Uses OpenCog to control a rational agent in OpenAI Gym and Malmo environments.
Makes Julia reason with equations. General-purpose metaprogramming, symbolic computation and algebraic equational reasoning library for the Julia programming language: e-graphs & equality saturation.
Opinionated library for managing hyperparameters and mutable state of machine learning training systems.
Hydra is a framework for elegantly configuring complex applications (a minimal configuration sketch follows after this list).
Simple tool to split COCO annotations into train/test datasets.
Awesome multilingual OCR toolkit based on PaddlePaddle (a practical, ultra-lightweight OCR system that supports recognition for 80+ languages, provides data annotation and synthesis tools, and supports training and deployment).
WebGym: Web-browser-based tasks for RL Agents
NanoDet-Plus ⚡ Super fast and lightweight anchor-free object detection model. 🔥 Only 980 KB (int8) / 1.8 MB (fp16), running at 97 FPS on a cellphone 🔥
A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF)
A modular RL library to fine-tune language models to human preferences
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Automated submission workflows in the cloud.
The perfect sidekick to your scientific inquiries
Simple and easily configurable grid world environments for reinforcement learning
An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym); a minimal reset/step sketch follows after this list.
Used for adaptive human-in-the-loop evaluation of language and embedding models.
GPU & Accelerator process monitoring for AMD, Apple, Huawei, Intel, NVIDIA and Qualcomm
Making large AI models cheaper, faster and more accessible
💫 Beautiful spinners for terminal, IPython and Jupyter
Estimates the size of a PyTorch model in memory
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Serve, optimize and scale PyTorch models in production
One line of code for data quality profiling & exploratory data analysis of Pandas and Spark DataFrames (see the profiling sketch after this list).
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.
An extremely fast Python linter and code formatter, written in Rust.
Benchmarking the Spectrum of Agent Capabilities
A guidance language for controlling large language models.
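For the Hydra entry above, a minimal sketch of how an application picks up a composed configuration. The `conf/config.yaml` path and the `lr` field are illustrative assumptions, not taken from any project listed here.

```python
# Minimal Hydra sketch: hydra.main composes conf/config.yaml and passes the
# result to main() as an OmegaConf DictConfig. Command-line overrides such as
# `python train.py lr=0.01` are merged into the config automatically.
import hydra
from omegaconf import DictConfig, OmegaConf


@hydra.main(version_base=None, config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))    # dump the fully composed configuration
    print("learning rate:", cfg.lr)  # `lr` is an assumed example field


if __name__ == "__main__":
    main()
```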
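For the Gymnasium entry, the API standard that environments such as Minigrid, WebGym and Crafter plug into, here is the canonical reset/step loop on one of its bundled reference environments, with a random policy standing in for an agent.

```python
# Minimal Gymnasium loop: make an environment, reset it, then step until the
# episode terminates or is truncated. CartPole-v1 is a bundled reference
# environment; sampling from the action space is just a placeholder policy.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=42)
for _ in range(1000):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```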
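For the one-line profiling entry, a sketch under the assumption that it refers to ydata-profiling (formerly pandas-profiling); the sample DataFrame is invented for illustration.

```python
# One-line profiling sketch, assuming the entry is ydata-profiling.
import pandas as pd
from ydata_profiling import ProfileReport

df = pd.DataFrame({"a": [1, 2, 3, None], "b": ["x", "y", "y", "z"]})
ProfileReport(df, title="Example profile").to_file("report.html")  # the one line
```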