Stars
Merlin and the Auditors of the Round Table.
Source code for our paper "Fair Decentralized Learning" in SaTML 2025.
Releasing the spot availability traces used in the "Can't Be Late" paper.
prime is a framework for efficient, globally distributed training of AI models over the internet.
umd-huang-lab / SWIFT
Forked from marcobornstein/SWIFT. SWIFT: Shared WaIt Free Transmission
A beautiful, simple, clean, and responsive Jekyll theme for academics
Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training
Source code to support the paper "Noiseless Privacy Preserving Decentralized Learning" at PoPETS 2025.
A simulator for Decentralized Learning algorithms, based on discrete-event simulation
Comprehensive and timely academic information on federated learning (papers, frameworks, datasets, tutorials, workshops)
Curated collection of papers in machine learning systems
LoRA (Low-Rank Adaptation), showcasing an efficient way to fine-tune deep learning models.
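A minimal sketch of the low-rank adaptation idea behind a repository like this, not code taken from it: the pretrained weight is frozen and a trainable low-rank update B @ A, scaled by alpha / r, is added on top. The class name, rank, and scaling values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer augmented with a trainable low-rank update."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # freeze the pretrained weight
        self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # starts as a no-op
        self.scaling = alpha / r

    def forward(self, x):
        # Base projection plus the low-rank correction; only A and B are trained.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

Only the A and B matrices (r * (in_features + out_features) values) receive gradients, which is what makes fine-tuning cheap compared to updating the full weight matrix.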
Fast and memory-efficient exact attention
lemonjesus / qemu-ipod-nano
Forked from devos50/qemu-ios. An attempt at rehosting the iPod Nano 3G (and possibly others) in QEMU. Originally a fork of devos50's QEMU fork, which had work for the iPod Touch 1G that I used as a starting point.
devos50 / qemu-ios
Forked from qemu/qemu. A QEMU emulator for legacy Apple devices
This open source benchmarking framework allows you to build your own P2P learning algorithm and evaluate it in a simulated but realistic network, where you can model message delay, drop, or churn.
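A minimal sketch of the kind of discrete-event message delivery such a simulator models, with configurable latency and drop probability; the function name, exponential delay model, and parameter values are illustrative assumptions, not the framework's API.

```python
import heapq
import random

def simulate_delivery(messages, mean_delay=0.05, drop_prob=0.1, seed=0):
    """messages: iterable of (send_time, src, dst, payload) tuples."""
    rng = random.Random(seed)
    heap, order = [], 0                               # min-heap keyed on delivery time
    for send_time, src, dst, payload in messages:
        if rng.random() < drop_prob:
            continue                                  # model a dropped message
        delay = rng.expovariate(1.0 / mean_delay)     # model network latency
        heapq.heappush(heap, (send_time + delay, order, src, dst, payload))
        order += 1
    delivered = []
    while heap:
        t, _, src, dst, payload = heapq.heappop(heap)
        delivered.append((t, src, dst, payload))      # process in delivery-time order
    return delivered
```

Churn can be layered on the same event queue by scheduling join and leave events for peers and discarding messages addressed to nodes that are offline at delivery time.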
A decentralized learning research framework
Stochastic Gradient Push for Distributed Deep Learning
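A minimal single-process sketch of the Stochastic Gradient Push idea (PushSum gossip combined with local SGD), not code from the repository above: the function name, the equal-share column-stochastic mixing weights, and the convention that gradients are evaluated at the de-biased parameters z = x / w are illustrative assumptions.

```python
import numpy as np

def sgp_step(x, w, grads, neighbors, lr=0.1):
    """One SGP round: local gradient step, then push-sum gossip.

    x: (n_nodes, dim) parameters, w: (n_nodes,) push-sum weights,
    grads: (n_nodes, dim) stochastic gradients, typically evaluated at z = x / w,
    neighbors: list of out-neighbor index lists defining a directed graph.
    """
    x = x - lr * grads                       # local SGD step on each node
    new_x = np.zeros_like(x)
    new_w = np.zeros_like(w)
    for i, outs in enumerate(neighbors):
        targets = [i] + list(outs)           # each node also keeps a share for itself
        share = 1.0 / len(targets)           # column-stochastic mixing weights
        for j in targets:
            new_x[j] += share * x[i]
            new_w[j] += share * w[i]
    return new_x, new_w                      # de-biased model per node: new_x / new_w
```

The push-sum weights w correct for the bias introduced by the directed, possibly asymmetric communication graph, so each node's estimate new_x[i] / new_w[i] tracks the network-wide average model.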
FedScale is a scalable and extensible open-source federated learning (FL) platform.
Decentralized Systems Benchmarking and Experiment Runner Framework
S5L8702 FMISS Peripheral Bytecode Tools and Documentation
A PyTorch implementation of GraphSAGE.
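A minimal sketch of a GraphSAGE layer with the mean aggregator, not the package's API; the class name, dense adjacency input, and L2 normalization step follow the GraphSAGE paper but the exact interface here is an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SageMeanLayer(nn.Module):
    """One GraphSAGE layer: combine a node's own features with its neighborhood mean."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.self_lin = nn.Linear(in_dim, out_dim)    # transform of the node's own features
        self.neigh_lin = nn.Linear(in_dim, out_dim)   # transform of the aggregated neighbors

    def forward(self, h, adj):
        # h: (num_nodes, in_dim) node features; adj: (num_nodes, num_nodes) 0/1 adjacency.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh_mean = adj @ h / deg                    # mean of neighbor features
        out = self.self_lin(h) + self.neigh_lin(neigh_mean)
        return F.normalize(F.relu(out), p=2, dim=1)   # L2-normalize node embeddings
```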
A Sybil-resilient distributed learning protocol.