Starred repositories
Proxmox VE Helper-Scripts (Community Edition)
An open-source, lightweight note-taking solution. The painless way to create meaningful notes. Your Notes, Your Way.
A self-hostable bookmark-everything app (links, notes, and images) with AI-based automatic tagging and full-text search
Lightweight server monitoring hub with historical data, Docker stats, and alerts.
A FREE pragmatic DevOps roadmap to kickstart your DevOps career in the Cloud Native era following the Agile MVP style! ⭐ (2025 plans for DevOps, Cloud, Platform, SRE, SWE)
OCR, layout analysis, reading order, table recognition in 90+ languages
🔍 Better text detection by combining multiple OCR engines (EasyOCR, Tesseract, and Pororo) with 🧠 LLM.
OpenVPN road warrior installer for Ubuntu, Debian, AlmaLinux, Rocky Linux, CentOS and Fedora
Your favorite operating systems in one place. A network-based bootable operating system installer based on iPXE.
😵 GitHub achievements that did not make the cut.
A tool for backing up your data using rsync (for help, use https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss)
An open-source & self-hostable Heroku / Netlify / Vercel alternative.
Chris Titus Tech's Windows Utility - Install Programs, Tweaks, Fixes, and Updates
An open-source remote desktop application designed for self-hosting, an alternative to TeamViewer.
Making large AI models cheaper, faster, and more accessible
Official Transmission BitTorrent client repository
yq is a portable command-line YAML, JSON, XML, CSV, TOML and properties processor
An evolving how-to guide for securing a Linux server.
Add website scraping abilities to Datasette
A collection of familiar, friendly, and modern emoji from Microsoft
Papers from the computer science community to read and discuss.
Download an entire website from the Wayback Machine.
Simple PDF generation for Python (FPDF PHP port)
The simplest, fastest repository for training/finetuning medium-sized GPTs.
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading