- Redmond, WA
02:45 (UTC -08:00)
Stars
- All languages
- ASP
- Arduino
- Assembly
- Astro
- Awk
- Batchfile
- Bicep
- BitBake
- Bro
- C
- C#
- C++
- CMake
- CSS
- Clojure
- CoffeeScript
- Cuda
- Cython
- Dafny
- Dockerfile
- Elixir
- Erlang
- F#
- F*
- FreeMarker
- Go
- Groovy
- HCL
- HTML
- Hack
- Haskell
- Inno Setup
- Isabelle
- Java
- JavaScript
- Jsonnet
- Julia
- Jupyter Notebook
- Kotlin
- LLVM
- MLIR
- Makefile
- Markdown
- Mustache
- NSIS
- Nextflow
- OCaml
- Objective-C
- Objective-C++
- Objective-J
- OpenSCAD
- PHP
- Perl
- Perl 6
- Pony
- PowerShell
- Protocol Buffer
- Python
- R
- RPM Spec
- Rich Text Format
- Roff
- Ruby
- Rust
- SCSS
- Scala
- Scheme
- Shell
- Starlark
- Swift
- TeX
- Thrift
- TypeScript
- TypeSpec
- V
- Vim Script
- WebAssembly
- XSLT
High performance self-hosted photo and video management solution.
Markdown-based, blazing-fast blog creator
C++ library for binary fuse filters, including a sharded filter
Cross-platform C++11 header-only library for memory mapped file IO
MarS: a Financial Market Simulation Engine Powered by a Generative Foundation Model
bluesky-social / jetstream
Forked from ericvolp12/jetstream. A simplified JSON event stream for AT Proto
Example code for a simple TCP client / server app written in C++
An example of using ctest for a C++ app / library.
Huly — All-in-One Project Management Platform (alternative to Linear, Jira, Slack, Notion, Motion)
Build highly concurrent, distributed, and resilient message-driven applications using Java/Scala
A shell parser, formatter, and interpreter with bash support; includes shfmt
Backstage is an open framework for building developer portals
RocksDB/LevelDB inspired key-value database in Go
Self-contained distributed software platform for building stateful, massively real-time streaming applications in Rust.
Full stack application platform for building stateful microservices, streaming APIs, and real-time UIs
Traces the shared-object dependencies of a binary, and graphs them.
JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs welcome).
Shared Middle-Layer for Triton Compilation
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transf…
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
Harden-Runner secures CI/CD workflows by controlling network access and monitoring activities on GitHub-hosted and self-hosted runners