- LG CNS
- Seoul, South Korea (UTC +09:00)
- https://liam.kim
- @sky0bserver
- in/jongsu-kim-63458347
Stars
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
This repository contains the Hugging Face Agents Course.
An LLM-powered chatbot for the fediverse. A tech demo for BotKit.
Knowledge-based, content-based, and collaborative recommender systems built on the MovieLens dataset of 100,000 movie ratings. These recommender systems were built using Pandas operations and by f…
RSTutorials: A Curated List of Must-read Papers on Recommender System.
An LLM-powered CLI tool for summarizing web pages
A full-featured, hackable Next.js AI chatbot built by Vercel
calflops calculates FLOPs, MACs, and parameter counts for a wide range of neural networks, such as linear layers, CNNs, RNNs, GCNs, and Transformers (BERT, LLaMA, and other large language models).
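As a rough illustration of what a FLOPs/MACs/parameter counter computes, the counts for a single fully connected layer can be done by hand. This is an independent sketch using the common 1 MAC = 2 FLOPs convention, not calflops' actual API:

```python
def linear_layer_counts(in_features: int, out_features: int, bias: bool = True):
    """Hand-count parameters, MACs, and forward-pass FLOPs for y = xW + b."""
    params = in_features * out_features + (out_features if bias else 0)
    # One multiply-accumulate per weight entry for a single input vector.
    macs = in_features * out_features
    # 1 multiply + 1 add per MAC, plus one extra add per output for the bias.
    flops = 2 * macs + (out_features if bias else 0)
    return params, macs, flops

# A 768 -> 3072 layer, a typical Transformer MLP expansion:
params, macs, flops = linear_layer_counts(768, 3072)
```

Tools like calflops automate this bookkeeping across every layer of a model and sum the results.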
A translation of cli-guidelines/cli-guidelines (a.k.a. clig.dev).
A replication of DeepSeek-R1-Zero and DeepSeek-R1 training on small models with limited data.
A Comparative Framework for Multimodal Recommender Systems
Clean, minimal, accessible reproduction of DeepSeek R1-Zero
Fast Python Collaborative Filtering for Implicit Feedback Datasets
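To illustrate the kind of item-to-item reasoning collaborative filtering performs on implicit feedback (clicks, plays, purchases rather than explicit ratings), here is a minimal cosine-similarity toy in plain Python. It is an independent sketch, not the library's matrix-factorization implementation, and all names are made up:

```python
import math
from collections import defaultdict

def item_cosine_similarities(interactions):
    """interactions: iterable of (user, item) pairs from implicit feedback.
    Returns {(item_a, item_b): cosine similarity} over binary user vectors."""
    users_by_item = defaultdict(set)
    for user, item in interactions:
        users_by_item[item].add(user)
    items = sorted(users_by_item)
    sims = {}
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            overlap = len(users_by_item[a] & users_by_item[b])
            if overlap:
                # Cosine of two binary vectors: shared users over the
                # geometric mean of each item's audience size.
                sims[(a, b)] = overlap / math.sqrt(
                    len(users_by_item[a]) * len(users_by_item[b]))
    return sims

# Two users both played tracks "x" and "y"; one also played "z".
sims = item_cosine_similarities([
    ("u1", "x"), ("u1", "y"),
    ("u2", "x"), ("u2", "y"), ("u2", "z"),
])
```

Libraries built for this at scale replace the pairwise loop with factorization of the sparse user-item matrix, which is what makes them fast on real datasets.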
A 120-day CUDA learning plan covering daily concepts, exercises, pitfalls, and references (including “Programming Massively Parallel Processors”). Features six capstone projects to solidify GPU par…
🚀 Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Minimalistic 4D-parallelism distributed training framework for education purpose
(WIP) A small but powerful, homemade PyTorch from scratch.
Helpful tools and examples for working with flex-attention
PyTorch per step fault tolerance (actively under development)
This repository shows how to use Q8 kernels with `diffusers` to optimize inference of LTX-Video on ADA GPUs.