SAIT @SAITPublic
- Seoul
- http://sudormrf.run/junhocho
Stars
- Auto configurations for Language Server for vim-lsp
- Async language server protocol plugin for Vim and Neovim
- Sync notes between local and cloud with smart conflict resolution: S3 (Amazon S3/Cloudflare R2/Backblaze B2/...), Dropbox, WebDAV (NextCloud/InfiniCLOUD/Synology/...), OneDrive, Google Drive (GDrive), Box, pC…
- PyTorch implementation for "Compressed Context Memory for Online Language Model Interaction" (ICLR 2024)
- A curated list of awesome knowledge-driven autonomous driving (continually updated)
- A curated list of papers on test-time adaptation, test-time training, and source-free domain adaptation
- Get up and running with Llama 3.3, Mistral, Gemma 2, and other large language models.
- AI companions with memory: a lightweight stack to create and host your own AI companions
- Official PyTorch implementation of MaskSub, "Masking Augmentation for Supervised Learning"
- You Only Look Once for Panoptic Driving Perception (MIR 2022)
- HybridNets: End-to-End Perception Network
- Pyzotero: a Python client for the Zotero API
- Collection of awesome test-time (domain/batch/instance) adaptation methods
- High-accuracy RAG for answering questions from scientific documents, with citations
- 📈 Adaptive: parallel active learning of mathematical functions
- [CVPR 2022 Oral] Towards Fewer Annotations: Active Learning via Region Impurity and Prediction Uncertainty for Domain Adaptive Semantic Segmentation (https://arxiv.org/abs/2111.12940)
- Open-sourced code for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
- OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.
- A playbook for systematically maximizing the performance of deep learning models.
- PyTorch implementation of multi-task learning architectures, incl. MTI-Net (ECCV 2020).
- [NeurIPS 2022] "M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design", Hanxue Liang*, Zhiwen Fan*, Rishov Sarkar, Ziyu Jiang, Tianlong Che…
- An experiment combining CLIP with SAM to do open-vocabulary image segmentation.
- Code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks showing how to use the model.