- Robotics, EPFL
- Lausanne, Switzerland
- weijiang-xiong.github.io
Stars
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
This repo contains the Hugging Face Deep Reinforcement Learning Course.
Code for "Multi-Time Attention Networks for Irregularly Sampled Time Series", ICLR 2021.
An implementation of local windowed attention for language modeling
A PyTorch reproduction of the paper "Gaussian Mixture Model Convolutional Networks" (CVPR 2017)
[KDD'2024] "UrbanGPT: Spatio-Temporal Large Language Models"
Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models.
PaSa -- an advanced paper search agent powered by large language models. It can autonomously make a series of decisions, including invoking search tools, reading papers, and selecting relevant references.
The official code for "One Fits All: Power General Time Series Analysis by Pretrained LM (NeurIPS 2023 Spotlight)"
Add customized infinite scrolling to websites and auto load the next page.
Code for our SIGKDD'22 paper Pre-training-Enhanced Spatial-Temporal Graph Neural Network For Multivariate Time Series Forecasting.
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
"LightRAG: Simple and Fast Retrieval-Augmented Generation"
FastF1 is a Python package for accessing and analyzing Formula 1 results, schedules, timing data, and telemetry
uBlock Origin - An efficient blocker for Chromium and Firefox. Fast and lean.
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
ASCII generator (image to text, image to image, video to video)
🇨🇳 Chinese sticker pack for more joy / a museum of memes, the most addictive repository on GitHub, a grand collection of Chinese stickers; join the fun~
A PyTorch implementation of the Transformer model in "Attention is All You Need".
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
Utilities intended for use with Llama models.