KyungHee University, South Korea
Stars
toLLMatch🔪: Context-aware LLM-based simultaneous translation
A list of papers on large language models for user modeling (LLM-UM), based on "User Modeling in the Era of Large Language Models: Current Research and Future Directions" (DEBULL)
Large Language Model-enhanced Recommender System Papers
Official repo of the paper "KnowCoder: Coding Structured Knowledge into LLMs for Universal Information Extraction". In the paper, we propose KnowCoder, the most powerful large language model so far for…
MLNLP: A collection of papers from top AI conferences (e.g., ACL, EMNLP, NAACL, COLING, AAAI, IJCAI, ICLR, NeurIPS, and ICML) that come with open-source code
Perplexica is an AI-powered search engine. It is an open-source alternative to Perplexity AI
Korean Sentence Embedding Repository
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
[ACL 2024] IEPile: A Large-Scale Information Extraction Corpus
A collection of benchmarks and datasets for evaluating LLMs.
Reading the data from OPIEC - an Open Information Extraction corpus
Code repo for EMNLP21 paper "Zero-Shot Information Extraction as a Unified Text-to-Triple Translation"
Code repo for ACL22 paper "DeepStruct: Pretraining of Language Models for Structure Prediction"
[ESWC '24] This repo is the official implementation of the paper "Towards Harnessing Large Language Models as Autonomous Agents for Semantic Triple Extraction from Unstructured Text"
Official repository of Pretraining Without Attention (BiGS). BiGS is the first model to achieve BERT-level transfer learning on the GLUE benchmark with subquadratic complexity in length (or without…
A Python module for getting the status of NVIDIA GPUs programmatically using nvidia-smi (see the sketch after this list)
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A (a minimal LoRA setup sketch also follows this list). Supporting…
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
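The GPU-status entry above describes querying NVIDIA GPUs programmatically through nvidia-smi. Below is a minimal sketch of that general approach using a subprocess call; it is not the starred module's actual API, and the function name and field list are illustrative assumptions.

```python
# Minimal sketch: query NVIDIA GPU status programmatically via nvidia-smi.
# Not the starred module's API; requires nvidia-smi on PATH.
import subprocess

def gpu_status():
    """Return a list of dicts with index, name, utilization (%), and memory (MiB)."""
    query = "index,name,utilization.gpu,memory.used,memory.total"
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={query}", "--format=csv,noheader,nounits"],
        text=True,
    )
    gpus = []
    for line in out.strip().splitlines():
        idx, name, util, mem_used, mem_total = [field.strip() for field in line.split(",")]
        gpus.append({
            "index": int(idx),
            "name": name,
            "utilization_pct": float(util),
            "memory_used_mib": float(mem_used),
            "memory_total_mib": float(mem_total),
        })
    return gpus

if __name__ == "__main__":
    for gpu in gpu_status():
        print(gpu)
```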
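The Llama fine-tuning entry above mentions composable PEFT methods. The sketch below shows a minimal LoRA adapter setup in the Hugging Face PEFT style, under the assumption that such scripts build on a similar configuration; the checkpoint name and hyperparameters are illustrative, not taken from that repo.

```python
# Minimal sketch of a LoRA setup with Hugging Face PEFT (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; requires access approval
tokenizer = AutoTokenizer.from_pretrained(model_name)  # used to prepare training data (not shown)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Attach LoRA adapters to the attention projections; only these small
# low-rank matrices are trained while the base weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```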