GraphRAG on Neo4j by finetuning GNN+LLM
Best Practices on Recommendation Systems
[arXiv 2024] An official source code for paper "FlipAttack: Jailbreak LLMs via Flipping".
A modular graph-based Retrieval-Augmented Generation (RAG) system
[NeurIPS 2024] Official implementation for paper "Can Graph Learning Improve Planning in LLM-based Agents?"
Collecting awesome papers on RAG for AIGC. We propose a taxonomy of RAG foundations, enhancements, and applications in the paper "Retrieval-Augmented Generation for AI-Generated Content: A Survey".
Accompanying repositories for our paper on graph foundation models
Awesome-LLM-RAG: a curated list of advanced retrieval augmented generation (RAG) in Large Language Models
A foundation model for knowledge graph reasoning
Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?"
[WSDM'2024 Oral] "LLMRec: Large Language Models with Graph Augmentation for Recommendation"
[ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers"
List of ICLR 2024 papers
Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey
A framework to empower LLMs on graph reasoning and generation. Refer to our paper: https://arxiv.org/pdf/2402.08785.pdf
A Unified Python Library for Graph Prompting
A curated collection of research papers exploring the utilization of LLMs for graph-related tasks.
[EMNLP 2021] Dataset and PyTorch Code for ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning
PubMedQA: A Dataset for Biomedical Research Question Answering
Multimodal Graph Learning: how to encode multiple multimodal neighbors with their relations into LLMs
GraphLLM: Boosting Graph Reasoning Ability of Large Language Model
Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. We also show you how to solve end-to-end problems using Llama mode…
Collection of AWESOME vision-language models for vision tasks