alinlab / HOMER
Official implementation of Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs (ICLR 2024).
☆43 · Updated last year
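As a rough intuition for what "hierarchical context merging" refers to, the sketch below splits a long context into chunks and merges adjacent chunks level by level, pruning low-scoring positions so the merged length stays bounded. This is a toy NumPy illustration under assumed names (`prune`, `hierarchical_merge`, norm-based importance scoring), not the repository's actual API or algorithm.

```python
# Toy illustration of hierarchical merging of context chunks.
# NOT the HOMER implementation -- a hypothetical sketch of the general idea.
import numpy as np

def prune(chunk: np.ndarray, keep: int) -> np.ndarray:
    """Keep the `keep` positions with the largest L2 norm (a stand-in for a
    real importance score), preserving their original order."""
    if chunk.shape[0] <= keep:
        return chunk
    scores = np.linalg.norm(chunk, axis=-1)
    idx = np.sort(np.argsort(scores)[-keep:])
    return chunk[idx]

def hierarchical_merge(chunks: list[np.ndarray], max_len: int) -> np.ndarray:
    """Merge chunks pairwise, level by level, pruning after each merge."""
    level = [prune(c, max_len) for c in chunks]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            merged = np.concatenate(level[i:i + 2], axis=0)  # join adjacent chunks
            nxt.append(prune(merged, max_len))               # keep length bounded
        level = nxt
    return level[0]

# Example: 8 chunks of 128 "token embeddings" (dim 16) collapse to <= 128 positions.
chunks = [np.random.randn(128, 16) for _ in range(8)]
print(hierarchical_merge(chunks, max_len=128).shape)  # (128, 16)
```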
Alternatives and similar repositories for HOMER
Users that are interested in HOMER are comparing it to the libraries listed below
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆51 · Updated last year
- Pytorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆61 · Updated last year
- Code for paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆83 · Updated last year
- [Technical Report] Official PyTorch implementation code for realizing the technical part of Phantom of Latent representing equipped with … ☆60 · Updated 10 months ago
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆65 · Updated last year
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆24 · Updated last month
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆49 · Updated 8 months ago
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." ☆42 · Updated 9 months ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆38 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 8 months ago
- Unofficial Implementation of Chain-of-Thought Reasoning Without Prompting ☆32 · Updated last year
- [NeurIPS-2024] Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆86 · Updated 10 months ago
- Long Context Extension and Generalization in LLMs ☆58 · Updated 10 months ago
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 10 months ago
- ☆185 · Updated last year
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆29 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- Model Stock: All we need is just a few fine-tuned models ☆119 · Updated 10 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆72 · Updated 8 months ago
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆89 · Updated 2 years ago
- [ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆95 · Updated last year
- ☆12 · Updated last year
- Official repo for the paper "Learning From Mistakes Makes LLM Better Reasoner" ☆59 · Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆116 · Updated last year
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆31 · Updated 2 years ago
- Preference Learning for LLaVA ☆47 · Updated 9 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆120 · Updated last month
- Self-Alignment with Principle-Following Reward Models ☆163 · Updated 3 months ago
- [ACL 2024 Findings & ICLR 2024 WS] An Evaluator VLM that is open-source, offers reproducible evaluation, and is inexpensive to use. Specific… ☆74 · Updated 10 months ago