kyegomez / LM-Infinite
Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models"
☆42 · Updated 4 months ago
Alternatives and similar repositories for LM-Infinite:
Users interested in LM-Infinite are comparing it to the libraries listed below.
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆75 · Updated last year
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length adapts the model's context limit ☆63 · Updated last year
- Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆36 · Updated 2 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆42 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch ☆53 · Updated last month
- The official code repo and data hub for the top_nsigma sampling strategy for LLMs. ☆22 · Updated last month
- Minimal implementation of the Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models paper (arXiv 2401.01335) ☆29 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- ☆31 · Updated 8 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆44 · Updated 3 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- Code and data for "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation" (EMNLP 2023) ☆63 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 5 months ago
- Joint use of the CPO and SimPO methods for improved reference-free preference learning. ☆51 · Updated 7 months ago
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆29 · Updated 11 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆44 · Updated 7 months ago
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆29 · Updated last month
- This is the official repository for Inheritune. ☆109 · Updated last month
- PyTorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆53 · Updated 10 months ago
- ☆67 · Updated last year
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆33 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated 11 months ago
- ☆17 · Updated 10 months ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated 9 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year