kyegomez / LM-Infinite
Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models"
☆40 · Updated last year
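The repository's own code is not reproduced here. As a rough illustration of the technique the paper describes, the sketch below builds the Λ-shaped attention mask (a few leading "global" tokens plus a sliding local window, on top of the usual causal constraint). The function name `lambda_mask` and the parameters `n_global` and `n_local` are hypothetical, and the paper's additional distance-ceiling trick is omitted.

```python
# A minimal sketch of LM-Infinite's Lambda-shaped attention mask, assuming
# the formulation in the paper; names here are illustrative, not the
# repository's API. Each query may attend to the first `n_global` tokens
# and to the most recent `n_local` tokens, under the causal constraint.
import torch

def lambda_mask(seq_len: int, n_global: int = 10, n_local: int = 256) -> torch.Tensor:
    """Boolean (seq_len, seq_len) mask; True means the query may attend."""
    q = torch.arange(seq_len).unsqueeze(1)  # query positions, as a column
    k = torch.arange(seq_len).unsqueeze(0)  # key positions, as a row
    causal = k <= q                         # no attending to future tokens
    global_branch = k < n_global            # leading "attention sink" tokens
    local_branch = (q - k) < n_local        # sliding local window
    return causal & (global_branch | local_branch)

# Usage: mask attention scores before the softmax.
seq_len = 1024
scores = torch.randn(1, 1, seq_len, seq_len)  # (batch, heads, query, key)
scores = scores.masked_fill(~lambda_mask(seq_len), float("-inf"))
```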
Alternatives and similar repositories for LM-Infinite
Users interested in LM-Infinite are comparing it to the repositories listed below.
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆57 · Updated last week
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- A repository for research on medium-sized language models. ☆78 · Updated last year
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆81 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 2 months ago
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv 2401.01335) ☆29 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆35 · Updated last year
- ☆39 · Updated last year
- This is the official repository for Inheritune. ☆115 · Updated 9 months ago
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆29 · Updated this week
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆35 · Updated 2 years ago
- The official code repo and data hub of the top_nsigma sampling strategy for LLMs. ☆26 · Updated 9 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- ☆17 · Updated 7 months ago
- ☆52 · Updated last year
- ☆48 · Updated last year
- This is a new metric that can be used to evaluate the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training adapts the model's context limit ☆63 · Updated 2 years ago
- LMTuner: Make the LLM Better for Everyone ☆37 · Updated 2 years ago
- An Experiment on Dynamic NTK Scaling RoPE; a sketch of the scaling rule appears after this list ☆64 · Updated 2 years ago
- ☆65 · Updated last year
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆44 · Updated last year
- 🚢 Data Toolkit for Sailor Language Models ☆94 · Updated 9 months ago
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated 2 years ago
- ☆55 · Updated last year
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ☆27 · Updated 11 months ago
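The Dynamic NTK entry above refers to rescaling the RoPE base as the live sequence length grows. Below is a minimal sketch of the widely circulated formulation of that scaling rule, not that repository's code; `trained_len` and `scaling_factor` are illustrative names.

```python
# Dynamic NTK-scaled RoPE inverse frequencies: when the current sequence
# exceeds the trained context length, enlarge the base so low-frequency
# dimensions get interpolated while high-frequency ones stay nearly intact.
import torch

def dynamic_ntk_inv_freq(dim: int, seq_len: int, trained_len: int = 4096,
                         base: float = 10000.0,
                         scaling_factor: float = 1.0) -> torch.Tensor:
    if seq_len > trained_len:
        base = base * (
            (scaling_factor * seq_len / trained_len) - (scaling_factor - 1)
        ) ** (dim / (dim - 2))
    # Standard RoPE frequency schedule with the (possibly rescaled) base.
    return 1.0 / base ** (torch.arange(0, dim, 2).float() / dim)

# Example: a 128-dim head at 16k tokens vs. its 4k trained length.
print(dynamic_ntk_inv_freq(128, 16384)[:4])
```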