Zyphra / Zyda_processing
☆41 · Updated last year
Alternatives and similar repositories for Zyda_processing
Users who are interested in Zyda_processing are comparing it to the libraries listed below
- Demonstration that fine-tuning a RoPE model on sequences longer than those used in pre-training extends the model's context limit ☆63 · Updated 2 years ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- ☆48 · Updated last year
- ☆56 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆81 · Updated 2 years ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆58 · Updated last week
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models ☆122 · Updated 11 months ago
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆23 · Updated last year
- ☆50 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆99 · Updated last year
- ☆91 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated 2 years ago
- Official repository for "BLEUBERI: BLEU is a surprisingly effective reward for instruction following" ☆31 · Updated 8 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax ☆78 · Updated last year
- ☆53 · Updated 2 years ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆115 · Updated 9 months ago
- DPO, but faster 🚀 ☆47 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆67 · Updated last year
- ☆64 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆34 · Updated last year
- Replicating O1 inference-time scaling laws ☆92 · Updated last year