mistralai / mistral-evals
☆73 · Updated 2 months ago
Alternatives and similar repositories for mistral-evals
Users interested in mistral-evals are comparing it to the libraries listed below.
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 10 months ago
- [NeurIPS 2024] Low rank memory efficient optimizer without SVD ☆30 · Updated 2 weeks ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆42 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated last year
- ☆50 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆59 · Updated 9 months ago
- Train, tune, and infer Bamba model ☆130 · Updated last month
- ☆80 · Updated 6 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆33 · Updated 4 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆55 · Updated 2 weeks ago
- Long Context Extension and Generalization in LLMs ☆57 · Updated 9 months ago
- Repo hosting codes and materials related to speeding LLMs' inference using token merging. ☆36 · Updated last year
- Code for "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆74 · Updated 9 months ago
- Make reasoning models scalable ☆40 · Updated last month
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 9 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆47 · Updated 2 months ago
- A repository for research on medium sized language models. ☆77 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆64 · Updated 3 months ago
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated 7 months ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆99 · Updated last week
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆144 · Updated 9 months ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆38 · Updated last year
- ☆95 · Updated 9 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆82 · Updated 3 weeks ago
- ☆96 · Updated 9 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆56 · Updated last year
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆150 · Updated 3 months ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆86 · Updated 9 months ago
- ☆82 · Updated 10 months ago