mistralai / mistral-evals — ☆ 67 · Updated last week
Alternatives and similar repositories for mistral-evals:
Users interested in mistral-evals are comparing it to the libraries listed below.
- ☆ 87 · Updated 6 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore — ☆ 26 · Updated 6 months ago
- Long Context Extension and Generalization in LLMs — ☆ 50 · Updated 6 months ago
- ☆ 74 · Updated 7 months ago
- ☆ 25 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) — ☆ 39 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks — ☆ 141 · Updated 6 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) — ☆ 80 · Updated 6 months ago
- Exploration of automated dataset selection approaches at large scales — ☆ 34 · Updated 3 weeks ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment — ☆ 55 · Updated 7 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization — ☆ 31 · Updated last month
- ☆ 47 · Updated 7 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch — ☆ 53 · Updated last week
- Language models scale reliably with over-training and on downstream tasks — ☆ 96 · Updated 11 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" — ☆ 116 · Updated 9 months ago
- Using FlexAttention to compute attention with different masking patterns — ☆ 42 · Updated 6 months ago
- Train, tune, and run inference with the Bamba model — ☆ 87 · Updated 2 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging — ☆ 35 · Updated 11 months ago
- Work in progress — ☆ 50 · Updated 2 weeks ago
- ☆ 39 · Updated last month
- Repository for sparse fine-tuning of LLMs via a modified version of the MosaicML llmfoundry — ☆ 40 · Updated last year
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, … — ☆ 44 · Updated 8 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models — ☆ 44 · Updated last month
- ☆ 31 · Updated 2 months ago
- Codebase for "Instruction Following without Instruction Tuning" — ☆ 33 · Updated 6 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" — ☆ 170 · Updated 3 weeks ago
- Simple and efficient PyTorch-native transformer training and inference (batched) — ☆ 71 · Updated 11 months ago
- ☆ 65 · Updated 4 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" — ☆ 71 · Updated 5 months ago
- A repository for research on medium-sized language models — ☆ 76 · Updated 10 months ago