allenai / OLMoE
OLMoE: Open Mixture-of-Experts Language Models
☆435 · Updated this week
Related projects
Alternatives and complementary repositories for OLMoE
- Large Reasoning Models (☆457, updated this week)
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality s… (☆476, updated this week)
- ☆488, updated 3 weeks ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning (☆332, updated 2 months ago)
- ☆283, updated last month
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. (☆642, updated last month)
- [ICML 2024] CLLMs: Consistency Large Language Models (☆350, updated 3 months ago)
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" (☆351, updated 3 weeks ago)
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context (☆435, updated 7 months ago)
- Code for Quiet-STaR (☆639, updated 2 months ago)
- The official evaluation suite and dynamic data release for MixEval. (☆222, updated last week)
- ☆445, updated last week
- FuseAI Project (☆448, updated 2 months ago)
- A family of compressed models obtained via pruning and knowledge distillation (☆279, updated this week)
- [NeurIPS'24 Spotlight] Speeds up long-context LLM inference by computing attention with approximate, dynamic sparsity, which reduces in… (☆776, updated this week)
- [ACL 2024] Progressive LLaMA with Block Expansion. (☆479, updated 5 months ago)
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" (☆292, updated 10 months ago)
- DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (☆346, updated last week)
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning (☆610, updated 5 months ago)
- RewardBench: the first evaluation tool for reward models. (☆424, updated 2 weeks ago)
- The code of our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Mem… (☆299, updated 6 months ago)
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models (☆1,000, updated 9 months ago)
- Expert Specialized Fine-Tuning (☆143, updated last month)
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… (☆279, updated 6 months ago)
- Generative Representational Instruction Tuning (☆562, updated this week)
- Official repository for ORPO (☆420, updated 5 months ago)
- Scalable toolkit for efficient model alignment (☆611, updated this week)
- An Open Source Toolkit For LLM Distillation (☆350, updated last month)
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward (☆701, updated this week)
- ☆211, updated 3 months ago