allenai / OLMo-core
PyTorch building blocks for the OLMo ecosystem
☆54 · Updated this week
Alternatives and similar repositories for OLMo-core:
Users interested in OLMo-core are comparing it to the libraries listed below.
- ☆48 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆68 · Updated 10 months ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 10 months ago
- ☆38 · Updated 10 months ago
- ☆64 · Updated 10 months ago
- ☆47 · Updated 5 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆53 · Updated last week
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference ☆57 · Updated 3 weeks ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Flax ☆65 · Updated 6 months ago
- Long Context Extension and Generalization in LLMs ☆48 · Updated 4 months ago
- ☆82 · Updated 4 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 4 months ago
- ☆31 · Updated 8 months ago
- A toolkit for scaling law research ⚖ ☆47 · Updated 3 weeks ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆77 · Updated 4 months ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 4 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 5 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆42 · Updated last year
- ☆72 · Updated 9 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆70 · Updated 3 months ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆78 · Updated 2 years ago
- ☆33 · Updated 3 months ago
- Code and configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆31 · Updated 2 months ago
- A framework for few-shot evaluation of autoregressive language models ☆24 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆25 · Updated 10 months ago
- ☆125 · Updated last year
- ☆34 · Updated last year
- Replicating O1 inference-time scaling laws ☆82 · Updated 2 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆52 · Updated 10 months ago