microsoft / RedStone
The RedStone repository includes code for preparing extensive datasets used in training large language models.
☆34 · Updated last month
Alternatives and similar repositories for RedStone:
Users interested in RedStone are comparing it to the libraries listed below.
- PyTorch building blocks for OLMo ☆49 · Updated this week
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆157 · Updated 3 weeks ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆53 · Updated this week
- The code for the paper: "Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models" ☆54 · Updated 6 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- ☆55 · Updated 2 weeks ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆67 · Updated 9 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆131 · Updated 3 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆43 · Updated 7 months ago
- Code implementation of synthetic continued pretraining ☆82 · Updated 3 weeks ago
- Leveraging passage embeddings for efficient listwise reranking with large language models. ☆36 · Updated last month
- Reformatted Alignment ☆113 · Updated 4 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 4 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆150 · Updated last month
- ☆38 · Updated 9 months ago
- Official implementation of the paper "Autonomous Data Selection with Language Models for Mathematical Texts" (as Hugging Face Daily Papers: ht… ☆79 · Updated 2 months ago
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆48 · Updated 3 months ago
- ☆66 · Updated last month
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆53 · Updated 5 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆64 · Updated 5 months ago
- Code and configs for "Asynchronous RLHF: Faster and More Efficient RL for Language Models" ☆27 · Updated last month
- ☆53 · Updated 3 months ago
- ☆98 · Updated last month
- Benchmarking Benchmark Leakage in Large Language Models ☆47 · Updated 8 months ago
- The official repository of the paper "OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI" ☆91 · Updated last month
- Reproducible, flexible LLM evaluations ☆129 · Updated last month
- ☆47 · Updated 2 months ago
- Code for the preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆30 · Updated 3 weeks ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆71 · Updated 7 months ago