Tomorrowdawn / top_nsigma
The official code repo and data hub of the top_nsigma sampling strategy for LLMs.
☆20 · Updated last week
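For context, top_nsigma ("top-nσ" sampling) filters next-token candidates by how far their logits fall below the maximum logit, measured in standard deviations of the logit distribution. The snippet below is a minimal PyTorch sketch of that idea, assuming a threshold of the form max_logit − n·σ; the function name `top_nsigma_filter` and the default `n=1.0` are illustrative, not the repository's actual API.

```python
import torch

def top_nsigma_filter(logits: torch.Tensor, n: float = 1.0) -> torch.Tensor:
    # Keep only tokens whose logits lie within n standard deviations of the
    # maximum logit; mask everything else to -inf before softmax.
    # (Illustrative sketch of the idea, not the repo's exact implementation.)
    max_logit = logits.max(dim=-1, keepdim=True).values
    sigma = logits.std(dim=-1, keepdim=True)
    threshold = max_logit - n * sigma
    return logits.masked_fill(logits < threshold, float("-inf"))

# Usage: sample the next token from the filtered distribution.
logits = torch.randn(1, 32000)  # placeholder vocabulary-sized logits
probs = torch.softmax(top_nsigma_filter(logits, n=1.0), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```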
Alternatives and similar repositories for top_nsigma:
Users interested in top_nsigma are comparing it to the libraries listed below
- A repository for research on medium sized language models. ☆76 · Updated 8 months ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 4 months ago
- ☆31 · Updated 8 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆76 · Updated last year
- [ICLR 2025] SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights ☆53 · Updated last week
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- ☆75 · Updated last month
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 3 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆46 · Updated 2 months ago
- FuseAI Project ☆83 · Updated 3 weeks ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆140 · Updated 5 months ago
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆69 · Updated 2 months ago
- Reformatted Alignment ☆114 · Updated 4 months ago
- This is the official repository for Inheritune. ☆109 · Updated last week
- ☆71 · Updated 6 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆72 · Updated 8 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆92 · Updated 2 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆81 · Updated 11 months ago
- ☆12 · Updated last month
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv 2401.01335) ☆29 · Updated 11 months ago
- ☆53 · Updated 4 months ago
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆28 · Updated 10 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆42 · Updated 2 months ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆26 · Updated 7 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆54 · Updated 5 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch ☆53 · Updated last week