AlignInc / aligner-replication
A reproduction of the paper "Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction".
☆22 · Updated last year
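The paper's core idea can be sketched as a two-stage pipeline: a small "aligner" model post-edits the output of a larger upstream model, conditioning on both the query and the draft answer. This is a minimal toy sketch of that inference pattern; the two model functions below are stand-in placeholders, not the paper's actual models.

```python
def upstream_model(query: str) -> str:
    """Stand-in for a large, possibly unaligned base model."""
    return f"raw answer to: {query}"

def aligner_model(query: str, answer: str) -> str:
    """Stand-in for the small correction model: it sees both the query
    and the upstream answer and emits a corrected answer."""
    return f"corrected({answer})"

def aligned_generate(query: str) -> str:
    # Stage 1: the strong upstream model answers as usual.
    draft = upstream_model(query)
    # Stage 2: the weak aligner rewrites the draft (weak-to-strong correction).
    return aligner_model(query, draft)

print(aligned_generate("What is RLHF?"))
```

Because the aligner only learns the residual correction rather than the full task, it can be much smaller than the upstream model and stacked on top of any base model without retraining it.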
Alternatives and similar repositories for aligner-replication
Users interested in aligner-replication are comparing it to the repositories listed below.
- Challenge LLMs to Reason About Reasoning: A Benchmark to Unveil Cognitive Depth in LLMs ☆50 · Updated last year
- o1 Chain of Thought Examples ☆33 · Updated 9 months ago
- FuseAI Project ☆87 · Updated 5 months ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Reformatted Alignment ☆113 · Updated 9 months ago
- ☆63 · Updated 9 months ago
- The official repository for Inheritune. ☆112 · Updated 5 months ago
- ☆36 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated 8 months ago
- ☆64 · Updated last year
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆27 · Updated 4 months ago
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling ☆99 · Updated 2 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆56 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- ☆19 · Updated 4 months ago
- LMTuner: Make the LLM Better for Everyone ☆35 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆55 · Updated 2 weeks ago
- GoldFinch and other hybrid transformer components ☆46 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆145 · Updated 10 months ago
- ☆48 · Updated last month
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆28 · Updated last year
- [NeurIPS 2024] Low-rank memory-efficient optimizer without SVD ☆30 · Updated 2 weeks ago
- ☆34 · Updated last year
- ☆82 · Updated 6 months ago
- Official implementation for "Extending LLMs’ Context Window with 100 Samples" ☆79 · Updated last year
- On the Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆39 · Updated last week
- The official code repo and data hub of the top_nsigma sampling strategy for LLMs ☆26 · Updated 5 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆116 · Updated last year
- ☆24 · Updated 10 months ago