gregorbachmann / Next-Token-Failures
☆78 · Updated 10 months ago
Alternatives and similar repositories for Next-Token-Failures:
Users interested in Next-Token-Failures are comparing it to the repositories listed below.
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆112 · Updated 4 months ago
- Code for the paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆65 · Updated last year
- ☆43 · Updated 5 months ago
- Directional Preference Alignment ☆54 · Updated 4 months ago
- ☆86 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 4 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆69 · Updated last year
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆49 · Updated 7 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆25 · Updated 9 months ago
- Long Context Extension and Generalization in LLMs ☆40 · Updated 4 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆76 · Updated 4 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated last year
- Sparse Backpropagation for Mixture-of-Expert Training ☆27 · Updated 6 months ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆43 · Updated last year
- [ICLR 2025] DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models ☆79 · Updated 2 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆135 · Updated 3 months ago
- Stick-breaking attention ☆41 · Updated 2 weeks ago
- Test-time training on nearest neighbors for large language models ☆37 · Updated 9 months ago
- [NeurIPS 2024] Code for the paper "Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models" ☆96 · Updated 11 months ago
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆59 · Updated 5 months ago
- ☆51 · Updated 8 months ago
- ☆18 · Updated 8 months ago
- ☆81 · Updated last year
- ☆47 · Updated 2 months ago
- ☆27 · Updated 2 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆42 · Updated 6 months ago
- ☆75 · Updated 5 months ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 9 months ago
- Code for the NeurIPS 2024 Spotlight paper "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆67 · Updated 2 months ago
- Official implementation of the Reward rAnked Fine-Tuning (RAFT) algorithm, also known as iterative best-of-n fine-tuning or re… ☆22 · Updated 4 months ago