yegcjs / DiffusionLLM
Code for paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning"
☆83 · Updated last year
Alternatives and similar repositories for DiffusionLLM
Users interested in DiffusionLLM are comparing it to the libraries listed below.
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆105 · Updated last month
- [NeurIPS-2024] Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆89 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 7 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆57 · Updated last year
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆70 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆95 · Updated 3 weeks ago
- Directional Preference Alignment ☆58 · Updated last year
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated last year
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆33 · Updated 2 years ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Large Language Models Can Self-Improve in Long-context Reasoning ☆73 · Updated last year
- [EMNLP Findings 2024 & ACL 2025 Oral] Enhancing Mathematical Reasonin… ☆51 · Updated last year
- ☆106 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆68 · Updated 9 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆55 · Updated 9 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆37 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆62 · Updated 3 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆142 · Updated 4 months ago
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)" ☆40 · Updated last year
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆69 · Updated 4 months ago
- Extending context length of visual language models ☆12 · Updated 11 months ago
- Preference Learning for LLaVA ☆56 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆38 · Updated last year
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆34 · Updated last year