JayZhang42 / SLED
SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433)
☆116 · Updated last year
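The linked paper defines the actual algorithm; as a rough, hypothetical illustration of the layer-contrast decoding idea the title evokes, the sketch below projects an intermediate layer's hidden state through the LM head ("logit lens" style) and nudges the final-layer logits toward tokens that gain probability with depth. The model, layer index, and step size `alpha` are assumptions for illustration only, not SLED's actual procedure.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch of layer-contrast logit refinement; the real SLED
# algorithm is specified in the paper linked above. Model choice, layer
# index, and `alpha` are illustrative assumptions.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

final_logits = out.logits[0, -1]  # next-token logits from the final layer
# Logit-lens projection of an intermediate layer through the LM head
# (this skips the final layer norm, a common approximation).
early_logits = model.lm_head(out.hidden_states[6][0, -1])

alpha = 0.1  # assumed step size
p_final = torch.softmax(final_logits, dim=-1)
p_early = torch.softmax(early_logits, dim=-1)
# Nudge the final logits toward tokens whose probability grows with depth.
evolved_logits = final_logits + alpha * (p_final - p_early)

print(tok.decode(evolved_logits.argmax().item()))
```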
Alternatives and similar repositories for SLED
Users interested in SLED are comparing it to the repositories listed below.
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆148 · Updated last year
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆142 · Updated 3 months ago
- The official repository for Inheritune. ☆119 · Updated 11 months ago
- ☆85 · Updated 2 months ago
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆68 · Updated 9 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 · Updated 6 months ago
- [NeurIPS 2024 Main Track] Code for the paper "Instruction Tuning With Loss Over Instructions" ☆38 · Updated last year
- [ICLR 2025 Oral] "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆86 · Updated last year
- Official code repository for the paper "Distilling LLM Agent into Small Models with Retrieval and Code Tools" ☆186 · Updated 2 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆123 · Updated last year
- ☆52 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- ☆50 · Updated 11 months ago
- ☆82 · Updated last month
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆210 · Updated last month
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆32 · Updated 5 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year
- ☆52 · Updated 7 months ago
- ☆162 · Updated last year
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆75 · Updated 6 months ago
- ☆97 · Updated last week
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 11 months ago
- Exploration of automated dataset selection approaches at large scales. ☆53 · Updated 10 months ago
- ☆143 · Updated 4 months ago
- Process Reward Models That Think ☆73 · Updated last month
- An automated data pipeline scaling RL to pretraining levels ☆72 · Updated 3 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆74 · Updated 6 months ago
- [Technical Report] Official PyTorch implementation code for realizing the technical part of Phantom of Latent representing equipped with … ☆63 · Updated last year
- A curated list of the role of small models in the LLM era ☆111 · Updated last year