SalesforceAIResearch / MobileAIBench
☆24 · Updated 3 months ago
Alternatives and similar repositories for MobileAIBench
Users that are interested in MobileAIBench are comparing it to the libraries listed below
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆63 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆126 · Updated last year
- ☆273 · Updated 2 years ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- ☆208 · Updated 2 years ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆111 · Updated 11 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆83 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆119 · Updated 2 years ago
- ☆143 · Updated last year
- Learning adapter weights from task descriptions ☆19 · Updated 2 years ago
- ☆99 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · Updated last year
- ☆130 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆60 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆79 · Updated last year
- ☆62 · Updated 8 months ago
- Spherical Merge PyTorch/HF format Language Models with minimal feature loss. ☆144 · Updated 2 years ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆147 · Updated last year
- We have released the code and demo program required for LLM with self-verification ☆62 · Updated 2 years ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆163 · Updated 2 months ago
- ☆74 · Updated last year
- The code for the paper "Same Task, More Tokens: The Impact of Input Length on the Reasoning Performance of Large Language Models" ☆56 · Updated 3 months ago
- awesome-LLM-controlled-constrained-generation ☆56 · Updated last year
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias" ☆156 · Updated 2 years ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆120 · Updated last week
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆127 · Updated last year