UbiquitousLearning / SLM_Survey
☆100 · Updated last year
Alternatives and similar repositories for SLM_Survey
Users interested in SLM_Survey are comparing it to the repositories listed below.
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆162 · Updated 7 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- ☆38 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year
- ☆46 · Updated 7 months ago
- A curated list of the role of small models in the LLM era ☆110 · Updated last year
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆68 · Updated last year
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆51 · Updated last month
- FuseAI Project ☆87 · Updated 10 months ago
- ☆79 · Updated 3 weeks ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆123 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆80 · Updated 7 months ago
- ☆70 · Updated last year
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆112 · Updated last year
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆51 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Updated last week
- [NeurIPS 2025] A simple extension on vLLM that helps speed up reasoning models without training. ☆212 · Updated 6 months ago
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆182 · Updated last year
- ☆48 · Updated last year
- ☆85 · Updated last month
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆153 · Updated last year
- ☆42 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆148 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆185 · Updated 3 weeks ago
- ☆204 · Updated last year
- A repository for research on medium-sized language models. ☆78 · Updated last year
- ☆128 · Updated last year
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆67 · Updated 8 months ago
- ☆128 · Updated 7 months ago