nyunAI / Faster-LLM-Survey
☆42 · Updated last year
Alternatives and similar repositories for Faster-LLM-Survey
Users interested in Faster-LLM-Survey are comparing it to the libraries listed below.
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆58 · Updated last year
- ☆47 · Updated 9 months ago
- Code for studying the super weight in LLMs ☆104 · Updated 6 months ago
- Code for the NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- Long Context Extension and Generalization in LLMs ☆56 · Updated 8 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 4 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 11 months ago
- ☆125 · Updated last year
- Prune transformer layers ☆69 · Updated last year
- ☆37 · Updated 9 months ago
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆52 · Updated last month
- ☆64 · Updated last year
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆44 · Updated 9 months ago
- [NeurIPS 2024 Main Track] Code for the paper "Instruction Tuning With Loss Over Instructions" ☆37 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆161 · Updated 11 months ago
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 8 months ago
- Verifiers for LLM Reinforcement Learning ☆56 · Updated last month
- Simple implementation of Speculative Sampling in NumPy for GPT-2 ☆95 · Updated last year
- The official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" ☆32 · Updated 2 months ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed" ☆173 · Updated 2 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆42 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- Codebase accompanying the "Summary of a Haystack" paper ☆78 · Updated 8 months ago
- Spherical merge of PyTorch/HF-format language models with minimal feature loss ☆124 · Updated last year
- ☆72 · Updated last month
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆23 · Updated 11 months ago
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆96 · Updated last year
- ☆129 · Updated 3 months ago
- ☆34 · Updated 11 months ago