ducdauge / sft-llm
Scaling Sparse Fine-Tuning to Large Language Models
☆16 · Updated last year
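For orientation, below is a minimal sketch of one common sparse fine-tuning scheme: freeze the model, then let only the parameter entries with the largest gradients on a calibration batch receive updates. This is an illustration under that assumption, not the sft-llm implementation; `build_sparse_masks`, `apply_masks`, and the `density` knob are hypothetical names.

```python
# Illustrative sparse fine-tuning via gradient-magnitude masks (PyTorch).
# NOT the sft-llm method; helper names and the `density` knob are invented.
import torch
import torch.nn as nn

def build_sparse_masks(model: nn.Module, density: float = 0.01) -> dict:
    """After loss.backward() on a calibration batch, mark the top-`density`
    fraction of each weight's entries (ranked by |grad|) as trainable."""
    masks = {}
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        k = max(1, int(density * p.numel()))
        # The k-th largest |grad| is the (numel - k + 1)-th smallest.
        thresh = p.grad.abs().flatten().kthvalue(p.numel() - k + 1).values
        masks[name] = (p.grad.abs() >= thresh).to(p.dtype)
    return masks

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Call between loss.backward() and optimizer.step() so that only
    the sparse subset of parameters actually moves."""
    for name, p in model.named_parameters():
        if name in masks and p.grad is not None:
            p.grad.mul_(masks[name])
```

Methods in this space often re-select the active subset periodically during training; the fixed mask above is only the simplest variant.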
Alternatives and similar repositories for sft-llm:
Users interested in sft-llm are comparing it to the libraries listed below.
- ☆19 · Updated 2 years ago
- ☆52 · Updated 8 months ago
- Triton version of GQA flash attention, based on the tutorial ☆11 · Updated 7 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated 2 years ago
- ☆25 · Updated last year
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆27 · Updated last year
- ☆18 · Updated 10 months ago
- Using FlexAttention to compute attention with different masking patterns (see the sketch after this list) ☆42 · Updated 6 months ago
- ☆20 · Updated last year
- ☆33 · Updated last year
- ☆14 · Updated 2 years ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆26 · Updated 6 months ago
- Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆36 · Updated last week
- ☆30 · Updated last year
- Official code repo for paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆22 · Updated 7 months ago
- Code for ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆44 · Updated this week
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆36 · Updated last year
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆30 · Updated 2 weeks ago
- ☆48 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆81 · Updated 4 months ago
- ☆64 · Updated 11 months ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆20 · Updated 7 months ago
- Codebase for "Instruction Following without Instruction Tuning" ☆33 · Updated 6 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆46 · Updated last month
- Exploration of automated dataset selection approaches at large scales. ☆34 · Updated 3 weeks ago
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆24 · Updated last year
- Staged Training for Transformer Language Models ☆32 · Updated 3 years ago
- ☆12 · Updated last year
- The paper list of multilingual pre-trained models (continually updated). ☆20 · Updated 9 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆11 · Updated 4 months ago
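One of the repositories above demonstrates FlexAttention with different masking patterns. As a reference point, here is a minimal causal-mask example against the public `torch.nn.attention.flex_attention` API (PyTorch 2.5+); the shapes and the `causal_mask` helper are illustrative and not taken from that repo.

```python
# Minimal FlexAttention usage (requires PyTorch >= 2.5); shapes are arbitrary.
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

def causal_mask(b, h, q_idx, kv_idx):
    # True where attention is allowed: query i may look at keys 0..i.
    return q_idx >= kv_idx

B, H, S, D = 2, 4, 256, 64
q, k, v = (torch.randn(B, H, S, D) for _ in range(3))

# Precompute the sparse block structure once; B=None / H=None broadcast
# the same pattern over every batch element and attention head.
block_mask = create_block_mask(causal_mask, B=None, H=None,
                               Q_LEN=S, KV_LEN=S, device=str(q.device))
out = flex_attention(q, k, v, block_mask=block_mask)  # (B, H, S, D)
```

Swapping `causal_mask` for, say, a sliding-window or document-boundary predicate changes the pattern without rewriting the kernel, which is the point of that API.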