ducdauge / sft-llm
Scaling Sparse Fine-Tuning to Large Language Models
☆16 · Updated last year
Alternatives and similar repositories for sft-llm
Users interested in sft-llm are comparing it to the libraries listed below.
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated last month
- Triton version of GQA flash attention, based on the tutorial ☆11 · Updated 10 months ago
- ☆18 · Updated 2 years ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆21 · Updated 6 months ago
- The official implementation of "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free" ☆38 · Updated 3 weeks ago
- ☆18 · Updated 11 months ago
- ☆20 · Updated last year
- Official repository for the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- Code for the paper "Patch-Level Training for Large Language Models" ☆86 · Updated 6 months ago
- ☆31 · Updated last year
- ☆25 · Updated last year
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 8 months ago
- Official implementation of "The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs" ☆32 · Updated last month
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 8 months ago
- Implementation of the model "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆29 · Updated last week
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆24 · Updated last year
- Code for the preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆39 · Updated 3 weeks ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆16 · Updated last month
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding ☆18 · Updated 7 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆38 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆21 · Updated 9 months ago
- ☆21 · Updated last year
- ☆14 · Updated 2 years ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated 11 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆42 · Updated last year
- ☆31 · Updated last year
- ☆31 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆11 · Updated 6 months ago