JL-er / MiSS
MiSS is a parameter-efficient fine-tuning (PEFT) method that keeps a low-rank structure but introduces an update mechanism distinct from LoRA's, striking a strong balance between performance and efficiency (see the illustrative sketch below).
☆20 · Updated 2 weeks ago
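The summary above does not spell out MiSS's update rule, so the following is only a minimal, hypothetical sketch of a generic low-rank PEFT adapter in the LoRA style, meant to illustrate what a "low-rank structure" wrapped around a frozen linear layer looks like; the class name, rank, and scale are illustrative assumptions, and MiSS's actual mechanism (described in the repository) differs from this.

```python
# Generic low-rank PEFT adapter around a frozen linear layer.
# NOTE: this is NOT MiSS's update rule (the repo states it differs from LoRA);
# it only illustrates the shared "low-rank structure" idea.
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weight
            p.requires_grad = False
        self.down = nn.Linear(base.in_features, rank, bias=False)  # d_in -> r
        self.up = nn.Linear(rank, base.out_features, bias=False)   # r -> d_out
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op update
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path plus trainable low-rank correction
        return self.base(x) + self.scale * self.up(self.down(x))

# Usage: wrap a layer and train only the adapter parameters.
layer = LowRankAdapter(nn.Linear(768, 768), rank=8)
opt = torch.optim.AdamW(
    [p for p in layer.parameters() if p.requires_grad], lr=1e-4
)
```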
Alternatives and similar repositories for MiSS
Users interested in MiSS are comparing it to the repositories listed below
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆46 · Updated 2 weeks ago
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆74 · Updated 3 weeks ago
- Repository for "TESS-2: A Large-Scale, Generalist Diffusion Language Model" ☆46 · Updated 5 months ago
- ☆38 · Updated 3 months ago
- Tiny re-implementation of MDM in style of LLaDA and nano-gpt speedrun ☆55 · Updated 4 months ago
- ☆83 · Updated 6 months ago
- RWKV, in easy to read code ☆72 · Updated 4 months ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization ☆82 · Updated 2 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆160 · Updated 3 months ago
- A large-scale RWKV v6, v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to de… ☆40 · Updated last week
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆51 · Updated 4 months ago
- DPO, but faster 🚀 ☆44 · Updated 8 months ago
- RWKV-7: Surpassing GPT ☆94 · Updated 8 months ago
- ☆14 · Updated 7 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆34 · Updated 4 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆102 · Updated last year
- A fork of the PEFT library, supporting Robust Adaptation (RoSA) ☆14 · Updated 11 months ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆86 · Updated 8 months ago
- QuIP quantization ☆54 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆118 · Updated last month
- PyTorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆127 · Updated 11 months ago
- ☆19 · Updated 7 months ago
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆112 · Updated 9 months ago
- [NeurIPS 2024] Low rank memory efficient optimizer without SVD ☆30 · Updated last month
- ☆60 · Updated 4 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆47 · Updated 3 months ago
- ☆51 · Updated 9 months ago
- ☆52 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆88 · Updated last year