JL-er / MiSS
MiSS is a novel PEFT method that features a low-rank structure but introduces an update mechanism distinct from LoRA's, achieving a strong balance between performance and efficiency.
☆21 · Updated last month
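As a rough orientation, the sketch below shows the generic low-rank adapter structure (LoRA-style) that PEFT methods in this family build on. The `LowRankAdapter` class and its `rank`/`alpha` parameters are illustrative assumptions, not code from this repository, and the additive `up(down(x))` update shown here does NOT reproduce MiSS's own mechanism, which the description explicitly says differs from LoRA.

```python
# Minimal sketch of a generic low-rank adapter (LoRA-style) in PyTorch.
# Illustrates the low-rank structure MiSS shares with LoRA; it does NOT
# implement MiSS's own update rule, which differs from the A @ B form below.
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    def __init__(self, base_linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():          # freeze the pretrained layer
            p.requires_grad_(False)
        in_f, out_f = base_linear.in_features, base_linear.out_features
        # Trainable low-rank factors: only rank * (in_f + out_f) new parameters.
        self.down = nn.Linear(in_f, rank, bias=False)   # "A": project down to rank
        self.up = nn.Linear(rank, out_f, bias=False)    # "B": project back up
        nn.init.zeros_(self.up.weight)                  # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus a scaled low-rank correction.
        return self.base(x) + self.scale * self.up(self.down(x))

# Usage: wrap an existing projection layer and train only the adapter factors.
layer = LowRankAdapter(nn.Linear(768, 768), rank=8)
y = layer(torch.randn(2, 16, 768))
```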
Alternatives and similar repositories for MiSS
Users who are interested in MiSS are comparing it to the libraries listed below
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆46 · Updated last month
- Repository for "TESS-2: A Large-Scale, Generalist Diffusion Language Model" ☆48 · Updated 6 months ago
- ☆86 · Updated 7 months ago
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆76 · Updated last month
- Code for the paper "Patch-Level Training for Large Language Models" ☆86 · Updated 9 months ago
- A fork of the PEFT library, supporting Robust Adaptation (RoSA) ☆15 · Updated last year
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… ☆42 · Updated this week
- RADLADS training code ☆27 · Updated 3 months ago
- [NeurIPS 2024] Low-rank memory-efficient optimizer without SVD ☆30 · Updated last month
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization ☆90 · Updated 2 months ago
- DPO, but faster 🚀 ☆44 · Updated 8 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆160 · Updated 4 months ago
- Evaluating LLMs with Dynamic Data ☆91 · Updated last month
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆127 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆102 · Updated last year
- ☆38 · Updated 3 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆56 · Updated 5 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆34 · Updated 5 months ago
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆115 · Updated 10 months ago
- RWKV, in easy-to-read code ☆71 · Updated 5 months ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- Fast modular code to create and train cutting-edge LLMs ☆68 · Updated last year
- RWKV-7: Surpassing GPT ☆94 · Updated 9 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆124 · Updated last week
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆51 · Updated 5 months ago
- ☆55 · Updated last month
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆102 · Updated this week
- ☆51 · Updated 9 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆87 · Updated 8 months ago
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Updated last year