hhnqqq / MyTransformers
This repository provides a comprehensive library of parallel-training and LoRA implementations, supporting multiple parallelism strategies and a rich collection of LoRA variants. It serves as a flexible and efficient fine-tuning toolkit for researchers and developers. Contact hehn@mail.ustc.edu.cn for details.
☆48 · Updated last week
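MyTransformers' own API is not documented on this page, but since its focus is LoRA fine-tuning, a minimal PyTorch sketch of the core LoRA idea may help orient readers. This is a generic illustration under common LoRA conventions, not the repository's actual interface; the names `LoRALinear`, `lora_A`, `lora_B`, `r`, and `alpha` are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Freezes a pretrained nn.Linear and adds a trainable low-rank
    update: h = W x + (alpha / r) * B A x. B starts at zero, so the
    wrapped layer is initially identical to the base layer."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * ((x @ self.lora_A.T) @ self.lora_B.T)
```

For inference, the low-rank update can be merged into the frozen weight (`W += scaling * B @ A`), so a merged LoRA layer adds no extra latency; most LoRA variants preserve this property.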
Alternatives and similar repositories for MyTransformers
Users interested in MyTransformers are comparing it to the libraries listed below
- A Collection of Papers on Diffusion Language Models ☆119 · Updated last week
- Paper List of Inference/Test-Time Scaling/Computing ☆297 · Updated this week
- One-shot Entropy Minimization ☆180 · Updated 2 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference" ☆99 · Updated 9 months ago
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆259 · Updated last week
- Paper list, tutorial, and nano code snippets for Diffusion Large Language Models ☆108 · Updated 2 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆84 · Updated 6 months ago
- ✈️ [ICCV 2025] Towards Stabilized and Efficient Diffusion Transformers through Long-Skip-Connections with Spectral Constraints ☆72 · Updated last month
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models ☆279 · Updated 3 weeks ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆154 · Updated 2 months ago
- ☆218 · Updated 3 weeks ago
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆87 · Updated 5 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" ☆143 · Updated 3 weeks ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆402 · Updated last week
- A paper list about Token Merge, Reduce, Resample, and Drop for MLLMs ☆69 · Updated 7 months ago
- ☆67 · Updated last month
- A tiny web app for rating papers ☆39 · Updated 5 months ago
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆121 · Updated last week
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆222 · Updated 9 months ago
- Code release for VTW (AAAI 2025 Oral) ☆49 · Updated last month
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆126 · Updated last month
- 📚 Collection of token-level model compression resources ☆155 · Updated last week
- ☆100 · Updated 4 months ago
- Survey: https://arxiv.org/pdf/2507.20198 ☆121 · Updated last week
- ☆163 · Updated 3 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆142 · Updated 2 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆139 · Updated 4 months ago
- Long-RL: Scaling RL to Long Sequences ☆597 · Updated 2 weeks ago
- [EMNLP 2025 main] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆67 · Updated last week
- A collection of papers on discrete diffusion models ☆158 · Updated 2 months ago