hhnqqq / MyTransformers
This repository provides a comprehensive library of parallel training strategies and LoRA algorithm implementations, supporting multiple parallelism schemes and a rich collection of LoRA variants. It serves as a flexible and efficient model fine-tuning toolkit for researchers and developers. Please contact hehn@mail.ustc.edu.cn for detailed information.
☆49 · Updated last month
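For readers unfamiliar with the technique the toolkit centers on, the snippet below is a minimal sketch of the core LoRA idea (a trainable low-rank update on a frozen linear layer). It is not MyTransformers' actual API; the class name, parameters, and initialization choices here are illustrative assumptions only.

```python
# Minimal LoRA sketch: wrap a frozen nn.Linear with a low-rank adapter.
# NOT the MyTransformers API; names and defaults are illustrative assumptions.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base layer plus trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep pretrained weights frozen

        self.scaling = alpha / r
        # A: (r, in_features), B: (out_features, r); B starts at zero so the
        # adapter is a no-op at initialization.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512), r=8, alpha=16)
    print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```

Most LoRA variants differ mainly in where the adapters are attached, how the rank and scaling are chosen, and how A and B are initialized or regularized; only the adapter parameters are updated during fine-tuning.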
Alternatives and similar repositories for MyTransformers
Users interested in MyTransformers are comparing it to the libraries listed below.
- A Collection of Papers on Diffusion Language Models ☆131 · Updated last month
- The most open diffusion language model for code generation — releasing pretraining, evaluation, inference, and checkpoints. ☆330 · Updated last week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆86 · Updated 8 months ago
- paper list, tutorial, and nano code snippet for Diffusion Large Language Models. ☆117 · Updated 3 months ago
- Paper List of Inference/Test Time Scaling/Computing ☆313 · Updated last month
- 📖 This is a repository for organizing papers, codes, and other resources related to unified multimodal models. ☆313 · Updated this week
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆266 · Updated 2 weeks ago
- ✈️ [ICCV 2025] Towards Stabilized and Efficient Diffusion Transformers through Long-Skip-Connections with Spectral Constraints ☆75 · Updated 3 months ago
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆90 · Updated 6 months ago
- ☆246 · Updated 3 weeks ago
- 📚 Collection of token-level model compression resources. ☆172 · Updated last month
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆163 · Updated 3 weeks ago
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆105 · Updated 4 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆103 · Updated 11 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆135 · Updated 3 months ago
- ☆25 · Updated last month
- [EMNLP 2025 main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆79 · Updated this week
- A tiny paper rating web ☆39 · Updated 6 months ago
- One-shot Entropy Minimization ☆185 · Updated 4 months ago
- Code for "The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs" ☆65 · Updated 2 weeks ago
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs. ☆71 · Updated 9 months ago
- Code release for VTW (AAAI 2025 Oral) ☆50 · Updated 2 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆553 · Updated this week
- TraceRL & TraDo-8B: Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models ☆260 · Updated 2 weeks ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆164 · Updated last month
- [ICML'25] Official implementation of paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference". ☆171 · Updated 4 months ago
- A collection of papers on discrete diffusion models ☆163 · Updated 3 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆79 · Updated 3 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆154 · Updated 3 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆150 · Updated 2 weeks ago