hhnqqq / MyTransformers
This repository provides a library for parallel training and LoRA algorithm implementations, supporting multiple parallelism strategies and a rich collection of LoRA variants. It serves as a flexible, efficient model fine-tuning toolkit for researchers and developers. Please contact hehn@mail.ustc.edu.cn for detailed information.
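To make the "LoRA variants" theme concrete, here is a minimal, self-contained sketch of the core LoRA idea (illustrative only — not this repository's API): a frozen weight matrix W is adapted through a trainable low-rank update B @ A, scaled by alpha / r. All names (`lora_forward`, `d_in`, `r`, `alpha`) are hypothetical.

```python
import numpy as np

# Minimal LoRA sketch (not this repo's API): instead of updating the full
# weight W (d_out x d_in), LoRA trains a low-rank update B @ A with
# rank r << min(d_in, d_out), so far fewer parameters are learned.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus scaled low-rank path: (W + (alpha / r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapted layer initially matches the
# frozen layer exactly — training then only updates A and B.
assert np.allclose(lora_forward(x), W @ x)
# The trainable parameter count r * (d_in + d_out) is much smaller
# than the full d_out * d_in update LoRA replaces.
assert A.size + B.size < W.size
```

Many of the LoRA variants such a toolkit collects (e.g. asymmetric or mixture-of-experts adapters like HydraLoRA, listed below) modify how A and B are structured or shared, while keeping this same low-rank additive form.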
☆48 · Updated this week
Alternatives and similar repositories for MyTransformers
Users interested in MyTransformers are comparing it to the libraries listed below.
- Paper List of Inference/Test Time Scaling/Computing ☆289 · Updated last month
- A Collection of Papers on Diffusion Language Models ☆98 · Updated this week
- Paper list, tutorial, and nano code snippet for Diffusion Large Language Models ☆96 · Updated last month
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆247 · Updated 3 weeks ago
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models ☆271 · Updated last week
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆86 · Updated 4 months ago
- [ICML'25] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" ☆138 · Updated 2 months ago
- A tiny paper rating web ☆39 · Updated 4 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆82 · Updated 5 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆121 · Updated last month
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs ☆69 · Updated 6 months ago
- A collection of papers on discrete diffusion models ☆156 · Updated last month
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache…) ☆134 · Updated this week
- One-shot Entropy Minimization ☆175 · Updated last month
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In…" ☆98 · Updated 9 months ago
- 📚 Collection of token-level model compression resources ☆147 · Updated last month
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆153 · Updated last month
- (untitled) ☆194 · Updated this week
- Code for "The Devil behind the Mask: An Emergent Safety Vulnerability of Diffusion LLMs" ☆54 · Updated 2 weeks ago
- Code release for VTW (AAAI 2025 Oral) ☆47 · Updated 3 weeks ago
- Survey: https://arxiv.org/pdf/2507.20198 ☆69 · Updated last week
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆220 · Updated 8 months ago
- [ICML'25] A study that systematically investigates massive values in LLMs' attention mechanisms, observing that massive values are concen… ☆77 · Updated last month
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆142 · Updated last month
- ✈️ [ICCV 2025] Towards Stabilized and Efficient Diffusion Transformers through Long-Skip-Connections with Spectral Constraints ☆72 · Updated last month
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆105 · Updated last month
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆117 · Updated 5 months ago
- Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆64 · Updated 3 months ago
- [ECCV 2024 Oral] Code for the paper "An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua…" ☆469 · Updated 7 months ago
- (untitled) ☆103 · Updated last month