hhnqqq / MyTransformers
This repository provides a comprehensive library of parallel training and LoRA implementations, supporting multiple parallelism strategies and a rich collection of LoRA variants. It serves as a flexible, efficient model fine-tuning toolkit for researchers and developers. Contact hehn@mail.ustc.edu.cn for details.
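For readers unfamiliar with the technique the repository centers on, the core LoRA idea can be sketched in a few lines: the pretrained weight is frozen, and a trainable low-rank update is added on top. This is a minimal NumPy illustration of the general method, not the repository's actual API; all names, shapes, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2                  # layer dims and LoRA rank (r << d)
W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight

# Trainable low-rank factors; B starts at zero so training begins
# exactly at the pretrained model.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 16                                # scaling hyperparameter

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B are updated during training
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0, the LoRA path contributes nothing: output equals the frozen layer's.
assert np.allclose(lora_forward(x), W @ x)
```

LoRA variants (such as HydraLoRA, listed below) mostly differ in how the `A`/`B` factors are structured and shared, while keeping this same additive low-rank update.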
☆52 · Updated this week
Alternatives and similar repositories for MyTransformers
Users interested in MyTransformers are comparing it to the libraries listed below.
- A Collection of Papers on Diffusion Language Models ☆145 · Updated 2 months ago
- Paper List of Inference/Test Time Scaling/Computing ☆325 · Updated 3 months ago
- Paper list, tutorial, and nano code snippets for Diffusion Large Language Models ☆133 · Updated 5 months ago
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆92 · Updated 7 months ago
- One-shot Entropy Minimization ☆187 · Updated 5 months ago
- ☆31 · Updated 2 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆185 · Updated last week
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs ☆75 · Updated last month
- [EMNLP 2025 main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆90 · Updated last month
- [EMNLP 2024 Findings 🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆104 · Updated last year
- ✈️ [ICCV 2025] Towards Stabilized and Efficient Diffusion Transformers through Long-Skip-Connections with Spectral Constraints ☆76 · Updated 4 months ago
- ☆283 · Updated last month
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆280 · Updated last month
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models ☆331 · Updated last month
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆145 · Updated 4 months ago
- ☆20 · Updated 6 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆87 · Updated 9 months ago
- [NeurIPS'24] An efficient and accurate memory-saving method towards W4A4 large multi-modal models ☆91 · Updated 10 months ago
- 📚 Collection of token-level model compression resources ☆182 · Updated 2 months ago
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆202 · Updated 2 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆134 · Updated 8 months ago
- TraceRL & TraDo-8B: Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models ☆327 · Updated last week
- Code release for VTW (AAAI 2025 Oral) ☆64 · Updated 3 weeks ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆82 · Updated 5 months ago
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆119 · Updated 6 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆231 · Updated 11 months ago
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆205 · Updated last month
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆161 · Updated 5 months ago
- A tiny paper-rating web app ☆38 · Updated 8 months ago
- A Python script for downloading Hugging Face datasets and models ☆20 · Updated 7 months ago