THUDM / SwissArmyTransformer
SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.
☆1,080 · Updated 5 months ago
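The "develop your own Transformer variants" tagline above is the library's core pitch: instead of forking the whole model for each variant, SwissArmyTransformer describes attaching small "mixin" modules to a shared Transformer backbone. Below is a minimal plain-PyTorch sketch of that general idea only; it assumes nothing about SwissArmyTransformer's actual API, and the `Block`, `attn_hook`, and `PrefixAttention` names are illustrative inventions for this example.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """One Transformer block whose attention step is a replaceable hook.

    Passing a different `attn_hook` turns the same backbone into a new
    variant without touching the block's code (the mixin-style idea).
    """
    def __init__(self, dim, heads, attn_hook=None):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.attn_hook = attn_hook  # None -> vanilla attention

    def forward(self, x):
        h = self.norm1(x)
        if self.attn_hook is not None:
            x = x + self.attn_hook(h)  # variant behaviour
        else:
            x = x + self.attn(h, h, h, need_weights=False)[0]  # default
        return x + self.mlp(self.norm2(x))

class PrefixAttention(nn.Module):
    """Toy variant: prepend trainable prefix tokens to the key/value stream."""
    def __init__(self, dim, heads, prefix_len=4):
        super().__init__()
        self.prefix = nn.Parameter(torch.randn(1, prefix_len, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, h):
        kv = torch.cat([self.prefix.expand(h.size(0), -1, -1), h], dim=1)
        return self.attn(h, kv, kv, need_weights=False)[0]

x = torch.randn(2, 16, 64)                              # (batch, seq, dim)
base = Block(64, 8)                                     # vanilla block
variant = Block(64, 8, attn_hook=PrefixAttention(64, 8))
print(base(x).shape, variant(x).shape)                  # both (2, 16, 64)
```

The design point is that the backbone never changes; a variant is just another hook object, which is what makes a "Swiss Army" collection of interchangeable Transformer pieces tractable.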
Alternatives and similar repositories for SwissArmyTransformer
Users interested in SwissArmyTransformer are comparing it to the libraries listed below.
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,618 · Updated last year
- [NIPS2023] RRHF & Wombat ☆808 · Updated last year
- LOMO: LOw-Memory Optimization ☆987 · Updated 11 months ago
- Collaborative Training of Large Language Models in an Efficient Way ☆415 · Updated 9 months ago
- Rotary Transformer ☆970 · Updated 3 years ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆596 · Updated 3 weeks ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,730 · Updated 8 months ago
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion" ☆1,424 · Updated 2 years ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆860 · Updated last month
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆1,395 · Updated last year
- A fast MoE implementation for PyTorch ☆1,744 · Updated 4 months ago
- Tencent pre-training framework in PyTorch & pre-trained model zoo ☆1,073 · Updated 10 months ago
- A plug-and-play library for parameter-efficient tuning (Delta Tuning) ☆1,028 · Updated 9 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆967 · Updated 6 months ago
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks ☆2,042 · Updated last year
- Tutel MoE: Optimized Mixture-of-Experts Library, supports DeepSeek FP8/FP4 ☆842 · Updated this week
- Real Transformer TeraFLOPS on various GPUs ☆905 · Updated last year
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tunin… ☆2,751 · Updated last year
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA-2, LLaMA-3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models ☆605 · Updated 4 months ago
- A purer tokenizer with a higher compression ratio ☆480 · Updated 6 months ago
- Multimodal-GPT ☆1,502 · Updated 2 years ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆640 · Updated 11 months ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆244 · Updated last year
- Code for our EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,180 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆617 · Updated last year
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages ☆312 · Updated last year
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,023 · Updated 8 months ago