THUDM / SwissArmyTransformer
SwissArmyTransformer is a flexible and powerful library for developing your own Transformer variants.
☆991 · Updated last week
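To give a sense of how the library is used, here is a minimal sketch of its mixin-based customization pattern. The `sat.model` import path, the `BaseModel`/`BaseMixin` names, and the `add_mixin` call follow the project's README, but treat the exact signatures as assumptions to check against your installed version; the `NoisyEmbeddingMixin` below is purely hypothetical.

```python
# Minimal sketch (not verified against a specific release) of the mixin
# pattern SwissArmyTransformer is organized around: a shared BaseModel whose
# behavior is customized by registering small BaseMixin subclasses.
# Import paths and signatures follow the project's README and are assumptions.
from sat.model import BaseMixin, BaseModel


class NoisyEmbeddingMixin(BaseMixin):
    """Hypothetical mixin: a stub for perturbing word embeddings in training."""

    def __init__(self, std=0.01):
        super().__init__()
        self.std = std


class MyVariant(BaseModel):
    def __init__(self, args, **kwargs):
        super().__init__(args, **kwargs)
        # Mixins are attached by name; the base model dispatches its
        # forward hooks to the registered mixins.
        self.add_mixin('noisy-embedding', NoisyEmbeddingMixin())
```

The point of the pattern is that Transformer variants are expressed as small, reusable mixins layered on a shared backbone rather than as forked model files.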
Related projects
Alternatives and complementary repositories for SwissArmyTransformer
- [NIPS2023] RRHF & Wombat ☆797 · Updated last year
- Collaborative Training of Large Language Models in an Efficient Way ☆411 · Updated 2 months ago
- A plug-and-play library for parameter-efficient-tuning (Delta Tuning) ☆996 · Updated last month
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo ☆1,028 · Updated 3 months ago
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks ☆1,981 · Updated 11 months ago
- Rotary Transformer ☆811 · Updated 2 years ago
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,607 · Updated last year
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,000 · Updated 9 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆560 · Updated 3 months ago
- Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆852 · Updated 7 months ago
- LOMO: LOw-Memory Optimization ☆978 · Updated 4 months ago
- A fast MoE impl for PyTorch ☆1,560 · Updated 4 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,335 · Updated 7 months ago
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion" ☆1,368 · Updated last year
- ☆451 · Updated 5 months ago
- Easy and efficient finetuning of LLMs (supports LLaMA, LLaMA 2, LLaMA 3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models. ☆575 · Updated 3 months ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,659 · Updated last month
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,885 · Updated 3 weeks ago
- 🩹 Editing large language models within 10 seconds ⚡ ☆1,281 · Updated last year
- ☆707 · Updated 4 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆880 · Updated 4 months ago
- FlagEval is an evaluation toolkit for large AI foundation models. ☆299 · Updated 3 months ago
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,074 · Updated 7 months ago
- ☆887 · Updated 5 months ago
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages ☆305 · Updated last year
- [ACL 2024] LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding ☆657 · Updated last month
- huggingface mirror download ☆551 · Updated this week
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆275 · Updated last year
- Best practice for training LLaMA models in Megatron-LM ☆627 · Updated 10 months ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆479 · Updated 5 months ago