THUDM / SwissArmyTransformer
SwissArmyTransformer is a flexible and powerful library for developing your own Transformer variants.
☆1,054 · Updated last month
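The library's pitch is a shared Transformer backbone whose sub-modules (attention, position embeddings, output heads) can be swapped out to build variants. The sketch below illustrates that general pattern in plain PyTorch; it is a minimal illustration of the idea under assumed names (`TinyBlock` and its constructor are hypothetical), not SwissArmyTransformer's actual API.

```python
import torch
import torch.nn as nn

class TinyBlock(nn.Module):
    """One pre-norm Transformer block whose attention module is injected,
    so a variant only has to supply a drop-in replacement for that piece."""

    def __init__(self, dim: int, attention: nn.Module):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = attention                      # the pluggable component
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        a, _ = self.attn(h, h, h, need_weights=False)  # self-attention
        x = x + a                                      # residual connection
        return x + self.mlp(self.norm2(x))

# "Building a variant" is just passing a different attention implementation:
block = TinyBlock(dim=64, attention=nn.MultiheadAttention(64, 4, batch_first=True))
out = block(torch.randn(2, 16, 64))                    # (batch, seq, dim)
```

Because the backbone never changes, a new variant only has to supply the one component it alters, which is what keeps variant-building cheap.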
Alternatives and similar repositories for SwissArmyTransformer:
Users interested in SwissArmyTransformer are comparing it to the libraries listed below.
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens"☆860 · Updated 2 months ago
- A plug-and-play library for parameter-efficient tuning (Delta Tuning; a LoRA-style sketch follows this list)☆1,013 · Updated 5 months ago
- [NeurIPS 2023] RRHF & Wombat☆799 · Updated last year
- Efficient Training (including pre-training and fine-tuning) for Big Models☆577 · Updated 7 months ago
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion"☆1,397 · Updated last year
- Open Academic Research on Improving LLaMA to SOTA LLM☆1,618 · Updated last year
- Tencent pre-training framework in PyTorch & pre-trained model zoo☆1,059 · Updated 6 months ago
- LOMO: LOw-Memory Optimization☆980 · Updated 7 months ago
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks☆2,006 · Updated last year
- Collaborative Training of Large Language Models in an Efficient Way☆411 · Updated 5 months ago
- A fast MoE implementation for PyTorch☆1,627 · Updated last week
- Emu Series: Generative Multimodal Models from BAAI☆1,683 · Updated 4 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)☆922 · Updated 2 months ago
- A family of lightweight multimodal models.☆987 · Updated 3 months ago
- Rotary Transformer (rotary position embeddings; a sketch follows this list)☆895 · Updated 2 years ago
- Secrets of RLHF in Large Language Models Part I: PPO☆1,318 · Updated 11 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2☆1,364 · Updated 11 months ago
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, p-tuning)…☆2,690 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2☆1,990 · Updated 2 weeks ago
- A purer tokenizer with a higher compression ratio☆471 · Updated 2 months ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch☆637 · Updated last month
- An open-source framework for training large multimodal models.☆3,822 · Updated 5 months ago
- Code for our EMNLP 2023 paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"☆1,127 · Updated 11 months ago
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models.☆440 · Updated 4 months ago
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA 2, LLaMA 3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models.☆594 · Updated 3 weeks ago
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)☆2,646 · Updated 6 months ago
- Next-Token Prediction is All You Need☆2,004 · Updated 3 months ago
- A family of open-source Mixture-of-Experts (MoE) Large Language Models☆1,449 · Updated 11 months ago
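Several entries above (e.g., the Delta Tuning library and LLM-Adapters) center on parameter-efficient fine-tuning, where a frozen pretrained model is adapted by training only a small number of extra parameters. Below is a minimal sketch of one such method, low-rank adaptation (LoRA), in plain PyTorch; `LoRALinear` is an illustrative name, not an API taken from any repository listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen Linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B(A(x)); only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # freeze pretrained weights
            p.requires_grad_(False)
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)         # the update starts at zero
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))
# Only the two rank-8 factors are trainable: 2 * 512 * 8 = 8,192 parameters
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
```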
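The "Rotary Transformer" entry refers to rotary position embeddings (RoPE): each consecutive pair of query/key channels is rotated by an angle proportional to the token position, so attention scores depend on relative offsets rather than absolute positions. A minimal sketch in plain PyTorch follows; the function name and shapes are illustrative, not taken from that repository.

```python
import torch

def rotary_embedding(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply RoPE to x of shape (batch, seq_len, dim); dim must be even."""
    batch, seq_len, dim = x.shape
    half = dim // 2
    # Per-pair rotation frequencies: theta_i = base^(-2i/dim)
    freqs = base ** (-torch.arange(half, dtype=x.dtype, device=x.device) * 2 / dim)
    # Angle for pair i at position p is p * theta_i
    angles = torch.arange(seq_len, dtype=x.dtype, device=x.device)[:, None] * freqs
    cos, sin = angles.cos(), angles.sin()          # each (seq_len, half)
    x1, x2 = x[..., 0::2], x[..., 1::2]            # split into channel pairs
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin           # standard 2-D rotation
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Queries and keys are rotated before the attention dot product:
q = rotary_embedding(torch.randn(2, 16, 64))
k = rotary_embedding(torch.randn(2, 16, 64))
scores = q @ k.transpose(-2, -1) / 64 ** 0.5
```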