THUDM / SwissArmyTransformer
SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.
☆1,086 · Updated 7 months ago
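For a rough sense of the library's mixin-based customization style, here is a minimal sketch; the import paths, checkpoint name, hook signature, and mask convention below are assumptions drawn from the project's documented pattern rather than a verified example, so consult the upstream README before relying on them.

```python
# Minimal sketch (assumed API): load a pretrained model with SwissArmyTransformer
# and override one attention hook via a mixin. Import paths, the checkpoint name,
# the attention_fn signature, and the mask semantics are assumptions, not verified.
import torch
from sat import AutoModel
from sat.model import BaseMixin


class ScaledAttentionMixin(BaseMixin):
    """Hypothetical mixin: rescales queries by a learnable factor."""

    def __init__(self):
        super().__init__()
        self.scale = torch.nn.Parameter(torch.ones(1))

    def attention_fn(self, q, k, v, mask, dropout_fn, **kwargs):
        # Plain scaled dot-product attention, written out to avoid depending on
        # internal helpers; mask == 0 marking masked positions is assumed.
        scores = torch.matmul(q * self.scale, k.transpose(-1, -2)) / (q.shape[-1] ** 0.5)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        probs = dropout_fn(torch.softmax(scores, dim=-1))
        return torch.matmul(probs, v)


# Plug the custom behaviour into an existing checkpoint (name is illustrative).
model, args = AutoModel.from_pretrained("bert-base-uncased")
model.add_mixin("scaled-attn", ScaledAttentionMixin())
```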
Alternatives and similar repositories for SwissArmyTransformer
Users interested in SwissArmyTransformer are comparing it to the libraries listed below.
- Open Academic Research on Improving LLaMA to SOTA LLM☆1,619 · Updated last year
- Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens"☆861 · Updated 3 months ago
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo☆1,079 · Updated last year
- [NIPS2023] RRHF & Wombat☆811 · Updated last year
- LOMO: LOw-Memory Optimization☆989 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI☆1,742 · Updated 10 months ago
- ☆906 · Updated 2 years ago
- Rotary Transformer☆1,009 · Updated 3 years ago
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion"☆1,432 · Updated 2 years ago
- Efficient Training (including pre-training and fine-tuning) for Big Models☆604 · Updated 2 months ago
- huggingface mirror download☆585 · Updated 4 months ago
- A fast MoE impl for PyTorch☆1,777 · Updated 6 months ago
- A plug-and-play library for parameter-efficient-tuning (Delta Tuning)☆1,032 · Updated 11 months ago
- Collaborative Training of Large Language Models in an Efficient Way☆416 · Updated 11 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2☆1,411 · Updated last year
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks☆2,055 · Updated last year
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI☆769 · Updated last year
- The official repo of Aquila2 series proposed by BAAI, including pretrained & chat large language models.☆444 · Updated 10 months ago
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models.☆609 · Updated 7 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)☆981 · Updated 8 months ago
- Rectified Rotary Position Embeddings☆381 · Updated last year
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family☆2,511 · Updated 4 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2☆2,142 · Updated last week
- ☆760 · Updated last year
- Best practice for training LLaMA models in Megatron-LM☆660 · Updated last year
- ☆922 · Updated last year
- FlagEval is an evaluation toolkit for AI large foundation models.☆339 · Updated 4 months ago
- A purer tokenizer with a higher compression rate☆481 · Updated 8 months ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch☆649 · Updated 7 months ago
- Next-Token Prediction is All You Need☆2,178 · Updated 5 months ago