THUDM / SwissArmyTransformer
SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.
☆1,109 · Updated last year
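For orientation before the comparison list, a minimal sketch of the library's intended workflow. The `sat` package name and `AutoModel.from_pretrained` entry point follow the project's README, but the checkpoint name and the commented mixin call are illustrative assumptions and may differ across versions:

```python
# Minimal SwissArmyTransformer ("sat") sketch; API details are assumptions
# based on the project's README and may vary between versions.
from sat import AutoModel

# Load a pretrained base model together with the args it was trained with.
# The checkpoint name here is illustrative only.
model, args = AutoModel.from_pretrained('bert-base-uncased')

# The library's core idea: compose Transformer variants by attaching
# "mixins" to a shared base model rather than forking the whole model, e.g.
#   model.add_mixin('prefix-tuning', PrefixTuningMixin(...))  # hypothetical args
```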
Alternatives and similar repositories for SwissArmyTransformer
Users interested in SwissArmyTransformer are comparing it to the libraries listed below.
- Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens"☆863 · Updated 8 months ago
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo☆1,086 · Updated last year
- huggingface mirror download☆588 · Updated 9 months ago
- [NIPS2023] RRHF & Wombat☆809 · Updated 2 years ago
- Efficient Training (including pre-training and fine-tuning) for Big Models☆615 · Updated 2 months ago
- Rotary Transformer (rotary position embeddings, RoPE; a minimal sketch follows this list)☆1,069 · Updated 3 years ago
- Open Academic Research on Improving LLaMA to SOTA LLM☆1,613 · Updated 2 years ago
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion"☆1,464 · Updated 2 years ago
- ☆901 · Updated 2 years ago
- Emu Series: Generative Multimodal Models from BAAI☆1,762 · Updated last year
- LOMO: LOw-Memory Optimization☆991 · Updated last year
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models.☆445 · Updated last year
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks☆2,072 · Updated 2 years ago
- ☆459 · Updated last year
- Measured real Transformer TeraFLOPS on various GPUs☆917 · Updated 2 years ago
- Multimodal-GPT☆1,516 · Updated 2 years ago
- Collaborative Training of Large Language Models in an Efficient Way☆416 · Updated last year
- Rectified Rotary Position Embeddings☆385 · Updated last year
- A plug-and-play library for parameter-efficient tuning (Delta Tuning)☆1,040 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2☆1,426 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)☆1,003 · Updated last year
- FlagEval is an evaluation toolkit for AI large foundation models.☆339 · Updated 8 months ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch☆654 · Updated last year
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI☆773 · Updated 2 years ago
- XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc.☆645 · Updated last year
- Efficient Inference for Big Models☆585 · Updated 2 years ago
- The official GitHub page for the review paper "Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision M…☆506 · Updated last year
- A purer tokenizer with a higher compression ratio☆488 · Updated last year
- A fast MoE implementation for PyTorch☆1,827 · Updated 11 months ago
- Yuan 2.0 Large Language Model☆690 · Updated last year
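Two entries above (Rotary Transformer and Rectified Rotary Position Embeddings) center on rotary position embeddings (RoPE). As a rough, self-contained illustration of the underlying technique, not code from either repository, a minimal PyTorch sketch:

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate channel pairs of x by position-dependent angles (RoPE)."""
    # x: (batch, seq_len, n_heads, head_dim); head_dim must be even.
    _, seq_len, _, head_dim = x.shape
    # One frequency per channel pair: theta_i = base ** (-2i / head_dim).
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    angles = torch.outer(torch.arange(seq_len, dtype=torch.float32), inv_freq)  # (seq, d/2)
    cos = angles.cos()[None, :, None, :]  # broadcast over batch and heads
    sin = angles.sin()[None, :, None, :]
    x1, x2 = x[..., 0::2], x[..., 1::2]  # interleaved pairing, as in RoFormer
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Applied to queries and keys before attention, the rotation makes the
# q·k dot product depend only on the relative distance between positions.
q = torch.randn(2, 16, 8, 64)
k = torch.randn(2, 16, 8, 64)
q_rot, k_rot = apply_rope(q), apply_rope(k)
```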