THUDM / SwissArmyTransformer
SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.
☆1,079 · Updated 5 months ago
Alternatives and similar repositories for SwissArmyTransformer
Users interested in SwissArmyTransformer are comparing it to the libraries listed below.
- Rotary Transformer ☆959 · Updated 3 years ago
- [NeurIPS 2023] RRHF & Wombat ☆809 · Updated last year
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,619 · Updated last year
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆592 · Updated this week
- Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆860 · Updated 3 weeks ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,723 · Updated 8 months ago
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo ☆1,071 · Updated 9 months ago
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion" ☆1,420 · Updated 2 years ago
- LOMO: LOw-Memory Optimization ☆984 · Updated 11 months ago
- Collaborative Training of Large Language Models in an Efficient Way ☆415 · Updated 9 months ago
- Rectified Rotary Position Embeddings ☆367 · Updated last year
- A plug-and-play library for parameter-efficient-tuning (Delta Tuning) ☆1,028 · Updated 8 months ago
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,481 · Updated 2 months ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch ☆1,241 · Updated 2 years ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch ☆642 · Updated 5 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆962 · Updated 5 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,391 · Updated last year
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,364 · Updated last year
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,501 · Updated last year
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages ☆310 · Updated last year
- Best practice for training LLaMA models in Megatron-LM ☆654 · Updated last year
- huggingface mirror download ☆580 · Updated last month
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆611 · Updated last year
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,171 · Updated last year
- ☆903 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆895 · Updated 3 months ago
- ☆914 · Updated last year
- A purer tokenizer with a higher compression ratio ☆481 · Updated 6 months ago
- ☆459 · Updated 11 months ago
- ☆778 · Updated 10 months ago