deepspeedai / DeepSpeedExamples
Example models using DeepSpeed
☆6,701 · Updated 2 weeks ago
Alternatives and similar repositories for DeepSpeedExamples
Users interested in DeepSpeedExamples are comparing it to the libraries listed below.
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (see the usage sketch after this list). ☆40,538 · Updated this week
- Instruction Tuning with GPT-4 ☆4,335 · Updated 2 years ago
- Aligning pretrained language models with instruction data generated by themselves. ☆4,507 · Updated 2 years ago
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) ☆7,681 · Updated 2 years ago
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. ☆8,473 · Updated 2 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,910 · Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆19,900 · Updated this week
- Train transformer language models with reinforcement learning. ☆16,012 · Updated this week
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tuning)… ☆2,772 · Updated last year
- GLM (General Language Model) ☆3,319 · Updated last year
- BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational large language model) ☆8,245 · Updated last year
- A large-scale 7B pretraining language model developed by BaiChuan-Inc. ☆5,684 · Updated last year
- ⚡LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models.⚡ ☆2,942 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,716 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆6,331 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,065 · Updated last week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,719 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,178 · Updated 2 months ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter… ☆6,079 · Updated 3 months ago
- Let ChatGPT teach your own chatbot in hours with a single GPU! ☆3,167 · Updated last year
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆12,843 · Updated 10 months ago
- Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model (a low-resource Chinese llama + lora recipe whose structure follows alpaca) ☆4,146 · Updated 6 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆7,687 · Updated last week
- An open-source framework for training large multimodal models. ☆4,032 · Updated last year
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,977 · Updated 6 months ago
- 骆驼 (Luotuo): Open-Sourced Chinese Language Models. Developed by 陈启源 @ Central China Normal University & 李鲁鲁 @ SenseTime & 冷子昂 @ SenseTime ☆3,626 · Updated 2 years ago
- Chinese-LLaMA 1&2 and Chinese-Falcon base models; ChatFlow Chinese dialogue model; Chinese OpenLLaMA model; NLP pretraining / instruction-tuning datasets ☆3,057 · Updated last year
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆4,157 · Updated 4 months ago
- Ongoing research training transformer models at scale ☆13,976 · Updated this week
- OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, … ☆6,204 · Updated last week
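
For orientation, the sketch below shows the minimal DeepSpeed training loop that DeepSpeedExamples builds on: wrap a model with `deepspeed.initialize`, then drive training through the returned engine. The toy model, config values, and dummy data are placeholder assumptions (the sketch is not taken from the repository), and it assumes a CUDA GPU plus the `deepspeed` launcher; `deepspeed.initialize`, `engine.backward`, and `engine.step` are the library's standard entry points.

```python
# Minimal DeepSpeed training-loop sketch (assumed toy model and config,
# not code from DeepSpeedExamples). Launch with: deepspeed train_sketch.py
import torch
import deepspeed

model = torch.nn.Linear(512, 512)  # placeholder model

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},                           # assumes a CUDA GPU
    "zero_optimization": {"stage": 2},                   # ZeRO stage-2 partitioning
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize wraps the model in an engine that owns the optimizer,
# mixed precision, and distributed state.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

for step in range(10):
    batch = torch.randn(8, 512, device=engine.device).half()  # dummy data
    loss = engine(batch).float().pow(2).mean()                 # dummy loss
    engine.backward(loss)  # engine handles loss scaling and gradient sync
    engine.step()          # optimizer step plus ZeRO bookkeeping
```

The examples in the repository generally follow this same initialize / backward / step pattern, swapping in real models, datasets, and JSON configs.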