chunhuizhang / deeplearning-envs
Deep learning software and hardware setup (beginner-friendly)
☆30 · Updated this week
Alternatives and similar repositories for deeplearning-envs
Users interested in deeplearning-envs are comparing it to the libraries listed below.
- Train your own BitBrain (a mini LLM) 🧠 with just an RTX 3090 at minimum (work in progress). ☆22 · Updated this week
- ☆14 · Updated last year
- Tutorial for Ray ☆25 · Updated last year
- Train an LLM from scratch with a single 24 GB GPU ☆56 · Updated last month
- A pure C++ cross-platform LLM acceleration library with Python bindings; supports baichuan, glm, llama, and moss base models; runs ChatGLM-6B-class models smoothly on mobile, reaching 10,000+ tokens/s on a single GPU ☆45 · Updated last year
- Accelerates vector generation using an ONNX model ☆17 · Updated last year
- Qwen1.5-SFT (Alibaba): Qwen_Qwen1.5-2B-Chat/Qwen_Qwen1.5-7B-Chat fine-tuning (transformers) / LoRA (peft) / inference ☆63 · Updated last year
- A repo for updating and debugging Mixtral-7x8B, MoE, ChatGLM3, LLaMa2, Baichuan, Qwen, and other LLM models, including new models mixtral, mixtral 8x7b, … ☆46 · Updated last week
- AFAC2024 Financial Intelligence Innovation Competition ☆43 · Updated 7 months ago
- Recursive Abstractive Processing for Tree-Organized Retrieval ☆9 · Updated last year
- ChatGLM2-6B-Explained ☆35 · Updated last year
- Fine-tuning for Qwen models ☆99 · Updated 3 months ago
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆64 · Updated last year
- The newest version of Llama 3, source code explained line by line in Chinese ☆22 · Updated last year
- A trainable LLaMa3 reimplemented in NumPy ☆34 · Updated 11 months ago
- A survey of large language model training and serving ☆37 · Updated last year
- Code for "A New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models" ☆63 · Updated 4 months ago
- Baichuan and Baichuan2 fine-tuning, plus Alpaca fine-tuning ☆32 · Updated 3 months ago
- ☆79 · Updated last year
- A practical guide to large language models: application practice and real-world deployment ☆73 · Updated 9 months ago
- ☆53 · Updated last week
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆53 · Updated 7 months ago
- A minimalist benchmarking tool designed to test the routine-generation capabilities of LLMs. ☆25 · Updated 6 months ago
- ☆109 · Updated 7 months ago
- 2023 Global Intelligent Vehicle AI Challenge, Track 1: retrieval-based QA with large AI models, 75+ baseline ☆58 · Updated last year
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 ☆55 · Updated 2 years ago
- A walkthrough of the official transformers source code. In the era of large AI models, PyTorch and transformers are the new operating system; everything else is software running on top of them. ☆17 · Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆57 · Updated 7 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆57 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year