yangjianxin1 / unsloth
Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory
☆25 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for unsloth
- Baidu QA dataset with 1 million entries ☆49 · Updated 11 months ago
- A simple MLLM built on a 14B LLM that surpasses QwenVL-Max using only open-source data. ☆36 · Updated 2 months ago
- A pure C++ cross-platform LLM acceleration library, callable from Python. Supports Baichuan, GLM, LLaMA, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile and reaches 10,000+ tokens/s on a single GPU. ☆45 · Updated last year
- SUS-Chat: Instruction tuning done right ☆47 · Updated 10 months ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆36 · Updated 6 months ago
- To try TextIn document parsing, visit https://cc.co/16YSIy