owenliang / bpe-tokenizer
LLM Tokenizer with BPE algorithm
☆32 · Updated last year
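For context (this is an illustrative sketch, not the repository's actual code), the core of BPE training is a simple loop: count adjacent symbol pairs across the corpus, merge the most frequent pair into a new symbol, and repeat. A minimal Python version:

```python
from collections import Counter

def get_pair_counts(words):
    """Count adjacent symbol pairs across all words (each word is a tuple of symbols)."""
    counts = Counter()
    for word, freq in words.items():
        for pair in zip(word, word[1:]):
            counts[pair] += freq
    return counts

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged = pair[0] + pair[1]
    new_words = Counter()
    for word, freq in words.items():
        out, i = [], 0
        while i < len(word):
            if i < len(word) - 1 and (word[i], word[i + 1]) == pair:
                out.append(merged)
                i += 2
            else:
                out.append(word[i])
                i += 1
        new_words[tuple(out)] += freq
    return new_words

def train_bpe(corpus, num_merges):
    """Learn `num_merges` BPE merge rules from a list of whitespace-split words."""
    words = Counter(tuple(w) for w in corpus)  # each word starts as a sequence of characters
    merges = []
    for _ in range(num_merges):
        counts = get_pair_counts(words)
        if not counts:
            break
        best = counts.most_common(1)[0][0]  # most frequent adjacent pair
        words = merge_pair(words, best)
        merges.append(best)
    return merges

# Example: learn 10 merges from a toy corpus
print(train_bpe("low lower lowest newest widest".split(), 10))
```

Real tokenizers add details on top of this loop (byte-level fallback, special tokens, pre-tokenization rules), but the pair-count-and-merge core is the same.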
Alternatives and similar repositories for bpe-tokenizer
Users interested in bpe-tokenizer are comparing it to the libraries listed below.
- DPO training for Qwen (Tongyi Qianwen) ☆50 · Updated 9 months ago
- Train a LLaVA model with better Chinese support, with the training code and data open-sourced. ☆64 · Updated 10 months ago
- ☆111 · Updated last year
- DeepSpeed Tutorial ☆98 · Updated 11 months ago
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆122 · Updated 8 months ago
- An ecosystem of large language models and multimodal models, mainly covering cross-modal search, speculative decoding, QAT quantization, multimodal quantization, chatbots, and OCR ☆184 · Updated 3 weeks ago
- ☆44 · Updated 4 months ago
- TinyRAG ☆314 · Updated 2 weeks ago
- Qwen1.5-SFT (Alibaba): Qwen_Qwen1.5-2B-Chat/Qwen_Qwen1.5-7B-Chat fine-tuning (transformers) / LoRA (peft) / inference ☆63 · Updated last year
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀, with a focus on deeply understanding, discussing, and implementing the techniques, principles, and applications of large models. ☆328 · Updated 11 months ago
- Simple decoder-only GPT model in PyTorch ☆41 · Updated last year
- Large language model applications: RAG, NL2SQL, chatbots, pre-training, MoE (mixture-of-experts) models, fine-tuning, reinforcement learning, and Tianchi data competitions ☆64 · Updated 5 months ago
- ☆86 · Updated 9 months ago
- ThinkLLM: 🚀 lightweight, efficient implementations of large language model algorithms ☆78 · Updated 2 months ago
- A quick start to RAG and private deployment ☆193 · Updated last year
- First-place solution for the Tianchi competition "BetterMixture - LLM Data Mixing Challenge" ☆31 · Updated last year
- Train an LLM from scratch on a single 24 GB GPU ☆56 · Updated this week
- LLM101n: Let's build a Storyteller (Chinese edition) ☆131 · Updated 10 months ago
- A repository for experimenting with and reproducing the pre-training process of LLMs. ☆452 · Updated 2 months ago
- Big-tech interview questions and interview experience for programmers ☆141 · Updated last month
- Retrieval-augmented generation (RAG) examples based on large language models ☆152 · Updated 2 months ago
- Walk through ChatGPT's technical pipeline from scratch. ☆243 · Updated 10 months ago
- A detailed code demo of full-parameter supervised fine-tuning (SFT) and direct preference optimization (DPO) ☆15 · Updated 6 months ago
- ☆73 · Updated last month
- ☆83 · Updated 5 months ago
- DeepSpeed tutorial & annotated examples & study notes (efficient large-model training) ☆169 · Updated last year
- ☆111 · Updated 8 months ago
- Full-parameter, LoRA, and QLoRA fine-tuning of Llama 3. ☆202 · Updated 9 months ago
- An implementation of Transformer, BERT, GPT, and diffusion models for learning purposes ☆155 · Updated 8 months ago
- Notes on reproducing LLM components from scratch ☆205 · Updated 2 months ago