WalkerMitty / Fast-Llama2
Fast instruction tuning with Llama2
☆11Updated last year
Alternatives and similar repositories for Fast-Llama2
Users interested in Fast-Llama2 are comparing it to the libraries listed below.
- 1.4B sLLM for Chinese and English - HammerLLM🔨 ☆44 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- OpenLLMDE: An open source data engineering framework for LLMs ☆18 · Updated 2 years ago
- A survey of large language model training and serving ☆36 · Updated 2 years ago
- Qwen1.5-SFT (Alibaba): fine-tuning Qwen_Qwen1.5-2B-Chat/Qwen_Qwen1.5-7B-Chat with transformers, LoRA (peft), and inference ☆67 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆67 · Updated 2 years ago
- Summarizes all open-source large language models and low-cost methods for replicating ChatGPT. ☆137 · Updated 2 years ago
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler) ☆41 · Updated last year
- PICA: a multi-turn empathetic dialogue model ☆97 · Updated 2 years ago
- A toolkit on knowledge distillation for large language models ☆156 · Updated last week
- A walkthrough of the official transformers source code. In the era of large AI models, PyTorch and Transformers are the new operating system, and everything else is software running on top of them. ☆17 · Updated last year
- The simplest reproduction of R1-style results on a small model, explaining the essence shared by O1-like models and DeepSeek R1. Think is all you need: experiments suggest the "think" reasoning process is the core of strong reasoning for AGI/ASI. ☆44 · Updated 7 months ago
- Code for "An Empirical Study of Retrieval Augmented Generation with Chain-of-Thought" ☆16 · Updated last year
- LLM + RAG for QA ☆23 · Updated last year
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated last year
- A primer and resource roundup on reproducing DeepSeek-R1 ☆22 · Updated 6 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 accepted paper ☆33 · Updated last year
- ☆161 · Updated last year
- ☆125 · Updated last year
- A single codebase for instruction fine-tuning large models ☆39 · Updated 2 years ago
- SuperCLUE-Math6: exploring a new generation of native-Chinese multi-turn, multi-step mathematical reasoning datasets ☆60 · Updated last year
- GRAIN: Gradient-based Intra-attention Pruning on Pre-trained Language Models ☆19 · Updated 2 years ago
- This repository provides an implementation of "A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction B… ☆77 · Updated 2 months ago
- Dataset synthesis, model training, and evaluation for LLM math problem-solving, with accompanying write-ups ☆95 · Updated last year
- ☆48 · Updated last week
- How to train an LLM tokenizer ☆152 · Updated 2 years ago
- Copies the MLP of Llama 3 eight times as 8 experts, creates a randomly initialized router, and adds a load-balancing loss to construct an 8x8b Mo… ☆27 · Updated last year
- A code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts without fine-tuning ☆49 · Updated 2 years ago
- First-place solution to the Tianchi competition "BetterMixture: LLM Data Mixing Challenge" ☆32 · Updated last year
- Scripts for LLM pre-training and fine-tuning (with/without LoRA, DeepSpeed) ☆84 · Updated last year