WalkerMitty / Fast-Llama2
Fast instruction tuning with Llama2
☆11 · Updated last year
Alternatives and similar repositories for Fast-Llama2
Users that are interested in Fast-Llama2 are comparing it to the libraries listed below
- 1.4B sLLM for Chinese and English - HammerLLM🔨 ☆44 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- Qwen1.5-SFT (Alibaba): fine-tuning (transformers) / LoRA (peft) / inference for Qwen_Qwen1.5-2B-Chat and Qwen_Qwen1.5-7B-Chat ☆68 · Updated last year
- OpenLLMDE: An open source data engineering framework for LLMs ☆18 · Updated 2 years ago
- A survey of large language model training and serving ☆36 · Updated 2 years ago
- An introduction to reproducing DeepSeek-R1, with a roundup of related resources ☆22 · Updated 7 months ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CudaFusionKernel + Compiler] ☆41 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆67 · Updated 2 years ago
- Code for "An Empirical Study of Retrieval Augmented Generation with Chain-of-Thought" ☆17 · Updated last year
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024 accepted paper) ☆33 · Updated last year
- ☆125 · Updated last year
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated last year
- Summary of all open-source large language models and low-cost methods for replicating ChatGPT. ☆137 · Updated 2 years ago
- ☆90 · Updated 5 months ago
- Scripts for LLM pre-training and fine-tuning (with/without LoRA, DeepSpeed) ☆85 · Updated last year
- ☆33 · Updated 7 months ago
- A music large model based on InternLM2-chat. ☆22 · Updated 10 months ago
- LLM + RAG for QA ☆23 · Updated last year
- A wide variety of research projects developed by the SpokenNLP team of Speech Lab, Alibaba Group. ☆118 · Updated 5 months ago
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, Llama 2, CPM-Ant, and more ☆98 · Updated last year
- We aim to provide the best references for searching, selecting, and synthesizing high-quality, large-quantity data for post-training your LLMs. ☆60 · Updated last year
- A Toolkit for Table-based Question Answering ☆114 · Updated 2 years ago
- A toolkit for knowledge distillation of large language models ☆191 · Updated 2 weeks ago
- SuperCLUE-Math6: exploring a new generation of native-Chinese multi-turn, multi-step mathematical reasoning datasets ☆60 · Updated last year
- PICA: a multi-turn empathetic dialogue model ☆96 · Updated 2 years ago
- Implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts without fine-tuning ☆49 · Updated 2 years ago
- Dataset synthesis, model training, and evaluation for LLM mathematical problem-solving, with write-ups documenting the work. ☆95 · Updated last year
- GRAIN: Gradient-based Intra-attention Pruning on Pre-trained Language Models ☆19 · Updated 2 years ago
- The simplest reproduction of R1-style results on small models, illustrating the most important essence of o1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the thinking-process content is the core of AGI/ASI. ☆45 · Updated 8 months ago
- This repository is the official implementation of the ECAI 2024 conference paper SUBLLM: A Novel Efficient Architecture with Token Sequen… ☆69 · Updated last year