Tencent / TencentPretrain
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
☆1,072 · Updated 10 months ago
Alternatives and similar repositories for TencentPretrain
Users interested in TencentPretrain are comparing it to the libraries listed below.
- A manually curated Chinese dialogue dataset and fine-tuning code for ChatGLM ☆1,181 · Updated 2 months ago
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA 2, LLaMA 3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models ☆605 · Updated 5 months ago
- ChatGLM-6B instruction learning | instruction data | Instruct ☆653 · Updated 2 years ago
- ChatGLM-6B fine-tuning and Alpaca fine-tuning ☆1,543 · Updated 3 months ago
- 骆驼 (Luotuo): a Chinese instruction-fine-tuned LLaMA. Developed by 陈启源 @ Central China Normal University, 李鲁鲁 @ SenseTime, and 冷子昂 @ SenseTime ☆721 · Updated 2 years ago
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, p-tunin… ☆2,750 · Updated last year
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks (a hedged sketch of this style of tuning follows this list) ☆2,044 · Updated last year
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream tuning ☆1,007 · Updated last year
- ☆459 · Updated last year
- Chinese-LLaMA 1 & 2 and Chinese-Falcon base models; ChatFlow Chinese dialogue model; Chinese OpenLLaMA model; NLP pre-training / instruction fine-tuning datasets ☆3,053 · Updated last year
- TextGen: implementation of text-generation models, including LLaMA, ChatGLM, BLOOM, GPT2, BART, T5, SongNet, and so on ☆964 · Updated 9 months ago
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,748 · Updated last year
- ⭐️ NLP algorithms with the transformers lib, supporting text classification, text generation, information extraction, text matching, RLHF, SF… ☆2,353 · Updated last year
- Chinese Alpaca instruction fine-tuning dataset ☆392 · Updated 2 years ago
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,618 · Updated last year
- ChatGLM multi-GPU training with DeepSpeed ☆409 · Updated 11 months ago
- A curated collection of open-source SFT datasets, continuously updated ☆522 · Updated 2 years ago
- Efficient 4-bit QLoRA fine-tuning of ChatGLM-6B / ChatGLM2-6B with the peft library, plus merging the LoRA model into the base model and 4-bit quantization (see the QLoRA sketch after this list) ☆360 · Updated last year
- Full-parameter fine-tuning of ChatGLM2-6B, with efficient fine-tuning for multi-turn dialogue ☆399 · Updated last year
- Code for fine-tuning ChatGLM-6B using low-rank adaptation (LoRA) ☆720 · Updated last year
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆497 · Updated 2 years ago
- Downstream task-specific fine-tuning of ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B, covering Freeze, LoRA, P-tuning, and full-parameter fine-tuning ☆2,749 · Updated last year
- Firefly Chinese LLaMA-2 large model; supports continued pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆409 · Updated last year
- PromptCLUE: a zero-shot learning model supporting all Chinese tasks ☆662 · Updated 2 years ago
- Repo for adapting Meta LLaMA 2 to Chinese! A Chinese-adapted version of Meta's newly released LLaMA 2 (fully open source and commercially usable) ☆743 · Updated last year
- Exploring the fine-tuning performance of Chinese instruct data on ChatGLM and LLaMA ☆390 · Updated 2 years ago
- Implementation of Chinese ChatGPT ☆285 · Updated last year
- Efficient Inference for Big Models ☆584 · Updated 2 years ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆769 · Updated 6 months ago
- OpenLLMWiki: docs of OpenLLMAI. Survey, reproduction, and domain/task adaptation of open-source ChatGPT alternatives/implementations. PiXi… ☆261 · Updated 6 months ago
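One of the entries above describes deep prompt tuning that is comparable to fine-tuning across scales and tasks. As a rough illustration only, here is a minimal sketch of that style of tuning (prefix tuning with trainable virtual tokens at every layer) using the Hugging Face `peft` library; the `gpt2` checkpoint and the hyperparameters are placeholder assumptions, not values taken from any repository listed above.

```python
# A minimal sketch of deep prompt tuning (prefix tuning) with the Hugging Face
# `peft` library. The checkpoint and hyperparameters are placeholders; the
# repos above target ChatGLM/LLaMA-family models instead.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PrefixTuningConfig, get_peft_model, TaskType

model_name = "gpt2"  # placeholder; swap in the checkpoint you actually use

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prefix tuning prepends trainable key/value "virtual tokens" to every
# attention layer, which makes the prompts "deep" rather than living only
# at the embedding layer.
peft_config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=32,  # assumed value; tune per task
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the prefix parameters are trainable
```

With the base weights frozen, only the small prefix tensors are updated during training, which is why this family of methods can be run on modest hardware.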
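The QLoRA entry above fine-tunes ChatGLM-6B/ChatGLM2-6B in 4 bits with `peft` and then merges the LoRA weights into the base model. Here is a hedged sketch of that general workflow using `transformers` + `bitsandbytes`; the checkpoint name, LoRA rank, and `target_modules` are placeholder assumptions (attention module names differ across architectures), and this is not the code from any repository listed above.

```python
# A minimal QLoRA sketch: load a base model in 4-bit, attach LoRA adapters,
# train, then merge the adapters back into a full-precision base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, PeftModel, get_peft_model, prepare_model_for_kbit_training

model_name = "facebook/opt-350m"  # placeholder checkpoint, not from the repos above

# 4-bit NF4 quantization config for the frozen base weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
base = prepare_model_for_kbit_training(base)

# LoRA adds trainable low-rank matrices on top of the quantized, frozen base.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed names; vary per architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()

# ... run your training loop or Trainer on `model` here ...
model.save_pretrained("lora-adapter")

# Merging: reload the base model in full/half precision, attach the adapter,
# and fold the low-rank update into the base weights.
base_fp = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base_fp, "lora-adapter").merge_and_unload()
merged.save_pretrained("merged-model")
```

Note that the merge step reloads the base model unquantized: folding a low-rank update into 4-bit packed weights is not meaningful, so the adapter is merged into a full- or half-precision copy, which can then be re-quantized for deployment if desired.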