jiaohuix / ppllama
The Paddle implementation of Meta's LLaMA.
☆44 · Updated last year
Alternatives and similar repositories for ppllama:
Users interested in ppllama are comparing it to the libraries listed below.
- Another ChatGLM2 implementation for GPTQ quantization ☆54 · Updated last year
- A more efficient GLM implementation! ☆55 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆130 · Updated 6 months ago
- LLaMA inference for TencentPretrain ☆96 · Updated last year
- The world's first Chinese-optimized version of StableVicuna. ☆65 · Updated last year
- MultilingualShareGPT, a free multilingual corpus for LLM training ☆73 · Updated last year
- Demonstrates the remarkable performance of vLLM on Chinese large language models ☆31 · Updated last year
- The first Chinese LLaMA 2 13B model (base + Chinese dialogue SFT, enabling fluent multi-turn human-machine natural-language interaction) ☆89 · Updated last year
- A multimodal image-text dialogue model implementing BLIP-2-RWKV + Q-Former; using a Two-Step Cognitive Psychology Prompt method, a model of only 3B parameters exhibits human-like causal chains of thought. Benchmarks against image-text dialogue LLMs such as MiniGPT-4 and ImageBind, aiming to use less compute and resources to ach… ☆37 · Updated last year
- Kanchil (the mouse-deer) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences. ☆113 · Updated last year
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work as Stanford Alpaca. ☆50 · Updated last year
- deep learning ☆150 · Updated 6 months ago
- ChatGLM-6B-Slim: ChatGLM-6B with the 20K image tokens pruned away; identical performance, smaller GPU-memory footprint. ☆126 · Updated last year
- MOSS chat fine-tuning ☆50 · Updated 8 months ago
- Chinese version of CodeLlama: a code-generation assistant with 20k+ cumulative downloads on Hugging Face ☆45 · Updated last year
- zero: training-free LLM tuning ☆31 · Updated last year
- RWKV fine-tuning ☆36 · Updated 8 months ago
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU ☆164 · Updated last year
- ✅ Usable on a 4 GB GPU | A simple implementation that lets ChatGLM run inference across multiple compute devices (GPU, CPU) on a single machine ☆34 · Updated last year
- A unified tokenization tool for images, Chinese, and English. ☆151 · Updated last year
- Yuren 13B is an information-synthesis large language model continuously trained from Llama 2 13B, which builds upon the… ☆14 · Updated last year
- A question generator with unlimited follow-up questioning ☆11 · Updated last year
- An open-source multimodal large language model based on Baichuan-7B ☆73 · Updated last year
- A converter and basic tester for RWKV ONNX ☆42 · Updated 11 months ago
- Fast encoding detection and conversion for large numbers of text files, assisting data cleaning for the MNBVC corpus project ☆56 · Updated 2 months ago
- A cross-model technique combining multi-LoRA weight ensembling/switching with zero-finetune enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be deployed widely at low power, and ulti… ☆117 · Updated last year
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more ☆96 · Updated 8 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆127 · Updated last month
- Instruct-tune LLaMA on consumer hardware ☆73 · Updated last year