jiaohuix / ppllama
The Paddle implementation of Meta's LLaMA.
☆44 · Updated last year
Related projects
Alternatives and complementary repositories for ppllama
- Another ChatGLM2 implementation for GPTQ quantization ☆54 · Updated last year
- ChatGLM-6B-Slim: ChatGLM-6B with the 20K image tokens pruned away, delivering identical performance with a smaller GPU memory footprint. ☆126 · Updated last year
- The world's first Chinese-optimized version of StableVicuna. ☆65 · Updated last year
- LLaMA inference for TencentPretrain ☆96 · Updated last year
- deep learning ☆149 · Updated 5 months ago
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆72 · Updated last year
- The first Chinese LLaMA-2 13B model (Base + Chinese dialogue SFT, enabling fluent multi-turn human-machine natural-language interaction) ☆89 · Updated last year
- Implements a cross-model technique combining multi-LoRA weight ensembling and switching with Zero-Finetune (zero fine-tuning) enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be deployed widely at low energy cost, and … ☆117 · Updated last year
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and so on ☆97 · Updated 7 months ago
- MOSS chat fine-tuning ☆50 · Updated 7 months ago
- A more efficient GLM implementation! ☆55 · Updated last year
- RWKV fine-tuning ☆36 · Updated 7 months ago
- ☆81 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆130 · Updated 5 months ago
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences. ☆114 · Updated last year
- A survey of large language model training and serving ☆34 · Updated last year
- Imitate OpenAI with Local Models ☆85 · Updated 2 months ago
- zero: zero-training LLM tuning ☆30 · Updated last year
- SUS-Chat: Instruction tuning done right ☆47 · Updated 10 months ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆132 · Updated 7 months ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆62 · Updated last year
- ☆92 · Updated 6 months ago
- Evaluation for AI apps and agents ☆35 · Updated 10 months ago
- ☆105 · Updated last year
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights so it works like Stanford Alpaca. ☆50 · Updated last year
- A prompt set for ChatGLM-6B ☆14 · Updated last year
- A simple implementation of using LoRA from the peft library to fine-tune ChatGLM-6B ☆86 · Updated last year
- An open-source LLM based on an MoE structure. ☆57 · Updated 4 months ago
- LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers ☆50 · Updated last year