jiaohuix / ppllama
A PaddlePaddle implementation of Meta's LLaMA.
☆45Updated 2 years ago
Alternatives and similar repositories for ppllama
Users that are interested in ppllama are comparing it to the libraries listed below
- ChatGLM-6B-Slim: a ChatGLM-6B with 20K image tokens pruned away; identical performance with a smaller GPU-memory footprint.☆127Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs☆132Updated last year
- deep learning☆149Updated 3 months ago
- minichatgpt - To Train ChatGPT In 5 Minutes☆169Updated 2 years ago
- Another ChatGLM2 implementation for GPTQ quantization☆55Updated last year
- Implements a cross-model scheme combining multi-LoRA weight integration and switching with Zero-Finetune (no-fine-tuning) enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is the LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be widely deployed at low energy cost, and …☆117Updated 2 years ago
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences.☆113Updated 2 years ago
- ☆124Updated last year
- llama inference for tencentpretrain☆99Updated 2 years ago
- CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-sourced Chinese-…☆173Updated last year
- A Chinese instruction dataset for fine-tuning LLMs☆28Updated 2 years ago
- ☆81Updated last year
- zero: zero-training LLM parameter tuning☆32Updated 2 years ago
- Open efforts to implement ChatGPT-like models and beyond.☆109Updated last year
- A more efficient GLM implementation!☆54Updated 2 years ago
- MultilingualShareGPT, the free multi-language corpus for LLM training☆73Updated 2 years ago
- The multilingual variant of GLM, a general language model trained with autoregressive blank infilling objective☆62Updated 2 years ago
- A text generation method that returns a generator, streaming each token out in real time during inference; based on Huggingface/…☆97Updated last year
- LLaMa Tuning with Stanford Alpaca Dataset using Deepspeed and Transformers☆51Updated 2 years ago
- Langport is a language model inference service☆94Updated 11 months ago
- moss chat finetuning☆51Updated last year
- chatglm-6b fine-tuning / LoRA / PPO / inference; training samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU☆164Updated 2 years ago
- GTS Engine: A powerful NLU Training System. GTS Engine (GTS-Engine) is an out-of-the-box, high-performance natural language understanding engine that focuses on few-shot tasks and can automatically produce NLP models from only a few samples.☆91Updated 2 years ago
- Luotuo QA (骆驼QA), a Chinese large language model for reading comprehension.☆75Updated 2 years ago
- Simple implementation of using LoRA from the peft library to fine-tune chatglm-6b☆84Updated 2 years ago
- ⚡ boost inference speed of GPT models in transformers by onnxruntime☆53Updated 2 years ago
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer contexts without fine-tuning☆48Updated 2 years ago
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct.☆134Updated 7 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best …☆412Updated 2 years ago
- ggml implementation of the baichuan13b model (adapted from llama.cpp)☆55Updated 2 years ago
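One entry above describes a text-generation method that returns a generator, streaming each token out during inference. The core pattern is library-independent: the decoding loop yields tokens one at a time so the caller can display them as they arrive instead of waiting for the full completion. A minimal sketch of that pattern, where the per-step "token" is a hypothetical placeholder standing in for a real model forward pass and sampling step:

```python
from typing import Iterator


def stream_generate(prompt: str, max_new_tokens: int = 5) -> Iterator[str]:
    """Yield one token at a time, as a streaming inference loop would.

    In a real implementation each iteration runs the model on the
    prompt plus the tokens generated so far and samples the next
    token; here we emit numbered placeholder tokens for illustration.
    """
    for i in range(max_new_tokens):
        token = f"tok{i}"  # placeholder: a real loop would sample from the model
        yield token


# The caller consumes tokens as they arrive, e.g. printing them live:
pieces = list(stream_generate("hello", max_new_tokens=3))
print(" ".join(pieces))
```

Because the function is a generator, downstream code (a CLI spinner, a web socket handler) can iterate over it directly and render partial output with no change to the decoding logic.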