masa3141 / japanese-alpaca-lora
A Japanese fine-tuned instruction LLaMA
☆126 Updated 2 years ago
Alternatives and similar repositories for japanese-alpaca-lora:
Users interested in japanese-alpaca-lora are comparing it to the repositories listed below
- ☆142 Updated 2 years ago
- Japanese chat dataset for building LLMs ☆83 Updated last year
- Japanese LLaMa experiment ☆53 Updated 4 months ago
- A framework for few-shot evaluation of autoregressive language models. ☆151 Updated 7 months ago
- ☆42 Updated last year
- ☆60 Updated 10 months ago
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆72 Updated 2 years ago
- ☆38 Updated last year
- 📖 — Notebooks related to RWKV ☆59 Updated last year
- deep learning ☆149 Updated last month
- Instruct-tune LLaMA on consumer hardware ☆74 Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆414 Updated last year
- CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-sourced Chinese-… ☆173 Updated last year
- Simple implementation of using LoRA from the peft library to fine-tune ChatGLM-6B ☆85 Updated 2 years ago
- ☆83 Updated last year
- The multilingual variant of GLM, a general language model trained with an autoregressive blank-infilling objective ☆62 Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆135 Updated 4 months ago
- Multi-language Enhanced LLaMA ☆301 Updated 2 years ago
- ☆82 Updated 11 months ago
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference on longer texts without fine-tuning ☆47 Updated last year
- ChatGLM-6B fine-tuning. ☆135 Updated 2 years ago
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆86 Updated last year
- A more efficient GLM implementation! ☆55 Updated 2 years ago
- MOSS chat fine-tuning ☆50 Updated last year
- llama inference for tencentpretrain ☆98 Updated last year
- Script and instructions for fine-tuning a large RWKV model on your data with the Alpaca dataset ☆31 Updated 2 years ago
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences. ☆113 Updated 2 years ago
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆24 Updated 2 years ago
- Moss Vortex is a lightweight and high-performance deployment and inference backend engineered specifically for MOSS 003, providing a weal… ☆37 Updated 2 years ago
- ☆124 Updated last year