masa3141 / japanese-alpaca-lora
A Japanese instruction-finetuned LLaMA
☆128Updated 2 years ago
Alternatives and similar repositories for japanese-alpaca-lora
Users interested in japanese-alpaca-lora are comparing it to the libraries listed below
- Japanese chat dataset for building LLMs☆86Updated last year
- ☆141Updated 2 years ago
- Japanese LLaMa experiment☆54Updated last week
- ☆42Updated last year
- ☆62Updated last year
- ☆40Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best …☆412Updated 2 years ago
- Due to the restrictions of LLaMA, we try to reimplement BLOOM-LoRA (the much less restricted BLOOM license is here: https://huggingface.co/spaces/bigs…☆184Updated 2 years ago
- Simple implementation of using LoRA from the peft library to fine-tune ChatGLM-6B☆84Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models.☆154Updated last year
- ☆81Updated last year
- 📖 — Notebooks related to RWKV☆58Updated 2 years ago
- The multilingual variant of GLM, a general language model trained with autoregressive blank infilling objective☆62Updated 2 years ago
- CamelBell(驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo(骆驼), an open-sourced Chinese-…☆171Updated last year
- Multi-language Enhanced LLaMA☆303Updated 2 years ago
- MultilingualShareGPT, the free multi-language corpus for LLM training☆73Updated 2 years ago
- ☆123Updated last year
- The paddle implementation of meta's LLaMA.☆45Updated 2 years ago
- Open efforts to implement ChatGPT-like models and beyond.☆107Updated last year
- Script and instructions for fine-tuning a large RWKV model on your data with the Alpaca dataset☆31Updated 2 years ago
- deep learning☆148Updated 5 months ago
- Kanchil(鼷鹿) is the world's smallest even-toed ungulate. This open-source project explores whether small models (under 6B) can also be aligned with human preferences.☆113Updated 2 years ago
- ☆41Updated last year
- Instruct-tune LLaMA on consumer hardware☆73Updated 2 years ago
- A cross-model technique combining multi-LoRA weight ensemble switching with Zero-Finetune (zero fine-tuning) enhancement: LLM-Base+LLM-X+Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The approach is simple and efficient, aiming to enable low-power, widespread deployment of such language models, and…☆116Updated 2 years ago
- minichatgpt - To Train ChatGPT In 5 Minutes☆169Updated 2 years ago
- ChatGLM-6B fine-tuning.☆136Updated 2 years ago
- ☆86Updated 2 years ago
- MOSS chat fine-tuning☆51Updated last year
- Text classification with LLMs and LoRA☆97Updated 2 years ago