masa3141 / japanese-alpaca-lora
A Japanese fine-tuned instruction-following LLaMA
☆126 · Updated 2 years ago
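For orientation, here is a minimal sketch, assuming an Alpaca-style recipe, of what LoRA instruction tuning of a LLaMA base model on Japanese data typically looks like with the Hugging Face transformers and peft libraries. The checkpoint name, hyperparameters, and the Japanese example record are illustrative assumptions, not taken from japanese-alpaca-lora's actual training script.

```python
# A minimal sketch, assuming an Alpaca-style LoRA recipe; names below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "huggyllama/llama-7b"  # placeholder base checkpoint (assumption)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Wrap the frozen base model with low-rank adapters on the attention projections;
# only the adapter weights are trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Alpaca-style prompt built from a Japanese instruction/response pair (illustrative).
example = {
    "instruction": "次の文章を英語に翻訳してください。",
    "input": "今日はいい天気です。",
    "output": "The weather is nice today.",
}
prompt = (
    "Below is an instruction that describes a task, paired with an input.\n\n"
    f"### Instruction:\n{example['instruction']}\n\n"
    f"### Input:\n{example['input']}\n\n"
    f"### Response:\n{example['output']}"
)

# One illustrative optimization step on the tokenized prompt (causal LM loss).
batch = tokenizer(prompt, return_tensors="pt")
batch["labels"] = batch["input_ids"].clone()
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=3e-4
)
loss = model(**batch).loss
loss.backward()
optimizer.step()

# Saves only the small LoRA adapter, which can later be merged into the base model.
model.save_pretrained("japanese-lora-adapter")
```

A real run would iterate this over a full translated instruction dataset, typically with the Trainer API, gradient accumulation, and 8-bit or fp16 loading to fit consumer GPUs.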
Alternatives and similar repositories for japanese-alpaca-lora:
Users interested in japanese-alpaca-lora are comparing it to the repositories listed below
- Japanese chat dataset for building LLMs ☆81 · Updated last year
- ☆142 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. ☆149 · Updated 6 months ago
- Japanese LLaMa experiment ☆53 · Updated 3 months ago
- ☆42 · Updated last year
- ☆59 · Updated 9 months ago
- ☆83 · Updated last year
- ☆39 · Updated last year
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆73 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆414 · Updated last year
- deep learning ☆150 · Updated 2 weeks ago
- Text classification using LLMs and LoRA ☆97 · Updated last year
- ☆124 · Updated last year
- Simple implementation of using LoRA from the peft library to fine-tune ChatGLM-6B ☆85 · Updated last year
- The multilingual variant of GLM, a general language model trained with an autoregressive blank-infilling objective ☆62 · Updated 2 years ago
- A robust text-processing pipeline framework enabling customizable, efficient, and metric-logged text preprocessing. ☆121 · Updated 4 months ago
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences. ☆113 · Updated last year
- Utility scripts for preprocessing Wikipedia texts for NLP ☆76 · Updated 11 months ago
- CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-sourced Chinese-… ☆173 · Updated last year
- 📖 — Notebooks related to RWKV ☆59 · Updated last year
- llama inference for tencentpretrain ☆98 · Updated last year
- ChatGLM-6B fine-tuning. ☆135 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆73 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆65 · Updated 2 years ago
- Script and instructions on how to fine-tune a large RWKV model on your data, such as the Alpaca dataset ☆31 · Updated last year
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆24 · Updated 2 years ago
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU ☆164 · Updated last year
- moss chat fine-tuning ☆50 · Updated 11 months ago
- Code implementation of Dynamic NTK-ALiBi for Baichuan (百川): inference on longer texts without any fine-tuning ☆47 · Updated last year
- Implements a cross-model scheme combining multi-LoRA weight ensemble switching with Zero-Finetune (no fine-tuning) enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be widely deployed with low energy consumption, and … ☆116 · Updated last year