speed1313 / jax-llm
JAX implementation of Large Language Models. You can train a GPT-2-like model on 青空文庫 (Aozora Bunko, via the aozora-bunko-clean dataset) or any other text dataset.
☆13 · Updated last year
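For a flavor of what training such a model involves, below is a minimal JAX sketch of a next-token-prediction training step. It is an illustrative toy under assumed names and sizes (`init_params`, `train_step`, and an embed-and-project "model" with no attention layers), not the actual jax-llm API.

```python
import jax
import jax.numpy as jnp

VOCAB, DIM, SEQ = 256, 64, 32  # toy sizes, not the repo's real configuration

def init_params(key):
    # Two weight matrices: token embedding and output projection.
    k1, k2 = jax.random.split(key)
    return {
        "embed": jax.random.normal(k1, (VOCAB, DIM)) * 0.02,
        "out":   jax.random.normal(k2, (DIM, VOCAB)) * 0.02,
    }

def loss_fn(params, tokens):
    # Next-token prediction: predict token t+1 from token t.
    # A real GPT-2 inserts transformer blocks between embed and out;
    # they are omitted here to keep the sketch short.
    x = params["embed"][tokens[:, :-1]]      # (batch, seq-1, DIM)
    logits = x @ params["out"]               # (batch, seq-1, VOCAB)
    logp = jax.nn.log_softmax(logits, axis=-1)
    targets = tokens[:, 1:, None]            # gold next tokens
    nll = -jnp.take_along_axis(logp, targets, axis=-1)
    return nll.mean()

@jax.jit
def train_step(params, tokens, lr=1e-2):
    # One step of plain gradient descent on the cross-entropy loss.
    loss, grads = jax.value_and_grad(loss_fn)(params, tokens)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, loss

key = jax.random.PRNGKey(0)
params = init_params(key)
batch = jax.random.randint(key, (8, SEQ), 0, VOCAB)  # stand-in for tokenized corpus text
params, loss = train_step(params, batch)
print(loss)
```

A real GPT-2-style setup replaces the single projection with stacked transformer blocks and plain gradient descent with an optimizer such as Adam (e.g., via optax).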
Alternatives and similar repositories for jax-llm
Users interested in jax-llm are comparing it to the libraries listed below.
- Easily turn large English text datasets into Japanese text datasets using open LLMs. ☆22 · Updated 8 months ago
- ☆27 · Updated 11 months ago
- JMultiWOZ: A Large-Scale Japanese Multi-Domain Task-Oriented Dialogue Dataset, LREC-COLING 2024. ☆25 · Updated last year
- Preferred Generation Benchmark ☆85 · Updated last month
- Japanese-BPEEncoder ☆41 · Updated 4 years ago
- The evaluation scripts of JMTEB (Japanese Massive Text Embedding Benchmark) ☆74 · Updated last week
- Swallow project: an evaluation framework for post-trained large language models ☆21 · Updated last month
- Evaluation dataset for the Japanese honorific (keigo) conversion task ☆21 · Updated 2 years ago
- ☆34 · Updated 5 years ago
- ☆51 · Updated 2 years ago
- ☆19 · Updated last year
- NLP 100 Exercise 2025 ☆32 · Updated 6 months ago
- Training and evaluation scripts for JGLUE, a Japanese language understanding benchmark ☆17 · Updated this week
- Mixtral-based Ja-En (En-Ja) Translation model ☆19 · Updated 9 months ago
- Japanese Language Model Financial Evaluation Harness ☆75 · Updated 4 months ago
- ☆138 · Updated last week
- ☆49 · Updated 9 months ago
- Japanese instruction data (日本語指示データ) ☆24 · Updated 2 years ago
- ☆25 · Updated 4 months ago
- ☆86 · Updated 2 years ago
- Exploring Japanese SimCSE ☆69 · Updated last year
- 🛥 Vaporetto is a fast and lightweight pointwise-prediction-based tokenizer. This is a Python wrapper for Vaporetto. ☆20 · Updated 4 months ago
- Implementations of data-augmentation methods for Japanese NLP. ☆64 · Updated 2 years ago
- ☆33 · Updated last year
- Japanese Realistic Textual Entailment Corpus (NLP 2020, LREC 2020) ☆77 · Updated 2 years ago
- Text classification with BERT (2024 edition) ☆29 · Updated last year
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆114 · Updated this week
- ☆62 · Updated last year
- A robust text-processing pipeline framework enabling customizable, efficient, and metric-logged text preprocessing. ☆123 · Updated last month
- DistilBERT model pre-trained on 131 GB of Japanese web text. The teacher model is a BERT-base model built in-house at LINE. ☆45 · Updated 2 years ago