SakanaAI / TAID
Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models"
☆106 · Updated 4 months ago
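The title describes a distillation scheme in which the target distribution interpolates between the student and the teacher, with the interpolation weight adapted over training time. Below is a minimal, hypothetical PyTorch sketch of such an interpolated loss; it is not the official TAID implementation, and the function name, the stop-gradient on the student term, and the schedule for `t` are assumptions.

```python
# Hypothetical sketch of a temporally interpolated distillation loss
# (illustrative only; not the official TAID code).
import torch
import torch.nn.functional as F

def interpolated_kd_loss(student_logits: torch.Tensor,
                         teacher_logits: torch.Tensor,
                         t: float) -> torch.Tensor:
    """KL divergence from a time-interpolated target to the student.

    `t` is assumed to increase from ~0 toward 1 during training, so the
    target drifts from the student's own distribution toward the teacher's.
    """
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    # The mixing term uses a detached copy so the target is treated as fixed.
    student_probs = student_log_probs.exp().detach()
    teacher_probs = F.softmax(teacher_logits, dim=-1)
    target = (1.0 - t) * student_probs + t * teacher_probs
    return F.kl_div(student_log_probs, target, reduction="batchmean")

# Example: logits over a 32k-token vocabulary for a batch of 4 positions.
student_logits = torch.randn(4, 32000, requires_grad=True)
teacher_logits = torch.randn(4, 32000)
loss = interpolated_kd_loss(student_logits, teacher_logits, t=0.3)
loss.backward()
```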
Alternatives and similar repositories for TAID
Users interested in TAID are comparing it to the libraries listed below.
- ☆47 · Updated 6 months ago
- Preferred Generation Benchmark ☆82 · Updated last month
- Japanese LLaMa experiment ☆53 · Updated 6 months ago
- ☆26 · Updated 7 months ago
- ☆60 · Updated last year
- Text classification with BERT (2024 edition) ☆29 · Updated 11 months ago
- Swallow project: evaluation scripts for large language models ☆17 · Updated 2 months ago
- Easily turn large English text datasets into Japanese text datasets using open LLMs. ☆20 · Updated 5 months ago
- ☆135 · Updated last week
- ☆15 · Updated 9 months ago
- CycleQD is a framework for parameter space model merging. ☆40 · Updated 4 months ago
- ☆16 · Updated 9 months ago
- A robust text processing pipeline framework enabling customizable, efficient, and metric-logged text preprocessing. ☆122 · Updated this week
- LLaVA-JP is a Japanese VLM trained with the LLaVA method ☆62 · Updated 11 months ago
- Browser-based chat UI for TinySwallow-1.5B that runs without API calls. ☆117 · Updated 4 months ago
- LLM evaluation project for Japanese tasks ☆83 · Updated this week
- A lightweight framework for evaluating visual-language models. ☆30 · Updated last week
- ☆51 · Updated last year
- Mixtral-based Ja-En (En-Ja) translation model ☆19 · Updated 5 months ago
- A script that uses GPT-4 to automatically evaluate language model responses ☆16 · Updated last year
- Text classification using LLMs and LoRA ☆97 · Updated last year
- ☆22 · Updated 4 months ago
- ☆22 · Updated last year
- Python-based chat demo for TinySwallow-1.5B that works completely offline ☆55 · Updated 4 months ago
- A framework for few-shot evaluation of autoregressive language models. ☆153 · Updated 9 months ago
- Ongoing research training Mixture of Experts models. ☆18 · Updated 9 months ago
- ☆23 · Updated last year
- [ICLR 2025] SDTT: a simple and effective distillation method for discrete diffusion models ☆28 · Updated 2 months ago
- Evaluation scripts for JMTEB (the Japanese Massive Text Embedding Benchmark) ☆61 · Updated 2 months ago
- ☆84 · Updated last year