Ino-Ichan / GIT-LLM
☆22 · Updated last year
Alternatives and similar repositories for GIT-LLM
Users interested in GIT-LLM are comparing it to the repositories listed below
- ☆16 · Updated 9 months ago
- LLaVA-JP is a Japanese VLM trained with the LLaVA method ☆62 · Updated 11 months ago
- ☆22 · Updated 4 months ago
- Ongoing research project for continual pre-training of LLMs (dense model) ☆42 · Updated 3 months ago
- ☆15 · Updated 9 months ago
- Japanese LLaMa experiment ☆53 · Updated 6 months ago
- ☆60 · Updated last year
- Ongoing research on training Mixture of Experts models ☆18 · Updated 9 months ago
- ☆174 · Updated last year
- A lightweight framework for evaluating visual-language models ☆30 · Updated last week
- ☆34 · Updated 2 months ago
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆106 · Updated 4 months ago
- Swallow project: large language model evaluation scripts ☆17 · Updated 2 months ago
- Mamba training library developed by Kotoba Technologies ☆71 · Updated last year
- Mixtral-based Ja-En (En-Ja) translation model ☆19 · Updated 5 months ago
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆32 · Updated last year
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- A script that uses GPT-4 to automatically evaluate language model responses ☆16 · Updated last year
- ☆47 · Updated 6 months ago
- CycleQD is a framework for parameter-space model merging ☆40 · Updated 4 months ago
- [2024 edition] Text classification with BERT ☆29 · Updated 11 months ago
- Unofficial entropix implementation for Gemma2, Llama, Qwen2, and Mistral ☆17 · Updated 5 months ago
- ☆135 · Updated last week
- ☆11 · Updated last year
- Preferred Generation Benchmark ☆82 · Updated last month
- ☆51 · Updated last year
- ☆23 · Updated last year
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation ☆21 · Updated last year
- ☆26 · Updated 7 months ago
- [ICLR 2025] SDTT: a simple and effective distillation method for discrete diffusion models ☆28 · Updated 2 months ago