bobazooba / xllm
🦖 X—LLM: Cutting Edge & Easy LLM Finetuning
☆403 · Updated last year
Alternatives and similar repositories for xllm
Users interested in xllm are comparing it to the libraries listed below.
- ☆461 · Updated last year
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆284 · Updated 5 months ago
- Domain Adapted Language Modeling Toolkit - E2E RAG ☆327 · Updated 9 months ago
- The repository for the code of the UltraFastBERT paper ☆516 · Updated last year
- Tune any FALCON in 4-bit ☆465 · Updated last year
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆240 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 9 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆703 · Updated last year
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆137 · Updated last year
- ☆199 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆268 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆110 · Updated 10 months ago
- Automatically evaluate your LLMs in Google Colab ☆649 · Updated last year
- LLM Workshop by Sourab Mangrulkar ☆388 · Updated last year
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ☆187 · Updated last year
- Best practices for distilling large language models. ☆569 · Updated last year
- Batched LoRAs ☆344 · Updated last year
- Experiments with inference on Llama ☆104 · Updated last year
- Let's build better datasets, together! ☆260 · Updated 7 months ago
- Open-source implementation of WizardLM to turn documents into Q&A pairs for LLM fine-tuning ☆312 · Updated 9 months ago
- An open collection of implementation tips, tricks and resources for training large language models ☆478 · Updated 2 years ago
- Fine-tuning embeddings for RAG with synthetic data ☆506 · Updated last year
- Code for fine-tuning Platypus family LLMs using LoRA ☆628 · Updated last year
- 📚 Datasets and models for instruction tuning ☆238 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆597 · Updated last year
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆197 · Updated last year
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023. ☆125 · Updated last year
- An open collection of methodologies to help with successful training of large language models. ☆507 · Updated last year
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks. ☆332 · Updated last month
- Repo for the Belebele dataset, a massively multilingual reading comprehension dataset. ☆335 · Updated 7 months ago