bobazooba / xllm
🦖 X—LLM: Cutting Edge & Easy LLM Finetuning
☆408 · Updated last year
Alternatives and similar repositories for xllm
Users interested in xllm are comparing it to the libraries listed below.
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆292 · Updated 9 months ago
- Tune any FALCON in 4-bit ☆465 · Updated 2 years ago
- The repository for the code of the UltraFastBERT paper ☆520 · Updated last year
- Domain Adapted Language Modeling Toolkit - E2E RAG ☆334 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆112 · Updated last year
- ☆468 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆675 · Updated last year
- ☆198 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated last year
- Let's build better datasets, together! ☆265 · Updated 11 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆277 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆733 · Updated last year
- ☆217 · Updated last year
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models ☆139 · Updated last year
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆246 · Updated last year
- A bagel, with everything. ☆325 · Updated last year
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ☆189 · Updated last year
- Easily embed, cluster and semantically label text datasets ☆584 · Updated last year
- Experiments with inference on Llama ☆103 · Updated last year
- Best practices for distilling large language models ☆592 · Updated last year
- Awesome synthetic (text) datasets ☆314 · Updated 3 weeks ago
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks ☆375 · Updated 5 months ago
- 📚 Datasets and models for instruction-tuning ☆238 · Updated 2 years ago
- Batched LoRAs ☆347 · Updated 2 years ago
- Our own implementation of 'Layer-Selective Rank Reduction' ☆240 · Updated last year
- Code for fine-tuning Platypus family LLMs using LoRA ☆631 · Updated last year
- Minimal example scripts for the Hugging Face Trainer, focused on staying under 150 lines ☆196 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆507 · Updated 2 years ago
- LLM Workshop by Sourab Mangrulkar ☆397 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆606 · Updated last year