bobazooba / xllm
🦖 X—LLM: Cutting Edge & Easy LLM Finetuning
⭐402 · Updated last year
Alternatives and similar repositories for xllm
Users interested in xllm are comparing it to the libraries listed below.
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ⭐231 · Updated 7 months ago
- A bagel, with everything. ⭐321 · Updated last year
- ⭐520 · Updated 7 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ⭐137 · Updated 11 months ago
- Tune any FALCON in 4-bit ⭐467 · Updated last year
- experiments with inference on llama ⭐104 · Updated last year
- The repository for the code of the UltraFastBERT paper ⭐516 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ⭐699 · Updated last year
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ⭐240 · Updated last year
- FastFit ⚡ When LLMs are Unfit Use FastFit ⚡ Fast and Effective Text Classification with Many Classes ⭐207 · Updated last month
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes…) ⭐147 · Updated last year
- Let's build better datasets, together! ⭐260 · Updated 6 months ago
- ⭐455 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) ⭐837 · Updated last week
- ⭐203 · Updated last year
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ⭐197 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ⭐239 · Updated last year
- An open collection of implementation tips, tricks and resources for training large language models ⭐475 · Updated 2 years ago
- Late Interaction Models Training & Retrieval ⭐452 · Updated 2 weeks ago
- ⭐199 · Updated last year
- Easily embed, cluster and semantically label text datasets ⭐552 · Updated last year
- Automatically evaluate your LLMs in Google Colab ⭐643 · Updated last year
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ⭐456 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ⭐110 · Updated 9 months ago
- Convenience scripts to finetune (chat-)LLaMa3 and other models for any language ⭐310 · Updated last year
- A PyTorch library of curated Transformer models and their composable components ⭐891 · Updated last year
- Best practices for distilling large language models. ⭐554 · Updated last year
- ⭐124 · Updated 2 months ago
- Datasets and models for instruction-tuning ⭐238 · Updated last year
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ⭐187 · Updated last year