pacman100 / LLM-Workshop
LLM Workshop by Sourab Mangrulkar
☆395 · Updated last year
Alternatives and similar repositories for LLM-Workshop
Users interested in LLM-Workshop are comparing it to the libraries listed below.
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆111 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆667 · Updated last year
- Official repository for ORPO ☆465 · Updated last year
- Best practices for distilling large language models. ☆587 · Updated last year
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. ☆128 · Updated 2 years ago
- Generate textbook-quality synthetic LLM pretraining data ☆506 · Updated 2 years ago
- awesome synthetic (text) datasets ☆305 · Updated this week
- Manage scalable open LLM inference endpoints in Slurm clusters ☆276 · Updated last year
- A bagel, with everything. ☆324 · Updated last year
- An open collection of methodologies to help with successful training of large language models. ☆539 · Updated last year
- ☆552 · Updated 11 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆256 · Updated 2 years ago
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆290 · Updated 8 months ago
- Let's build better datasets, together! ☆264 · Updated 10 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆495 · Updated last year
- batched loras ☆348 · Updated 2 years ago
- A comprehensive deep dive into the world of tokens ☆227 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆252 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆729 · Updated last year
- An open collection of implementation tips, tricks and resources for training large language models ☆488 · Updated 2 years ago
- ☆466 · Updated last year
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆477 · Updated last year
- Collection of links, tutorials and best practices of how to collect the data and build end-to-end RLHF system to finetune Generative AI m… ☆224 · Updated 2 years ago
- Llama from scratch, or How to implement a paper without crying ☆580 · Updated last year
- Easily embed, cluster and semantically label text datasets ☆584 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆214 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆312 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆242 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated last year
- Website for hosting the Open Foundation Models Cheat Sheet. ☆268 · Updated 6 months ago