HackerCupAI / starter-kits
☆67 · Updated last year
Alternatives and similar repositories for starter-kits
Users interested in starter-kits are comparing it to the libraries listed below.
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆110 · Updated last year
- A competition to get you started on the NeurIPS AI HackerCup ☆29 · Updated last year
- Seamless interface for using PyTorch Distributed with Jupyter notebooks ☆51 · Updated last month
- ☆29 · Updated last year
- Fine-tune an LLM to perform batch inference and online serving. ☆113 · Updated 5 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated 2 years ago
- ML/DL math and method notes ☆64 · Updated last year
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆104 · Updated last month
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023. ☆126 · Updated 2 years ago
- Complete implementation of Llama 2 with/without KV cache & inference 🚀 ☆48 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆239 · Updated 8 months ago
- Minimal example scripts for the Hugging Face Trainer, focused on staying under 150 lines ☆195 · Updated last year
- ☆244 · Updated 8 months ago
- Code for the NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- ☆31 · Updated 11 months ago
- Notebooks for fine-tuning PaliGemma ☆117 · Updated 6 months ago
- Highly commented implementations of Transformers in PyTorch ☆136 · Updated 2 years ago
- ☆124 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆194 · Updated 5 months ago
- An introduction to LLM sampling ☆79 · Updated 10 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆205 · Updated 7 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆109 · Updated 3 weeks ago
- Simple repository for training small reasoning models ☆44 · Updated 8 months ago
- Large-scale 4D-parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆87 · Updated last year
- A collection of autoregressive model implementations ☆86 · Updated 6 months ago
- LLM training in simple, raw C/CUDA ☆15 · Updated 10 months ago
- ☆38 · Updated last year
- ☆106 · Updated last week