HackerCupAI / starter-kits
☆68 · Updated last year
Alternatives and similar repositories for starter-kits
Users interested in starter-kits are comparing it to the libraries listed below.
- Seamless interface for using PyTorch distributed with Jupyter notebooks ☆57 · Updated 3 months ago
- A competition to get you started on the NeurIPS AI HackerCup ☆29 · Updated last year
- A collection of lightweight interpretability scripts to understand how LLMs think ☆87 · Updated last week
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆113 · Updated last year
- ☆29 · Updated last year
- ML/DL Math and Method notes ☆65 · Updated 2 years ago
- Starter pack for NeurIPS LLM Efficiency Challenge 2023 ☆129 · Updated 2 years ago
- Notebooks for fine-tuning PaliGemma ☆117 · Updated 8 months ago
- Arrakis is a library to conduct, track and visualize mechanistic interpretability experiments ☆31 · Updated 8 months ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism ☆112 · Updated last week
- Fine-tune an LLM to perform batch inference and online serving ☆115 · Updated 7 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆259 · Updated 2 years ago
- ☆51 · Updated 7 months ago
- ☆259 · Updated last month
- ☆19 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 7 months ago
- An introduction to LLM Sampling ☆79 · Updated last year
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆196 · Updated last year
- ☆150 · Updated 4 months ago
- A JAX-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT and more ☆297 · Updated last year
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆149 · Updated 3 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in language modeling ☆125 · Updated 3 months ago
- This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post ☆92 · Updated 2 years ago
- ☆45 · Updated 6 months ago
- Fine-tune Gemma 3 on an object detection task ☆95 · Updated 5 months ago
- ☆125 · Updated last year
- LLM training in simple, raw C/CUDA ☆15 · Updated last year
- ☆38 · Updated last year