HackerCupAI / starter-kits
☆62 · Updated last month
Related projects
Alternatives and complementary repositories for starter-kits
- A competition to get you started on the NeurIPS AI HackerCup ☆27 · Updated last month
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆93 · Updated last month
- ☆20 · Updated last month
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023 ☆118 · Updated last year
- End-to-End LLM Guide ☆97 · Updated 4 months ago
- My writings about ARC (Abstraction and Reasoning Corpus) ☆59 · Updated last week
- RAGs: Simple implementations of Retrieval-Augmented Generation (RAG) systems ☆83 · Updated 7 months ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆80 · Updated 11 months ago
- History files recording human interaction while solving ARC tasks ☆95 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆229 · Updated 3 weeks ago
- Minimal example scripts for the Hugging Face Trainer, focused on staying under 150 lines ☆195 · Updated 6 months ago
- ☆29 · Updated 4 months ago
- ☆177 · Updated 3 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆221 · Updated 3 weeks ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆252 · Updated last year
- Slides, notes, and materials for the workshop ☆306 · Updated 5 months ago
- Code for the NeurIPS LLM Efficiency Challenge ☆54 · Updated 7 months ago
- The simplest, fastest repository for training/fine-tuning medium-sized GPTs ☆84 · Updated this week
- ☆40 · Updated 6 months ago
- Highly commented implementations of Transformers in PyTorch ☆129 · Updated last year
- Our solution for the ARC challenge 2024 ☆33 · Updated last week
- Website hosting the Open Foundation Models Cheat Sheet ☆257 · Updated 4 months ago
- 🧠 Starter templates for doing interpretability research ☆63 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆179 · Updated 5 months ago
- JAX implementation of Black Forest Labs' Flux.1 family of models ☆14 · Updated last month
- Comprehensive benchmarks of XLA-compatible pre-trained models in Keras ☆36 · Updated last year
- Draw more samples ☆179 · Updated 5 months ago
- zero-to-lightning ☆28 · Updated 6 months ago
- ☆101 · Updated 3 months ago
- Material for the seminar series on Large Language Models ☆24 · Updated 7 months ago