eric-prog / GPU-Grants
GPU-Grants - a list of GPU grants that I can think of
☆66 · Updated 4 months ago
Alternatives and similar repositories for GPU-Grants
Users interested in GPU-Grants are comparing it to the repositories listed below.
- Code for studying the super weight in LLMs ☆121 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆188 · Updated 2 months ago
- LLM-Merging: Building LLMs Efficiently through Merging ☆209 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆186 · Updated 3 weeks ago
- An extension of the nanoGPT repository for training small MoE models. ☆236 · Updated 11 months ago
- ☆83 · Updated 11 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆150 · Updated 4 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- nanoGPT-like codebase for LLM training ☆113 · Updated 3 months ago
- ☆38 · Updated 11 months ago
- Prune transformer layers ☆74 · Updated last year
- ⏰ AI conference deadline countdowns ☆322 · Updated last week
- Course Materials for Interpretability of Large Language Models (0368.4264) at Tel Aviv University ☆297 · Updated 3 weeks ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago
- List of AI Internships ☆131 · Updated 2 years ago
- Tutorials for Triton, a language for writing GPU kernels ☆73 · Updated 2 years ago
- Open-source replication of Anthropic's Crosscoders for model diffing ☆63 · Updated last year
- Open-source interpretability artefacts for R1 ☆170 · Updated 9 months ago
- ☆29 · Updated last year
- This repository collects all relevant resources about interpretability in LLMs ☆391 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆252 · Updated 3 weeks ago
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆160 · Updated last year
- ☆46 · Updated 8 months ago
- LoRA and DoRA from-scratch implementations ☆215 · Updated last year
- 🧠 Starter templates for doing interpretability research ☆76 · Updated 2 years ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆241 · Updated 2 weeks ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆198 · Updated 8 months ago
- Fast bare-bones BPE for modern tokenizer training ☆175 · Updated 7 months ago
- Arrakis is a library to conduct, track, and visualize mechanistic interpretability experiments. ☆31 · Updated 9 months ago
- A brief and partial summary of RLHF algorithms. ☆144 · Updated 11 months ago