Cohere-Labs-Community / AI-Alignment-Cohort
☆ 28 · Updated 10 months ago
Alternatives and similar repositories for AI-Alignment-Cohort
Users that are interested in AI-Alignment-Cohort are comparing it to the libraries listed below
- Arrakis is a library to conduct, track, and visualize mechanistic interpretability experiments. ☆31 · Updated 4 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand. ☆191 · Updated 2 months ago
- A set of scripts and notebooks on LLM fine-tuning and dataset creation. ☆110 · Updated 11 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day. ☆256 · Updated last year
- ☆65 · Updated 10 months ago
- ☆43 · Updated 3 months ago
- Fast bare-bones BPE for modern tokenizer training. ☆164 · Updated 2 months ago
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. ☆125 · Updated last year
- ☆98 · Updated 3 weeks ago
- A puzzle to learn about prompting. ☆132 · Updated 2 years ago
- ☆141 · Updated last week
- This repository's goal is to compile all past presentations of the Hugging Face reading group. ☆48 · Updated last year
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 5 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines. ☆197 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)*. ☆87 · Updated last year
- Open source interpretability artifacts for R1. ☆157 · Updated 4 months ago
- 🧠 Starter templates for doing interpretability research. ☆73 · Updated 2 years ago
- An extension of the nanoGPT repository for training small MoE models. ☆181 · Updated 5 months ago
- Website. ☆54 · Updated 2 years ago
- Code for training and evaluating Contextual Document Embedding models. ☆197 · Updated 3 months ago
- Prune transformer layers. ☆69 · Updated last year
- RL from zero pretrain: can it be done? Yes. ☆261 · Updated last week
- In this repository, I'm going to implement increasingly complex LLM inference optimizations. ☆66 · Updated 3 months ago
- Complete implementation of Llama2 with/without KV cache and inference 🚀. ☆48 · Updated last year
- Building GPT ... ☆18 · Updated 9 months ago
- An introduction to LLM sampling. ☆79 · Updated 8 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated 3 months ago
- ☆384 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆153 · Updated 2 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free. ☆232 · Updated 10 months ago