allenai / fm-cheatsheet
Website for hosting the Open Foundation Models Cheat Sheet.
☆268 · Updated 4 months ago
Alternatives and similar repositories for fm-cheatsheet
Users interested in fm-cheatsheet are comparing it to the repositories listed below.
- Manage scalable open LLM inference endpoints in Slurm clusters (☆271, updated last year)
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day (☆256, updated last year)
- A repository for research on medium-sized language models (☆510, updated 3 months ago)
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" (☆310, updated last year)
- RuLES: a benchmark for evaluating rule-following in language models (☆231, updated 6 months ago)
- Scaling Data-Constrained Language Models (☆341, updated 2 months ago)
- Multipack distributed sampler for fast padding-free training of LLMs (☆199, updated last year)
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts (☆223, updated last year)
- The official evaluation suite and dynamic data release for MixEval (☆245, updated 10 months ago)
- Extract full next-token probabilities via language model APIs (☆247, updated last year)
- Fast and more realistic evaluation of chat language models; includes a leaderboard (☆188, updated last year)
- Fast bare-bones BPE for modern tokenizer training (☆164, updated 2 months ago)
- The simplest, fastest repository for training/finetuning medium-sized GPTs (☆157, updated 2 months ago)
- Pre-training code for the Amber 7B LLM (☆167, updated last year)
- Let's build better datasets, together! (☆263, updated 8 months ago)
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach (☆211, updated last week)
- PyTorch building blocks for the OLMo ecosystem (☆286, updated this week)
- Code for training and evaluating Contextual Document Embedding models (☆197, updated 3 months ago)
- A puzzle to learn about prompting (☆133, updated 2 years ago)
- Batched LoRAs (☆345, updated 2 years ago)
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… (☆64, updated 10 months ago)
- Toolkit for attaching, training, saving, and loading new heads for transformer models (☆285, updated 6 months ago)
- Understand and test language model architectures on synthetic tasks (☆224, updated last month)
- Experiments with inference on Llama (☆104, updated last year)
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free (☆232, updated 10 months ago)
- git extension for {collaborative, communal, continual} model development (☆216, updated 9 months ago)
- Code for the NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" (☆229, updated last month)