allenai / fm-cheatsheet
Website for hosting the Open Foundation Models Cheat Sheet.
☆267 · Updated 6 months ago
Alternatives and similar repositories for fm-cheatsheet
Users that are interested in fm-cheatsheet are comparing it to the libraries listed below
- Manage scalable open LLM inference endpoints in Slurm clusters ☆274 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆256 · Updated 2 years ago
- ☆142 · Updated 2 months ago
- A repository for research on medium-sized language models. ☆518 · Updated 5 months ago
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- Scaling Data-Constrained Language Models ☆342 · Updated 4 months ago
- RuLES: a benchmark for evaluating rule-following in language models ☆238 · Updated 8 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆311 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆224 · Updated last month
- Multipack distributed sampler for fast padding-free training of LLMs ☆201 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆252 · Updated last year
- Code for training & evaluating Contextual Document Embedding models ☆200 · Updated 5 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆215 · Updated 2 months ago
- Let's build better datasets, together! ☆264 · Updated 10 months ago
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- PyTorch building blocks for the OLMo ecosystem ☆317 · Updated this week
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆290 · Updated 8 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆218 · Updated last week
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆64 · Updated last month
- ☆256 · Updated 7 months ago
- A comprehensive deep dive into the world of tokens ☆226 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆171 · Updated 4 months ago
- Fast bare-bones BPE for modern tokenizer training ☆168 · Updated 4 months ago
- ☆268 · Updated 9 months ago
- ☆138 · Updated 2 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆193 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated 2 years ago
- batched loras ☆347 · Updated 2 years ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆132 · Updated 10 months ago
- ☆94 · Updated 2 years ago