berlino / seq_icl
⭐51 · Updated 11 months ago
Alternatives and similar repositories for seq_icl:
Users that are interested in seq_icl are comparing it to the libraries listed below
- ⭐46 · Updated last year
- A fusion of a linear layer and a cross entropy loss, written for pytorch in triton. ⭐65 · Updated 8 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ⭐111 · Updated 4 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ⭐71 · Updated 5 months ago
- Minimal but scalable implementation of large language models in JAX ⭐34 · Updated 5 months ago
- Stick-breaking attention ⭐52 · Updated last month
- Understand and test language model architectures on synthetic tasks. ⭐192 · Updated last month
- Language models scale reliably with over-training and on downstream tasks ⭐96 · Updated last year
- ⭐37 · Updated last year
- This repo is based on https://github.com/jiaweizzhao/GaLore ⭐26 · Updated 7 months ago
- ⭐31 · Updated last year
- ⭐77 · Updated 8 months ago
- nanoGPT-like codebase for LLM training ⭐94 · Updated 3 weeks ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ⭐105 · Updated 5 months ago
- ⭐52 · Updated 6 months ago
- A toolkit for scaling law research ⭐49 · Updated 2 months ago
- ⭐53 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ⭐59 · Updated 2 months ago
- ⭐25 · Updated last year
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ⭐45 · Updated 3 weeks ago
- A library for efficient patching and automatic circuit discovery. ⭐62 · Updated 2 months ago
- ⭐45 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ⭐53 · Updated last year
- Universal Neurons in GPT2 Language Models ⭐27 · Updated 10 months ago
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ⭐18 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ⭐135 · Updated 11 months ago
- Experiments on the impact of depth in transformers and SSMs. ⭐25 · Updated 5 months ago
- ⭐79 · Updated last year
- Simple and efficient pytorch-native transformer training and inference (batched) ⭐73 · Updated last year
- ⭐53 · Updated 9 months ago