CarperAI / OpenELM
Evolution Through Large Models
★735 · Updated 2 years ago
Alternatives and similar repositories for OpenELM
Users interested in OpenELM are comparing it to the libraries listed below.
- ★415 · Updated 2 years ago
- Code for Parsel - generate complex programs with language models · ★433 · Updated 2 years ago
- Language Modeling with the H3 State Space Model · ★519 · Updated 2 years ago
- ★548 · Updated last year
- Used for adaptive human-in-the-loop evaluation of language and embedding models · ★308 · Updated 2 years ago
- Inference code for Persimmon-8B · ★412 · Updated 2 years ago
- Reflexion: an autonomous agent with dynamic memory and self-reflection · ★388 · Updated 2 years ago
- ★1,057 · Updated last year
- Draw more samples · ★196 · Updated last year
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick · ★293 · Updated 2 years ago
- A repository for research on medium-sized language models · ★520 · Updated 5 months ago
- Convolutions for Sequence Modeling · ★904 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript · ★606 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" · ★561 · Updated 11 months ago
- Minimal library to train LLMs on TPU in JAX with pjit() · ★299 · Updated last year
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… · ★971 · Updated last year
- Finetuning Large Language Models on One Consumer GPU in 2 Bits · ★733 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… · ★352 · Updated last year
- This is the official code for the paper CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning (Neur… · ★556 · Updated 10 months ago
- ★313 · Updated last year
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents · ★555 · Updated 2 years ago
- Ask Me Anything language model prompting · ★546 · Updated 2 years ago
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" · ★1,063 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer · ★549 · Updated 3 months ago
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate… · ★637 · Updated 2 years ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining · ★732 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data · ★837 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture · ★775 · Updated last year
- Fast & Simple repository for pre-training and fine-tuning T5-style models · ★1,014 · Updated last year
- [NeurIPS '23 Spotlight] Thought Cloning: Learning to Think while Acting by Imitating Human Thinking · ★269 · Updated last year