CarperAI / OpenELM
Evolution Through Large Models
⭐ 731 · Updated last year
Alternatives and similar repositories for OpenELM
Users interested in OpenELM are comparing it to the repositories listed below.
- Code for Parsel 🐍 - generate complex programs with language models ⭐ 432 · Updated last year
- Inference code for Persimmon-8B ⭐ 415 · Updated last year
- ⭐ 416 · Updated last year
- A repository for research on medium-sized language models. ⭐ 509 · Updated 2 months ago
- Language Modeling with the H3 State Space Model ⭐ 519 · Updated last year
- ⭐ 544 · Updated last year
- Convolutions for Sequence Modeling ⭐ 895 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ⭐ 1,062 · Updated last year
- ⭐ 1,031 · Updated last year
- ⭐ 864 · Updated last year
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ⭐ 311 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ⭐ 824 · Updated 2 years ago
- Code for fine-tuning the Platypus family of LLMs using LoRA ⭐ 628 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ⭐ 824 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ⭐ 599 · Updated last year
- Salesforce open-source LLMs with 8k sequence length. ⭐ 721 · Updated 7 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ⭐ 710 · Updated last year
- An open-source implementation of Google's PaLM models ⭐ 822 · Updated last year
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ⭐ 649 · Updated 8 months ago
- PaL: Program-Aided Language Models (ICML 2023) ⭐ 505 · Updated 2 years ago
- Reflexion: an autonomous agent with dynamic memory and self-reflection ⭐ 388 · Updated last year
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick ⭐ 290 · Updated last year
- Ask Me Anything language model prompting ⭐ 546 · Updated 2 years ago
- Minimal library to train LLMs on TPU in JAX with pjit(). ⭐ 296 · Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ⭐ 353 · Updated 2 years ago
- This is the official code for the paper CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning (Neur… ⭐ 541 · Updated 7 months ago
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate… ⭐ 635 · Updated 2 years ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ⭐ 730 · Updated last year
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ⭐ 928 · Updated last year
- Ongoing research training transformer models at scale ⭐ 392 · Updated last year