CarperAI / OpenELM
Evolution Through Large Models
☆737 · Updated 2 years ago
Alternatives and similar repositories for OpenELM
Users interested in OpenELM are comparing it to the libraries listed below.
- ☆416 · Updated 2 years ago
- Code for Parsel 🐍 - generate complex programs with language models ☆439 · Updated 2 years ago
- Inference code for Persimmon-8B ☆412 · Updated 2 years ago
- ☆866 · Updated 2 years ago
- Convolutions for Sequence Modeling ☆910 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆840 · Updated last year
- Language Modeling with the H3 State Space Model ☆522 · Updated 2 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆308 · Updated 2 years ago
- ☆551 · Updated last year
- Reflexion: an autonomous agent with dynamic memory and self-reflection ☆388 · Updated 2 years ago
- A crude RLHF layer on top of nanoGPT with Gumbel-Softmax trick ☆293 · Updated 2 years ago
- Code for fine-tuning Platypus family LLMs using LoRA ☆630 · Updated last year
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆972 · Updated last year
- A repository for research on medium-sized language models. ☆526 · Updated 7 months ago
- Salesforce open-source LLMs with 8k sequence length. ☆722 · Updated 11 months ago
- Ask Me Anything language model prompting ☆546 · Updated 2 years ago
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,064 · Updated last year
- ☆1,064 · Updated last year
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆723 · Updated 2 years ago
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. ☆776 · Updated last year
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆734 · Updated last year
- Ongoing research training transformer models at scale ☆395 · Updated last year
- ☆380 · Updated 2 years ago
- Dromedary: towards helpful, ethical and reliable LLMs. ☆1,144 · Updated 4 months ago
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in PyTorch ☆421 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆615 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆823 · Updated 2 years ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆737 · Updated last year
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate … ☆638 · Updated 2 years ago
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆938 · Updated last year