CarperAI / OpenELM
Evolution Through Large Models
★ 733 · Updated last year
Alternatives and similar repositories for OpenELM
Users interested in OpenELM are comparing it to the libraries listed below.
- ★ 416 · Updated last year
- Code for Parsel 🐍 - generate complex programs with language models · ★ 432 · Updated 2 years ago
- Reflexion: an autonomous agent with dynamic memory and self-reflection · ★ 388 · Updated last year
- ★ 866 · Updated last year
- Inference code for Persimmon-8B · ★ 414 · Updated 2 years ago
- A repository for research on medium-sized language models · ★ 509 · Updated 3 months ago
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… · ★ 964 · Updated 11 months ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents · ★ 556 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. · ★ 826 · Updated last year
- ★ 546 · Updated last year
- Code for fine-tuning Platypus fam LLMs using LoRA · ★ 628 · Updated last year
- This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and bench… · ★ 595 · Updated last year
- Dromedary: towards helpful, ethical and reliable LLMs. · ★ 1,146 · Updated last week
- Used for adaptive human-in-the-loop evaluation of language and embedding models · ★ 311 · Updated 2 years ago
- Generate textbook-quality synthetic LLM pretraining data · ★ 505 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI · ★ 1,402 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. · ★ 771 · Updated 11 months ago
- Code for Quiet-STaR · ★ 740 · Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… · ★ 353 · Updated 2 years ago
- Convolutions for Sequence Modeling · ★ 898 · Updated last year
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate … · ★ 635 · Updated 2 years ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining · ★ 721 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" · ★ 560 · Updated 8 months ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & Javascript · ★ 601 · Updated last year
- PaL: Program-Aided Language Models (ICML 2023) · ★ 510 · Updated 2 years ago
- Salesforce open-source LLMs with 8k sequence length. · ★ 723 · Updated 7 months ago
- Language Modeling with the H3 State Space Model · ★ 518 · Updated last year
- This is the official code for the paper CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning (Neur… · ★ 544 · Updated 8 months ago
- Ask Me Anything language model prompting · ★ 547 · Updated 2 years ago
- Minimal library to train LLMs on TPU in JAX with pjit(). · ★ 301 · Updated last year