google-research / cascades
A Python library that enables complex compositions of language models, such as scratchpads, chain of thought, tool use, selection-inference, and more.
☆216 · Updated 7 months ago
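As a rough illustration of the kind of composition described above, the sketch below chains a scratchpad-style reasoning step into a final answer. The `lm` callable and the prompt strings are hypothetical placeholders, not the cascades API; treat it as a conceptual outline under those assumptions.

```python
# Conceptual sketch only: `lm` is a hypothetical completion function standing in
# for a language-model call; this is NOT the cascades API.
from typing import Callable

def chain_of_thought(question: str, lm: Callable[[str], str]) -> str:
    """Compose two LM calls: a scratchpad of reasoning, then a final answer."""
    # First call: elicit intermediate, step-by-step reasoning (a "scratchpad").
    scratchpad = lm(f"Question: {question}\nLet's think step by step:")
    # Second call: condition the final answer on the question and the scratchpad.
    return lm(f"Question: {question}\nReasoning: {scratchpad}\nAnswer:")
```

Compositions like this can be nested or combined with tool calls and selection-inference steps, which is the kind of orchestration the library targets.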
Alternatives and similar repositories for cascades
Users interested in cascades are comparing it to the libraries listed below.
- A domain-specific probabilistic programming language for modeling and inference with language models ☆140 · Updated 8 months ago
- Probabilistic LLM evaluations. [CogSci2023; ACL2023] ☆73 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- A repository for transformer critique learning and generation ☆89 · Updated 2 years ago
- git extension for {collaborative, communal, continual} model development ☆217 · Updated last year
- Used for adaptive human in the loop evaluation of language and embedding models. ☆308 · Updated 2 years ago
- A library to create and manage configuration files, especially for machine learning projects. ☆79 · Updated 3 years ago
- Train very large language models in Jax. ☆210 · Updated 2 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆197 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- For experiments involving instruct gpt. Currently used for documenting open research questions. ☆71 · Updated 3 years ago
- Neural Networks and the Chomsky Hierarchy ☆213 · Updated last year
- ☆66 · Updated 3 years ago
- Extract full next-token probabilities via language model APIs ☆248 · Updated last year
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. ☆180 · Updated 3 years ago
- ☆159 · Updated 2 years ago
- Utilities for the HuggingFace transformers library ☆73 · Updated 2 years ago
- Repository for the code of the "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding" paper, NAACL'22 ☆66 · Updated 3 years ago
- ☆319 · Updated last year
- Inference code for LLaMA models in JAX ☆120 · Updated last year
- ☆95 · Updated last year
- Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale, TACL (2022) ☆133 · Updated last month
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" ☆210 · Updated 2 years ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆216 · Updated 2 weeks ago
- Language-annotated Abstraction and Reasoning Corpus ☆98 · Updated 2 years ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆202 · Updated 2 years ago
- ☆214 · Updated 2 years ago
- ☆167 · Updated 2 years ago
- Mechanistic Interpretability for Transformer Models ☆53 · Updated 3 years ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated 2 years ago