EleutherAGI / summarisation
The intermediate goal of the project is to train a GPT-like architecture to summarise Reddit posts from human preferences. This has already been done by OpenAI, so it provides a good benchmark to compare against. We will use this intermediate step to lay the groundwork needed for on-the-fly learning using implicit models.
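Learning from human preferences typically starts by fitting a reward model on pairs of summaries where annotators marked one as better. A minimal sketch of the standard pairwise (Bradley-Terry) loss is below; the function name and the example reward values are illustrative, not taken from this repository.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise preference loss: -log sigmoid(r_chosen - r_rejected).

    The loss shrinks as the reward model scores the human-preferred
    summary higher than the rejected one.
    """
    diff = reward_chosen - reward_rejected
    # Numerically stable form of -log(sigmoid(diff)) for moderate diffs.
    return math.log(1.0 + math.exp(-diff))

# When the model already prefers the chosen summary by a margin of 2,
# the loss is small; with no margin it is log(2) ~ 0.693.
print(round(preference_loss(2.0, 0.0), 4))  # → 0.1269
print(round(preference_loss(0.0, 0.0), 4))  # → 0.6931
```

In a full pipeline, the scalar rewards would come from a learned head on the language model, and the tuned reward model would then guide policy optimisation (e.g. via RL) as in OpenAI's summarisation-from-feedback work.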
☆12 · Updated 3 years ago
Alternatives and similar repositories for summarisation:
- Official code for the paper "Context-Aware Language Modeling for Goal-Oriented Dialogue Systems" ☆34 · Updated 2 years ago
- A framework for implementing equivariant DL ☆10 · Updated 3 years ago
- Few-shot Learning with Auxiliary Data ☆27 · Updated last year
- The InterScript dataset contains interactive user feedback on scripts generated by a T5-XXL model. ☆11 · Updated 3 years ago
- Official code for the paper "Metadata Archaeology" ☆19 · Updated last year
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 2 years ago
- Usable implementation of the Emerging Symbol Binding Network (ESBN), in PyTorch ☆24 · Updated 4 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆48 · Updated 3 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation ☆49 · Updated 3 years ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated 11 months ago
- ☆23 · Updated 7 months ago
- Documentation for dynamic machine learning systems ☆29 · Updated 7 months ago
- ☆12 · Updated 3 years ago
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- Exploring Few-Shot Adaptation of Language Models with Tables ☆23 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- SMASHED is a toolkit designed to apply transformations to samples in datasets, such as fields extraction, tokenization, prompting, batchi… ☆33 · Updated 11 months ago
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols ☆15 · Updated 3 years ago
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning" ☆33 · Updated 3 months ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Implementation of Metaformer, but in an autoregressive manner ☆24 · Updated 2 years ago
- ☆23 · Updated 3 years ago
- ☆20 · Updated 3 years ago
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆35 · Updated 3 years ago
- This repo contains code for the paper "Psychologically-informed chain-of-thought prompts for metaphor understanding in large language mod… ☆14 · Updated last year
- Minimum Description Length probing for neural network representations ☆19 · Updated 2 months ago
- Agents that build knowledge graphs and explore textual worlds by asking questions ☆79 · Updated last year
- ☆40 · Updated 7 months ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆58 · Updated last year