microsoft / deep-language-networks
We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural-language prompts at each layer. We stack two such layers, feeding the output of one layer into the next. We call the stacked architecture a Deep Language Network (DLN).
☆94 · Updated last year
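The description above maps naturally to a few lines of code. Here is a minimal Python sketch of the idea, not the repository's actual API: `LanguageLayer`, `two_layer_dln`, and the `llm` callable are illustrative names, the toy LLM is a stub so the example runs offline, and the prompt-learning step that makes the prompts "learnable" is omitted.

```python
from dataclasses import dataclass
from typing import Callable

# Stand-in type for any text-completion LLM call; swap in a real client here.
LLM = Callable[[str], str]

@dataclass
class LanguageLayer:
    """One 'language layer': its learnable parameter is a natural-language prompt."""
    prompt: str

    def forward(self, llm: LLM, x: str) -> str:
        # Condition the (stochastic) LLM on this layer's prompt plus the incoming text.
        return llm(f"{self.prompt}\n\nInput: {x}\nOutput:")

def two_layer_dln(llm: LLM, layer1: LanguageLayer, layer2: LanguageLayer, x: str) -> str:
    # Stack two layers: the hidden "activation" passed between them is plain text.
    hidden = layer1.forward(llm, x)
    return layer2.forward(llm, hidden)

if __name__ == "__main__":
    # Toy LLM so the sketch runs without network access; it echoes the input line.
    toy_llm: LLM = lambda p: p.splitlines()[-2].removeprefix("Input: ")
    l1 = LanguageLayer("Extract the key claim from the input.")
    l2 = LanguageLayer("Classify the claim as 'fact' or 'opinion'.")
    print(two_layer_dln(toy_llm, l1, l2, "The sky is blue."))
```

Because the inter-layer activation is ordinary text, any completion-style model can be dropped in for `toy_llm` without changing the stacking logic.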
Alternatives and similar repositories for deep-language-networks
Users interested in deep-language-networks are comparing it to the repositories listed below.
- SILO Language Models code repository ☆81 · Updated last year
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year
- ☆149 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆57 · Updated last year
- ☆44 · Updated 8 months ago
- ☆29 · Updated last year
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- A repository for transformer critique learning and generation ☆90 · Updated last year
- RL algorithm: Advantage-Induced Policy Alignment ☆65 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆72 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 8 months ago
- This repository includes code for the paper "Does Localization Inform Editing? Surprising Differences in Where Knowledge Is Stored vs. Ca…" ☆61 · Updated 2 years ago
- This project studies the performance and robustness of language models and task-adaptation methods. ☆150 · Updated last year
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆70 · Updated 2 years ago
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆46 · Updated last year
- Code and data accompanying our arXiv paper "Faithful Chain-of-Thought Reasoning". ☆162 · Updated last year
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆48 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated last month
- ☆75 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆163 · Updated 3 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Flax. ☆75 · Updated 11 months ago
- ☆69 · Updated 11 months ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- Code for the paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML'23) ☆89 · Updated 2 years ago
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" ☆44 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆81 · Updated 11 months ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆99 · Updated 2 years ago