microsoft / deep-language-networks
We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts at each layer. We stack two such layers, feeding the output of one layer into the next, and call the stacked architecture a Deep Language Network (DLN); a minimal sketch of the idea is given below.
☆95 · Updated last year
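To make the two-sentence description above concrete, here is a minimal Python sketch of a two-layer DLN. Everything in it is illustrative: `call_llm`, `LanguageLayer`, and `dln_forward` are hypothetical names, not the repository's actual API, and the LLM call is stubbed out with a toy stand-in.

```python
# Illustrative sketch of a two-layer Deep Language Network (DLN).
# NOTE: call_llm / LanguageLayer / dln_forward are hypothetical names,
# not the deep-language-networks API; call_llm is a toy stand-in.

def call_llm(prompt: str, text: str) -> str:
    """Toy stand-in: a real version would sample a completion from an LLM."""
    return f"[LLM output given prompt={prompt!r} and input={text!r}]"

class LanguageLayer:
    """A stochastic language layer whose only learnable parameter is its prompt."""

    def __init__(self, prompt: str):
        self.prompt = prompt  # the natural-language "weights" of this layer

    def forward(self, x: str) -> str:
        # The layer's output is a completion conditioned on its prompt
        # and the incoming text from the previous layer.
        return call_llm(self.prompt, x)

# Two stacked layers: the first layer's output is the second layer's input.
layer_1 = LanguageLayer("Think step by step about the question.")
layer_2 = LanguageLayer("Given the reasoning above, state the final answer.")

def dln_forward(x: str) -> str:
    hidden = layer_1.forward(x)  # intermediate natural-language "activation"
    return layer_2.forward(hidden)

print(dln_forward("Is 17 prime?"))
```

In this setting, "learning" means optimizing the two prompt strings themselves (for example, by proposing and scoring candidate prompts) rather than any numeric weights.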
Alternatives and similar repositories for deep-language-networks
Users interested in deep-language-networks are comparing it to the libraries listed below.
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" · ☆107 · Updated 2 years ago
- SILO Language Models code repository · ☆83 · Updated last year
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI · ☆95 · Updated 2 years ago
- ☆150 · Updated last year
- ☆44 · Updated last year
- RL algorithm: Advantage-Induced Policy Alignment · ☆66 · Updated 2 years ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs · ☆61 · Updated last year
- Self-Alignment with Principle-Following Reward Models · ☆169 · Updated 2 months ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation · ☆49 · Updated last year
- This repository includes code for the paper "Does Localization Inform Editing? Surprising Differences in Where Knowledge Is Stored vs. Ca…" · ☆60 · Updated 2 years ago
- A repository for transformer critique learning and generation · ☆89 · Updated 2 years ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model · ☆45 · Updated 2 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" · ☆71 · Updated last year
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions · ☆71 · Updated 2 years ago
- ☆76 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" · ☆48 · Updated last year
- ☆80 · Updated 8 months ago
- Building modular LMs with parameter-efficient fine-tuning · ☆114 · Updated last month
- Code accompanying the paper "Pretraining Language Models with Human Preferences" · ☆180 · Updated last year
- ☆56 · Updated 2 years ago
- Code and data accompanying our arXiv paper "Faithful Chain-of-Thought Reasoning" · ☆165 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… · ☆76 · Updated last year
- [NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea…" · ☆76 · Updated last year
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models · ☆47 · Updated 2 years ago
- The repository contains code for Adaptive Data Optimization · ☆29 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) · ☆72 · Updated last year
- ☆45 · Updated 2 years ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners · ☆116 · Updated 5 months ago
- Language Models of Code are Few-Shot Commonsense Learners (EMNLP 2022) · ☆86 · Updated 2 years ago
- ☆49 · Updated 2 years ago