iPieter / universal-distillation
🧪 Create domain-adapted language models by distilling from many pre-trained LMs
☆10 · Updated 2 years ago
Alternatives and similar repositories for universal-distillation:
Users interested in universal-distillation are comparing it to the libraries listed below.
- ☆15 · Updated 2 weeks ago
- ☆19 · Updated 2 weeks ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆31 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆23 · Updated 3 months ago
- Repository for Skill Set Optimization ☆12 · Updated 9 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated last year
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆30 · Updated last week
- ☆23 · Updated 7 months ago
- This repo contains code for the paper "Psychologically-informed chain-of-thought prompts for metaphor understanding in large language mod… ☆14 · Updated last year
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… ☆31 · Updated 10 months ago
- Byte-sized text games for code generation tasks on virtual environments ☆19 · Updated 9 months ago
- Minimum Description Length probing for neural network representations ☆19 · Updated 2 months ago
- ☆14 · Updated 6 months ago
- [EACL 2023] CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification ☆40 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- A Data Source for Reasoning Embodied Agents ☆19 · Updated last year
- Code and Dataset for Learning to Solve Complex Tasks by Talking to Agents ☆24 · Updated 2 years ago
- Embedding Recycling for Language Models ☆38 · Updated last year
- ☆23 · Updated 2 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆55 · Updated 7 months ago
- Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023) ☆22 · Updated last year
- ☆25 · Updated 2 years ago
- SMASHED is a toolkit designed to apply transformations to samples in datasets, such as fields extraction, tokenization, prompting, batchi… ☆33 · Updated 11 months ago
- Few-shot Learning with Auxiliary Data ☆27 · Updated last year
- List of papers on Self-Correction of LLMs ☆72 · Updated 3 months ago
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆34 · Updated last year
- ☆11 · Updated 4 months ago
- ☆14 · Updated 2 years ago
- The official implementation of "Distilling Relation Embeddings from Pre-trained Language Models, EMNLP 2021 main conference", a high-qual… ☆46 · Updated 4 months ago
- Can LLMs generate code-mixed sentences through zero-shot prompting? ☆11 · Updated 2 years ago