apple / ml-selfcond
Self-Conditioning Pre-Trained Language Models, ICML 2022
☆34 · Updated 3 years ago
Alternatives and similar repositories for ml-selfcond
Users interested in ml-selfcond are comparing it to the repositories listed below.
- Repo for "Smart Word Suggestions" (SWS) task and benchmark☆20Updated 2 years ago
- ☆42Updated 3 years ago
- SILO Language Models code repository☆83Updated last year
- Whispering Experts: Neural Interventions for Toxicity Mitigation in Language Models, ICML 2024☆25Updated last year
- This is a new metric that can be used to evaluate faithfulness of text generated by LLMs. The work behind this repository can be found he…☆31Updated 2 years ago
- ☆14Updated 3 months ago
- **ARCHIVED** Filesystem interface to 🤗 Hub☆59Updated 2 years ago
- ☆23Updated 3 years ago
- Experiments for efforts to train a new and improved t5☆76Updated last year
- [ACL 2023] Gradient Ascent Post-training Enhances Language Model Generalization☆29Updated last year
- ☆44Updated last year
- OSLO: Open Source for Large-scale Optimization☆175Updated 2 years ago
- Code and data from the paper 'Human Feedback is not Gold Standard'☆19Updated last year
- Anh - LAION's multilingual assistant datasets and models☆27Updated 2 years ago
- ☆77Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners☆116Updated 7 months ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning☆98Updated 2 years ago
- ☆26Updated 2 years ago
- ☆57Updated 2 years ago
- Code for Zero-Shot Tokenizer Transfer☆142Updated last year
- ☆30Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should also work with any Hugging Face text dataset ☆96 · Updated 3 years ago
- Developing tools to automatically analyze datasets ☆75 · Updated last year
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆95 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 4 months ago
- ☆57 · Updated last year
- An instruction-based benchmark for text improvements ☆142 · Updated 3 years ago
- ☆24 · Updated last month
- MEXMA: Token-level objectives improve sentence representations ☆42 · Updated last year