apple / ml-selfcond
Self-Conditioning Pre-Trained Language Models, ICML 2022
☆33 · Updated 3 years ago
Alternatives and similar repositories for ml-selfcond
Users interested in ml-selfcond are comparing it to the libraries listed below.
- ☆42 · Updated 2 years ago
- **ARCHIVED** Filesystem interface to 🤗 Hub ☆58 · Updated 2 years ago
- Whispering Experts: Neural Interventions for Toxicity Mitigation in Language Models, ICML 2024 ☆22 · Updated last year
- Repo for "Smart Word Suggestions" (SWS) task and benchmark ☆20 · Updated last year
- This is a new metric that can be used to evaluate faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- OSLO: Open Source for Large-scale Optimization ☆174 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- The package used to build the documentation of our Hugging Face repos ☆132 · Updated this week
- SILO Language Models code repository ☆83 · Updated last year
- Developing tools to automatically analyze datasets ☆75 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- URL downloader supporting checkpointing and continuous checksumming. ☆19 · Updated last year
- [ACL 2023] Gradient Ascent Post-training Enhances Language Model Generalization ☆28 · Updated last year
- some common Huggingface transformers in maximal update parametrization (µP) ☆86 · Updated 3 years ago
- A minimal PyTorch Lightning OpenAI GPT w DeepSpeed Training! ☆113 · Updated 2 years ago
- An instruction-based benchmark for text improvements. ☆143 · Updated 2 years ago
- Experiments for efforts to train a new and improved t5 ☆75 · Updated last year
- ☆19 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆35 · Updated 2 years ago
- MEXMA: Token-level objectives improve sentence representations ☆42 · Updated 9 months ago
- ☆44 · Updated 11 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last month
- Calculating Expected Time for training LLM. ☆38 · Updated 2 years ago
- We release the UICaption dataset. The dataset consists of UI images (icons and screenshots) and associated text descriptions. This datase… ☆41 · Updated 2 years ago
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆94 · Updated last year
- Anh - LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated 2 years ago
- ☆76 · Updated last year
- ☆149 · Updated last year
- This repository contains code for cleaning your training data of benchmark data to help combat data snooping. ☆27 · Updated 2 years ago