elianap / divexplorer
☆11 · Updated 3 years ago
Alternatives and similar repositories for divexplorer
Users interested in divexplorer are comparing it to the libraries listed below.
- Pre-training BART model for the Italian Language ☆16 · Updated 2 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" ☆48 · Updated 3 years ago
- ☆20 · Updated 2 years ago
- 🔍 Multilingual Evaluation of English-Centric LLMs via Cross-Lingual Alignment ☆11 · Updated 2 months ago
- Measuring the Mixing of Contextual Information in the Transformer ☆29 · Updated 2 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆81 · Updated 8 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Do Multilingual Language Models Think Better in English? ☆41 · Updated last year
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆58 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year
- Apps built using Inspired Cognition's Critique. ☆58 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- A Python package to run inference with HuggingFace language and vision-language checkpoints, wrapping many convenient features. ☆27 · Updated 8 months ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings. ☆73 · Updated 9 months ago
- A curated list of research papers and resources on Cultural LLM. ☆44 · Updated 8 months ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Simple and scalable tools for data-driven pretraining data selection. ☆24 · Updated 3 months ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆55 · Updated 2 years ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- ☆11 · Updated last year
- Official Repo for the Paper "AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution o… ☆14 · Updated 4 months ago
- PyTorch reimplementation of REALM and ORQA ☆22 · Updated 3 years ago
- ☆54 · Updated last year
- CSCW 2023 Best Demo Award: Conversational AI Explanations to Support Human-AI Scientific Writing ☆14 · Updated last year
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆30 · Updated 3 years ago
- ITALIC: An ITALian Intent Classification Dataset ☆13 · Updated last year
- ☆65 · Updated last year
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆72 · Updated last year
- Code and data for the paper "Turning English-centric LLMs Into Polyglots: How Much Multilinguality Is Needed?" ☆25 · Updated this week