facebookresearch / belebele
Repo for the Belebele dataset, a massively multilingual reading comprehension dataset.
★337 · Updated 11 months ago
Alternatives and similar repositories for belebele
Users interested in belebele are comparing it to the libraries listed below.
- DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection and Instruction-Aware Models for Conversational AI ★517 · Updated 10 months ago
- 💬 Language Identification with Support for More Than 2000 Labels -- EMNLP 2023 ★175 · Updated 3 weeks ago
- Manage scalable open LLM inference endpoints in Slurm clusters ★277 · Updated last year
- An open collection of implementation tips, tricks and resources for training large language models ★489 · Updated 2 years ago
- ★118 · Updated 11 months ago
- Easily embed, cluster and semantically label text datasets ★584 · Updated last year
- Salesforce open-source LLMs with 8k sequence length. ★722 · Updated 10 months ago
- Let's build better datasets, together! ★264 · Updated 11 months ago
- Pipeline for pulling and processing online language model pretraining data from the web ★178 · Updated 2 years ago
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs ★115 · Updated 2 years ago
- ★556 · Updated last year
- A large-scale information-rich web dataset, featuring millions of real clicked query-document labels ★345 · Updated 11 months ago
- Code for Multilingual Eval of Generative AI paper published at EMNLP 2023 ★71 · Updated last year
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback ★97 · Updated 2 years ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ★220 · Updated 2 years ago
- Build, evaluate, understand, and fix LLM-based apps ★492 · Updated last year
- FastFit ⚡ When LLMs are Unfit Use FastFit ⚡ Fast and Effective Text Classification with Many Classes ★214 · Updated 2 months ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ★302 · Updated 2 years ago
- Datasets collection and preprocessing framework for NLP extreme multitask learning ★189 · Updated 5 months ago
- A library for preparing data for machine translation research (monolingual preprocessing, bitext mining, etc.) built by the FAIR NLLB te… ★287 · Updated last month
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ★232 · Updated last year
- The pipeline for the OSCAR corpus ★174 · Updated last month
- batched loras ★347 · Updated 2 years ago
- Reverse Instructions to generate instruction tuning data with corpus examples ★216 · Updated last year
- Due to restrictions of LLaMA, we try to reimplement BLOOM-LoRA (much less restricted BLOOM license here https://huggingface.co/spaces/bigs… ★184 · Updated 2 years ago
- Code for fine-tuning Platypus fam LLMs using LoRA ★631 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ★170 · Updated last year
- An open collection of methodologies to help with successful training of large language models. ★541 · Updated last year
- The FLORES+ Machine Translation Benchmark ★109 · Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ★357 · Updated 2 years ago