huggingface / fineweb-2
⭐ 64 · Updated last month

Alternatives and similar repositories for fineweb-2:
Users interested in fineweb-2 are comparing it to the libraries listed below.
- ⭐ 115 · Updated 3 months ago
- Lightweight demos for finetuning LLMs, powered by 🤗 transformers and open-source datasets. (⭐ 66 · Updated 2 months ago)
- Using open-source LLMs to build synthetic datasets for direct preference optimization. (⭐ 49 · Updated 10 months ago)
- The official repository for Inheritune. (⭐ 108 · Updated 3 months ago)
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment. (⭐ 53 · Updated 4 months ago)
- Code for Zero-Shot Tokenizer Transfer. (⭐ 119 · Updated this week)
- A collection of fine-tuning scripts to help researchers fine-tune Qwen 2 VL on HuggingFace datasets. (⭐ 59 · Updated 4 months ago)
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning. (⭐ 89 · Updated last year)
- Data Toolkit for Sailor Language Models. (⭐ 83 · Updated 3 weeks ago)
- A pipeline for LLM knowledge distillation. (⭐ 83 · Updated 5 months ago)
- ⭐ 47 · Updated 4 months ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya. (⭐ 99 · Updated last week)
- ⭐ 108 · Updated 3 months ago
- ⭐ 46 · Updated 2 months ago
- ⭐ 31 · Updated 6 months ago
- The RedStone repository includes code for preparing extensive datasets used in training large language models. (⭐ 30 · Updated last month)
- Codebase accompanying the Summary of a Haystack paper. (⭐ 75 · Updated 3 months ago)
- Code for the NeurIPS LLM Efficiency Challenge. (⭐ 54 · Updated 9 months ago)
- ⭐ 56 · Updated 3 months ago
- ⭐ 137 · Updated 9 months ago
- Code for the EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning". (⭐ 50 · Updated 3 months ago)
- Fine-tune ModernBERT on a large dataset with custom tokenizer training. (⭐ 52 · Updated 3 weeks ago)
- Data preparation code for the Amber 7B LLM. (⭐ 84 · Updated 8 months ago)
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024]. (⭐ 129 · Updated 2 months ago)
- Official implementation of "GPT or BERT: why not both?" (⭐ 45 · Updated 2 months ago)
- Prune transformer layers. (⭐ 67 · Updated 7 months ago)
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024). (⭐ 74 · Updated 2 months ago)
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages. (⭐ 41 · Updated last month)
- Let's build better datasets, together! (⭐ 244 · Updated 3 weeks ago)
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… (⭐ 45 · Updated last year)