noanabeshima / wikipedia-downloader
Downloads 2020 English Wikipedia articles as plaintext
☆25 · Updated 2 years ago
Alternatives and similar repositories for wikipedia-downloader
Users interested in wikipedia-downloader are comparing it to the libraries listed below.
- ☆92 · Updated 3 years ago
- Python tools for processing the stackexchange data dumps into a text dataset for Language Models ☆86 · Updated 2 years ago
- A GPT-based generative LM for combined text and math formulas, leveraging tree-based formula encoding. Published as "Tree-Based Represent… ☆41 · Updated 2 years ago
- ☆160 · Updated 4 years ago
- The data processing pipeline for the Koala chatbot language model ☆118 · Updated 2 years ago
- ☆17 · Updated 9 months ago
- A library for squeakily cleaning and filtering language datasets. ☆49 · Updated 2 years ago
- An Implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" ☆43 · Updated last year
- Pre-training code for CrystalCoder 7B LLM ☆56 · Updated last year
- Reward Model framework for LLM RLHF ☆62 · Updated 2 years ago
- ☆32 · Updated 2 years ago
- ☆128 · Updated 2 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ☆171 · Updated 3 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆73 · Updated last year
- Safety Score for Pre-Trained Language Models ☆96 · Updated 2 years ago
- Repository for analysis and experiments in the BigCode project. ☆128 · Updated last year
- Open Implementations of LLM Analyses ☆107 · Updated last year
- ☆78 · Updated 2 years ago
- URL downloader supporting checkpointing and continuous checksumming. ☆19 · Updated 2 years ago
- ☆172 · Updated 10 months ago
- Download, parse, and filter data from Court Listener, part of the FreeLaw project. Data-ready for The Pile. ☆15 · Updated 2 years ago
- Data mapping framework for Rust ☆42 · Updated this week
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆56 · Updated 2 months ago
- Script for downloading GitHub. ☆98 · Updated last year
- 🤗 Disaggregators: Curated data labelers for in-depth analysis. ☆67 · Updated 2 years ago
- Developing tools to automatically analyze datasets ☆75 · Updated last year
- ☆85 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago