noanabeshima / wikipedia-downloader
Downloads 2020 English Wikipedia articles as plaintext
☆25 · Updated 2 years ago
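For context, here is a minimal sketch of fetching a single English Wikipedia article as plaintext through the MediaWiki TextExtracts API with `requests`. This is an illustration under my own assumptions (the `fetch_plaintext` name and its parameters are hypothetical), not this repository's actual method, which presumably works from a fixed 2020 dump rather than the live API.

```python
import requests

# Hypothetical sketch: fetch one English Wikipedia article as plaintext
# via the MediaWiki TextExtracts API. This repo presumably processes a
# fixed 2020 dump instead of hitting the live API.
def fetch_plaintext(title: str) -> str:
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "extracts",   # TextExtracts extension
            "explaintext": 1,     # return plain text, not HTML
            "format": "json",
            "titles": title,
        },
        headers={"User-Agent": "wikipedia-plaintext-sketch/0.1"},
        timeout=30,
    )
    resp.raise_for_status()
    # The result is keyed by page ID; take the single page returned.
    pages = resp.json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

if __name__ == "__main__":
    print(fetch_plaintext("Wikipedia")[:300])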
Alternatives and similar repositories for wikipedia-downloader
Users interested in wikipedia-downloader are comparing it to the repositories listed below.
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated last year
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models. ☆81 · Updated last year
- A new metric that can be used to evaluate the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- Script for downloading GitHub repositories. ☆95 · Updated 11 months ago
- Repository for analysis and experiments in the BigCode project. ☆119 · Updated last year
- Official repo for the NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆65 · Updated last year
- The data processing pipeline for the Koala chatbot language model. ☆117 · Updated 2 years ago
- Plug-and-play search interfaces with Pyserini and Hugging Face. ☆32 · Updated last year
- Techniques used to run BLOOM inference in parallel. ☆37 · Updated 2 years ago
- The test set for Koala. ☆45 · Updated 2 years ago
- Code for the ICLR paper https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- Script for processing OpenAI's PRM800K process-supervision dataset into an Alpaca-style instruction-response format. ☆27 · Updated last year
- A repository for transformer critique learning and generation. ☆90 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts. ☆24 · Updated last year
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches. ☆74 · Updated last year
- Code for removing benchmark data from your training data to help combat data snooping. ☆25 · Updated 2 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following. ☆79 · Updated 9 months ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit. ☆63 · Updated 2 years ago
- Code for "Democratizing Reasoning Ability: Tailored Learning from Large Language Model" (EMNLP 2023). ☆35 · Updated last year
- Tk-Instruct is a Transformer model tuned to solve many NLP tasks by following instructions. ☆180 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs. ☆71 · Updated last year
- Pre-training code for the CrystalCoder 7B LLM. ☆54 · Updated last year
- Reward model framework for LLM RLHF. ☆61 · Updated 2 years ago