r-three / common-pile
Code for collecting, processing, and preparing datasets for the Common Pile
☆248 · Updated 4 months ago
Alternatives and similar repositories for common-pile
Users interested in common-pile are comparing it to the libraries listed below.
- ☆217 · Updated 2 months ago
- Python library to use Pleias-RAG models ☆67 · Updated 8 months ago
- An introduction to LLM Sampling ☆79 · Updated last year
- ☆59 · Updated last year
- Small Python package to measure OCR quality and other related metrics. ☆25 · Updated last year
- ☆101 · Updated 7 months ago
- Datamodels for Hugging Face tokenizers ☆86 · Updated last week
- ☆261 · Updated 9 months ago
- ☆53 · Updated last year
- Code for SaGe subword tokenizer (EACL 2023) ☆27 · Updated last year
- ☆59 · Updated last month
- ☆92 · Updated 3 weeks ago
- Code for training & evaluating Contextual Document Embedding models ☆202 · Updated 7 months ago
- ☆90 · Updated last month
- Crowd-sourced lists of URLs to help Common Crawl crawl under-resourced languages. See https://github.com/commoncrawl/web-languages-code/ … ☆68 · Updated this week
- ☆67 · Updated last year
- A massively multilingual modern encoder language model ☆118 · Updated 2 months ago
- Minimal PyTorch implementation of BM25 (with sparse tensors); a scoring sketch follows this list ☆104 · Updated 2 months ago
- Pre-train Static Word Embeddings ☆94 · Updated 4 months ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆189 · Updated 6 months ago
- BPE modification that removes intermediate tokens during tokenizer training. ☆25 · Updated last year
- Alice in Wonderland code base for experiments and raw experiment data ☆131 · Updated 3 months ago
- State-of-the-art paired encoder and decoder models (17M-1B params) ☆54 · Updated 5 months ago
- Download, parse, and filter data from PhilPapers. Data-ready for The-Pile. ☆19 · Updated 2 years ago
- ☆90 · Updated 6 months ago
- An attribution library for LLMs ☆46 · Updated last year
- A robust web archive analytics toolkit ☆126 · Updated 2 months ago
- ☆150 · Updated 4 months ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆67 · Updated 3 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆150 · Updated last week
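
For the BM25 entry above, here is a minimal sketch of the technique that repository's description names: BM25 scoring over a sparse term-count matrix using PyTorch sparse tensors. This is purely illustrative and is not code from the listed repo; every function name, variable, and the toy data below are assumptions.

```python
import torch

def bm25_scores(query_ids, doc_term_counts, doc_lengths, idf, k1=1.5, b=0.75):
    # doc_term_counts: sparse COO (n_docs, vocab) term-frequency matrix
    # doc_lengths:     dense (n_docs,) token counts per document
    # idf:             dense (vocab,) inverse document frequencies
    # query_ids:       1-D long tensor of query term ids
    avgdl = doc_lengths.float().mean()
    # Select only the query's columns; slicing the sparse matrix first
    # keeps the densified slab small.
    tf = doc_term_counts.index_select(1, query_ids).to_dense().float()  # (n_docs, |q|)
    length_norm = k1 * (1 - b + b * doc_lengths.float() / avgdl)        # (n_docs,)
    scores = idf[query_ids] * tf * (k1 + 1) / (tf + length_norm.unsqueeze(1))
    return scores.sum(dim=1)                                            # (n_docs,)

# Toy corpus: 3 documents over a 5-term vocabulary (illustrative data).
counts = torch.tensor([[2., 0., 1., 0., 0.],
                       [0., 1., 0., 3., 0.],
                       [1., 1., 0., 0., 2.]]).to_sparse()
lengths = torch.tensor([3, 4, 4])
df = torch.tensor([2., 2., 1., 1., 1.])             # document frequencies
idf = torch.log((3 - df + 0.5) / (df + 0.5) + 1.0)  # Lucene-style idf, kept non-negative
print(bm25_scores(torch.tensor([0, 3]), counts, lengths, idf))
```

Keeping the term-count matrix sparse matters because a real vocabulary-by-corpus matrix is overwhelmingly zeros; only the handful of columns a query touches ever need to be densified.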