cocrawler / cdx_toolkit
A toolkit for CDX indices such as Common Crawl and the Internet Archive's Wayback Machine
☆195 · Updated last month
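A CDX index maps URLs to the web-archive captures that contain them. Each line of a CDXJ index (the format served by services such as index.commoncrawl.org) holds a SURT-form URL key, a 14-digit timestamp, and a JSON blob of capture metadata. As a rough illustration of the data these toolkits work with, here is a minimal sketch of parsing one such line; the sample record and its field values are hypothetical, not taken from a real crawl:

```python
import json

def parse_cdxj_line(line: str) -> dict:
    """Split a CDXJ index line into SURT key, timestamp, and metadata fields."""
    # A CDXJ line is: <surt-key> <timestamp> <json-blob>; only the first
    # two spaces are separators, so limit the split to two.
    surt, timestamp, blob = line.rstrip("\n").split(" ", 2)
    record = json.loads(blob)
    record["surt"] = surt
    record["timestamp"] = timestamp
    return record

# Hypothetical sample line in the CDXJ shape (field values are made up)
sample = ('org,commoncrawl)/faq 20240512000000 '
          '{"url": "https://commoncrawl.org/faq", "status": "200", '
          '"mime": "text/html", "filename": "crawl-data/example.warc.gz", '
          '"offset": "5678", "length": "1234"}')

record = parse_cdxj_line(sample)
```

In practice, libraries like cdx_toolkit wrap the index queries and pagination for you; this sketch only shows the shape of the records they return.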
Alternatives and similar repositories for cdx_toolkit
Users interested in cdx_toolkit are comparing it to the libraries listed below.
- Streaming WARC/ARC library for fast web archive IO (☆441, updated last year)
- Index Common Crawl archives in tabular format (☆124, updated this week)
- A command-line tool for using the CommonCrawl Index API at http://index.commoncrawl.org/ (☆205, updated 7 years ago)
- Fast and robust date extraction from web pages, with Python or on the command line (☆142, updated last month)
- Process Common Crawl data with Python and Spark (☆451, updated last month)
- Statistics of Common Crawl monthly archives mined from URL index files (☆205, updated last week)
- A library to extract a publication date from a web page, along with a measure of its accuracy (☆41, updated 6 years ago)
- Tools to construct and process Common Crawl webgraphs (☆103, updated this week)
- Tools for bulk indexing of WARC/ARC files on Hadoop, EMR, or a local file system (☆47, updated 8 years ago)
- An Apache Spark framework for easy data processing, extraction, and derivation for web archives and archival collections, developed… (☆155, updated 2 months ago)
- Please note that the warc-indexer tool & code is now supported by NetArchiveSuite. The 'warc-indexer' directory and code that exists in t… (☆131, updated last month)
- Python library for reading and writing WARC files (☆246, updated 3 years ago)
- A spaCy wrapper for DBpedia Spotlight (☆112, updated 2 years ago)
- Article extraction benchmark: dataset and evaluation scripts (☆342, updated 3 months ago)
- A spaCy wrapper of Entity-Fishing (component) for named entity disambiguation and linking on Wikidata (☆169, updated 3 years ago)
- A spaCy wrapper of OpenTapioca for named entity linking on Wikidata (☆95, updated 2 years ago)
- A machine learning tool for fishing entities (☆266, updated 7 months ago)
- CLI for loading Wikidata subsets (or all of it) into Elasticsearch (☆71, updated 3 years ago)
- CoCrawler is a versatile web crawler built using modern tools and concurrency (☆191, updated 3 years ago)
- DKPro C4CorpusTools is a collection of tools for processing the CommonCrawl corpus, including Creative Commons license detection, boilerplate… (☆52, updated 5 years ago)
- Now included in rigour (☆152, updated last month)
- Analyze and extract Wikipedia article text and attributes, and store them in an ElasticSearch index or in JSON files (multilingual suppo… (☆47, updated 2 years ago)
- Demonstration of using Python to process the Common Crawl dataset with the mrjob framework (☆168, updated 3 years ago)
- Python tools for interacting with Wikidata (☆159, updated 2 years ago)
- 📂 Additional lookup tables and data resources for spaCy (☆113, updated 6 months ago)
- A polite and user-friendly downloader for Common Crawl data (☆63, updated 4 months ago)
- Extract text from HTML (☆135, updated 5 years ago)
- Coreference resolution for English, French, German and Polish, optimised for limited training data and easily extensible for further lang… (☆132, updated last year)
- Generate a SQLite database from Wikipedia & Wikidata dumps (☆35, updated last year)
- Detect and visualize text reuse (☆119, updated last year)