colbyford / sparkitecture
A collection of “cookbook-style” scripts for simplifying data engineering and machine learning in Apache Spark.
☆13 · Updated 4 years ago
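The snippet below is not taken from sparkitecture itself; it is a minimal sketch of the kind of cookbook-style task the collection targets (assembling features and fitting an MLlib pipeline), assuming a local PySpark installation and placeholder file and column names.

```python
# Generic illustration only -- not code from the sparkitecture repository.
# Assumes a local PySpark installation and a CSV with a numeric "label" column.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("cookbook-example").getOrCreate()

# Load a hypothetical dataset; the path and column names are placeholders.
df = spark.read.csv("data/example.csv", header=True, inferSchema=True)

# Assemble feature columns into a single vector and fit a simple model.
assembler = VectorAssembler(inputCols=["feature_a", "feature_b"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(df)

model.transform(df).select("label", "prediction").show(5)
```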
Alternatives and similar repositories for sparkitecture
Users interested in sparkitecture are comparing it to the libraries listed below.
- PySpark phonetic and string matching algorithms ☆41 · Updated last year
- A Spark-based data comparison tool at scale which helps software development engineers compare a plethora of pair combinations o… ☆52 · Updated 7 months ago
- ☆25 · Updated 7 years ago
- Apache NiFi NLP Processor ☆18 · Updated 2 years ago
- ☆10 · Updated 3 years ago
- Read Delta tables without any Spark (a sketch follows this list) ☆47 · Updated last year
- Data validation library for PySpark 3.0.0 ☆33 · Updated 3 years ago
- DataQuality for BigData ☆147 · Updated 2 years ago
- PySpark Algorithms Book: https://www.amazon.com/dp/B07X4B2218/ref=sr_1_2 ☆88 · Updated 6 years ago
- Binding the GDELT universe in a Spark environment ☆26 · Updated 2 years ago
- Support for generating modern platforms dynamically with services such as Kafka, Spark, Streamsets, HDFS, ... ☆80 · Updated last week
- ☆19 · Updated 8 years ago
- Demo of the DuckDB Spark API: the same PySpark code, but with DuckDB under the hood (a sketch follows this list) ☆15 · Updated 2 years ago
- Machine Learning Pipeline Stages for Spark (exposed in Scala/Java + Python) ☆74 · Updated 2 years ago
- Real-world Spark pipeline examples ☆83 · Updated 7 years ago
- A single Docker image that combines Neo4j Mazerunner and Apache Spark GraphX into a powerful all-in-one graph processing engine ☆45 · Updated 6 years ago
- ☆22 · Updated last year
- Asynchronous actions for PySpark ☆48 · Updated 4 years ago
- Apache NiFi Custom Processor Extracting Text From Files with Apache Tika ☆34 · Updated 2 years ago
- Code examples for the Introduction to Kubeflow course ☆14 · Updated 5 years ago
- A systematic benchmark of Spark-SQL performance for processing vast RDF datasets ☆14 · Updated 3 years ago
- Bulk loading of large data sets into Neo4j ☆23 · Updated last week
- This repository contains queries and data from creating a dev.to/Wikidata Software Knowledge Graph using neosemantics and APOC NLP. ☆35 · Updated 3 years ago
- ☆107 · Updated 3 years ago
- Apache Spark-based data flow (ETL) framework which supports multiple read and write destinations of different types and also supports multiple… ☆26 · Updated 4 years ago
- A Scalable Data Cleaning Library for PySpark. ☆29 · Updated 6 years ago
- Mastering Spark for Data Science, published by Packt ☆49 · Updated 3 years ago
- How to manage Slowly Changing Dimensions with Apache Hive ☆55 · Updated 6 years ago
- Data processing and modelling framework for automating tasks (incl. Python & SQL transformations). ☆120 · Updated 4 months ago
- Fuzzy matching function in Spark (https://spark-packages.org/package/itspawanbhardwaj/spark-fuzzy-matching) ☆24 · Updated 6 years ago
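For the "Read Delta tables without any Spark" entry, the sketch below shows the general technique using the delta-rs Python bindings (the deltalake package); the listed repository may implement this differently, and the table path is a placeholder.

```python
# Sketch of reading a Delta table without a Spark cluster, using the
# delta-rs Python bindings (pip install deltalake). The path is a placeholder;
# the repository listed above may take a different approach.
from deltalake import DeltaTable

dt = DeltaTable("data/my_delta_table")   # hypothetical local table path
print(dt.version())                      # current table version
print(dt.schema())                       # Delta schema

# Materialise the table as pandas for downstream work.
pdf = dt.to_pandas()
print(pdf.head())
```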
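For the DuckDB Spark API demo entry, the sketch below illustrates the idea of running PySpark-style DataFrame code on DuckDB, assuming a recent duckdb release that ships the experimental Spark API; the data and column names are made up.

```python
# Hedged sketch of DuckDB's experimental Spark-compatible API: the same
# DataFrame-style code, but executed by DuckDB rather than a Spark cluster.
# Assumes a recent duckdb release (pip install duckdb pandas).
import pandas as pd
from duckdb.experimental.spark.sql import SparkSession
from duckdb.experimental.spark.sql.functions import col, lit

spark = SparkSession.builder.getOrCreate()

# Build a small DataFrame from pandas; the data here is made up.
pdf = pd.DataFrame({"age": [34, 45, 23], "name": ["Joan", "Georg", "Luna"]})
df = spark.createDataFrame(pdf)

# Familiar PySpark-style transformations, evaluated by DuckDB under the hood.
rows = df.withColumn("location", lit("Seattle")).select(col("name"), col("age")).collect()
print(rows)
```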