GoogleCloudPlatform / dataproc-pubsub-spark-streaming
☆31 · Updated 6 years ago
Alternatives and similar repositories for dataproc-pubsub-spark-streaming:
Users interested in dataproc-pubsub-spark-streaming often compare it to the repositories listed below.
- Example Spark applications that run on Kubernetes and access GCP products, e.g., GCS, BigQuery, and Cloud PubSub ☆37 · Updated 7 years ago
- Scalable CDC Pattern Implemented using PySpark ☆18 · Updated 5 years ago
- Sample code with integration between Data Catalog and Hive data source. ☆25 · Updated last month
- ☆47 · Updated 10 months ago
- A Giter8 template for scio ☆31 · Updated last month
- Google BigQuery support for Spark, Structured Streaming, SQL, and DataFrames with easy Databricks integration. ☆70 · Updated last year
- Cloud Spanner Connector for Apache Spark ☆17 · Updated 2 months ago
- Stream Avro SpecificRecord objects in BigQuery using Cloud Dataflow ☆13 · Updated 3 years ago
- Oozie Workflow to Airflow DAGs migration tool ☆88 · Updated 2 weeks ago
- Mirror of Apache Beam ☆10 · Updated 4 years ago
- Multi Cloud Data Tokenization Solution By Using Dataflow and Cloud DLP ☆90 · Updated 7 months ago
- Spark on Kubernetes infrastructure Docker images repo ☆37 · Updated 2 years ago
- Cloud Dataproc: Samples and Utils ☆201 · Updated 2 months ago
- Solution Accelerators for Serverless Spark on GCP, the industry's first auto-scaling and serverless Spark as a service ☆66 · Updated 10 months ago
- BigQuery bundle for Apache NiFi ☆15 · Updated 5 years ago
- Lighthouse is a library for data lakes built on top of Apache Spark. It provides high-level APIs in Scala to streamline data pipelines an… ☆61 · Updated 6 months ago
- The iterative broadcast join example code. ☆69 · Updated 7 years ago
- Contains example dags and terraform code to create a composer with a node pool to run pods ☆13 · Updated 4 years ago
- These are some code examples ☆55 · Updated 5 years ago
- Composable filesystem hooks and operators for Apache Airflow. ☆17 · Updated 3 years ago
- type-class based data cleansing library for Apache Spark SQL ☆78 · Updated 5 years ago
- ☆66 · Updated 7 months ago
- ☆81 · Updated last year
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆75 · Updated 10 months ago
- Make your libraries magically appear in Databricks. ☆47 · Updated last year
- PySpark data-pipeline testing and CICD ☆28 · Updated 4 years ago
- Spark pipelines that correspond to a series of Dataflow examples. ☆27 · Updated 5 years ago
- A full example of my blog post regarding Spark's stateful streaming (http://asyncified.io/2016/07/31/exploring-stateful-streaming-with-apa… ☆34 · Updated 7 years ago
- Minikube for big data with Scala and Spark ☆15 · Updated 5 years ago
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆30 · Updated 4 months ago