eljefe6a / beamexample
An example Apache Beam project.
☆111 · Updated 8 years ago
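For context, this is roughly what a pipeline in an example Beam project looks like. The sketch below is a generic Java WordCount, not code from this repository; the input path `input.txt` and output prefix `wordcounts` are placeholder assumptions.

```java
import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class MinimalWordCount {
  public static void main(String[] args) {
    // Build pipeline options from command-line args (runner, etc.).
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply("ReadLines", TextIO.read().from("input.txt"))           // placeholder input path
        .apply("SplitWords", FlatMapElements
            .into(TypeDescriptors.strings())
            .via((String line) -> Arrays.asList(line.split("\\W+"))))
        .apply("CountWords", Count.perElement())                    // KV<word, count>
        .apply("FormatResults", MapElements
            .into(TypeDescriptors.strings())
            .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
        .apply("WriteCounts", TextIO.write().to("wordcounts"));     // placeholder output prefix

    p.run().waitUntilFinish();
  }
}
```

The same pipeline runs unchanged on different runners (Direct, Flink, Spark, Dataflow) by passing a different `--runner` option, which is the portability idea the listed Beam and Flink starter projects demonstrate.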
Alternatives and similar repositories for beamexample
Users interested in beamexample are comparing it to the libraries listed below.
- ☆81 · Updated 2 years ago
- Simple example for reading and writing into Kafka ☆55 · Updated 5 years ago
- These are some code examples ☆55 · Updated 5 years ago
- ☆48 · Updated 7 years ago
- Apache Spark and Apache Kafka integration example ☆124 · Updated 7 years ago
- Get started with Apache Beam and Flink ☆43 · Updated 9 years ago
- Spark structured streaming with Kafka data source and writing to Cassandra ☆62 · Updated 6 years ago
- Wikipedia stream-processing demo using Kafka Connect and Kafka Streams ☆75 · Updated 8 years ago
- Docker Image and Kubernetes Configurations for Spark 2.x ☆41 · Updated 6 years ago
- Code for Tutorial on designing clickstream analytics application using Hadoop ☆55 · Updated 10 years ago
- Hadoop MapReduce tool to convert Avro data files to Parquet format ☆34 · Updated 12 years ago
- A Spark Streaming job reading events from Amazon Kinesis and writing event counts to DynamoDB ☆93 · Updated 5 years ago
- SQL for Kafka Connectors ☆99 · Updated last year
- Code repository for O'Reilly Hadoop Application Architectures book ☆163 · Updated 10 years ago
- Examples of Spark 2.0 ☆212 · Updated 4 years ago
- Build configuration-driven ETL pipelines on Apache Spark ☆161 · Updated 3 years ago
- A Spark WordCountJob example as a standalone SBT project with Specs2 tests, runnable on Amazon EMR ☆119 · Updated 9 years ago
- Kite SDK Examples ☆99 · Updated 4 years ago
- Examples of how to use Cloud Bigtable, both with GCE map/reduce and in standalone applications ☆232 · Updated this week
- Reference architecture for real-time stream processing with Apache Flink on Amazon EMR, Amazon Kinesis, and Amazon Elasticsearch Service ☆70 · Updated last year
- Structured Streaming Machine Learning example with Spark 2.0 ☆94 · Updated 8 years ago
- Examples of how to use the command-line tools in Avro Tools to read and write Avro files ☆153 · Updated last year
- ☆240 · Updated 4 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds ☆91 · Updated last year
- Write your Spark data to Kafka seamlessly ☆174 · Updated last year
- Big Data ETL and Utilities for Hadoop MapReduce, Spark and Storm ☆103 · Updated last year
- ☆70 · Updated 8 years ago
- StreamLine - Streaming Analytics ☆164 · Updated 2 years ago
- Example projects for using Spark and Cassandra with DSE Analytics ☆58 · Updated last month
- Real-world Spark pipeline examples ☆83 · Updated 7 years ago