qubole / qds-sdk-py
Python SDK for accessing Qubole Data Service
☆52 · Updated 3 months ago
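For orientation, here is a minimal sketch of what using the SDK typically looks like. The `Qubole.configure` / `HiveCommand.run` entry points, the `api_token` parameter, and `get_results` are assumptions based on the project's documented usage, not a verified API reference; check the repository's README for exact names and signatures.

```python
# Minimal sketch: submit a Hive query to Qubole Data Service via qds-sdk-py.
# Entry points and signatures are assumptions; verify against the README.
import sys

from qds_sdk.qubole import Qubole
from qds_sdk.commands import HiveCommand

# Authenticate against the Qubole API with an account-level token.
Qubole.configure(api_token="YOUR_QUBOLE_API_TOKEN")

# Submit a Hive query and wait for it to reach a terminal state.
cmd = HiveCommand.run(query="SHOW TABLES;")

# Print the final status and stream the query results to stdout.
print("Command %s finished with status: %s" % (cmd.id, cmd.status))
cmd.get_results(sys.stdout)
```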
Alternatives and similar repositories for qds-sdk-py
Users interested in qds-sdk-py are comparing it to the libraries listed below.
- ☆54 · Updated 7 years ago
- Gallery of Apache Zeppelin notebooks · ☆217 · Updated 5 years ago
- AWS bootstrap scripts for Mozilla's flavoured Spark setup · ☆47 · Updated 5 years ago
- Coding exercises for Apache Spark · ☆104 · Updated 10 years ago
- Apache Zeppelin on Kubernetes · ☆28 · Updated 6 years ago
- Example stream processing job, written in Scala with Apache Beam, for Google Cloud Dataflow · ☆30 · Updated 8 years ago
- A set of tools for working with Omniture daily data files (hit_data.tsv) in big or small tools like Spark, Hadoop or just Python · ☆38 · Updated 6 years ago
- Google BigQuery support for Spark, SQL, and DataFrames · ☆155 · Updated 5 years ago
- Luigi Plugin for Hubot · ☆36 · Updated 8 years ago
- Vagrant projects for various use-cases with Spark, Zeppelin, IPython / Jupyter, SparkR · ☆34 · Updated 9 years ago
- PyAthenaJDBC is an Amazon Athena JDBC driver wrapper for the Python DB API 2.0 (PEP 249); see the usage sketch after this list · ☆95 · Updated last year
- Amazon Elastic MapReduce code samples · ☆63 · Updated 9 years ago
- CLI tool to launch Spark jobs on AWS EMR · ☆67 · Updated last year
- Example unit tests for Apache Spark Python scripts using the py.test framework · ☆84 · Updated 9 years ago
- Apache Spark AWS Lambda Executor (SAMBA) · ☆44 · Updated 6 years ago
- Deploy dask-distributed on google container engine using kubernetes · ☆40 · Updated 6 years ago
- An example Apache Beam project · ☆111 · Updated 8 years ago
- Simple Spark example of generating table stats for use of data quality checks · ☆28 · Updated 8 years ago
- Example for an airflow plugin · ☆49 · Updated 8 years ago
- An example PySpark project with pytest · ☆16 · Updated 7 years ago
- An external PySpark module that works like R's read.csv or Panda's read_csv, with automatic type inference and null value handling. Parse… · ☆90 · Updated 9 years ago
- Create Parquet files from CSV · ☆68 · Updated 7 years ago
- A Spark WordCountJob example as a standalone SBT project with Specs2 tests, runnable on Amazon EMR · ☆118 · Updated 9 years ago
- Make your libraries magically appear in Databricks · ☆47 · Updated last year
- Implementations of the Portable Format for Analytics (PFA) · ☆128 · Updated 2 years ago
- Arbalest is a Python data pipeline orchestration library for Amazon S3 and Amazon Redshift. It automates data import into Redshift and ma… · ☆41 · Updated 9 years ago
- Unit and integration testing with PySpark can be tough to figure out, let's make that easier · ☆22 · Updated 9 years ago
- Simplify getting Zeppelin up and running · ☆56 · Updated 8 years ago
- DataPipeline for humans · ☆250 · Updated 2 years ago
- Learn the pyspark API through pictures and simple examples · ☆170 · Updated 4 years ago
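Because PyAthenaJDBC (listed above) exposes Amazon Athena through the standard Python DB API 2.0 (PEP 249) interface, querying it looks like any other DB API driver. The sketch below assumes the `s3_staging_dir` and `region_name` keyword arguments commonly shown in its examples and that AWS credentials come from the environment; confirm the exact `connect()` parameters against the project's README.

```python
# Hedged PEP 249-style usage sketch for PyAthenaJDBC. The connect() keyword
# arguments are assumptions taken from typical examples; credentials are
# expected to come from the environment or an instance profile.
from pyathenajdbc import connect

conn = connect(
    s3_staging_dir="s3://YOUR_BUCKET/athena-staging/",  # where Athena writes query results
    region_name="us-east-1",
)
try:
    cursor = conn.cursor()
    # Standard DB API 2.0 calls: execute a query, then fetch the rows.
    cursor.execute("SELECT 1")
    for row in cursor.fetchall():
        print(row)
finally:
    conn.close()
```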