logicalclocks / hops-util-py
Utility Library for Hopsworks. Issues can be posted at https://community.hopsworks.ai
☆27 · Updated last year
Alternatives and similar repositories for hops-util-py
Users interested in hops-util-py are comparing it to the libraries listed below.
- Examples for Deep Learning/Feature Store/Spark/Flink/Hive/Kafka jobs and Jupyter notebooks on Hops☆118 · Updated 2 years ago
- Distribution-transparent machine learning experiments on Apache Spark☆91 · Updated last year
- Joblib Apache Spark backend☆249 · Updated 4 months ago
- ☆162 · Updated 4 years ago
- A collaborative feature engineering system built on JupyterHub☆94 · Updated 6 years ago
- MLOps platform☆272 · Updated 9 months ago
- Comet-For-MLFlow Extension☆66 · Updated last year
- A tool and library for easily deploying applications on Apache YARN☆144 · Updated last year
- HandySpark - bringing pandas-like capabilities to Spark DataFrames☆196 · Updated 6 years ago
- [ARCHIVED] Dask support for distributed GDF object --> Moved to cudf☆136 · Updated 6 years ago
- A simplified version of featuretools for Spark☆31 · Updated 6 years ago
- Common library for serving TensorFlow, XGBoost and scikit-learn models in production☆139 · Updated last year
- XGBoost GPU-accelerated example applications on Spark☆53 · Updated 3 years ago
- Deploy Dask on YARN clusters☆69 · Updated last year
- Projects developed by Domino's R&D team☆78 · Updated 3 years ago
- Train TensorFlow models on YARN in just a few lines of code!☆89 · Updated last year
- Asynchronous actions for PySpark☆47 · Updated 3 years ago
- ☆96 · Updated 5 years ago
- [ARCHIVED] Moved to github.com/NVIDIA/spark-xgboost-examples☆71 · Updated 5 years ago
- A simple, extensible library for developing AutoML systems☆175 · Updated 2 years ago
- Python client library for the Openscoring REST web service☆32 · Updated 3 years ago
- ☆31 · Updated 3 years ago
- Automated data science and machine learning library to optimize workflows☆104 · Updated 2 years ago
- Easy-to-use library to bring TensorFlow to Apache Spark☆295 · Updated last year
- Tools for faster and optimized interaction with Teradata and large datasets☆17 · Updated 7 years ago
- Jupyter kernel for Scala and Spark☆189 · Updated last year
- Monitor Apache Spark from Jupyter Notebook☆172 · Updated 3 years ago
- Data exploration in PySpark made easy - pyspark_dist_explore provides methods to get fast insights into your Spark DataFrames☆103 · Updated 5 years ago
- The deepr module provides abstractions (layers, readers, prepro, metrics, config) to help build TensorFlow models on top of TF estimators☆53 · Updated last year
- Spark implementation of computing Shapley values using Monte Carlo approximation☆75 · Updated 2 years ago