qcri / sPCA
Scalable PCA (sPCA) is a scalable implementation of the Principal Component Analysis (PCA) algorithm on top of Apache Spark.
☆12 · Updated 10 years ago
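For context, below is a minimal sketch of what distributed PCA on Spark looks like using stock Spark MLlib (`RowMatrix.computePrincipalComponents`). This is standard MLlib, not sPCA's own API; sPCA ships its own driver classes and algorithm, so treat this only as an illustration of the problem the library targets.

```scala
// Minimal sketch: distributed PCA with stock Spark MLlib (not sPCA's API).
import org.apache.spark.sql.SparkSession
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

object PcaSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("pca-sketch").getOrCreate()
    val sc = spark.sparkContext

    // Toy data: each row is one observation (replace with a real distributed dataset).
    val rows = sc.parallelize(Seq(
      Vectors.dense(1.0, 2.0, 3.0),
      Vectors.dense(2.0, 4.0, 5.0),
      Vectors.dense(4.0, 6.0, 7.0)
    ))

    val mat = new RowMatrix(rows)

    // Top-2 principal components, returned as a local (numCols x 2) matrix.
    val pc = mat.computePrincipalComponents(2)

    // Project the distributed rows onto the principal subspace.
    val projected = mat.multiply(pc)
    projected.rows.collect().foreach(println)

    spark.stop()
  }
}
```

The point of sPCA (and several repositories listed below) is to scale this kind of computation to matrices where the stock approach becomes a bottleneck.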
Alternatives and similar repositories for sPCA
Users interested in sPCA are comparing it to the libraries listed below.
- Benchmarks of artificial neural network library for Spark MLlib ☆11 · Updated 9 years ago
- Gaussian Mixture Model Implementation in Pyspark ☆32 · Updated 10 years ago
- A CPU and GPU-accelerated matrix library for data mining ☆266 · Updated 4 years ago
- Splash Project for parallel stochastic learning ☆94 · Updated 8 years ago
- Distributed Matrix Library ☆72 · Updated 8 years ago
- Distributed DataFrame: Productivity = Power x Simplicity For Scientists & Engineers, on any Data Engine ☆167 · Updated 4 years ago
- Distributed solver library for large-scale structured output prediction, based on Spark. Project website: ☆17 · Updated 9 years ago
- Benchmarks of BLAS libraries with Scala interface ☆30 · Updated 9 years ago
- CUDA kernel and JNI code which is called by Apache Spark's MLlib. ☆19 · Updated 9 years ago
- Spark library for doing exploratory data analysis in a scalable way ☆44 · Updated 9 years ago
- A Scala-based feature generation and modeling framework ☆61 · Updated 7 years ago
- A global, black-box optimization engine for real-world metric optimization. ☆66 · Updated 10 years ago
- GPU Acceleration for Apache Spark ☆34 · Updated 10 years ago
- Library for GPU-related statistical functions ☆84 · Updated 12 years ago
- Scala client for the Lightning data visualization server (WIP) ☆47 · Updated 6 years ago
- ☆110 · Updated 8 years ago
- Code to allow running BIDMach on Spark, including HDFS integration and lightweight sparse model updates (Kylix). ☆15 · Updated 5 years ago
- MLeap allows for easily putting Spark ML pipelines into production ☆78 · Updated 8 years ago
- Sketching-based Distributed Matrix Computations for Machine Learning ☆100 · Updated 7 years ago
- Another, hopefully better, implementation of ALS on Spark ☆14 · Updated 10 years ago
- Functional, Typesafe, Declarative Data Pipelines ☆139 · Updated 7 years ago
- Quick summary: this code implements a spectral (third-order tensor decomposition) learning method for learning the LDA topic model on Spark. ☆105 · Updated 7 years ago
- Automatic offload of user-written Spark kernels to accelerators ☆18 · Updated 8 years ago
- BOPP: Bayesian Optimization for Probabilistic Programs ☆114 · Updated 7 years ago
- Yggdrasil: Faster Decision Trees Using Column Partitioning in Spark ☆30 · Updated 7 years ago
- A library that allows serialization of scikit-learn estimators into PMML ☆71 · Updated 6 years ago
- SparkTDA is a package for Apache Spark providing Topological Data Analysis functionalities. ☆46 · Updated 7 years ago
- ☆57 · Updated 8 years ago
- Mirror of Apache Spark ☆10 · Updated 9 years ago
- Scalable Machine Learning in Scalding ☆361 · Updated 7 years ago