jxzly / Kaggle-American-Express-Default-Prediction-1st-solution
☆226 · Updated last year
Alternatives and similar repositories for Kaggle-American-Express-Default-Prediction-1st-solution:
Users interested in Kaggle-American-Express-Default-Prediction-1st-solution are comparing it to the repositories listed below.
- ☆167 · Updated 4 years ago
- 1st Place Solution to the Curriculum Recommendations competition on Kaggle ☆33 · Updated last year
- ☆36 · Updated 2 years ago
- ☆86 · Updated last year
- Kaggle Home Credit Default Risk competition ☆55 · Updated 6 years ago
- Winning solution for the Kaggle Feedback Prize Challenge ☆65 · Updated 2 years ago
- XGBoost + Optuna (a generic tuning sketch follows this list) ☆692 · Updated 5 months ago
- Gradient boosting model for predicting credit default risk in a Kaggle competition ☆16 · Updated 4 years ago
- 🛍 A real-world e-commerce dataset for session-based recommender systems research ☆323 · Updated 9 months ago
- My toolbox for data analysis :) ☆174 · Updated last month
- 3rd Place Solution for the OTTO – Multi-Objective Recommender System Competition ☆104 · Updated 2 years ago
- Data, benchmarks, and methods submitted to the M5 forecasting competition ☆602 · Updated last year
- An implementation of focal loss for use with LightGBM on binary and multi-class classification problems (a minimal sketch follows this list) ☆249 · Updated 5 years ago
- 31st place silver-medal solution to the USPPPM Kaggle competition ☆20 · Updated 2 years ago
- Winning solution of the Kaggle Mechanisms of Action (MoA) Prediction competition ☆118 · Updated 3 years ago
- ☆162 · Updated 3 years ago
- GBST is an optimized, distributed gradient-boosting survival-trees library built on top of XGBoost ☆37 · Updated 4 years ago
- [Kaggle] M5 Forecasting - Accuracy (4th place solution) ☆37 · Updated 4 years ago
- ☆42 · Updated 3 years ago
- LightGBM + Optuna: auto-train LightGBM directly from CSV files, auto-tune it with Optuna, and auto-serve the best model with FastAPI. Inspire… ☆35 · Updated 3 years ago
- 🥈 Silver medal solution to the Kaggle H&M Personalized Fashion Recommendations competition ☆68 · Updated 2 years ago
- ☆52 · Updated 2 years ago
- ☆88 · Updated 2 years ago
- (ICLR 2025) TabM: Advancing Tabular Deep Learning With Parameter-Efficient Ensembling ☆214 · Updated 2 weeks ago
- 1st Place Solution for the Eedi - Mining Misconceptions in Mathematics Kaggle Competition ☆27 · Updated 2 months ago
- Code for the 3rd-place solution to the Feedback Prize - English Language Learning competition hosted on Kaggle ☆20 · Updated 2 years ago
- Code repository for the online course "Feature Engineering for Time Series Forecasting" ☆183 · Updated last year
- Feedback Prize - English Language Learning: 1st place solution code ☆56 · Updated 2 years ago
- My solution for the Otto competition, scoring LB 0.601 with a reranker, transformers, and a GRU ☆26 · Updated last year
- Kaggle Days Paris - Competitive GBDT Specification and Optimization Workshop ☆92 · Updated 2 years ago
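
The "XGBoost + Optuna" entry above (like the "LightGBM + Optuna" one) pairs a gradient-boosting model with Optuna's hyperparameter search. The sketch below is not that repository's API; it is only a generic illustration, assuming standard Optuna and XGBoost calls on a synthetic scikit-learn dataset, of what such a tuning loop typically looks like.

```python
import optuna
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy data standing in for a real tabular competition dataset.
X, y = make_classification(n_samples=4000, n_features=30, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)
dtrain = xgb.DMatrix(X_tr, label=y_tr)
dvalid = xgb.DMatrix(X_va, label=y_va)

def objective(trial):
    # Search space is illustrative, not the listed repository's defaults.
    params = {
        "objective": "binary:logistic",
        "eval_metric": "auc",
        "max_depth": trial.suggest_int("max_depth", 3, 8),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
        "min_child_weight": trial.suggest_int("min_child_weight", 1, 20),
        "lambda": trial.suggest_float("lambda", 1e-3, 10.0, log=True),
    }
    booster = xgb.train(
        params, dtrain, num_boost_round=500,
        evals=[(dvalid, "valid")],
        early_stopping_rounds=50, verbose_eval=False,
    )
    # Score the best early-stopped iteration on the held-out split.
    preds = booster.predict(dvalid, iteration_range=(0, booster.best_iteration + 1))
    return roc_auc_score(y_va, preds)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print("best AUC:", study.best_value)
print("best params:", study.best_params)
```

The learning rate and regularization strength are sampled on a log scale, and early stopping on the validation AUC keeps each trial cheap; in a competition setting the single validation split would usually be replaced by cross-validation.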
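
For the focal-loss entry above, here is a minimal sketch of the idea, not the listed repository's code: a binary focal loss plugged into LightGBM's scikit-learn interface as a custom objective. The `focal_loss` helper, the finite-difference gradients, and the synthetic imbalanced dataset are illustrative assumptions; a real implementation would normally use analytic gradients.

```python
import numpy as np
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def focal_loss(alpha=0.25, gamma=2.0):
    """Binary focal loss as a LightGBM custom objective: f(y_true, raw_score) -> (grad, hess)."""
    def per_sample_loss(y, raw):
        p = 1.0 / (1.0 + np.exp(-raw))            # sigmoid of the raw score
        pt = y * p + (1 - y) * (1 - p)            # probability assigned to the true class
        at = y * alpha + (1 - y) * (1 - alpha)    # class-balancing weight
        return -at * (1.0 - pt) ** gamma * np.log(np.clip(pt, 1e-12, 1.0))

    def objective(y_true, raw_score):
        # Central finite differences w.r.t. the raw score keep the example short.
        eps = 1e-3
        lp = per_sample_loss(y_true, raw_score + eps)
        l0 = per_sample_loss(y_true, raw_score)
        lm = per_sample_loss(y_true, raw_score - eps)
        grad = (lp - lm) / (2.0 * eps)
        hess = np.clip((lp - 2.0 * l0 + lm) / eps ** 2, 1e-6, None)  # guard against numerical noise
        return grad, hess

    return objective

# Imbalanced toy data, the typical use case for focal loss.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

clf = lgb.LGBMClassifier(objective=focal_loss(alpha=0.25, gamma=2.0),
                         n_estimators=300, learning_rate=0.05, verbosity=-1)
clf.fit(X_tr, y_tr)

# With a custom objective LightGBM returns raw scores, so apply the sigmoid manually.
raw = clf.predict(X_va, raw_score=True)
proba = 1.0 / (1.0 + np.exp(-raw))
print("valid AUC:", roc_auc_score(y_va, proba))
```

The `alpha`/`gamma` values are the common defaults from the focal loss paper; they should be tuned to the actual class imbalance.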