Project Conclusion

We also modified the model by undersampling and oversampling the data (using the imbalanced-learn library).

The results are shown here:

-> Random Undersampling

| Classifiers | F1 Score | Accuracy Score | Precision Score | Recall Score | ROC AUC Score |
|---|---|---|---|---|---|
| Random Guessing | 0.501245 | 0.501367 | 0.501347 | 0.501149 | 0.501358 |
| Logistic Regression | 0.944553 | 0.945775 | 0.966213 | 0.923844 | 0.945772 |
| Neural Network | 0.957754 | 0.957991 | 0.963063 | 0.952695 | 0.958023 |
| Random Forest | 0.995382 | 0.995362 | 0.99106 | 0.999742 | 0.995364 |
| Gaussian Naive Bayes | 0.87051 | 0.877277 | 0.920874 | 0.825597 | 0.877288 |
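To illustrate what random undersampling does before the classifiers above are trained, here is a minimal NumPy sketch (the toy dataset and class sizes are made up for illustration; imbalanced-learn's `RandomUnderSampler` performs the same resampling through its `fit_resample` method):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy imbalanced dataset: 90 majority samples (class 0), 10 minority (class 1).
X = rng.normal(size=(100, 2))
y = np.array([0] * 90 + [1] * 10)

# Random undersampling: keep every minority sample and draw an equal-sized
# subset of the majority class without replacement.
minority_idx = np.where(y == 1)[0]
majority_idx = rng.choice(np.where(y == 0)[0], size=len(minority_idx), replace=False)
keep = np.concatenate([majority_idx, minority_idx])

X_res, y_res = X[keep], y[keep]
print(np.bincount(y_res))  # → [10 10], a balanced training set
```

The trade-off is that most majority-class samples are discarded, which is why the oversampled results below are worth comparing.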

-> Oversampling with imbalanced-learn

| Classifiers | F1 Score | Accuracy Score | Precision Score | Recall Score | ROC AUC Score |
|---|---|---|---|---|---|
| Random Guessing | 0.498242 | 0.500363 | 0.497933 | 0.498552 | 0.500346 |
| Logistic Regression | 0.907205 | 0.910514 | 0.93717 | 0.879102 | 0.910364 |
| Neural Network | 0.928982 | 0.929131 | 0.926331 | 0.932271 | 0.929096 |
| Random Forest | 0.994949 | 0.994948 | 0.990014 | 0.999934 | 0.994972 |
| Gaussian Naive Bayes | 0.800169 | 0.820146 | 0.893598 | 0.72521 | 0.819687 |
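Random oversampling takes the opposite approach: the minority class is resampled with replacement until it matches the majority class. A minimal NumPy sketch (again with made-up class sizes; imbalanced-learn's `RandomOverSampler` does the same via `fit_resample(X, y)`):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced dataset: 90 majority samples (class 0), 10 minority (class 1).
X = rng.normal(size=(100, 2))
y = np.array([0] * 90 + [1] * 10)

# Random oversampling: duplicate minority samples (drawn with replacement)
# until both classes are the same size. No majority data is thrown away.
majority_idx = np.where(y == 0)[0]
minority_idx = np.where(y == 1)[0]
extra = rng.choice(minority_idx, size=len(majority_idx) - len(minority_idx), replace=True)
keep = np.concatenate([majority_idx, minority_idx, extra])

X_res, y_res = X[keep], y[keep]
print(np.bincount(y_res))  # → [90 90], a balanced training set
```

Because minority rows are duplicated rather than discarded, oversampling keeps all the information in the data at the risk of overfitting to the repeated minority samples.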

Thank you for reading about our journey through this project from start to end.
