We also retrained the classifiers after resampling the data, both undersampling and oversampling it with the imbalanced-learn library.
The results are shown here:
-> Random Undersampling
| Classifiers | F1 Score | Accuracy Score | Precision Score | Recall Score | ROC AUC Score |
| --- | --- | --- | --- | --- | --- |
| Random Guessing | 0.501245 | 0.501367 | 0.501347 | 0.501149 | 0.501358 |
| Logistic Regression | 0.944553 | 0.945775 | 0.966213 | 0.923844 | 0.945772 |
| Neural Network | 0.957754 | 0.957991 | 0.963063 | 0.952695 | 0.958023 |
| Random Forest | 0.995382 | 0.995362 | 0.991060 | 0.999742 | 0.995364 |
| Gaussian Naive Bayes | 0.870510 | 0.877277 | 0.920874 | 0.825597 | 0.877288 |
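Random undersampling balances the classes by discarding majority-class rows until both classes are the same size (imbalanced-learn provides this as `RandomUnderSampler`). A minimal NumPy-only sketch of the idea, with a hypothetical `random_undersample` helper:

```python
import numpy as np

def random_undersample(X, y, seed=0):
    """Balance classes by randomly keeping n_min rows of each class,
    where n_min is the size of the smallest class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    rng.shuffle(keep)  # avoid leaving the classes in sorted blocks
    return X[keep], y[keep]

# Toy example: 7 majority rows, 3 minority rows.
X = np.arange(20).reshape(10, 2)
y = np.array([0] * 7 + [1] * 3)
X_res, y_res = random_undersample(X, y)
# After resampling, each class has 3 rows (6 total).
```

Because rows are thrown away, undersampling can lose information on large majority classes, but it keeps the training set small and fast to fit.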
-> Random Oversampling with imbalanced-learn
| Classifiers | F1 Score | Accuracy Score | Precision Score | Recall Score | ROC AUC Score |
| --- | --- | --- | --- | --- | --- |
| Random Guessing | 0.498242 | 0.500363 | 0.497933 | 0.498552 | 0.500346 |
| Logistic Regression | 0.907205 | 0.910514 | 0.937170 | 0.879102 | 0.910364 |
| Neural Network | 0.928982 | 0.929131 | 0.926331 | 0.932271 | 0.929096 |
| Random Forest | 0.994949 | 0.994948 | 0.990014 | 0.999934 | 0.994972 |
| Gaussian Naive Bayes | 0.800169 | 0.820146 | 0.893598 | 0.725210 | 0.819687 |
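Random oversampling takes the opposite approach: minority-class rows are duplicated (sampled with replacement) until both classes match the majority-class size (imbalanced-learn's `RandomOverSampler`). A NumPy-only sketch with a hypothetical `random_oversample` helper:

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Balance classes by sampling n_max rows of each class with
    replacement, where n_max is the size of the largest class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_max, replace=True)
        for c in classes
    ])
    rng.shuffle(idx)  # interleave the classes
    return X[idx], y[idx]

# Toy example: 7 majority rows, 3 minority rows.
X = np.arange(20).reshape(10, 2)
y = np.array([0] * 7 + [1] * 3)
X_res, y_res = random_oversample(X, y)
# After resampling, each class has 7 rows (14 total).
```

No information is discarded, but duplicated minority rows can encourage overfitting, which is one plausible reason the scores above differ between the two resampling strategies. To avoid leakage, resampling should be applied to the training split only, never to the test set.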

Thank you for reading about our journey through this project from start to finish.