Predicting Titanic Survivor Outcomes: Adam vs RMSprop Analysis
Load Titanic dataset, handle missing values (Age, Embarked), drop irrelevant columns, apply one-hot encoding for categorical variables
Split into train/validation/test sets (80/10/10), apply stratification to maintain class balance, standardise features
Create identical neural network architectures with Adam and RMSprop optimisers, train for 10 epochs, evaluate performance
Add L2 regularisation (0.01) and dropout (0.1) to prevent overfitting, compare performance across optimisers
Implement early stopping callback with patience=1, monitor validation loss to optimise training efficiency
Perform 5-fold stratified cross-validation to ensure robust performance estimates and reduce overfitting bias
Model Configuration | Accuracy | Precision | Recall | F1 Score | Loss |
---|---|---|---|---|---|
Model - Adam | 0.8249 | 0.8385 | 0.6754 | 0.7474 | 0.4641 |
Model - RMSprop | 0.8238 | 0.8428 | 0.6667 | 0.7439 | 0.6414 |
Model - Adam | 0.8216 | 0.8235 | 0.6843 | 0.7464 | Not specified |
Model - RMSprop | 0.8204 | 0.8288 | 0.6726 | 0.7413 | Not specified |
Model with Adam optimizer achieved the highest balanced performance with F1 Score: 0.7474 and Accuracy: 82.49%
Adam demonstrates slight superiority in balanced performance metrics, particularly F1 score and overall accuracy. RMSprop excels in precision, making fewer false positive predictions.
L2 regularisation and dropout successfully reduced overfitting whilst maintaining model performance. Both techniques improved validation stability across optimisers.
Early stopping proved valuable for training efficiency without compromising performance. Optimal for preventing overfitting whilst reducing computational costs.
Robust validation confirmed consistent performance across data splits. Confidence intervals overlap, suggesting practical equivalence between optimisers.
Choice between optimisers depends on specific requirements: Adam for balanced performance, RMSprop for high precision scenarios, considering recall needs.
Performance differences are marginal but consistent. Adam shows ~0.9% higher mean accuracy with slightly higher variance in cross-validation results.
Deploy Model with Adam optimiser as the primary production model, with capability to switch to RMSprop configuration for high-precision requirements. Implement comprehensive monitoring and regular performance evaluation to maintain optimal predictive accuracy.
Neural Network Optimiser Analysis | Comprehensive Machine Learning Model Evaluation
Optimising predictive performance through systematic optimiser comparison and validation