Neural Network Optimisation Study: Comparing Adam vs RMSProp with Advanced Regularisation Techniques
| Optimiser | Accuracy | Confidence Interval |
| --- | --- | --- |
| Adam | 82.83% | ±4.78% |
| RMSProp | 81.93% | ±4.30% |
Adam achieves slightly higher mean accuracy (82.83% vs 81.93%), though with a marginally wider confidence interval (±4.78% vs ±4.30%).
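For reference, an interval of this form can be derived from accuracies over repeated training runs. The sketch below assumes a t-distribution interval; the study does not state its exact CI method, and `mean_ci` and the example scores are illustrative, not from the source.

```python
import numpy as np
from scipy import stats

def mean_ci(scores, confidence=0.95):
    """Return (mean, half-width) of a t-distribution confidence interval."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    sem = scores.std(ddof=1) / np.sqrt(len(scores))  # standard error of the mean
    half_width = stats.t.ppf((1 + confidence) / 2, df=len(scores) - 1) * sem
    return mean, half_width

# Illustrative per-run accuracies (not the study's actual data):
adam_accuracies = [0.81, 0.85, 0.79, 0.86, 0.83]
mean, hw = mean_ci(adam_accuracies)
print(f"{mean:.2%} ± {hw:.2%}")  # reported as mean ± half-width
```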
| Model Configuration | Precision | Recall | F1-Score |
| --- | --- | --- | --- |
| 4.2.4 Adam | 83.85% | 67.54% | 74.74% |
| 4.2.4 RMSProp | 84.28% | 66.67% | 74.39% |
| 4.3.3 Adam | 82.35% | 68.43% | 74.64% |
| 4.3.3 RMSProp | 82.88% | 67.26% | 74.13% |
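Figures like these can be reproduced from thresholded model outputs; the sketch below assumes scikit-learn and binary labels (the labels and threshold shown are illustrative, not the study's data).

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])              # ground-truth labels
y_prob = np.array([0.9, 0.2, 0.4, 0.8, 0.1, 0.7, 0.6, 0.3])
y_pred = (y_prob >= 0.5).astype(int)                      # threshold sigmoid outputs

print(f"Precision: {precision_score(y_true, y_pred):.2%}")
print(f"Recall:    {recall_score(y_true, y_pred):.2%}")
print(f"F1-score:  {f1_score(y_true, y_pred):.2%}")
```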
L2 Regularisation (0.01):
Penalising large weights (λ = 0.01) prevented overfitting, producing more stable validation performance with both optimisers.
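A minimal sketch of this penalty on a dense layer, assuming a Keras implementation (the study does not name its framework; the layer width is illustrative):

```python
from tensorflow.keras import layers, regularizers

hidden = layers.Dense(
    64,                                        # illustrative layer width
    activation="relu",
    kernel_regularizer=regularizers.l2(0.01),  # adds 0.01 * sum(w^2) to the loss
)
```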
Dropout (rate 0.1):
Reduced neuron co-adaptation, improving model robustness and generalisation to unseen data.
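The corresponding layer, again assuming Keras: each training step randomly zeroes 10% of the preceding layer's activations.

```python
from tensorflow.keras import layers

drop = layers.Dropout(0.1)  # active only during training; a no-op at inference
```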
Early Stopping:
Improved training efficiency by halting training once validation performance stopped improving, preserving peak model performance without overfitting.
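A sketch of the callback, assuming Keras; the monitored metric and patience value are assumptions, not details stated in the study.

```python
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_loss",         # stop when validation loss stops improving
    patience=10,                # assumed patience window
    restore_best_weights=True,  # roll back to the best epoch's weights
)
# Usage: model.fit(X_train, y_train, validation_split=0.2,
#                  epochs=200, callbacks=[early_stop])
```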
Final Recommendation:
Deploy the model with the Adam optimiser for production use, combining L2 regularisation, dropout, and early stopping for optimal survivor-prediction performance.
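Putting the recommendation together, a minimal Keras sketch of the deployed configuration might look like the following. Layer sizes, epoch count, patience, and the sigmoid output head are illustrative assumptions (survivor prediction is treated as binary classification), not details taken from the study.

```python
from tensorflow.keras import Sequential, layers, regularizers
from tensorflow.keras.callbacks import EarlyStopping

def build_model(n_features: int) -> Sequential:
    """Adam-optimised network with L2 (0.01) and dropout (0.1), as recommended."""
    model = Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(0.01)),
        layers.Dropout(0.1),
        layers.Dense(32, activation="relu",
                     kernel_regularizer=regularizers.l2(0.01)),
        layers.Dropout(0.1),
        layers.Dense(1, activation="sigmoid"),  # binary survivor prediction
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch:
# model = build_model(n_features=X_train.shape[1])
# model.fit(X_train, y_train, validation_split=0.2, epochs=200,
#           callbacks=[EarlyStopping(monitor="val_loss", patience=10,
#                                    restore_best_weights=True)])
```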