
Early Stopping in Deep Learning

1. We stop the training process when the validation error shows no improvement at the end of an epoch.
2. Key parameters:
   1. Patience – how many epochs with no improvement to wait before finally stopping training.
   2. Delta – the minimum change in the monitored metric that counts as a real improvement. For example, a 0.000001% reduction in validation error is too small to be treated as an improvement.
   3. Keep best weights – suppose the validation error keeps decreasing from epoch 1 to 10 and then starts increasing. With a patience of 4, we wait until epoch 14 to stop training. The best validation error was at the end of epoch 10, so we keep the weights from epoch 10 (see the sketch after this list).
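Below is a minimal, framework-agnostic Python sketch of the logic above. The EarlyStopping class, its step method, and the toy val_errors curve are illustrative names invented here, not part of the slide.

    import copy

    class EarlyStopping:
        """Stops training when validation error stops improving.

        patience  - epochs with no improvement to wait before stopping
        min_delta - minimum decrease in validation error that counts
                    as a real improvement
        """

        def __init__(self, patience=4, min_delta=0.0):
            self.patience = patience
            self.min_delta = min_delta
            self.best_error = float("inf")
            self.best_weights = None
            self.epochs_without_improvement = 0

        def step(self, val_error, weights):
            """Returns True if training should stop after this epoch."""
            if self.best_error - val_error > self.min_delta:
                # Real improvement: record it and keep a copy of the weights.
                self.best_error = val_error
                self.best_weights = copy.deepcopy(weights)
                self.epochs_without_improvement = 0
            else:
                self.epochs_without_improvement += 1
            return self.epochs_without_improvement >= self.patience

    # Toy run matching the slide's example: validation error falls until
    # epoch 10, then rises, so with patience=4 training stops after
    # epoch 14 and the weights from epoch 10 are the ones we keep.
    val_errors = [1.0 - 0.05 * e for e in range(1, 11)] + \
                 [0.5 + 0.03 * e for e in range(1, 11)]
    stopper = EarlyStopping(patience=4, min_delta=1e-4)
    for epoch, err in enumerate(val_errors, start=1):
        weights = {"epoch": epoch}   # stand-in for real model weights
        if stopper.step(err, weights):
            print(f"stopped after epoch {epoch}, "
                  f"best weights from epoch {stopper.best_weights['epoch']}")
            break

In practice, frameworks ship this behaviour; for example, Keras provides tf.keras.callbacks.EarlyStopping with monitor, patience, min_delta, and restore_best_weights parameters.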
