The document discusses the calibration of modern neural networks. It observes that a network's softmax output does not necessarily reflect true confidence: the predicted probability can differ substantially from the actual likelihood that the prediction is correct. It then examines how to measure miscalibration and the factors that can cause it, such as overfitting. Methods for calibrating networks are presented, notably temperature scaling, and the findings from comparing these calibration techniques with related work are summarized.
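Temperature scaling, as summarized above, rescales a network's logits by a single scalar T (fit on held-out data) before the softmax, softening overconfident predictions without changing the predicted class. Below is a minimal NumPy sketch under assumptions not stated in the document: the function names (`fit_temperature`, `nll`) are illustrative, T is fit here by a simple grid search over the validation negative log-likelihood rather than by gradient-based optimization, and the data is a synthetic toy example.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels, T):
    # Average negative log-likelihood of the true labels
    # after dividing the logits by temperature T.
    probs = softmax(logits / T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    # Pick the single scalar T minimizing validation NLL.
    # (Illustrative grid search; gradient descent on T also works.)
    losses = [nll(val_logits, val_labels, T) for T in grid]
    return grid[int(np.argmin(losses))]

# Toy demo: synthetic, deliberately sharp logits for a 3-class problem.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = np.eye(3)[labels] * 8.0 + rng.normal(0.0, 2.0, size=(500, 3))

T = fit_temperature(logits, labels)
calibrated = softmax(logits / T)  # rows still sum to 1; argmax is unchanged
```

Because dividing all logits by a positive T preserves their ordering, accuracy is untouched; only the confidence values move, which is why the method is attractive as a post-hoc fix.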