Well done Younghwan Chae on the development of competitive strategies to resolve learning rates using gradient-only surrogates (see https://t.co/KOXrCkyyg4). A robust and consistent #training approach for #deepneuralnetworks as…https://t.co/89CQT7AA63
@PierreAblin "Practical Mathematical Optimization" https://t.co/LBLPNSdeOU
@xtimv @bremen79 In general (including discontinuous functions) subgradients converge to within a ball around non-negative gradient projection points - Chapter 8 gives an overview of the above: https://t.co/zTJ9CwLGTn