The Loss Surface of Deep and Wide Neural Networks

ICML 2017, paper link: https://arxiv.org/abs/1704.08045


While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points. It has been argued that this is the case as all local minima are close to being globally optimal. We show that this is (almost) true, in fact almost all local minima are globally optimal, for a fully connected network with squared loss and analytic activation function given that the number of hidden units of one layer of the network is larger than the number of training points and the network structure from this layer on is pyramidal.

Assumption 3.2

  1. The training points are all distinct, i.e., \( x_{i}\neq x_{j} \) for all \( i\neq j \).
  2. \( \sigma \) must be analytic and strictly monotonically increasing; moreover, it must either be bounded or grow no faster than a linear function.
  3. The loss satisfies \( \ell\in C^{2}(\mathbb{R}) \), and whenever \( \ell'(a) = 0 \), \( \ell \) attains a minimum at \( a \).
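The monotonicity and boundedness parts of these conditions can be checked numerically (analyticity cannot). A minimal sketch, not from the paper; the grid and the choice of activations are my own:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-8, 8, 8001)

# Sigmoid: analytic, strictly increasing (positive derivative), bounded in (0, 1)
# -> satisfies Assumption 3.2.
s = sigmoid(x)
assert np.all(np.diff(s) > 0)            # strictly monotonically increasing
assert s.min() > 0 and s.max() < 1       # bounded

# Tanh: analytic, strictly increasing, bounded in (-1, 1) -> also satisfies it.
t = np.tanh(x)
assert np.all(np.diff(t) > 0)
assert np.abs(t).max() < 1

# ReLU fails the assumption: it is not analytic (kink at 0) and not strictly
# increasing (constant on the negative half-line).
relu = np.maximum(x, 0)
assert not np.all(np.diff(relu) > 0)
```

This is why the paper's results cover activations such as sigmoid and tanh, but not ReLU.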

Figure 1.1


Theorem 3.4 Under Assumption 3.2, suppose the training points are linearly independent. Then every critical point \( (W_{l},b_{l})_{l=1}^{L} \) of equation (1) at which the weight matrices have full column rank is a global minimum.
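The theorem can be illustrated numerically. A hedged sketch, not from the paper: a one-hidden-layer tanh network (analytic activation, pyramidal tail since the output dimension 1 is at most the hidden width) with hidden width \( n_{1} \geq N \), trained by plain gradient descent on the squared loss. Consistent with the theorem, it reaches (near-)zero loss rather than getting stuck; the architecture, step size, and seed below are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, n1 = 4, 3, 8               # N training points, input dim d, width n1 >= N
X = rng.standard_normal((N, d))  # generically distinct, linearly independent rows
y = rng.standard_normal((N, 1))

W1 = 0.5 * rng.standard_normal((d, n1)); b1 = np.zeros(n1)
W2 = 0.5 * rng.standard_normal((n1, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(30000):
    H = np.tanh(X @ W1 + b1)     # hidden activations, shape (N, n1)
    out = H @ W2 + b2            # network output
    err = out - y                # residual of the squared loss 0.5*||out - y||^2
    # backpropagation by hand
    gW2 = H.T @ err;  gb2 = err.sum(0)
    gZ = (err @ W2.T) * (1 - H**2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ gZ;   gb1 = gZ.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

loss = 0.5 * np.sum((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"final squared loss: {loss:.2e}")
assert loss < 1e-3               # near the global minimum (zero training loss)
```

With \( n_{1} \geq N \) the hidden representation is generically rich enough to interpolate all \( N \) targets, which is the intuition behind the wide-layer condition.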



Theorem 3.8 Under Assumption 3.2, suppose that \( n_{k} \geq N-1 \) for some \( k\in [L-1] \).