【Stanford ML Exercise 4, Week 5】Neural Networks Learning

Implement the backpropagation algorithm for neural networks and apply it to the task of hand-written digit recognition.

1. Neural Networks

  • implement the backpropagation algorithm to learn the parameters for the neural network.

1.1 Visualizing the data

(figure: a sample of handwritten digit training examples)

  • 5000 training examples

    each training example is a 20 pixel by 20 pixel grayscale image of the digit

    The 20 by 20 grid of pixels is “unrolled” into a 400-dimensional vector
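The unrolling step can be sketched in NumPy (dummy data standing in for a real example; the course itself uses Octave/MATLAB):

```python
import numpy as np

# A single 20x20 grayscale digit image (dummy values in place of real pixels).
image = np.arange(400, dtype=float).reshape(20, 20)

# "Unroll" the 20x20 grid into a 400-dimensional feature vector.
x = image.flatten()

# Stacking 5000 such vectors gives the 5000x400 data matrix used in the exercise.
print(x.shape)  # (400,)
```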


1.2 Model representation

  • 3 layers – an input layer, a hidden layer and an output layer

(figure: the three-layer neural network model)

1.3 Feedforward and cost function

  • implement the cost function and gradient for the neural network
  • should not be regularizing the terms that correspond to the bias

Cost function with regularization:

J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[-y_k^{(i)}\log\left(h_\theta(x^{(i)})_k\right) - (1 - y_k^{(i)})\log\left(1 - h_\theta(x^{(i)})_k\right)\right] + \frac{\lambda}{2m}\left[\sum_{j=1}^{25}\sum_{k=1}^{400}\left(\Theta_{j,k}^{(1)}\right)^2 + \sum_{j=1}^{10}\sum_{k=1}^{25}\left(\Theta_{j,k}^{(2)}\right)^2\right]
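A feedforward-and-cost sketch in NumPy (function and variable names are my own, not the exercise's Octave code); note that the regularization skips the first column of each Theta, which holds the bias weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_cost(Theta1, Theta2, X, Y, lam):
    """Regularized cost for a 3-layer network.
    X: m x n inputs; Y: m x K one-hot labels."""
    m = X.shape[0]
    # Feedforward: add the bias unit, then propagate through both layers.
    a1 = np.hstack([np.ones((m, 1)), X])
    a2 = np.hstack([np.ones((m, 1)), sigmoid(a1 @ Theta1.T)])
    h = sigmoid(a2 @ Theta2.T)          # m x K hypothesis
    # Cross-entropy term, summed over examples and output units.
    J = (-Y * np.log(h) - (1 - Y) * np.log(1 - h)).sum() / m
    # Regularization: skip the first (bias) column of each Theta.
    J += lam / (2 * m) * ((Theta1[:, 1:] ** 2).sum() + (Theta2[:, 1:] ** 2).sum())
    return J
```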

2. Backpropagation

  • compute the gradient for the neural network cost function


2.1 Sigmoid gradient

Gradient of the sigmoid function: g'(z) = g(z)(1 − g(z)), where g(z) = 1/(1 + e^(−z)).
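A minimal sketch in NumPy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_gradient(z):
    # g'(z) = g(z) * (1 - g(z)); works elementwise on arrays as well as scalars.
    g = sigmoid(z)
    return g * (1 - g)

print(sigmoid_gradient(0.0))  # 0.25, the maximum of the gradient, at z = 0
```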


2.2 Random initialization

  • When training neural networks, it is important to randomly initialize the parameters for symmetry breaking.
epsilon_init = 0.12;
W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;

2.3 Backpropagation

Intuition behind the backpropagation algorithm:

  • Given a training example (x(t),y(t)), first run a “forward pass” to compute all the activations throughout the network
  • for each node j in layer l, compute an “error term” δ_j^(l) that measures how much that node was “responsible” for any errors in the output


Steps 1-4 to implement backpropagation:

  1. Run a feedforward pass to compute the activations (z(2), a(2), z(3), a(3)) for layers 2 and 3.
  2. For each output unit k in layer 3, set the error term δ(3)_k = a(3)_k − y_k.
  3. For the hidden layer, set δ(2) = (Θ(2))ᵀ δ(3) .* g′(z(2)), dropping the bias term.
  4. Accumulate the gradient Δ(l) = Δ(l) + δ(l+1) (a(l))ᵀ, then divide by m (adding λ/m · Θ(l) for the non-bias columns) to obtain the partial derivatives of J.
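The steps can be sketched in NumPy as follows (a sketch with my own variable names, assuming the 3-layer network above; not the official Octave solution):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(Theta1, Theta2, X, Y, lam):
    """One backpropagation pass over the whole training set.
    Returns gradients with the same shapes as Theta1/Theta2."""
    m = X.shape[0]
    # Step 1: forward pass, keeping intermediate activations.
    a1 = np.hstack([np.ones((m, 1)), X])
    z2 = a1 @ Theta1.T
    a2 = np.hstack([np.ones((m, 1)), sigmoid(z2)])
    a3 = sigmoid(a2 @ Theta2.T)
    # Step 2: output-layer error term.
    d3 = a3 - Y
    # Step 3: hidden-layer error term (drop the bias column of Theta2).
    d2 = (d3 @ Theta2[:, 1:]) * sigmoid(z2) * (1 - sigmoid(z2))
    # Step 4: accumulate and average gradients; regularize non-bias weights.
    Theta1_grad = d2.T @ a1 / m
    Theta2_grad = d3.T @ a2 / m
    Theta1_grad[:, 1:] += lam / m * Theta1[:, 1:]
    Theta2_grad[:, 1:] += lam / m * Theta2[:, 1:]
    return Theta1_grad, Theta2_grad
```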



Stanford ML Week 6 Learning Notes: Advice for Applying Machine Learning

Ways to improve ML performance:

  1. Get more training examples (not certain to help): fixes high variance
  2. Try smaller sets of features (prevents overfitting): fixes high variance
  3. Try getting additional features (more information): fixes high bias
  4. Try adding polynomial features: fixes high bias
  5. Try decreasing lambda: fixes high bias
  6. Try increasing lambda: fixes high variance

Evaluating the Algorithm:

ML diagnostic: a test you run to gain insight into what is or isn't working about the algorithm

Model Selection and Train/Validation/Test Sets

Model selection: choose what degree polynomial to fit the model

How well does this model generalize?

  1. Using the test set to calculate J is a problem: it gives an overly optimistic estimate of the generalization error, because the polynomial degree d was itself chosen based on performance on the test set.
  2. Solution: evaluate on examples the model has never seen. Instead of using the test set to select the model, use the cross-validation (cv) set to select the model, and use the test set only for the final evaluation.
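A minimal sketch of such a split (a 60/20/20 split; the function name is made up for illustration):

```python
import numpy as np

def split_data(X, y, seed=0):
    """Shuffle, then split into 60% train / 20% cross-validation / 20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(0.6 * len(X))
    n_cv = int(0.2 * len(X))
    train = idx[:n_train]
    cv = idx[n_train:n_train + n_cv]
    test = idx[n_train + n_cv:]
    return (X[train], y[train]), (X[cv], y[cv]), (X[test], y[test])
```

Model selection then picks the degree d with the lowest cv error, and the test error is reported once at the very end.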

Diagnosing Bias vs Variance

curve: x-axis: polynomial degree d, y-axis: error

underfit: high bias, training error high, cv error high

overfit: high variance, training error low, cv error high

Regularization and Bias/Variance

curve: x-axis: lambda, y-axis: error

underfit: high bias, large lambda

overfit: high variance, small lambda

As the regularization parameter lambda increases, training error rises (the fit is increasingly constrained), while cv error first falls and then rises.

when lambda small: high variance, training error low, cv error high

when lambda large: high bias, training error high, cv error high

Learning Curves

learning curve: y-axis: error, x-axis: training set size

m small: training error low, cv error high

m large: training error rises and cv error falls, and the two curves converge

If a learning algorithm suffers from high bias, increasing the training set size will not help much; if it suffers from high variance, more data is likely to help.
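A sketch of computing a learning curve (illustrative only; np.polyfit stands in for the course's regularized linear regression, and the squared-error cost is used):

```python
import numpy as np

def learning_curve(X, y, Xcv, ycv, degree):
    """Training and cross-validation error as the training set grows.

    Cost: J = (1/2m) * sum((h(x) - y)^2). Assumes X, y are already shuffled."""
    train_err, cv_err = [], []
    for m in range(degree + 2, len(X) + 1):
        p = np.poly1d(np.polyfit(X[:m], y[:m], degree))  # fit the first m examples
        train_err.append(np.mean((p(X[:m]) - y[:m]) ** 2) / 2)
        cv_err.append(np.mean((p(Xcv) - ycv) ** 2) / 2)
    return np.array(train_err), np.array(cv_err)
```

Plotting both arrays against m reproduces the curves described above: under high bias the two errors plateau close together at a high value; under high variance a large gap remains between them.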


Prioritizing What to Work On

e.g. Build a spam classifier

How to spend time to make the model better?

  • Collect lots of data
  • Develop more sophisticated features based on email routing information
  • Develop more features based on the message body
  • Develop algorithms to detect deliberate misspellings

Error Analysis

  1. Start with a simple model and test it on the cv data
  2. Plot learning curves to decide whether more data or more features would help
  3. Error analysis: manually examine the examples the model got wrong

Error Metrics for Skewed Classes

skewed classes: one class has far more examples than the other(s)

better way to examine whether a model is performing well:

  • Precision: True positive/# of predicted positive
  • Recall: True positive/# of actual positive

Trading Off Precision and Recall

Raising the prediction threshold gives higher precision but lower recall (and lowering it does the opposite).

How to choose a good trade-off:

F1 Score: 2PR / (P + R)
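These metrics are simple to compute directly; a small sketch (the example labels are made up):

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    predicted_pos = sum(y_pred)
    actual_pos = sum(y_true)
    precision = tp / predicted_pos if predicted_pos else 0.0
    recall = tp / actual_pos if actual_pos else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# On skewed classes, accuracy can look good while these numbers reveal the truth.
y_true = [1, 0, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
print(precision_recall_f1(y_true, y_pred))
```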

Data for Machine Learning




1.1.1 Basic structure of a Java program
To execute a Java program, first compile it with the javac command, then run it with the java command. For example, to run BinarySearch, first type javac BinarySearch.java (this generates a file called BinarySearch.class containing the program's Java bytecode); then type java BinarySearch (followed by a whitelist file name) to hand control to that bytecode.
1.1.2 Primitive data types and expressions
Operator precedence: *, /, and % bind more tightly than + and -; ! binds more tightly than &&, which binds more tightly than ||.
Type conversion: if no information would be lost, a numeric value is automatically promoted to the higher-level type. Converting double to int truncates the fractional part rather than rounding, so (int) 3.7 is 3.
1.1.3 Statements
1.1.4 Shortcut notations
1.1.5 Arrays: creating and initializing an array
  1. Declare the array's name and type
  2. Create the array: specify its length (the number of elements) with the keyword new
  3. Initialize the array elements

double[] a;                    // declare the array a
a = new double[N];             // create the array a
for (int i = 0; i < N; i++)    // initialize
    a[i] = 0.0;
double[] a = new double[N];    // shorthand: declare + create + default initial values
int[] a = {1, 1, 2, 3, 5, 8};  // declare + create + initialize with given values

Using arrays: the default initial value of every boolean element is false, and an array's size is fixed once it is created.
Aliasing: assigning one array variable to another copies a reference rather than the contents. To copy an array (the same goes for two-dimensional arrays), declare, create, and initialize a new array, then assign each value of the original to the new one.
1.1.6 Static methods
Signature + body
Signature = public static + return type + method name + typed parameters
Rules for writing recursive code:
  1. There is always a simplest base case: the first statement is always a conditional containing a return.
  2. Each recursive call must address a smaller subproblem, so that the recursion converges to the base case.
  3. The parent problem and the subproblems it solves must not overlap.
A class declaration is public class + class name + { static methods }.
The file that stores the class has the same name as the class, with the .java extension.
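The conventions above (class declaration, static-method signature, recursion rules, file naming) come together in a minimal version of the BinarySearch program mentioned in 1.1.1 (a sketch, not the book's full implementation):

```java
public class BinarySearch {
    // Signature: public static + return type + method name + typed parameters.
    public static int rank(int key, int[] a) {
        return rank(key, a, 0, a.length - 1);
    }

    private static int rank(int key, int[] a, int lo, int hi) {
        // Base case first: a conditional containing a return.
        if (lo > hi) return -1;
        int mid = lo + (hi - lo) / 2;
        // Each call addresses a strictly smaller subproblem.
        if      (key < a[mid]) return rank(key, a, lo, mid - 1);
        else if (key > a[mid]) return rank(key, a, mid + 1, hi);
        else                   return mid;
    }

    public static void main(String[] args) {
        int[] whitelist = {1, 2, 3, 5, 8, 13};
        System.out.println(rank(5, whitelist)); // prints 3
    }
}
```

Saved as BinarySearch.java, it compiles with javac BinarySearch.java and runs with java BinarySearch, exactly as described in 1.1.1.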