Testing

Of course we'd have to test our neural network.

First we load up the testing data:

testImgs, err := readImageFile(os.Open("t10k-images.idx3-ubyte"))
if err != nil {
	log.Fatal(err)
}

testlabels, err := readLabelFile(os.Open("t10k-labels.idx1-ubyte"))
if err != nil {
	log.Fatal(err)
}

testData := prepareX(testImgs)
testLbl := prepareY(testlabels)
shape := testData.Shape()
visualize(testData, 10, 10, "testData.png")

In the last line, we visualize the test data and write it out to testData.png, so we can check that we do indeed have the correct dataset.

Then we have the main testing loop. Do observe that it's extremely similar to the training loop. That's because it's the same neural network!

var correct, total float32
numExamples = shape[0]
batches = numExamples / bs
for b := 0; b < batches; b++ {
	start := b * bs
	end := start + bs
	if start >= numExamples {
		break
	}
	if end > numExamples {
		end = numExamples
	}

	var oneimg, onelabel tensor.Tensor
	if oneimg, err = testData.Slice(sli{start, end}); err != nil {
		log.Fatalf("Unable to slice images (%d, %d): %v", start, end, err)
	}
	if onelabel, err = testLbl.Slice(sli{start, end}); err != nil {
		log.Fatalf("Unable to slice labels (%d, %d): %v", start, end, err)
	}
	if err = oneimg.(*tensor.Dense).Reshape(bs, 1, 28, 28); err != nil {
		log.Fatalf("Unable to reshape: %v", err)
	}

	gorgonia.Let(x, oneimg)
	gorgonia.Let(y, onelabel)
	if err = vm.RunAll(); err != nil {
		log.Fatalf("Predicting (%d, %d) failed: %v", start, end, err)
	}
	label, _ := onelabel.(*tensor.Dense).Argmax(1)
	predicted, _ := m.outVal.(*tensor.Dense).Argmax(1)
	lblData := label.Data().([]int)
	for i, p := range predicted.Data().([]int) {
		if p == lblData[i] {
			correct++
		}
		total++
	}
}

fmt.Printf("Correct/Totals: %v/%v = %1.3f\n", correct, total, correct/total)


One difference is in the following snippet:

label, _ := onelabel.(*tensor.Dense).Argmax(1)
predicted, _ := m.outVal.(*tensor.Dense).Argmax(1)
lblData := label.Data().([]int)
for i, p := range predicted.Data().([]int) {
	if p == lblData[i] {
		correct++
	}
	total++
}

In the previous chapter, we wrote our own argmax function. Gorgonia's tensor package actually provides a handy Argmax method for doing just that. But in order to understand what is going on, we first need to look at the shape of the results.

The shape of m.outVal is (N, 10), where N is the batch size. onelabel has the same shape. (N, 10) means N rows of 10 columns. What can these 10 columns be? Well, of course, they're the one-hot encoded digits! So what we want is to find, for each row, the column holding the maximum value. Columns lie along axis 1 (axes are counted from zero, with rows as axis 0). Hence when calling .Argmax(), we specify 1 as the axis.

Therefore, the result of the .Argmax() calls will have shape (N): one predicted class index per row. For each position in that vector, if lblData and predicted hold the same value, we increment the correct counter. This gives us a way to measure accuracy.
