Disclaimer: This thread was imported from the old forum, so some formatting may not be displayed correctly. The original thread starts in the second post of this thread.

**Problem 10.1 OutputLayer Testcase Issue**

Hi,

I think I found a conceptual error in how the OutputLayer test case checks the [m]backward(Blob expectedOutput, Blob weightsBefore)[/m] method. This confuses me somewhat, so I want to ask how to implement it correctly.

First, have a look at the pseudocode on slide 387 (page 101 of https://kwarc.info/teaching/AI/slides-part6.pdf).

There, delta(j) and delta(i) contain a factor g'(in_i) (typo alert: I believe the first occurrence should read g'(in_j)…).

(g(x) is the activation function, g'(x) is its derivative)

in_i is defined as the weighted sum of the inputs to node i

a_i is defined as g(in_i)

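Based on those definitions, the output-layer delta from the slide's pseudocode is delta_j = g'(in_j) * (y_j - a_j) with a_j = g(in_j). A minimal sketch, assuming a sigmoid activation for g (the actual activation in the assignment may differ, and all names here are illustrative, not the assignment's API):

```java
// Illustrative sketch of the output-layer delta from the slide's pseudocode.
// Assumes g is a sigmoid; class and method names are hypothetical.
public class OutputDeltaSketch {
    // sigmoid activation g(in)
    static double g(double in) {
        return 1.0 / (1.0 + Math.exp(-in));
    }

    // derivative of the sigmoid: g'(in) = g(in) * (1 - g(in))
    static double gPrime(double in) {
        double a = g(in);
        return a * (1.0 - a);
    }

    // delta_j = g'(in_j) * (y_j - a_j), with a_j = g(in_j)
    static double delta(double in, double y) {
        return gPrime(in) * (y - g(in));
    }

    public static void main(String[] args) {
        System.out.println(delta(0.5, 1.0));
    }
}
```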
So, if we implement the method [m]backward(Blob expectedOutput, Blob weightsBefore)[/m] so that it correctly uses g'(in_i), the results are:

```
Testing OutputLayer-Backward-Method using EuclideanLoss...
Expected: -0.09868445 Calculated: -0.09867793 Calculated*0.5: -0.049338967
Expected: 0.5277899 Calculated: 0.4984116 Calculated*0.5: 0.2492058
testCaseOutputBackward success: true (Maximum allowed deviation of 0.1)
```

That made me look closer at this issue, since 0.5277899 and 0.4984116 are not really that close.

If we instead implement it so that it uses g'(a_i), which is quite simple since a_i is already contained in the [m]output[/m] Blob, we get these results:

```
Testing OutputLayer-Backward-Method using EuclideanLoss...
Expected: -0.09868445 Calculated: -0.09868445 Calculated*0.5: -0.049342226
Expected: 0.5277899 Calculated: 0.5277899 Calculated*0.5: 0.26389495
testCaseOutputBackward success: true (Maximum allowed deviation of 0.1)
```

So it seems this test wants us to use g'(a_i) = g'(g(in_i))…? Or am I missing some other point here?

(And obviously, this issue is then present in the FullyConnected class as well)
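To make the difference between the two variants concrete: for a sigmoid g, g'(in_i) and g'(a_i) = g'(g(in_i)) are genuinely different numbers, which would explain the two different "Calculated" columns. A small sketch (illustrative values only, not the actual test data):

```java
// Compares the two variants discussed above for a sigmoid activation.
// Purely illustrative; the input value 1.5 is arbitrary.
public class VariantComparison {
    static double g(double x)      { return 1.0 / (1.0 + Math.exp(-x)); }
    static double gPrime(double x) { double a = g(x); return a * (1.0 - a); }

    public static void main(String[] args) {
        double in = 1.5;
        double a  = g(in);               // a_i = g(in_i)
        System.out.println(gPrime(in));  // variant 1: g'(in_i)
        System.out.println(gPrime(a));   // variant 2: g'(a_i) = g'(g(in_i))
    }
}
```

Note that for the sigmoid specifically, g'(in_i) can still be computed from the stored activation via a_i * (1 - a_i), so having only the [m]output[/m] Blob is no reason to fall back to g'(a_i).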

OK, as I do speak German: reading through https://fsi.cs.fau.de/forum/thread/14924-Assignment-9 (the thread from one year ago), it seems that this test case is indeed wrong and should have been removed…

Sorry, I have to correct myself once again: I had actually forgotten to use the bias in one case… that changes the numbers to:

when using g’(in_i):

```
Testing OutputLayer-Backward-Method using EuclideanLoss...
Expected: -0.09868445 Calculated: -0.09867793 Calculated*0.5: -0.049338967
Expected: 0.5277899 Calculated: 0.5192411 Calculated*0.5: 0.25962055
testCaseOutputBackward success: true (Maximum allowed deviation of 0.1)
```

But still, I believe that my original point holds.

Yep, you are totally right. The whole test case only passes due to the big "allowed deviation" of 0.1, but the numbers it checks against are wrong. So if your numbers differ slightly but the test case still outputs "success", don't worry, you did everything fine.

So basically the test case still works, even though the exact numbers differ. That is why we did not remove it, although it probably should have been. However, we really should remember to change it to the correct numbers in the future.
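For reference, the reason both variants pass is simply that the gap between the quoted expected and calculated values (about 0.029 at most) is well under the 0.1 tolerance. A minimal sketch of such a tolerance check (illustrative, not the actual test code):

```java
// Sketch of a tolerance check like the one the test case apparently uses.
// Numbers below are the ones quoted in the first post.
public class ToleranceSketch {
    static boolean withinTolerance(double expected, double calculated) {
        return Math.abs(expected - calculated) <= 0.1;
    }

    public static void main(String[] args) {
        // gap is about 0.029, so the check passes despite the wrong expected value
        System.out.println(withinTolerance(0.5277899, 0.4984116));
    }
}
```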