Assignment 14

Disclaimer: This thread was imported from the old forum, so some formatting may not display correctly. The original thread begins with the second post of this thread.

Assignment 14
Hello,
I have a question regarding Problem 14.1.2.

First of all, I do not know what these methods are supposed to do; a short description would be nice.

And second, which methods are we supposed to implement? At the beginning of the problem description, we are told that all necessary classes are in the subfolder. Hence, we should implement ConstantBias and RandomWeight. But when I took a look at the structure, it seemed to me that we are supposed to implement BiasFillers and WeightFillers.

An idea regarding this topic would be nice =)


This is what I assume:

When you first create the neural network you have to initialize each unit. WeightFiller and BiasFiller are just interfaces, so no changes are needed there (also, we should only change files in the folder with our name in it, as you mentioned). When initializing the network you don't know anything about what you are going to learn, so you only have two choices for supplying values: either random or constant. Given the names of the two files, you can derive what you should do in each of them.
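
For illustration, the two interfaces could look roughly like this (a minimal sketch; the method name fill is my assumption, the real stubs may differ):

```python
from abc import ABC, abstractmethod

class WeightFiller(ABC):
    """Interface: supplies initial values for a layer's weights."""
    @abstractmethod
    def fill(self, n):
        """Return n initial weight values."""

class BiasFiller(ABC):
    """Interface: supplies initial values for a layer's biases."""
    @abstractmethod
    def fill(self, n):
        """Return n initial bias values."""
```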

1 Like

How are the weights ordered?
I keep getting wrong results in the fully connected layer test. I have two questions:

  1. How are the weights ordered in self.weights? I assume that they are ordered like this: neuron0Weight0, neuron0Weight1, neuron0Weight2, neuron1Weight0, neuron1Weight1, neuron1Weight2… assuming a layer with 3 neurons in the previous layer and 5 in the current one.

  2. According to slide 878, all I have to do is sum the inputs weighted by their weights and then apply the activation function. Am I missing something here? For me the first value of the result blob is correct, but the remaining ones are not.

Any hints appreciated :smiley:

1 Like

Ok, let me clarify a bit:

Only change the files in the subfolder with your name.

Task 1 asks you to implement the tanh activation function. In the backpropagation algorithm (slide 892) you need both the activation function (to propagate the inputs forward) and its derivative (to propagate the deltas backward). Hence you are asked to implement these two methods.
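
As a rough illustration (function names are mine, not the framework's), the two methods boil down to tanh and its derivative 1 - tanh(x)^2:

```python
import math

def tanh(x):
    # forward: squashes x into (-1, 1)
    return math.tanh(x)

def tanh_derivative(x):
    # d/dx tanh(x) = 1 - tanh(x)^2; needed to propagate deltas backward
    t = math.tanh(x)
    return 1.0 - t * t
```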

Task 2 requires you to implement the WeightFiller and BiasFiller abstract classes, instantiating them with random weights and a constant bias. These are needed to initialize the network's bias nodes and edge weights before training. The relevant methods can be found in the files ConstantBias.py and RandomWeight.py.
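
A minimal sketch of the two fillers, assuming a single fill method and a uniform initialization interval (both are my assumptions; match the actual stubs in ConstantBias.py and RandomWeight.py):

```python
import random

class ConstantBias:
    """Fill every bias with the same constant value."""
    def __init__(self, value=0.0):
        self.value = value

    def fill(self, n):
        return [self.value] * n

class RandomWeight:
    """Fill every weight with a small random value."""
    def __init__(self, low=-0.5, high=0.5):
        self.low, self.high = low, high

    def fill(self, n):
        return [random.uniform(self.low, self.high) for _ in range(n)]
```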

Task 3 demands that you implement the loss function needed to compute the deltas. In this case we use the Euclidean (or squared-error) loss (see slide 889). Strictly speaking we only need the derivative, but here you are asked to implement both the loss function itself and its derivative. The relevant file is EuclidianLoss.py.
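
Conceptually (the actual signatures in EuclidianLoss.py may differ, and the sign convention should be checked against slide 889), the loss and its derivative with respect to a single output are:

```python
def euclidean_loss(outputs, targets):
    # L = 1/2 * sum_j (t_j - o_j)^2
    return 0.5 * sum((t - o) ** 2 for o, t in zip(outputs, targets))

def euclidean_loss_derivative(output, target):
    # dL/do_j = o_j - t_j; this feeds the deltas of the output layer
    return output - target
```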

Task 4: we want to use a separate activation function for the output layer, so here your task is to implement a linear activation function. Again we need both the function itself and its derivative, in the file LinearActivation.py.
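
Again only a sketch of the math, not the required signatures:

```python
def linear(x):
    # identity activation for the output layer
    return x

def linear_derivative(x):
    # the derivative of the identity function is constant 1
    return 1.0
```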

Finally, Tasks 5 and 6 ask you to implement the three subalgorithms of backpropagation learning: forward propagation of the inputs, backward propagation of the deltas, and updating of the weights. As the framework treats networks modularly, as combinations of an input layer, an output layer, and arbitrarily many fully connected hidden layers, you have to implement each of these subalgorithms modularly as well, using appropriate functions for each kind of layer. The input layer is just another fully connected layer, so you have to implement the functions backward, forward and updateWeightsAndBias in both FullyConnected.py and OutputLayer.py.
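
To make the three subalgorithms concrete, here is a self-contained numpy sketch for a single fully connected layer. This is not the framework's code: the assignment works on Blob objects, and the stubs in FullyConnected.py and OutputLayer.py dictate the real signatures. All names here are illustrative.

```python
import numpy as np

class FullyConnectedSketch:
    def __init__(self, n_in, n_out, g, g_prime):
        # weights[j, i] connects input i to neuron j; one bias weight per neuron
        self.weights = np.random.uniform(-0.5, 0.5, (n_out, n_in))
        self.bias = np.zeros(n_out)
        self.g, self.g_prime = g, g_prime  # activation and its derivative

    def forward(self, inputs):
        # forward propagation: weighted sum plus bias, then activation
        self.inputs = np.asarray(inputs, dtype=float)
        self.net = self.weights @ self.inputs + self.bias
        return self.g(self.net)

    def backward(self, next_deltas, next_weights):
        # backward propagation of deltas (hidden-layer case, slide 892):
        #   delta_j = g'(net_j) * sum_k w_kj * delta_k
        self.deltas = self.g_prime(self.net) * (next_weights.T @ next_deltas)
        return self.deltas

    def updateWeightsAndBias(self, eta):
        # gradient step; the bias acts like a weight on a constant input of 1
        self.weights -= eta * np.outer(self.deltas, self.inputs)
        self.bias -= eta * self.deltas
```

For the output layer, backward differs only in where the deltas come from: delta_j is g'(net_j) times the derivative of the loss with respect to output j, instead of the weighted sum over the next layer's deltas.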


Correct.

Have you taken the bias into account?


Thank you for your response.

  1. No, I have not taken the bias into account, because I did not know how to do it. Should it just be a threshold for each of the output values? Something like this?
if bigger than bias then return value else return 0

This would not explain why my values are off by so little. For example, the first output of the failing test case: expected 0.3884727, calculated 0.4300842114019795.

  2. Also regarding the bias: the last subtask asks us to implement the function updateWeightsAndBias(). I did not find a slide mentioning how to update the bias.

  3. Is the inputBlob in updateWeightsAndBias() the same one that is put into forward()?

  4. Is it possible to get points for this assignment if some of the tests fail?

Thank you in advance!


The weights do not contain w_0,j from the slides, i.e. the bias node. Instead, this is what you implemented in task 2 of the assignment (ConstantBias). If you check the init constructor of the FullyConnected class, you can see that it takes a BiasFiller. For the tests, in the file TestAll.py, you can see that in line 64 the fully connected layer is initialized with the constant bias you implemented ("self.impl.cb()"). So you want to extract the correct bias weight (hint: the bias blob has a getValue method…) and process it in the right way (the details of which I leave to you to solve).
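
To connect this with the weight ordering confirmed above, here is a sketch of one neuron's forward value. The flat indexing matches the ordering you described; the getValue call and its argument are my assumptions about the Blob API, so check the actual class:

```python
def neuron_output(j, inputs, weights, bias_blob, g):
    n_in = len(inputs)
    # flat ordering: weights[j * n_in + i] connects input i to neuron j
    net = sum(weights[j * n_in + i] * inputs[i] for i in range(n_in))
    net += bias_blob.getValue(j)  # assumption: getValue takes the neuron index
    return g(net)                 # the bias is added into the sum, not a threshold
```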

Again, the bias weight is just w_0 on the slides, so the same update algorithm applies to it as to all the other weights.
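
In other words (a sketch; eta is the learning rate, and I assume the bias input is the constant 1 — if your slides use -1, flip the sign accordingly):

```python
# ordinary weight from input i to neuron j:   w_ji <- w_ji - eta * delta_j * input_i
# bias weight of neuron j (input fixed to 1): b_j  <- b_j  - eta * delta_j
def update(w_ji, b_j, eta, delta_j, input_i):
    return w_ji - eta * delta_j * input_i, b_j - eta * delta_j
```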

That depends ^^. The weights are updated "forward" in this implementation (nothing hinges on this order, though; "backward" or any other order would also be possible, since the deltas have been precomputed). So the initial inputBlob for both forward() and updateWeightsAndBias() is the input data, and after that it is the result of the forward function applied to the previous layer. Also look at TestAll.py and Network.py for the exact workings of the code.
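
Schematically, that data flow looks like the loop below (my sketch of what Network.py presumably does, not its literal code):

```python
def forward_pass(layers, input_data):
    blob = input_data
    for layer in layers:
        # each layer receives the previous layer's output as its inputBlob
        blob = layer.forward(blob)
    return blob  # output of the last layer, fed into the loss
```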

Yes, cf. the assignment sheet.


Thank you again. I got it to work now :smiley: