"Do not use" inputs, multi-step predictions and error-driven encoders

1 month, 3 weeks ago

Time for another update!

We have added a new feature to the master branch of AOgmaNeo - the ability to supply "do not use" inputs.
They are supplied by simply setting a column index to -1, which signals to the hierarchy that it should neither learn from nor activate on that input. The hierarchy will, however, still provide a prediction for the column.

This new feature can be used for cases where data is missing or known to be useless, and also allows AOgmaNeo to predict what those "missing" values should be.
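As a minimal sketch of how a caller might use this, here is a hypothetical helper that masks missing columns with -1 before handing the input to the hierarchy. This assumes an input is represented as a list of column indices (a CSDR); the helper name and surrounding code are illustrative, not part of the AOgmaNeo API.

```python
# Hypothetical helper: replace missing/unreliable column values with the
# -1 "do not use" marker before passing the CSDR to the hierarchy.
def mask_missing(column_indices, valid_flags):
    """Return a copy of column_indices with -1 wherever valid_flags is False."""
    return [ci if ok else -1 for ci, ok in zip(column_indices, valid_flags)]

# Example: the third column's value is missing this timestep.
observed = [5, 2, 7, 0]
valid = [True, True, False, True]
masked = mask_missing(observed, valid)
print(masked)  # [5, 2, -1, 0]
# The masked CSDR would then be passed to the hierarchy's step call;
# the hierarchy neither learns from nor activates on the -1 column,
# but still produces a prediction for it.
```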

We have also added new serialization features that allow in-memory serialization of both the network state and all weights. This makes multi-step prediction possible: "checkpoint" the current state, predict a few more steps, and then revert to the checkpointed state. The weights continue to update on new information, since they are not part of the "state" and are therefore not reset by the revert.
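The checkpoint-and-revert pattern can be sketched with a toy model (this is not the real AOgmaNeo API; the class, fields, and update rule are all illustrative). The key point is that only the recurrent "state" is saved and restored, while the weights persist across the rollback.

```python
import copy

# Toy model illustrating the checkpoint/rollback pattern for
# multi-step prediction. "state" is the part that gets serialized
# and restored; "weights" are NOT restored and keep learning.
class ToyPredictor:
    def __init__(self):
        self.state = 0.0    # recurrent activity: checkpointed/reverted
        self.weights = 1.0  # learned parameters: never reset

    def step(self, x, learn=True):
        self.state = 0.5 * self.state + self.weights * x
        if learn:
            self.weights += 0.01 * (x - self.state)  # toy update rule
        return self.state

model = ToyPredictor()
model.step(1.0)  # process a real observation

# "Checkpoint" the current state (stand-in for in-memory serialization).
checkpoint = copy.deepcopy(model.state)

# Predict a few steps ahead by feeding predictions back in, without learning.
rollout = [model.step(model.state, learn=False) for _ in range(3)]

# Revert to the checkpointed state; weights were untouched by the rollout
# and will continue updating when real observations arrive.
model.state = checkpoint
```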

Aside from these features, we have of course also continued researching encoders and decoders. One interesting candidate is an error-driven encoder, which learns only when the decoder produces prediction errors. This lets the encoder discard information that is not useful for the final prediction. The error-driven encoder still has several problems and currently performs worse than the encoder in the master branch, but with some changes it may eventually overtake it.
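The core idea of error-gated encoder learning can be shown with a toy sketch. Everything here is illustrative, not AOgmaNeo's actual learning rule: a scalar encoder picks a winning unit, a scalar decoder predicts from it, and the encoder's weights are updated only when the decoder mispredicted.

```python
# Toy sketch of error-driven (error-gated) encoder learning:
# the encoder adjusts its weights only when the decoder's
# prediction was wrong. Names and rules are illustrative.

def encode(x, enc_w):
    # Winner-take-all: index of the encoder weight closest to x.
    return min(range(len(enc_w)), key=lambda i: abs(enc_w[i] - x))

def train_step(x, target, enc_w, dec_w, lr=0.1):
    h = encode(x, enc_w)
    prediction = dec_w[h]
    error = target - prediction
    dec_w[h] += lr * error  # the decoder always learns
    if error != 0.0:
        # The encoder learns only on decoder misprediction, so it
        # ignores input detail that does not affect the prediction.
        enc_w[h] += lr * (x - enc_w[h])
    return prediction

enc_w = [0.0, 1.0]
dec_w = [0.0, 0.0]
train_step(0.9, 0.0, enc_w, dec_w)  # correct prediction: encoder untouched
train_step(0.9, 1.0, enc_w, dec_w)  # misprediction: encoder adapts
```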

Finally, we have continued working on our demos, including a fun new simulated robot that jumps over hurdles. More on that and our real robotic demos soon (hopefully)!