AOgmaNeo Blog
CireNeikual
Hi,

Since the last post, AOgmaNeo has had several important updates - most notably, it now uses a dual-encoder setup.
This means that each layer in the hierarchy contains two encoders: one that is updated by minimizing reconstruction error w.r.t. the input (this one is generative), and another that is updated to minimize prediction errors (this one is discriminative).

Individually, these encoders both had problems - when one improved on some task, the other would fail. So I decided to combine these two seemingly complementary encoders into the new dual-encoder setup.
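To make the split concrete, here is a minimal linear sketch of the idea (my own illustration, not AOgmaNeo's actual columnar/CSDR implementation): one encoder's weights are updated from the reconstruction error of the input itself, while the other's are updated from the error of a downstream prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 16, 8
lr = 0.01

W_gen = rng.normal(scale=0.1, size=(n_hid, n_in))  # generative encoder
W_dis = rng.normal(scale=0.1, size=(n_hid, n_in))  # discriminative encoder
W_dec = rng.normal(scale=0.1, size=(n_in, n_hid))  # decoder/predictor

def step(x, target):
    global W_gen, W_dis, W_dec

    # Generative path: update from the input reconstruction error
    # (tied-weight autoencoder style).
    h_gen = W_gen @ x
    err_recon = x - W_gen.T @ h_gen
    W_gen += lr * np.outer(h_gen, err_recon)

    # Discriminative path: update from the downstream prediction error.
    h_dis = W_dis @ x
    pred = W_dec @ h_dis
    err_pred = target - pred
    W_dec += lr * np.outer(err_pred, h_dis)
    W_dis += lr * np.outer(W_dec.T @ err_pred, x)

    return pred
```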

It is a bit slower than it was before, but it performs a lot better on the tasks I have tested. The API has had some functions renamed, but the general usage remains more or less the same.

The user manual is also nearing completion, so that should be released soon as well - it explains things a bit more gently than before. Hopefully this makes these posts more accessible to a general audience!
CireNeikual
Hello!

Recently I have been really buckling down on getting error-driven encoder/decoder pairs to work. There are currently two variants that seem promising - one similar to what I already had but with some minor (yet important) modifications, and another that uses feedback alignment (paper here).

Error-driven encoders/decoders promise much better compression than the older ones, as they only compress what is needed instead of everything.
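For readers unfamiliar with feedback alignment (the technique from the linked paper): instead of backpropagating the error through the transpose of the forward weights, the error is projected through a fixed random feedback matrix, and the forward weights gradually come into alignment with it. A minimal two-layer numpy sketch of the generic technique (not AOgmaNeo's encoder):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
lr = 0.05

W1 = rng.normal(scale=0.1, size=(n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))  # forward weights, layer 2
B = rng.normal(scale=0.1, size=(n_hid, n_out))   # fixed random feedback

def fa_step(x, target):
    global W1, W2
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = target - y

    # Output layer: ordinary delta rule.
    W2 += lr * np.outer(e, h)

    # Hidden layer: project the error through the fixed random matrix B
    # instead of W2.T - this is the feedback alignment trick.
    e_hid = (B @ e) * (1.0 - h * h)  # tanh derivative
    W1 += lr * np.outer(e_hid, x)
    return y
```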

With respect to AOgmaNeo as a whole, some minor changes include slightly reduced memory consumption (useful for embedded devices) by changing the way the receptive fields of columns project onto other layers.

I also plan on working on some more teaching resources - something a bit simpler than the old whitepaper.

Some upcoming demos:
- AOgmaNeo-based Visual SLAM - maps the environment for robots using vision only, relying on AOgmaNeo's learned representations to identify locations. Runs on an omni-directional drive robot.
- Further experiments on the Lorcan Mini robot - we need to go faster - and integrating more motions into a single AOgmaNeo hierarchy.

Hopefully more interesting news soon - research into unorthodox DIY machine learning systems such as AOgmaNeo can be quite sporadic!
CireNeikual
Hello,

Time for another update!

Since the last blog post, we have performed many new experiments with different encoders, decoders, recurrent versions, and flatter hierarchies. Out of these, the best new systems are:

- New encoder - an ESR (exponential sparse reconstruction) encoder with single-byte weights, achieved using a few re-scaling tricks. Great for Arduino!
- New reinforcement learning decoder that performs conservative Q-learning.

The latter in particular is quite nice to have. Previously, we used a type of ACLA (Actor-Critic Learning Automaton) algorithm to perform reinforcement learning. It worked well, but it had some downsides. For instance, the "passive learning" ability of that decoder was basically a hack: it couldn't properly learn from the rewards it was provided passively, only from the actions taken. It also did not function well with epsilon-greedy exploration.

We have tried Q-learning multiple times before, but this time we found the right method for incrementally updating the Q values with sparse binary inputs. We use advantage learning (which increases the action gap) along with a simple way of performing conservative Q-learning. We also use N-step Q-learning to help smooth things out.

The conservative Q-learning removes the need to tell the system when it should "mimic" the actions it is given as opposed to learning its own. Instead, it can now learn completely passively and actually make use of the rewards it is provided.
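Here is a rough sketch of how these three pieces can fit together over sparse binary features. This is a minimal illustration under my own assumptions - the parameters alpha_al, beta_cql, and n_step are placeholders, and AOgmaNeo's actual update rule may differ:

```python
import numpy as np

n_features, n_actions = 64, 4
gamma, lr = 0.99, 0.01
alpha_al = 0.5  # advantage-learning action-gap scaling (placeholder value)
beta_cql = 0.1  # conservative penalty strength (placeholder value)
n_step = 4

W = np.zeros((n_actions, n_features))  # Q(s, a) = W[a] @ phi(s)

def update(trajectory):
    """trajectory: list of (phi, action, reward), phi sparse binary."""
    for t in range(len(trajectory) - n_step):
        phi, a, _ = trajectory[t]

        # N-step return, bootstrapped from the greedy value n steps ahead.
        g = sum(gamma**k * trajectory[t + k][2] for k in range(n_step))
        g += gamma**n_step * np.max(W @ trajectory[t + n_step][0])

        # Advantage learning: subtract a fraction of the action gap from
        # the target, widening the gap below the greedy action.
        q = W @ phi
        target = g - alpha_al * (np.max(q) - q[a])
        W[a] += lr * (target - q[a]) * phi

        # Conservative term: push every action down slightly, then push the
        # taken (data) action back up, keeping unseen actions pessimistic.
        for b in range(n_actions):
            W[b] -= lr * beta_cql * phi / n_actions
        W[a] += lr * beta_cql * phi
```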

Oh yeah, we also have two new demos since the last post!

In this demo, we trained our Lorcan Mini robot to walk with reinforcement learning, using only the IMU forward acceleration as the reward signal:

[video]

And in this one, we (approximately) stored a minute-long video, along with its audio, in an AOgmaNeo hierarchy:

[video]
CireNeikual
Hello,

Time for another update!

We have added a new feature to the master branch of AOgmaNeo - the ability to supply "do not use" inputs.
These are supplied by simply setting a column index to -1. This signals to the hierarchy that you do not want it to learn from or activate on that input. The hierarchy will, however, still provide a prediction for that column.

This new feature can be used for cases where data is missing or known to be useless, and also allows AOgmaNeo to predict what those "missing" values should be.
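A self-contained toy sketch of the convention (the frequency-count "decoder" here is just a stand-in for the real hierarchy):

```python
import numpy as np

n_columns, cells_per_column = 4, 8

# Toy per-column frequency counts standing in for the hierarchy's decoders.
counts = np.ones((n_columns, cells_per_column))

def step(input_cis, learn=True):
    """input_cis: one active-cell index per column; -1 means "do not use"."""
    predictions = []
    for col, ci in enumerate(input_cis):
        # A prediction is produced for every column, masked or not.
        predictions.append(int(np.argmax(counts[col])))
        if ci == -1:
            continue  # masked: no activation and no learning for this column
        if learn:
            counts[col, ci] += 1.0
    return predictions

# Column 2 is missing this tick; the hierarchy still fills it in.
print(step([3, 7, -1, 2]))
```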

We have also added new serialization features, which allow in-memory serialization of both the network state and all weights. This makes multi-step prediction possible: "checkpoint" the current state, predict a few more steps, and then revert to the saved state. The weights keep updating on new information throughout, since they are not part of the "state" and are therefore never reset.
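The checkpointing pattern looks roughly like this (a toy stand-in model; serialize_state/deserialize_state are illustrative names, not necessarily the exact API):

```python
import copy

class ToyPredictor:
    """Stand-in for a hierarchy: `state` is recurrent, `weight` persists."""
    def __init__(self):
        self.state = 0.0
        self.weight = 0.9

    def step(self, x):
        self.state = self.weight * self.state + x
        return self.state

    def serialize_state(self):
        return copy.deepcopy(self.state)  # snapshot of the state only

    def deserialize_state(self, blob):
        self.state = blob

model = ToyPredictor()
model.step(1.0)

# Checkpoint, roll forward a few self-fed steps, then revert the state.
checkpoint = model.serialize_state()
lookahead = [model.step(0.0) for _ in range(3)]
model.deserialize_state(checkpoint)  # state restored; any weight updates
                                     # made meanwhile would be kept
print(lookahead, model.state)
```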

Aside from these features, we have of course also been researching encoders and decoders. An interesting current candidate is an error-driven encoder, which only learns when the decoder makes prediction errors. This allows the encoder to discard information that is not useful for the final prediction. Currently, this encoder still has several problems and performs worse than what is in the master branch. However, with some changes it may eventually overtake the existing encoder.
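One plausible reading of "only learns when prediction errors occur", as a toy sketch (the threshold and the reconstruction-style update are my assumptions, not the actual rule):

```python
import numpy as np

error_threshold = 0.1  # assumed gating threshold, not from the post

def encoder_update(W_enc, x, h, pred_error, lr=0.01):
    """Adapt the encoder only when the decoder actually erred."""
    if np.max(np.abs(pred_error)) < error_threshold:
        return W_enc  # prediction was fine: leave the code stable
    # Otherwise nudge the encoder so the current input is represented better
    # (a simple reconstruction-style update, standing in for the real rule).
    return W_enc + lr * np.outer(h, x - W_enc.T @ h)

# Example: a zero-error step leaves the encoder unchanged.
rng = np.random.default_rng(0)
W = 0.1 * rng.random((4, 6))
x, h = rng.random(6), rng.random(4)
W = encoder_update(W, x, h, pred_error=np.zeros(6))
```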

Finally, we have continued working on our demos, including a fun new simulated robot that jumps over hurdles. More on that and our real robotic demos soon (hopefully)!
CireNeikual
Here is our first update!

We have been working on some demos for AOgmaNeo. The two currently of most interest to us are:

- A new version of the "learning to walk faster" demo with a different (custom) quadruped robot with brushless motors (and of course the latest version of AOgmaNeo)
- A robotic rat that must solve classic rat maze tasks (made of cardboard). Here we are using a new version of our "smallest self-driving car" as the "rat"

These are progressing well. We are also of course experimenting with new things w.r.t. the AOgmaNeo software itself. For instance, we have discovered a new encoder that exploits the topology of the input by using 1D distributed self-organizing maps. We have researched topology-preserving encoders before, but this is the first time we have gotten one to work efficiently.

The idea is to have each column be a 1D self-organizing map (SOM). Each column is also assigned a "priority" (which can be assigned randomly). First, the SOMs with the highest priority activate and learn from the current input. Then their combined reconstruction is subtracted from the input, and the columns with the next-lower priority activate and learn. This repeats until all columns are active.

With this method, we can adapt the classic SOM to a CSDR encoding. In addition, we found that one no longer needs a weight from each hidden cell to each input cell in its receptive field. Instead, one weight per input column is sufficient: the index of the active cell in each input column is scaled into the [0, 1] range and compared against the SOM weight. We can do this because SOMs preserve topology, meaning similar input cell indices result in similar output cell indices.
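Here is a minimal numpy sketch of the priority/residual scheme, using dense inputs for simplicity (the per-input-column scalar-weight trick described above is omitted, and a fixed random permutation stands in for the priorities):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in = 12      # input dimensionality (dense here for simplicity)
n_columns = 4  # hidden columns, each a 1D SOM
cells = 6      # cells (SOM nodes) per column
lr, sigma = 0.1, 1.0

# Each column holds `cells` prototype vectors over the input.
W = rng.random((n_columns, cells, n_in))
priority = rng.permutation(n_columns)  # fixed random priority order

def encode(x, learn=True):
    residual = x.copy()
    winners = np.zeros(n_columns, dtype=int)
    for col in priority:  # highest-priority column goes first
        dists = np.linalg.norm(W[col] - residual, axis=1)
        win = int(np.argmin(dists))
        winners[col] = win
        if learn:
            # Classic 1D SOM neighborhood update around the winning cell.
            d = np.arange(cells) - win
            nb = np.exp(-(d * d) / (2.0 * sigma * sigma))
            W[col] += lr * nb[:, None] * (residual - W[col])
        # Subtract this column's reconstruction before lower priorities run.
        residual = residual - W[col, win]
    return winners

print(encode(rng.random(n_in)))
```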

This new encoder addresses two main issues:

- The issue of "dead units" (unused cells)
- The issue of high memory usage (since we only need one byte per input column now, not several floats)

AOgmaNeo also runs even faster than before when using this new encoder, presumably since the new per-column weights are more cache friendly (and less memory needs to be accessed).

However, there are currently still some downsides:

- Seems to learn a bit slower than the original (ESR) encoder
- Performs a bit worse on certain tasks

We will see if we can rectify these, but overall it seems to be on the better end of encoders we have developed.

Concluding with a shot of the rat robot!