Here is our first update!

We have been working on some demos for AOgmaNeo. The two currently of most interest to us are:

- A new version of the "learning to walk faster" demo with a different (custom) quadruped robot with brushless motors (and of course the latest version of AOgmaNeo)
- A robotic rat that must solve classic rat maze tasks (made of cardboard). Here we are using a new version of our "smallest self-driving car" as the "rat"

These are progressing well. We are also, of course, experimenting with new things in the AOgmaNeo software itself. For instance, we have discovered a new encoder that exploits the topology of the input by using 1D distributed self-organizing maps (SOMs). We have researched topology-preserving encoders before, but this is the first time we have gotten one to work efficiently.

The idea is to have each column be a 1D self-organizing map. Each column is also assigned a "priority" (which can be assigned randomly). First, the SOMs with the highest priority activate and learn from the current input. Then, their combined reconstruction is subtracted from the input, and the columns with the next-lower priority activate and learn on the residual. This repeats until all columns have activated.
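To make the procedure concrete, here is a minimal NumPy sketch of how we understand it. All names, sizes, and the exponential neighborhood falloff are our own illustrative assumptions, not the actual AOgmaNeo implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

num_columns = 4        # hidden columns, each a 1D SOM
cells_per_column = 8   # SOM cells per column
input_size = 16
lr = 0.1

# Each column is a 1D SOM: one prototype vector per cell.
weights = rng.random((num_columns, cells_per_column, input_size))

# Random priority per column; higher-priority columns activate first.
priorities = rng.permutation(num_columns)

def encode(x, learn=True):
    residual = x.astype(float).copy()
    winners = np.zeros(num_columns, dtype=int)
    # Visit columns from highest to lowest priority.
    for col in np.argsort(-priorities):
        # Activation: the cell whose prototype is closest to the residual wins.
        dists = np.linalg.norm(weights[col] - residual, axis=1)
        w = int(np.argmin(dists))
        winners[col] = w
        if learn:
            # Standard SOM update with a 1D neighborhood around the winner.
            for c in range(cells_per_column):
                strength = np.exp(-abs(c - w))  # assumed neighborhood falloff
                weights[col, c] += lr * strength * (residual - weights[col, c])
        # Subtract this column's reconstruction from the input before
        # lower-priority columns activate and learn.
        residual -= weights[col, w]
    return winners
```

The returned `winners` array (one active cell index per column) is the sparse encoding; the residual subtraction is what forces lower-priority columns to model what the higher-priority ones missed.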

With this method, we can adapt the classic SOM to produce a CSDR encoding. In addition, we found that a weight from each hidden cell to each input cell in its receptive field is no longer needed. Instead, one weight per input column is sufficient: the index of the input column's active cell is scaled into the [0, 1] range and compared to the SOM weight. We can do this because SOMs preserve topology, meaning similar input cell indices will result in similar output cell indices.
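The per-column comparison above can be sketched as follows. This is a hypothetical illustration with made-up names and sizes, showing one hidden SOM column reading a CSDR input (one active cell index per input column):

```python
import numpy as np

rng = np.random.default_rng(1)

num_input_columns = 6
input_cells = 16       # cells per input column
hidden_cells = 8       # cells in one hidden SOM column

# One scalar weight per (hidden cell, input column) pair, instead of a
# full weight vector over every input cell in the receptive field.
weights = rng.random((hidden_cells, num_input_columns))

def activate(input_csdr):
    # Scale each input column's active cell index into [0, 1].
    scaled = input_csdr / (input_cells - 1)
    # Compare the scaled indices against each hidden cell's per-column
    # weights; the hidden cell with the smallest total distance wins.
    dists = np.abs(weights - scaled).sum(axis=1)
    return int(np.argmin(dists))

input_csdr = rng.integers(0, input_cells, size=num_input_columns)
winner = activate(input_csdr)
```

Since each weight only needs to represent a position in [0, 1] at the resolution of a cell index, it could be stored as a single byte rather than a float, which is where the memory savings mentioned below come from.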

This new encoder addresses two main issues:

- The issue of "dead units" (unused cells)
- The issue of high memory usage (since we only need one byte per input column now, not several floats)

AOgmaNeo also runs even faster than before when using this new encoder, presumably because the new per-column weights are more cache-friendly (and less memory needs to be accessed overall).

However, there are currently still some downsides:

- Seems to learn a bit slower than the original (ESR) encoder
- Performs a bit worse on certain tasks

We will see if we can rectify these, but overall it seems to be on the better end of the encoders we have developed.

Concluding with a shot of the rat robot!