Hello!

Recently I have been really buckling down on getting error-driven encoder/decoder pairs to work. There are currently two variants that seem promising: one similar to what I already had but with some minor (but important) modifications, and another that uses feedback alignment (paper here).
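
For those unfamiliar with feedback alignment, here is a minimal toy sketch of the core idea (my own illustration, not AOgmaNeo code, and all names are made up): the backward pass uses a fixed random feedback matrix in place of the transposed forward weights, and the forward weights still learn because they gradually align with that feedback.

```python
# Toy feedback alignment on a tiny regression problem (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Learn y = sin(x) with one hidden layer.
X = rng.uniform(-3.0, 3.0, size=(256, 1))
Y = np.sin(X)

W1 = rng.normal(0.0, 0.5, size=(1, 32))
W2 = rng.normal(0.0, 0.5, size=(32, 1))
B = rng.normal(0.0, 0.5, size=(1, 32))  # fixed random feedback, never trained

lr = 0.01

for step in range(2001):
    h = np.tanh(X @ W1)   # forward pass
    y_pred = h @ W2
    err = y_pred - Y      # output error

    # Backward pass: B replaces W2.T (this is the feedback alignment trick).
    delta_h = (err @ B) * (1.0 - h * h)

    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

    if step % 500 == 0:
        print(f"step {step}: mse = {np.mean(err ** 2):.4f}")
```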

Error-driven encoders/decoders promise much better compression than the older ones, as they compress only the information needed to reduce prediction error rather than everything in the input.
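
Roughly speaking (this is a dense toy illustration of the principle, not AOgmaNeo's actual sparse, columnar implementation, and every name in it is made up): a reconstruction-driven encoder gets its learning signal from reproducing the entire input, whereas an error-driven encoder gets it only from the error of the decoder/predictor that consumes its output, so input features that don't help the prediction are free to be discarded.

```python
# Contrast between reconstruction-driven and error-driven encoder updates
# (dense toy version with made-up names; AOgmaNeo itself is sparse and columnar).
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=16)      # input
target = rng.normal(size=4)  # what the decoder should predict

W_enc = rng.normal(0.0, 0.1, size=(16, 8))
W_dec = rng.normal(0.0, 0.1, size=(8, 4))
W_rec = rng.normal(0.0, 0.1, size=(8, 16))  # only the reconstruction variant needs this

h = np.tanh(x @ W_enc)

# Reconstruction-driven: the encoder's learning signal comes from reproducing all of x.
rec_err = h @ W_rec - x
enc_grad_reconstruction = np.outer(x, (rec_err @ W_rec.T) * (1.0 - h * h))

# Error-driven: the encoder's learning signal comes only from the decoder's
# prediction error, so parts of x that don't help the prediction get dropped.
pred_err = h @ W_dec - target
enc_grad_error_driven = np.outer(x, (pred_err @ W_dec.T) * (1.0 - h * h))

W_enc -= 0.1 * enc_grad_error_driven  # the error-driven variant updates on task error only
```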

With respect to AOgmaNeo as a whole, some minor changes include slightly reduced memory consumption (useful for embedded devices), achieved by changing the way the receptive fields of columns project onto other layers.
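
The general trick is that each column only stores weights for the cells inside its projected receptive field. The sketch below is just a guess at what such a projection looks like (hypothetical helper, not the actual change): memory per column then scales with the field size rather than with the size of the layer it projects onto.

```python
# Sketch of projecting a column's receptive field onto another layer
# (hypothetical helper, only to illustrate why memory scales with field size).
def receptive_field(col_x, col_y, this_size, other_size, radius):
    """Project a column position in this layer onto the other layer and
    return the inclusive bounds of its receptive field there."""
    # Map the column's normalized position onto the other layer's grid.
    proj_x = int((col_x + 0.5) / this_size[0] * other_size[0])
    proj_y = int((col_y + 0.5) / this_size[1] * other_size[1])

    # Clamp the field to the edges of the other layer.
    lo = (max(0, proj_x - radius), max(0, proj_y - radius))
    hi = (min(other_size[0] - 1, proj_x + radius), min(other_size[1] - 1, proj_y + radius))
    return lo, hi

# A column only needs weights for the cells inside its field:
lo, hi = receptive_field(3, 3, this_size=(8, 8), other_size=(16, 16), radius=2)
cells = (hi[0] - lo[0] + 1) * (hi[1] - lo[1] + 1)
print(f"field {lo}..{hi}: {cells} cells instead of {16 * 16}")
```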

I also plan on working on some more teaching resources, something a bit simpler than the old whitepaper.

Some upcoming demos:
- AOgmaNeo-based Visual SLAM - allows us to map the environment for robots using vision only, with AOgmaNeo's learned representations used to identify locations (a rough sketch of the matching idea follows this list). Uses an omni-directional drive robot.
- Further experiments on the Lorcan Mini robot - we need to go faster - also trying to integrate more motions into a single AOgmaNeo hierarchy.
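
Here is a rough sketch of the location-matching idea mentioned for the Visual SLAM demo (the function names and threshold are made up, not the demo's actual code): since AOgmaNeo's encoders output one winning cell per column, two views can be compared by the fraction of columns whose winners agree, and a close enough match means the robot is revisiting a known place.

```python
# Toy location matching on AOgmaNeo-style CSDRs (hypothetical names and threshold).
import numpy as np

def csdr_similarity(a, b):
    """Fraction of columns whose winning cell indices match."""
    return np.mean(np.asarray(a) == np.asarray(b))

def match_location(current, stored_locations, threshold=0.7):
    """Return the index of the best-matching stored location, or None if
    nothing is similar enough (i.e. this looks like a new place)."""
    best_idx, best_sim = None, 0.0
    for i, stored in enumerate(stored_locations):
        sim = csdr_similarity(current, stored)
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx if best_sim >= threshold else None

# Example: the third stored view matches the current one in 3 of 4 columns.
stored = [np.array([1, 4, 0, 2]), np.array([3, 3, 1, 0]), np.array([2, 4, 0, 2])]
current = np.array([2, 4, 0, 1])
print(match_location(current, stored))  # -> 2
```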

Hopefully more interesting news soon - research into unorthodox DIY machine learning systems such as AOgmaNeo can be quite sporadic!