Since the last post we have been performing tons of experiments with various improvements to the dual-encoder setup.
Most didn't work, but some made it into the upcoming version of AOgmaNeo.

Importantly, there is now a guide in the AOgmaNeo repository that provides a brief overview of what AOgmaNeo is and what it does.
It doesn't cover code usage yet; instead, it describes the algorithm. For code usage, the examples remain the main resource at the moment.

We also trained reinforcement learning (RL) agents in the DonkeyCar simulator (Website), and they drove around the track quite nicely. Here is the "imagination" of the RL agent creating its own little simulation of the environment:

Finally, we made a Ludum Dare 48 entry that uses AOgmaNeo to control enemies. There wasn't enough time to really get it working well (the creature generation often produced immovable creatures), but it was fun regardless! Ludum Dare Link