
Moving Away from Dual-Encoder Setup - New Findings + An Interactive Demo!

CireNeikual
2 weeks, 4 days ago
Hi all,

So, this past month there were two major developments for AOgmaNeo.

First, there is a new interactive demo hosted on the Ogma website that you can try. I compiled AOgmaNeo with Emscripten (WebAssembly), and it runs quite nicely in the browser. The demo started as a real robot, which was then turned into a web demo by learning a "simulator" of it with AOgmaNeo (simply by observing its responses to motor commands through a camera). It's a fun demo that showcases the world-modeling capabilities of AOgmaNeo; give it a try! I called it "real2sim", a reversal of the more common "sim2real" paradigm in machine learning. It still uses the dual-encoder setup discussed in previous blog posts, but we have since found that this is not needed, which leads to the next point.

Second, new findings show that the dual-encoder setup is actually not strictly necessary. With the inclusion of a new "importance" setting for each IO layer, one can now manually set how much the hierarchy pays attention to certain inputs. This was previously done automatically by the error-driven encoder, but a simple manual setting is more general and can handle tasks the error-driven encoder could not. So, I have released a new version of AOgmaNeo that goes back to the faster single-encoder setup, but includes a new function called "setImportance" (there is also "getImportance") that allows the user to control how important an input is to the encoders.
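
Usage looks roughly like this from C++ (a simplified sketch; the include path, exact signatures, and the indices/values below are illustrative, so check the repository for the real setup):

```cpp
#include <aogmaneo/Hierarchy.h>

// Sketch: assumes a hierarchy that was already initialized elsewhere with
// (at least) two IO layers, e.g. 0 = camera input, 1 = motor commands.
void weightInputs(aon::Hierarchy &h) {
    // Down-weight the large camera input so the small motor-command
    // input is not drowned out (indices and values are illustrative).
    h.setImportance(0, 0.5f);
    h.setImportance(1, 1.0f);

    // Read back the current setting for an IO layer.
    float camImportance = h.getImportance(0);
    (void)camImportance; // silence unused-variable warning in this sketch
}
```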

Thinking about what's next, I may want to change the allocator used in AOgmaNeo. The memory it uses is static (minus some minor things in the interface), and is heap-allocated when init is called. I feel like I can just use a memory arena here; although it will probably not make much of a difference, it feels like a cleaner solution. I am also working on a new branch that introduces a new algorithmic optimization I call "the topology optimization". This optimization has the side benefit of being more cache-friendly, due to its simpler memory access pattern.
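
For reference, the arena idea is basically a bump allocator: grab one big block up front, hand out aligned slices of it, and free everything at once. A minimal sketch of the concept (not the actual AOgmaNeo code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Minimal bump/arena allocator sketch. One heap block is grabbed up front;
// allocations just bump an offset, and everything is released at once.
class Arena {
public:
    explicit Arena(std::size_t capacity)
        : base(static_cast<unsigned char*>(std::malloc(capacity))),
          capacity(capacity), offset(0) {}

    ~Arena() { std::free(base); }

    Arena(const Arena&) = delete;
    Arena& operator=(const Arena&) = delete;

    // Hand out `size` bytes aligned to `alignment` (must be a power of two).
    void* allocate(std::size_t size,
                   std::size_t alignment = alignof(std::max_align_t)) {
        std::size_t aligned = (offset + alignment - 1) & ~(alignment - 1);
        assert(aligned + size <= capacity && "arena exhausted");
        offset = aligned + size;
        return base + aligned;
    }

    // No per-object free; reuse the whole arena by resetting the offset.
    void reset() { offset = 0; }

private:
    unsigned char* base;
    std::size_t capacity;
    std::size_t offset;
};
```

All of the hierarchy's buffers would then be carved out of one such arena when init is called, instead of each being a separate heap allocation.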

That's all for now!
