Invariance learning with neural maps

Humans have the ability to quickly and reliably recognize visual objects, regardless of the viewing angle or distance from which the object is observed. For computers, however, this task was long a challenge. Pattern matching algorithms can be used to recognize images. But if an object is viewed from a different angle or distance, it generates a completely different light pattern on the retina or on a camera chip ("pixels"). The ability to robustly recognize objects despite variations in viewing angle, distance, or lighting conditions is called "invariant object recognition."

Spatial frequencies in the visual system

The fact that nerve cells in the primary visual cortex respond selectively to the orientation (angle) of brightness edges has been known since the work of Hubel and Wiesel (1962). It was later discovered that these nerve cells also respond to stripe patterns, and that their response depends on the spacing of the stripes and thus on the "spatial frequency" (the reciprocal of the period length).
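A minimal sketch can make these two ideas concrete: a stripe pattern with a given spatial frequency (the reciprocal of its period) and a toy "neuron" modeled as a linear matched filter, whose response is largest when the stimulus matches its preferred orientation and spatial frequency. The function names and parameters here are illustrative, not from any particular library.

```python
import numpy as np

def grating(size, period, angle_deg):
    """Sinusoidal stripe pattern with the given period (in pixels) and orientation."""
    y, x = np.mgrid[0:size, 0:size]
    theta = np.deg2rad(angle_deg)
    # Coordinate along the axis perpendicular to the stripes
    u = x * np.cos(theta) + y * np.sin(theta)
    # Spatial frequency is the reciprocal of the period length
    f = 1.0 / period
    return np.sin(2 * np.pi * f * u)

def response(stimulus, preferred_period, preferred_angle):
    """Toy linear 'neuron': correlation of the stimulus with its preferred grating."""
    template = grating(stimulus.shape[0], preferred_period, preferred_angle)
    return float(np.mean(stimulus * template))

# A vertical grating repeating every 8 pixels (spatial frequency 1/8 cycles per pixel)
stim = grating(size=64, period=8, angle_deg=0)

# The matched cell responds strongly; a cell preferring the orthogonal
# orientation responds hardly at all -- a caricature of orientation tuning.
r_matched = response(stim, preferred_period=8, preferred_angle=0)
r_orthogonal = response(stim, preferred_period=8, preferred_angle=90)
```

In this sketch `r_matched` comes out near 0.5 (the mean of a squared sine) while `r_orthogonal` is near zero, mimicking the orientation and spatial-frequency selectivity described above.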

Motivation for this AI blog

The topic of "Artificial Intelligence" – AI – will have drastic impacts on all aspects of life in the coming years. It has the potential to take on a significant amount of work for us humans, making us more productive and leaving more time for the enjoyable things in life. However, AI can also destroy lives and limit freedoms, for example when it is used in weapons systems or to build totalitarian surveillance states. It is not the AI itself that produces these negative consequences, but rather the human users of these AI systems.