Random Projection and the Assembly Hypothesis
Santosh Vempala (Georgia Institute of Technology)
It is well known that random linear features suffice to approximate any kernel, so Random Projection (RP) provides a simple and efficient way to implement the kernel trick. For supervised learning, it yields sample complexity bounds that depend only on the separation between categories. But how does the brain learn? It has been hypothesized, since Hebb, that the basic unit of memory and computation in the brain is an _assembly_, which we interpret as a sparse distribution over neurons that can shift gradually over time. Roughly speaking, there is one assembly per "memory", with a hierarchy of assemblies ("blue", "whale", "blue whale" and "mammal" are all assemblies). How are such assemblies created, associated, and used for computation? RP, together with inhibition (only the top k coordinates survive and the rest are zeroed) and plasticity (synapses between neurons that fire within small intervals get stronger), leads to a plausible and effective explanation: a small number of repeated (recurrent) applications of the RP&C (random projection and cap) primitive yields a stable assembly under a range of parameter settings. We explore this behavior and its consequences theoretically (and, to a modest extent, in simulations).
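The following is a minimal NumPy sketch of the RP&C dynamics described above: a random (sparse) projection of a fixed stimulus into an area of n neurons, a cap that keeps only the top-k activated neurons, and Hebbian plasticity that strengthens recurrent synapses from one round's winners into the next round's winners. All names and parameter values (n, k, p, beta, rounds) and the specific Bernoulli connectivity model are illustrative assumptions, not the exact model analyzed in the talk.

```python
import numpy as np

def rp_and_cap(n=1000, k=50, p=0.05, beta=0.1, rounds=20, seed=0):
    """Toy sketch of the RP&C (random projection and cap) primitive.

    n      : number of neurons in the target area (assumed value)
    k      : cap size -- only the k most activated neurons fire
    p      : probability of a random synapse between any pair of neurons
    beta   : Hebbian plasticity increment for synapses into firing neurons
    rounds : number of recurrent applications of RP&C
    """
    rng = np.random.default_rng(seed)

    # Random projection: sparse random connectivity from the stimulus into
    # the area, and recurrent random connectivity within the area.
    stim_w = (rng.random((n, n)) < p).astype(float)  # stimulus -> area
    rec_w = (rng.random((n, n)) < p).astype(float)   # area -> area

    stimulus = np.zeros(n)
    stimulus[:k] = 1.0  # a fixed stimulus: the first k input neurons fire

    prev_winners = np.array([], dtype=int)
    for t in range(rounds):
        # Total synaptic input: feedforward drive from the stimulus plus
        # recurrent drive from the previous round's winners.
        inp = stim_w @ stimulus
        if prev_winners.size:
            inp += rec_w[:, prev_winners].sum(axis=1)

        # Cap (inhibition): only the top-k coordinates survive, rest zeroed.
        winners = np.argsort(inp)[-k:]

        # Plasticity: recurrent synapses from previous winners into current
        # winners are strengthened by a factor (1 + beta).
        if prev_winners.size:
            rec_w[np.ix_(winners, prev_winners)] *= (1 + beta)

        overlap = np.intersect1d(winners, prev_winners).size
        print(f"round {t}: overlap with previous winners = {overlap}/{k}")
        prev_winners = winners

    return prev_winners  # the (near-)stable set of winners after `rounds` steps
```

Under many settings of these toy parameters, the printed overlap between consecutive winner sets rises quickly toward k, illustrating the claimed convergence of repeated RP&C applications to a stable assembly.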