The Bottleneck
In the 1950s, George A. Miller, an American psychologist, did a review of digit span tests and determined that short-term memory had somewhere around seven, plus or minus two, locations. Since then we have found a larger range in English, and an even larger range in Chinese, which suggests he may have set his error bars too tight, under the impression that the variation lay in his measurements when it actually lay in the range of digits being memorized, which seems to change with age and with factors such as how many syllables there are in a digit's name.
This extreme limitation on throughput in short-term memory baffled scientists, who developed their own name for it: the "Bottleneck", because it was an unexplained narrowing of the architecture of the brain found in short-term memory.
By the late 1990s a consensus seemed to emerge that the bottleneck was:
- A Serial Dependency, probably caused by a search
- A Timing Related Limitation, related to the Rehearsal of elements kept in the Working Memory
- Sensitive to Phonological Factors, such as the number of syllables in the language of the owner of the memory.
Location of the Bottleneck
An interesting factor in locating the bottleneck is that it doesn't impact implicit memory, but is already in place by the time a memory is transferred to working memory. This implies that the Bottleneck lies between implicit and explicit memory, which led to my recognition that it might be related to the conversion between implicit and explicit memory.
To explain this insight, let us look at the nature of implicit memory. Because of the nature of Content Addressability, one of the issues clearly stated in David Marr's theory of cerebral cortex is the need to deal with redundant data coming out of a Content Addressable Memory.
In essence, Content Addressability is created by the recognition of patterns of stimulation associated with a particular stimulus, by content-sensitive elements Marr called CODONs.
Since these patterns may be recognized by multiple CODONs, there is no way to eliminate multiple outputs for the same data. I call this the voluntary type of memory, not because there is any will involved, but because each CODON volunteers its own solution, given a stimulus. This is like the annoying tendency recently found in software to offer alternate endings for a statement as it is being entered, often obscuring the text being written. It annoys me no end to be writing, hit the enter key, and find one of these endings replacing the text I was actually writing. In just such a way, the mind automatically offers up many wrong answers in its implicit memory, and because that memory is content addressable, it has no way to isolate them at this stage of memory.
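As a rough illustration, here is a minimal sketch of this "voluntary" recall, assuming codons are simple pattern detectors over a set of active input lines; the Codon class and recall_all function are illustrative names of my own, not anything from Marr's paper:

```python
# A minimal sketch of "voluntary" content-addressable recall.

class Codon:
    """Fires when its preferred subset of input lines is active."""
    def __init__(self, name, pattern):
        self.name = name
        self.pattern = frozenset(pattern)  # input lines it watches

    def matches(self, stimulus):
        # A codon "volunteers" whenever its whole pattern is present,
        # even if other codons already cover the same stimulus.
        return self.pattern <= stimulus

def recall_all(codons, stimulus):
    """Return every codon that volunteers for this stimulus."""
    return [c.name for c in codons if c.matches(stimulus)]

codons = [
    Codon("cat", {1, 2, 3}),
    Codon("cat-like", {1, 2}),   # overlaps "cat": a redundant output
    Codon("mat", {2, 3, 4}),
]

# One stimulus, three volunteers -- nothing in the store can
# suppress the redundant answers at this stage of memory.
print(recall_all(codons, stimulus={1, 2, 3, 4}))
# ['cat', 'cat-like', 'mat']
```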
Qualia
The result is that implicit memory creates a field of data called a Quale, which has no distinct meaning because it includes everything that was implied by the content of its stimulus. Because it is created by a network, it is a phenomenal element: it can't be further subdivided, if only because there is no addressing mechanism in implicit memory whereby specific memories can be factored out of the Quale. It can, however, be filtered so as to indicate a specific Point of View, or Context. What is needed is another neural network that somehow determines which contexts fit together and projects some tag onto those contexts, linking them together and creating a binding of information that overcomes the stovepipe-like natures of the individual senses.
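To make the filtering idea concrete, here is a toy sketch, assuming each element of a quale carries a sense label and a set of context tags; the shared string tag is only a stand-in for whatever binding signal actually links contexts across the senses:

```python
# A toy illustration of filtering a quale by context tags.

quale = [
    ("vision",  "red round shape", {"kitchen", "fruit"}),
    ("smell",   "sweet",           {"fruit", "bakery"}),
    ("hearing", "traffic noise",   {"street"}),
]

def filter_by_context(quale, tag):
    """Keep only elements bound together by a shared context tag,
    cutting across the stovepipes of the individual senses."""
    return [(sense, content) for sense, content, ctx in quale if tag in ctx]

print(filter_by_context(quale, "fruit"))
# [('vision', 'red round shape'), ('smell', 'sweet')]
```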
It is this filtered quale that is presented to the bottleneck. The result is an informationally rich but organizationally poor memory element, called in some lexicons a Functional Cluster because it is represented by a distributed cluster of neurons that are all synchronized on a specific frequency. If the theories that the bottleneck is a search having something to do with the conversion of implicit memory to explicit memory are true, then what needs to happen is that we represent the functional cluster as a collection of mini-column addresses, in order to convert it to an explicit form.
I want to make a distinction here between a Naive System and an Experienced System. In a naive system, there is no way to directly map a memory from implicit to explicit memory without testing each mini-column to determine whether it is part of the functional cluster. In an experienced system, we can take shortcuts, like using the memory of a similar functional cluster to factor out similar elements, as the sketch below suggests. Thus, when we attempt to deal with the conversion of implicit to explicit memory in computing, we start talking about iconic learning and symbolization. In a neural network implementation, however, these terms are simply premature: there can only be a Naive System at this point, if only because there is no way to address the similar elements.
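Here is a small sketch of the difference, under the assumption that membership in the functional cluster can be tested one mini-column at a time (is_member is a placeholder for the comparator described in the next section); the point is only how experience shrinks the search:

```python
# Naive vs. experienced conversion of a functional cluster to addresses.

N_COLUMNS = 1000
cluster = {12, 47, 300, 512}           # the (unknown) functional cluster

def is_member(col):
    return col in cluster              # placeholder for the real test

# Naive system: no addressing shortcut, so every mini-column is probed.
naive_clump = {c for c in range(N_COLUMNS) if is_member(c)}
naive_tests = N_COLUMNS

# Experienced system: seed from a remembered, similar CLUMP and probe
# only its neighbourhood instead of the whole cortex.
remembered = {12, 47, 300, 513}
candidates = {c for col in remembered for c in (col - 1, col, col + 1)}
experienced_clump = {c for c in candidates if is_member(c)}
experienced_tests = len(candidates)

print(naive_clump == experienced_clump, naive_tests, experienced_tests)
# True 1000 12
```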
Nature of the Bottleneck and of CLUMPS
If this limitation is accepted, then the nature of the bottleneck search becomes at least partially obvious: there must be a search of mini-column addresses that somehow compares the neural group activations of the mini-columns against the activations of the functional cluster.
All that is needed is a comparator of some type that monitors whether or not there is a difference in the output of the Quale when a particular mini-column is activated. Given such a device, which I call the bottleneck device, a collection of mini-column activations can be designated as the translation of a particular implicit memory into a particular explicit memory. We call this collection of mini-column addresses a CLUMP.
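The following sketch shows one way such a comparator could work, assuming the probe is toggling a column off and checking whether the Quale changes; the quale_of function is a toy stand-in for the implicit memory, and all names are illustrative:

```python
# A minimal sketch of the "bottleneck device" comparator.

def quale_of(active_columns):
    """Toy implicit memory: the quale is just a function of which
    mini-columns are active (here, their combined output labels)."""
    outputs = {12: "red", 47: "round", 300: "sweet", 512: "fruit"}
    return {outputs[c] for c in active_columns if c in outputs}

def extract_clump(active_columns):
    """Serially test each active column: if toggling it off alters
    the quale, it belongs to the CLUMP naming this implicit memory."""
    baseline = quale_of(active_columns)
    clump = set()
    for col in sorted(active_columns):       # one column per pass: serial
        if quale_of(active_columns - {col}) != baseline:
            clump.add(col)                   # this column carries content
    return clump

active = {5, 12, 47, 99, 300, 512}           # cluster plus bystanders
print(sorted(extract_clump(active)))
# [12, 47, 300, 512]
```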
Given such a CLUMP, we can recover a similar implicit memory simply by presenting the activation pattern of mini-column addresses to the cerebral cortex, via the thalamus, and reading the resulting implicit memory's Quale. Once we have such CLUMPs, we can edit their mini-column address lists and develop an experience with the contents of the cerebral cortex that will allow us to factor Qualia into components that are significant, and thus learn from implicit memory about our environment in a more meaningful manner. In this case the factoring process happens not with the Quale per se, but with the CLUMP that represents it. Through rehearsal, the modified CLUMP not only gets translated into a new Quale that has more directed content, but also has the opportunity, by passing through the bottleneck a second time, to create a new CLUMP that addresses the more directed content as a separate entity.
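A continuation of the previous sketch (the toy quale_of cortex is repeated so it runs on its own), showing recall from a CLUMP and factoring by editing its address list rather than the Quale itself:

```python
# Recall and factoring via CLUMP editing, on the same toy cortex.

def quale_of(active_columns):
    outputs = {12: "red", 47: "round", 300: "sweet", 512: "fruit"}
    return {outputs[c] for c in active_columns if c in outputs}

clump = {12, 47, 300, 512}

# Recall: drive the cortex with the stored addresses, read the quale back.
print(sorted(quale_of(clump)))       # ['fruit', 'red', 'round', 'sweet']

# Factoring: edit the address list, not the quale, to isolate a component.
shape_only = clump - {300, 512}      # drop the taste and category columns
print(sorted(quale_of(shape_only)))  # ['red', 'round']
```

Rehearsing shape_only would, on a second pass through the bottleneck, give this more directed content a CLUMP of its own.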
Size of Short-Term Memory
Before I leave this subject, it must be noted that each time a memory passes through the bottleneck, a serial dependency is imposed on that memory, and it is this serial dependency that limits the size of short-term memory, if only by affecting how long it takes to rehearse a specific item, such as a digit, and therefore how many digits can be remembered before working memory degrades beyond recognition of individual memories. The delay also depends on how many passes through the bottleneck are required to isolate a particular memory, not on the content of the implicit memory itself; thus phonological factors might play more of a role than the size of the memory in terms of information theory, which is what Miller originally tried to use to determine the size of short-term memory.
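As a back-of-envelope version of this timing argument: if each item must be re-articulated within a fixed decay window, span falls as items take longer to rehearse. The roughly two-second window is the commonly cited phonological-loop estimate; the per-syllable time below is an assumption chosen for illustration, not a measurement:

```python
# A toy timing model of digit span under serial rehearsal.

DECAY_WINDOW_S = 2.0        # how long an unrehearsed trace survives
TIME_PER_SYLLABLE_S = 0.25  # assumed articulation time per syllable

def span(syllables_per_digit):
    """Digits that fit in one rehearsal loop before the first decays."""
    return int(DECAY_WINDOW_S / (syllables_per_digit * TIME_PER_SYLLABLE_S))

print(span(1))   # 8 -- short digit names (as in Chinese): longer span
print(span(2))   # 4 -- longer digit names: shorter span
```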