
Hierarchical Temporal Memory

From Wikipedia, the free encyclopedia


Hierarchical Temporal Memory (HTM) is a machine learning model developed by Jeff Hawkins and Dileep George of Numenta, Inc. that models some of the structural and algorithmic properties of the neocortex as Bayesian networks.

Whilst criticized by the AI community as rehashing existing material (for example, in the December 2005 issue of the journal Artificial Intelligence), the model is novel in proposing functions for cortical layers. In this respect it is related to similar work by Tomaso Poggio and David Mumford, amongst others.

The following text is from Numenta's marketing materials:


Overview

Like most pattern recognition methods, HTMs are not explicitly programmed and do not execute different algorithms for different problems; instead, they “learn” how to solve problems through exposure to data.

HTMs are organized as a tree-shaped hierarchy of nodes, where each node implements a common learning and memory function. HTMs store information throughout the hierarchy in a way that models a problem domain. All objects in the domain have structure. If this structure is hierarchical in both space and time then an HTM model may capture and model the structure of the domain.
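To make this tree-shaped organization concrete, here is a minimal Python sketch (not Numenta's API; the Node class and its learn_and_pool placeholder are invented for illustration) of a hierarchy in which every node runs the same learning and memory function, leaf nodes watch a patch of the sensor array, and interior nodes treat the concatenated outputs of their children as their own input.

# A minimal sketch, not Numenta's API: a tree of identical nodes, each exposing
# the same learn/infer interface.

class Node:
    def __init__(self, children=None, input_slice=None):
        self.children = children or []   # empty for leaf (sensor-facing) nodes
        self.input_slice = input_slice   # which part of the sensor array a leaf sees

    def infer(self, sensor_array):
        if self.children:
            # An interior node's input is the concatenation of its children's outputs.
            node_input = [x for c in self.children for x in c.infer(sensor_array)]
        else:
            node_input = list(sensor_array[self.input_slice])
        return self.learn_and_pool(node_input)   # the same function at every level

    def learn_and_pool(self, node_input):
        # Placeholder for the shared spatial and temporal pooling described later.
        return node_input

# A two-level hierarchy: four leaf nodes, each watching a quarter of a
# 16-element sensor array, feeding a single top node.
leaves = [Node(input_slice=slice(i * 4, (i + 1) * 4)) for i in range(4)]
htm = Node(children=leaves)
top_output = htm.infer(list(range(16)))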

Capabilities

  • The foremost capability of pattern recognition methods, including HTMs, is the ability to discover the causes underlying sensory data. Causes are persistent and recurring structures in the world. The concept of “cause” includes language, physical objects, ideas, the laws of physics, and the actions of other people.
  • After discovering causes, HTMs can rapidly infer the causes underlying novel inputs. Inference is similar to “pattern recognition”. When an HTM sees a novel input, it determines not only the most likely high-level cause(s) of that input, but also the hierarchy of sub-causes. The HTM system can also be queried to see what cause was most probable.
  • Each node in the network can use its memory of sequences to predict what should happen next. A series of predictions is probably the basis of imagination and directed behavior.

Similarities to existing AI technologies

HTMs are similar to Bayesian networks; however, they differ slightly from typical Bayesian networks in the way that time, hierarchy, action, and attention are used.

An HTM can be considered a form of Bayesian network where the network consists of a collection of nodes arranged in a tree-shaped hierarchy. Each node in the hierarchy self-discovers a set of causes in its input through a process of finding common spatial patterns and then finding common temporal patterns. Unlike many Bayesian networks, HTMs are self-training, have a well-defined parent/child relationship between each node (which helps to prevent inference loops), inherently handle time-varying data, and allow the introduction of mechanisms for covert attention.

Implementation

The first implementation of Numenta's HTM is on Linux-based computers. The core algorithms will be programmed in C++ to maximize runtime performance. Once the Numenta product is released, developers will be able either to use the existing algorithms or to program their own in C++ or Python.

Derivations

HTM was inspired by the anatomy of the neocortex. Therefore, there is a detailed mapping between HTM and a model of the neocortex. A partial description of this model is covered in chapter six of the book On Intelligence (Times Books, 2004). However, it is not necessary to know the biological background of HTMs to deploy HTM-based systems.

Theory

This section is a brief summary of the paper "Hierarchical Temporal Memory - Concepts, Theory, and Terminology" (PDF, 149 KiB) by Jeff Hawkins and Dileep George, Numenta Inc., 2006-05-17 (see References).

The role of HTMs

HTMs perform four basic functions regardless of the particular problem they are applied to. The first two are required, and the latter two are optional. They are:

  1. Discover causes in the world. The HTM receives spatio-temporal patterns coming from one or more sensors. The sensory data must be a topologically arrayed collection of inputs, where each input measures a local and simple quantity. At first, the HTM has no knowledge of the causes (underlying objects and effects) in the world, but through a learning process, it “discovers” what the causes are. All HTMs first learn about the small and simple causes in their world. Large HTMs, when presented with enough sensory data, should be able to learn high level, sophisticated causes. Discovering causes is also a necessary precursor for inference, the second capability of HTMs.
  2. Infer causes of novel input. Given a novel sensory input stream, an HTM will “infer” which known causes are likely to be present in the world at that moment. The result is a distribution of probabilities across all the learned causes, which represents the 'beliefs' of the HTM. The current inferred beliefs of an HTM can be read from the system to be used elsewhere. Alternatively, the current beliefs can be used internally by the HTM to make predictions or to generate actions. HTMs infer causes even while learning (albeit inferring poorly at first).
  3. Make predictions. By combining memory of likely sequences with current input, each node has the ability to make predictions of what is likely to happen next. An entire HTM, being a collection of nodes, also makes predictions (a minimal sketch of sequence-based prediction follows this list). In addition to other uses, prediction is at the heart of how HTMs can direct actions, the fourth and last capability of HTM.
  4. Use predictions to direct actions. If an HTM is attached to a system which physically interacts with a world, where the system can move its sensors through its world and/or manipulate objects in its world, the HTM can learn to generate complex goal-oriented behavior. The HTM forms representations of the behaviors of the system it is attached to, and importantly, it learns to predict its activity. Next, through an associative memory mechanism, the HTM-based representations of the built-in actions are paired with the mechanisms creating those actions. After this associative pairing, whenever the HTM invokes the internal representation of an action, it might cause the action to occur. If the HTM predicts that an action will occur, it can cause that action to take place earlier than expected. The HTM is now in a position to direct actions. By stringing together sequences of these simple actions, it may be able to create more complex goal-oriented actions. To do this, the HTM performs the same steps it does when generating a string of predictions and 'imagining' the future – however, instead of just imagining the future, the HTM strings together the built-in actions to make them actually happen.
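The following sketch illustrates capability 3 only, in the simplest possible way: a single node remembers which input pattern has tended to follow which, and turns those counts into a prediction. The first-order transition model is an assumption made here for clarity; Numenta's sequence memory is more elaborate.

from collections import defaultdict

class SequenceMemory:
    """A node's memory of sequences, reduced to first-order transition counts."""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.previous = None

    def learn(self, current_point):
        # Observe one quantization point from the training stream.
        if self.previous is not None:
            self.transitions[self.previous][current_point] += 1
        self.previous = current_point

    def predict(self, current_point):
        # Return a probability distribution over what should happen next.
        counts = self.transitions[current_point]
        total = sum(counts.values())
        return {point: n / total for point, n in counts.items()} if total else {}

memory = SequenceMemory()
for point in ["A", "B", "C", "A", "B", "C", "A", "B"]:
    memory.learn(point)
print(memory.predict("A"))   # {'B': 1.0}: after 'A' this node has only ever seen 'B'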

The discovery and inference of causes by HTMs

HTMs are structured as a hierarchy of nodes, where each node performs the same learning algorithm. Sensory data enter at the bottom. Exiting the top is a vector in which each element represents a potential cause of the sensory data. Each node in the hierarchy performs the same function as the overall hierarchy: it looks at the spatio-temporal pattern of its input and learns to assign (discover) the cause(s) of its input patterns.

An HTM starts with a fixed number of possible causes and, through training, learns to assign meaning to them. The nodes do not “add” causes as they are discovered; instead, over the course of training, the meanings of the outputs gradually change. This happens at all levels in the hierarchy simultaneously.

Nodal operation

The first step in the basic operation of each node is to assign the node’s input pattern to one of a set of quantization points (each representing some common spatial pattern of inputs). In this first step, the node calculates how close (spatially) the current input is to each of its quantization points and assigns a probability to each quantization point.

In the second step, the node looks for common sequences of these quantization points. The node represents each sequence by a sequence point. As input patterns arrive over time, the node assigns to these sequence points a probability that the current input is part of that sequence. The set of these sequence points is one output of the node, and is passed up the hierarchy to the parent(s) of the node.

A node can also send information to its children; these outputs going down the hierarchy represent the distribution over the quantization points, whereas the outputs going up the hierarchy represent the distribution over the sequence points.
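A minimal sketch of these two steps follows, under assumptions made here for illustration (Euclidean distance, exponential weighting of closeness, and scoring a sequence by the probability mass of the quantization points it contains); the published algorithms are more involved.

import math

def spatial_step(input_pattern, quantization_points):
    """Step 1: probability that the input matches each stored spatial pattern."""
    scores = [math.exp(-math.dist(input_pattern, point)) for point in quantization_points]
    total = sum(scores)
    return [s / total for s in scores]

def temporal_step(point_probs, sequences):
    """Step 2: probability that the current input is part of each learned sequence.
    Each sequence is a list of quantization-point indices."""
    scores = [sum(point_probs[i] for i in seq) for seq in sequences]
    total = sum(scores) or 1.0
    return [s / total for s in scores]

quantization_points = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]   # learned spatial patterns
sequences = [[0, 1], [1, 2]]                                  # learned sequences of those patterns

point_probs = spatial_step([0.9, 1.1], quantization_points)   # distribution sent down to children
sequence_probs = temporal_step(point_probs, sequences)        # distribution sent up to the parent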

The importance of hierarchy

There are four reasons why using a hierarchy of nodes is important:

  1. Shared representations lead to generalization (reduced training time) and storage efficiency (reduced memory requirements). HTMs can require lots of training and large amounts of memory, but unlike some other methods that have been used for pattern recognition, they do not suffer exponential problems of scale. The hierarchy in HTMs helps scaling because, as in the Neocognitron, causes inferred in lower levels of the hierarchy are shared among higher-level nodes, which significantly reduces the amount of time and memory required to learn higher-level causes. This also may provide a means for HTMs to generalize from previously learned sequences.
  2. The hierarchy of HTM matches the spatial and temporal hierarchy of the real world. The objects in the world, and the patterns they create on the sensory arrays, generally have a hierarchical structure. HTMs exploit this structure by first looking for nearby correlations in sensory data. As the hierarchy is ascended, the HTM continues this process, but now it looks for correlations of nearby causes from the first level, then correlations of nearby causes from the second level, and so on. Each node in the hierarchy works with both temporal and spatial data and, therefore, as information is passed up the hierarchy of the HTM, each node covers larger areas of sensory space, and longer periods of time.
  3. Belief propagation-like techniques help to ensure that all nodes quickly reach mutually consistent beliefs. HTMs use a variation of belief propagation for inference. The sensory data imposes a set of 'beliefs' at the lowest level in an HTM hierarchy, and by the time the beliefs propagate to the highest level, each node in the system represents a belief that is mutually consistent with all the other nodes. The highest level nodes show which highest level causes are most consistent with the inputs at the lowest levels. Whenever the state of the network changes, whether due to sensory changes or internal prediction, the network quickly settles on a set of beliefs which are mutually consistent (a minimal sketch of the bottom-up half of this process follows this list).
  4. The hierarchical representation offers a mechanism for covert attention. The hierarchy in an HTM provides a mechanism for focussed attention. Each node in the hierarchy sends beliefs to other nodes higher in the hierarchy. If a means to switch these pathways on and off is provided, the 'perception' of the HTM can be controlled; the most probable belief at the top of the hierarchy will reflect the causes in a limited part of the input space.
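The sketch below shows only the bottom-up half of point 3: a parent node combines its children's belief vectors into its own belief, using conditional-probability tables that are made up for the example. Full belief propagation also passes messages top-down and iterates until the network settles; those parts are omitted here.

def normalize(v):
    total = sum(v) or 1.0
    return [x / total for x in v]

def parent_belief(child_beliefs, likelihoods):
    """Combine the children's beliefs into the parent's belief.

    likelihoods[k][cause][state] = P(child k is in state | parent cause).
    For each candidate cause, the parent multiplies the evidence from each child."""
    n_causes = len(likelihoods[0])
    belief = [1.0] * n_causes
    for k, child in enumerate(child_beliefs):
        for cause in range(n_causes):
            belief[cause] *= sum(likelihoods[k][cause][s] * p for s, p in enumerate(child))
    return normalize(belief)

# Two children, each reporting beliefs over two states; the parent has two candidate causes.
child_beliefs = [[0.9, 0.1], [0.2, 0.8]]
likelihoods = [
    [[0.8, 0.2], [0.3, 0.7]],   # child 0: rows are parent causes, columns are child states
    [[0.1, 0.9], [0.6, 0.4]],   # child 1
]
print(parent_belief(child_beliefs, likelihoods))   # the cause most consistent with both children wins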


The essentiality of time for learning

Time-varying inputs are necessary for learning. Because each node learns common sequences of patterns, the only way a node can do this is if it is presented with sequences of patterns over time.

Pooling

Pattern recognition is a “many-to-one” mapping problem, i.e., many input patterns get mapped to each category. This task of many-to-one mapping, hereinafter referred to as 'pooling', is something that every node in an HTM must perform if the hierarchy as a whole is to infer causes. An HTM uses two mechanisms for pooling:

  1. Spatial pooling is a pooling mechanism based on spatial similarity. In this case, an unknown pattern is taken and its closeness to each quantization point is determined (see above). Two patterns that are sufficiently similar are considered to be the same and are pooled into the same quantization point. This form of pooling is a weak one and not sufficient on its own to solve most inference problems.
  2. Temporal pooling is the learning of sequences. Here a node maps many quantization points to a single sequence. This method of pooling is more powerful because it allows arbitrary mappings. It allows a node to group together different input patterns that have no spatial similarity.

Each node in the hierarchy does both spatial and temporal pooling. Therefore time-varying inputs are necessary to learn the causes in the HTM's world.
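A minimal sketch of both mechanisms, under assumptions made here for illustration (Euclidean similarity with a fixed threshold for spatial pooling, and greedy grouping of quantization points by observed transitions for temporal pooling; Numenta's published algorithms are more elaborate):

import math

def spatial_pool(pattern, quantization_points, threshold=0.5):
    """Map a pattern to an existing quantization point if one is close enough,
    otherwise store it as a new point. Returns the point's index."""
    for i, point in enumerate(quantization_points):
        if math.dist(pattern, point) <= threshold:
            return i                              # pooled: treated as the same pattern
    quantization_points.append(list(pattern))
    return len(quantization_points) - 1

def temporal_pool(index_stream, n_points):
    """Group quantization points that follow one another in time.
    Returns a list of groups (sets of point indices)."""
    follows = [[0] * n_points for _ in range(n_points)]
    for a, b in zip(index_stream, index_stream[1:]):
        follows[a][b] += 1
    groups, assigned = [], set()
    for i in range(n_points):
        if i in assigned:
            continue
        group = {i} | {j for j in range(n_points) if follows[i][j] or follows[j][i]}
        group -= assigned
        assigned |= group
        groups.append(group)
    return groups

points = []
stream = [spatial_pool(p, points)
          for p in ([0.0, 0.0], [0.1, 0.0], [2.0, 2.0], [0.0, 0.1], [2.1, 2.0])]
groups = temporal_pool(stream, len(points))   # spatially dissimilar points that alternate
                                              # in time end up in the same temporal group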

HTM-based system design

As with most machine learning tools, learning how to design an HTM-based system is said by Numenta to be comparable in difficulty to learning how to write a complex software program: anyone can learn how, but there is a great deal to learn when starting from scratch.

Numenta claims to be writing an implementation that will make it easy for engineers and scientists to experiment with HTMs and develop HTM-based applications. In addition to documenting the platform and tools, Numenta has said it will make available the source code for many parts of the implementation. This source code access should allow developers to better understand how Numenta’s tools work and allow for extensions.

Points of interest

In addition to the points discussed earlier in this article, the following considerations apply to using an HTM of Numenta's design:

  • The design and capacity of a particular HTM must be matched to the problem being addressed and the available computing resources. Considerable tuning may be required to get optimal performance.
  • When training a new HTM from scratch, the lower-level nodes become stable before the upper-level nodes, reflecting the common sub-properties of causes in the world. A designer of an HTM can disable learning for lower-level nodes after they become stable, thus reducing the overall training time for a given system.
  • If an HTM is exposed to new objects that have previously unseen low-level structure, it will take much longer for the HTM to learn the new object and to recognize it.
  • When designing an HTM system for a particular problem, it is necessary that the problem space (and the corresponding sensory data) has hierarchical structure. Data must be presented to the HTM so that adjacent sensory input data are likely to be correlated in space and time. The design of an HTM's hierarchy should reflect the likely correlations in its world.
  • Some HTM designs will be more efficient than others for a given problem. An HTM that can discover more causes at low levels of the hierarchy will be more efficient and better at discovering higher-level causes than an HTM that discovers fewer causes at low levels. Designers of some HTM systems will need to spend time experimenting with different hierarchies and sensory data arrangements trying to optimize both the performance of the system and its ability to find high level causes.
  • HTMs are claimed to be very robust; any reasonable configuration will work – that is, find causes – but the HTM's performance and ability to find high-level causes will be determined by the node-to-node hierarchical design of the HTM, the way the sensory data is presented to the HTM, and how the sensory data is arranged relative to the low-level nodes.


  • It is helpful for the designer of HTM-based systems to have a basic understanding of Bayesian networks and belief propagation.
  • Most designers of HTM-based systems need not understand the details of the learning algorithms used by the HTM. They can specify the size of the nodes, the dimensions of their inputs and outputs, and the overall HTM configuration without worrying about the details of the learning algorithms within the nodes.
  • Some designers, especially early on, will want to understand the learning algorithms and perhaps modify them. They may want to improve their performance, experiment with variations, and modify the algorithms to tune them to particular types of problems.

Criticism

HTM has been accused by the AI community of being nothing new, merely a rehash of many existing ideas that does not credit their original authors. This is perhaps due to Hawkins's non-academic background and the lack of peer review. The case is similar to that of Stephen Wolfram, who also made enough money in industry to bypass conventional academic publishing; reviews of both authors' work appeared back-to-back in the December 2005 issue of Artificial Intelligence.

References

  • Jeff Hawkins and Dileep George, "Hierarchical Temporal Memory - Concepts, Theory, and Terminology", Numenta Inc., 2006-05-17.
  • Jeff Hawkins with Sandra Blakeslee, On Intelligence, Times Books, 2004.

