2018 SR&ED Claim

2018 Scientific Research and Experimental Development tax credit claim

T661 - Part 2 – Project Information

Section B – Project Description

Line 242 – What scientific or technological uncertainties did you attempt to overcome – uncertainties that could not be removed using standard practice? (Maximum 350 words)

The project is attempting to develop general-purpose learning and thinking software for autonomous robot control. The approach is based on the development of a new artificial neural network (ANN). A key requirement is that the decisions made by the software must be explainable.

Current robot control software is not general purpose. It is developed specifically to function in a predefined environment with predefined tasks to perform. An example is Google’s self-driving car control software. Learning capability, if present, is limited to that environment. Artificial general intelligence (AGI) architectures exist but are not designed for robot control, i.e., they are not grounded in sensory input and device activation. Examples include ACT-R, SOAR, CLARION, LIDA, SIGMA and HTM. These AGI architectures are also applied only in specifically defined environments. The DARPA Robotics Challenge provides good examples of state-of-the-art robot control software: https://en.wikipedia.org/wiki/DARPA_Robotics_Challenge. These architectures make heavy use of stochastic inference, which results in behavior that is not explainable. Another limitation of these architectures is that they do not use the same structures for pattern recognition and motor action control.

This research is developing a new architecture to solve these problems. However, there are no existing hierarchical (deep learning) ANNs based on simple binary nodes that are feed-forward, explainable, and use reinforcement learning. There are also no ANNs that grow nodes in a hierarchy both for recognizing objects and sequences and for learning new action-control habits. Even the deep learning pattern recognition ANNs, such as convolutional neural networks, do not add nodes dynamically as they learn.

More specifically, the project is attempting to develop a new hierarchical structure of ANN nodes that (see the illustrative sketch after this list):

  • Uses binary neurons (binons) with reinforcement learning based on intrinsic motivation.
  • Grows the ANN by adding nodes for parallel (spatial) and sequential (temporal) pattern recognition.
  • Learns continuously - does not separate the training phase from the testing phase.
  • Converts magnitude sensor readings (sub-symbolic) into symbolic stimuli.
  • Learns action habits and integrates them into the ANN.
  • Has explainable behavior.
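
The following minimal sketch (in Python, for illustration only) shows one way such a binon hierarchy could be represented, with leaf nodes created for new symbolic stimuli and combination nodes added only when a pair of active binons has not been seen before. The class and attribute names are assumptions, not the project's actual implementation.

class Binon:
    """A binary node: either a leaf for a symbolic stimulus or a
    combination of two lower-level binons."""
    def __init__(self, left=None, right=None, symbol=None):
        self.left = left              # lower-level source binon (None for a leaf)
        self.right = right            # lower-level source binon (None for a leaf)
        self.symbol = symbol          # symbolic stimulus for leaf binons
        self.expected_value = 0.0     # reinforcement estimate (intrinsic motivation)

class BinonNetwork:
    def __init__(self):
        self.leaves = {}              # symbol -> leaf binon
        self.pairs = {}               # (left id, right id) -> combination binon

    def sense(self, symbol):
        # A leaf binon is created the first time a symbolic stimulus occurs.
        return self.leaves.setdefault(symbol, Binon(symbol=symbol))

    def combine(self, left, right):
        # The hierarchy grows: a node is added only when this ordered pair
        # of active binons has not been combined before.
        key = (id(left), id(right))
        if key not in self.pairs:
            self.pairs[key] = Binon(left=left, right=right)
        return self.pairs[key]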

Line 244 – What work did you perform in the tax year to overcome the scientific or technological uncertainties described in Line 242? (Summarize the systematic investigation) (Maximum 700 words)

Approach: Incrementally increase the complexity of the ANN structure and algorithms so they handle requirements at greater levels of complexity while still handling the lower-complexity features. Run regression tests on the lower-complexity features that already work. Evaluate its success at learning and thinking by observing its actions in artificial test environments simulated in software and by inspecting its internal memory traces and processes. More details about the research are available at www.adaptroninc.com.

Two pieces of software are being used to validate the research:

A) Handwritten digits are being used for object (spatial) pattern recognition, using a topological activation structure and a binon hierarchy for long-term memory of patterns/objects and their associated labels. This treats the two-dimensional array of pixels as a one-dimensional row of stimuli.
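
A minimal sketch of this flattening step, assuming grayscale pixel values and a simple threshold (both illustrative assumptions, not the project's actual method):

def image_to_stimuli(image, threshold=128):
    """image: list of rows of grayscale pixel values (0-255).
    Returns the 2-D array flattened into a 1-D row of symbolic stimuli."""
    flat = [pixel for row in image for pixel in row]
    return ['ON' if pixel >= threshold else 'OFF' for pixel in flat]

# e.g. image_to_stimuli([[0, 255], [255, 0]]) -> ['OFF', 'ON', 'ON', 'OFF']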

B) Morse code is being used for sequential (temporal) pattern recognition from multiple sensors producing magnitude (ratio-scale) and discrete (symbolic/nominal) stimulus readings, using a hierarchy of binons. This software uses a state- and spike-driven binon activation approach.
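
As an illustration of converting magnitude (ratio-scale) readings into symbolic stimuli, the sketch below classifies tone durations as dots or dashes; the 2:1 duration threshold is an assumption made for this example, not the algorithm used in the software.

def classify_tone(duration, unit=1.0):
    # Assumed rule: a tone at least twice the base unit is a dash.
    return 'DASH' if duration >= 2 * unit else 'DOT'

def tones_to_symbols(durations, unit=1.0):
    return [classify_tone(d, unit) for d in durations]

# e.g. tones_to_symbols([1, 3, 1]) -> ['DOT', 'DASH', 'DOT']  (the letter R)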

In area A (Handwritten digit recognition):

A.1 – Determined a method for representing the gaps between objects based on a separation ratio. Separation, shape, and repeating patterns were then combined to group objects according to the Gestalt principles of proximity and similarity.
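
A hedged sketch of the proximity side of this grouping, using a separation ratio over one-dimensional object extents; the interval representation and the 1.0 cut-off are assumptions for illustration only.

def separation_ratio(gap, object_width):
    # Ratio of the gap to the preceding object's width.
    return gap / object_width if object_width else float('inf')

def group_by_proximity(objects, max_ratio=1.0):
    """objects: non-empty list of (start, width) extents along the 1-D row,
    ordered by start position."""
    groups, current = [], [objects[0]]
    for prev, curr in zip(objects, objects[1:]):
        gap = curr[0] - (prev[0] + prev[1])
        if separation_ratio(gap, prev[1]) <= max_ratio:
            current.append(curr)      # close enough: same group (proximity)
        else:
            groups.append(current)    # too far apart: start a new group
            current = [curr]
    groups.append(current)
    return groups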

A.2 – Tried a different repeat-grouping process involving overlapping, adjacent, contained, and separated shape patterns to reproduce the Gestalt grouping principles.
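
The four relations can be illustrated on one-dimensional extents as below; this classification is a simplified assumption, not the project's grouping algorithm itself.

def relation(a, b):
    """a, b: (start, end) extents with start < end."""
    (a0, a1), (b0, b1) = a, b
    if (a0 <= b0 and b1 <= a1) or (b0 <= a0 and a1 <= b1):
        return 'contained'
    if a1 == b0 or b1 == a0:
        return 'adjacent'
    if a1 < b0 or b1 < a0:
        return 'separated'
    return 'overlapping'

# e.g. relation((0, 4), (2, 6)) -> 'overlapping'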

A.3 – Divided pattern recognition into two passes. The first pass recognizes known objects from patterns of parts, and the second pass creates new patterns from combinations of familiar parts.
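
A minimal sketch of the two-pass idea, assuming that recognized parts are represented as an ordered tuple of identifiers; the data structures and names are assumptions for illustration.

def recognize_two_pass(parts, known_objects, new_patterns):
    """parts: tuple of already-recognized part identifiers.
    known_objects: dict mapping a tuple of parts to an object label.
    new_patterns: set collecting novel combinations of familiar parts."""
    # Pass 1: recognize a known object from its pattern of parts.
    if parts in known_objects:
        return known_objects[parts]
    # Pass 2: the combination is new, so record it as a candidate pattern
    # that can later be associated with an object label.
    new_patterns.add(parts)
    return None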

In area B (Morse code recognition):

B.1 – Changed the software from a state- and spike-driven approach to a functional algorithm and removed the action binons, simplifying it for experimenting with temporal edges and with when patterns are recognized as objects. Investigated the multiplicity of the relations between patterns to determine when a pattern gets recognized as a part, because objects are patterns of parts. This experimentation was done on the temporal pattern recognition for Morse code because sequential/temporal recognition involves just one forward dependency.
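
The forward dependency and the multiplicity count can be illustrated as below, where each stimulus is paired only with the stimulus that follows it and a pattern is treated as familiar once its pair count reaches a threshold; the threshold and names are assumptions for this sketch.

from collections import Counter

def forward_pairs(sequence):
    # Sequential recognition has a single forward dependency: each stimulus
    # is paired only with the stimulus that follows it.
    return list(zip(sequence, sequence[1:]))

def familiar_parts(sequence, counts, threshold=2):
    """counts: a Counter tracking how often each ordered pair has occurred
    (its multiplicity)."""
    counts.update(forward_pairs(sequence))
    return {pair for pair, n in counts.items() if n >= threshold}

# e.g. counts = Counter()
#      familiar_parts(['DOT', 'DASH', 'DOT', 'DASH'], counts) -> {('DOT', 'DASH')}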

Line 246 – What scientific or technological advancements did you achieve as a result of the work described in line 244? (Maximum 350 words)

  1. The combination of separation patterns (to represent gaps) with patterns for repetition and shape was insufficient to reproduce the Gestalt grouping principles because objects were not being recognized based on their parts.
  2. By dividing pattern recognition into two passes, familiar lower-complexity patterns of parts are formed into known objects, and new patterns can be formed from these objects. This approach allows object recognition to evolve out of pattern recognition. However, I still need to develop a working algorithm for determining when the parts of an object form an interdependent pattern that can be recognized independently of intensity, size, position, and level of complexity, and even when parts of the object are obscured.