Biswas, Nripendra N and Murthy, TVMK and Chandrasekhar, M (1991) IMS Algorithm for Learning Representations in Boolean Neural Networks. In: 1991 IEEE International Joint Conference on Neural Networks, 18-21 November, Singapore, Vol.2, 1123-1129.
A new algorithm for learning representations in Boolean neural networks, in which the inputs and outputs are binary bits, is presented. The algorithm is made feasible by a newly discovered theorem which states that any non-linearly separable Boolean function can be expressed as a convergent series of linearly separable functions connected by the logical OR (+) and logical INHIBIT (-) operators. The series is constructed by exploiting several important properties of the implied minterm structure of a linearly separable function. The learning algorithm produces the representation much faster than back propagation and, unlike the latter, does not encounter the problem of local minima. It also successfully separates a linearly separable function and obtains the perceptron solution in the presence of a spoiler vector, a situation in which back propagation is guaranteed to fail.
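The decomposition the abstract describes can be illustrated with a minimal sketch (this is not the paper's IMS algorithm, only an example of the theorem's claim): XOR, the classic non-linearly separable function, can be written as a two-term series of linearly separable functions, OR(x1, x2) INHIBIT AND(x1, x2), where INHIBIT(a, b) means a AND NOT b. Both OR and AND are realizable by a single threshold unit.

```python
def threshold_unit(weights, bias, inputs):
    """A linearly separable Boolean function: outputs 1 iff w.x + b > 0."""
    return int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

def or_unit(x):
    return threshold_unit([1, 1], -0.5, x)   # fires when x1 + x2 > 0.5

def and_unit(x):
    return threshold_unit([1, 1], -1.5, x)   # fires when x1 + x2 > 1.5

def inhibit(a, b):
    """Logical INHIBIT operator: a AND NOT b."""
    return a & (1 - b)

def xor_series(x):
    # XOR expressed as a series of linearly separable functions
    # joined by the INHIBIT (-) operator, per the theorem.
    return inhibit(or_unit(x), and_unit(x))

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert xor_series(x) == (x[0] ^ x[1])
```

The threshold weights and the two-term form here are illustrative choices; the paper's contribution is an algorithm for finding such a series for an arbitrary non-linearly separable function.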
Item Type: Conference Paper
Additional Information: Copyright 1990 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Department/Centre: Division of Electrical Sciences > Electrical Communication Engineering
Date Deposited: 29 May 2006
Last Modified: 19 Sep 2010 04:27