Instantaneously trained neural networks

Instantaneously trained neural networks are feedforward artificial neural networks that create a new hidden neuron for each novel training sample. The weights to this hidden neuron separate the new sample, together with inputs near it, from the rest of the input space, thus providing generalization.[1][2] The separation uses the nearest hyperplane, which can be written down instantaneously rather than learned iteratively. In the two most important implementations, the neighborhood of generalization either varies with the training sample (CC1 network) or remains constant (CC4 network). These networks use unary coding for an effective representation of the data sets.[3]
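
The construction can be made concrete with a short sketch. The code below is an illustrative rendering, not a reference implementation: it assumes binary (unary-coded) inputs, a binary target, and the weight assignment commonly described for the CC4 algorithm (+1 input weights for 1-bits, -1 for 0-bits, and a bias weight of r - s + 1, where s is the number of 1s in the sample and r is the radius of generalization). Names such as CC4Sketch and unary are mine.

    # Illustrative sketch of a CC4-style instantaneously trained network.
    # Assumes binary inputs and a binary target; not the authors' code.
    import numpy as np

    class CC4Sketch:
        """One hidden neuron per training sample; the bias weight r - s + 1
        makes that neuron fire for every input within Hamming distance r
        of the stored sample (r = radius of generalization)."""

        def __init__(self, radius=1):
            self.radius = radius
            self.hidden_weights = []   # one weight vector per training sample
            self.output_weights = []   # +1 for class 1, -1 for class 0

        def fit(self, X, y):
            # "Training" is a single pass: weights are written down, not iterated.
            for x, target in zip(X, y):
                x = np.asarray(x)
                s = int(x.sum())                    # number of 1s in the sample
                w = np.where(x == 1, 1.0, -1.0)     # +1 for 1-bits, -1 for 0-bits
                bias = self.radius - s + 1          # fires iff Hamming distance <= r
                self.hidden_weights.append(np.append(w, bias))
                self.output_weights.append(1.0 if target == 1 else -1.0)
            return self

        def predict(self, X):
            W = np.array(self.hidden_weights)       # (n_hidden, n_inputs + 1)
            v = np.array(self.output_weights)       # (n_hidden,)
            Xb = np.hstack([np.asarray(X, dtype=float), np.ones((len(X), 1))])
            hidden = (Xb @ W.T > 0).astype(float)   # binary step activation
            return (hidden @ v > 0).astype(int)     # thresholded output neuron

    def unary(value, length):
        """Unary (thermometer) code: unary(3, 5) -> [1, 1, 1, 0, 0]."""
        return [1] * value + [0] * (length - value)

    if __name__ == "__main__":
        # Toy example: classify unary-coded integers 0..5 as "small" (< 3) or not.
        X = [unary(i, 5) for i in range(6)]
        y = [1 if i < 3 else 0 for i in range(6)]
        net = CC4Sketch(radius=1).fit(X, y)
        print(net.predict([unary(2, 5), unary(5, 5)]))  # expected [1 0]

With this weight assignment, a hidden neuron's net input on an input at Hamming distance d from its stored sample is r - d + 1, so the neuron fires exactly when d <= r; setting r = 0 stores the samples with no generalization, while larger r widens the neighborhood of generalization.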

This type of network was first proposed in a 1993 paper by Subhash Kak.[1] Since then, instantaneously trained neural networks have been proposed as models of short-term learning and used in web search and financial time series prediction applications.[4] They have also been used in instant classification of documents[5] and for deep learning and data mining.[6][7]

As with other neural networks, they are normally implemented in software, but they have also been implemented in hardware using FPGAs[8] and optically.[9]

  1. Kak, S. On training feedforward neural networks. Pramana 40: 35-42, 1993.
  2. Kak, S. New algorithms for training feedforward neural networks. Pattern Recognition Letters 15: 295-298, 1994.
  3. Kak, S. On generalization by neural networks. Information Sciences 111: 293-302, 1998.
  4. Kak, S. Faster web search and prediction using instantaneously trained neural networks. IEEE Intelligent Systems 14: 79-82, November/December 1999.
  5. Zhang, Z. et al. TextCC: New feedforward neural network for classifying documents instantly. Advances in Neural Networks, ISNN 2005. Lecture Notes in Computer Science 3497: 232-237, 2005.
  6. Zhang, Z. et al. Document classification via TextCC based on stereographic projection and for deep learning. International Conference on Machine Learning and Cybernetics, Dalian, 2006.
  7. Schmidhuber, J. Deep learning in neural networks: An overview. arXiv:1404.7828, 2014. https://arxiv.org/abs/1404.7828
  8. Zhu, J. and Milne, G. Implementing Kak neural networks on a reconfigurable computing platform. Lecture Notes in Computer Science 1896: 260-269, 2000.
  9. Shortt, A., Keating, J.G., Moulinier, L., Pannell, C.N. Optical implementation of the Kak neural network. Information Sciences 171: 273-287, 2005.