
Is perception of the whole based on perception of its parts? There is psychological [1] and physiological [2, 3] evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations [4, 5]. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative, and synaptic strengths do not change sign.
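Because this excerpt describes non-negative matrix factorization only in terms of its constraints, a brief numerical sketch may help make the "additive, not subtractive, combinations" concrete. The snippet below uses the standard multiplicative update rules for the squared-error objective, which is one common way to fit an NMF model; the excerpt itself does not specify a learning rule, and the synthetic data, the rank r = 10, the iteration count and the variable names are illustrative assumptions rather than details taken from the text.

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9, seed=0):
    """Approximate a non-negative matrix V (n x m) as W @ H, with
    non-negative W (n x r) and H (r x m), using multiplicative updates
    for the squared-error objective ||V - W H||^2 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps   # non-negative random initialization
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        # Each factor is rescaled by a ratio of non-negative quantities,
        # so W and H never acquire negative entries: the reconstruction
        # W @ H can only add parts together, never subtract them.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Illustrative usage on synthetic non-negative data: think of each column of V
# as a vectorized face image or a document's word-count vector; the columns of
# W then play the role of learned "parts" and the columns of H encode how
# strongly each part contributes to each data vector.
rng = np.random.default_rng(1)
V = rng.random((100, 50))          # hypothetical non-negative data matrix
W, H = nmf(V, r=10)
print(np.linalg.norm(V - W @ H))   # residual of the rank-10 approximation
```

Because every update multiplies the current factor by a non-negative ratio, entries of W and H can shrink toward zero but can never change sign, which is the property the text identifies as forcing purely additive, parts-based reconstructions.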

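To illustrate the contrast with holistic methods drawn above, the short check below computes principal components of the same kind of non-negative data (again hypothetical data, not the paper's face images or text corpora, and PCA is taken here as an SVD of the centred matrix). Unlike NMF factors, these components typically contain many negative entries, so reconstructing a data vector from them involves subtractive cancellations.

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.random((100, 50))                      # non-negative data; columns are samples
Vc = V - V.mean(axis=1, keepdims=True)         # centre each feature across samples
U, S, _ = np.linalg.svd(Vc, full_matrices=False)
pcs = U[:, :10]                                # first 10 principal directions
print(f"fraction of negative PCA entries: {(pcs < 0).mean():.2f}")
```

Typically a large fraction of the entries is negative, so any reconstruction mixes additive and subtractive contributions across the whole vector; this is the holistic behaviour the text contrasts with NMF's non-negative, parts-based factors.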