Connectionism - Part 1
This article was published in Direct Access for Canada's Information Systems Professionals. Direct Access is published 25 times a year by Laurentian Technomedia Inc., a unit of Laurentian Publishing Group.

Prepared By Dr. Mir F. Ali

A new class of artificial intelligence machines, called connectionist models, or neural networks, was revitalized in a new form in the 1980s after languishing in obscurity for almost 30 years. These models reflect, in very striking ways, some of the properties of natural reason, as they use the brain, rather than the logic machine, as a metaphor for the mind. Jeremy Campbell, in his book The Improbable Machine, commented that connectionism is in its infancy and its future is a question mark, but if it grows up, it will deserve to be called the science of the worldly machine. This article is intended to show that connectionism is growing every day.

It is now accepted that neural networks can be economically incorporated into systems that solve real-world problems. Neural networks are inspired by biological systems, in which large numbers of neurons - which individually function rather slowly and imperfectly - collectively perform tasks that even the largest computers have not been able to match. They are made of many relatively simple processors connected to one another by variable memory elements whose weights are adjusted by experience.

Artificial intelligence made the study of the mind respectable again, contradicting the claim that the mind cannot be studied scientifically; but by neglecting the new discoveries of brain science, it divorced the mind from what was known about how the brain works.

One theory holds that states of mind are merely complex states and operations of a physical device we call the brain; another holds that the mind is radically different from the brain and cannot be understood in physical terms.

Connectionism does not require us to believe that the mind is the brain; but it does suggest that what the brain is, and how it evolved, constrains in interesting and important ways the sort of theories we can reasonably entertain about how the mind works.

If the brain is a parallel device without programs, in which massive amounts of knowledge, implicit in the connections that make up the bulk of the brain's volume, are brought to bear on a problem simultaneously, then this must surely revise our ideas about the whole of mental life, including such high-level activities as thinking and reasoning.

Far from “reducing” the mind to the brain, connectionism is likely to enlarge and enhance our understanding, adding to what we know about the mind rather than subtracting from it. Connectionism explores the hidden machinery underlying the surface appearance of thinking, remembering and perceiving.

In an important class of connectionist models, one that bears a certain resemblance to the anatomy of the brain, memory behaves in a completely different way. There is no central processor, no man on a stage taking messages one by one, working on them and then returning the results to a member of the audience.

Instead, each member of the audience is a processor, receiving messages from other members and acting or not acting on them. Cognition in such a system is global with a vengeance, arising out of the somewhat anarchic activity of myriads of individual units working on a problem all at the same time, in concert or in competition, and settling down into a solution that may not be logically correct but “feels right” to the system as a whole.
  • Definition:
    Connectionist systems consist of a network of simple processors called units, linked together in such a way that each unit receives signals from other units (perhaps dozens of them) and sends signals to the same or to other units.

    On its own, a unit is not intelligent. It performs quite simple tasks: it does some elementary arithmetic on the signals it receives and interprets the result as a message to transmit, or not to transmit, a signal of its own.

    When a signal arrives at a unit from elsewhere in the system, the unit multiplies the signal by a number, called a weight, before passing it on to other units. The weight determines whether the signal the unit transmits is weak or strong. A weak weight links a unit to other units loosely, while a strong weight makes tighter connections.
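
    As a rough illustration, the arithmetic a single unit performs can be sketched in a few lines of Python. This is a modern rendering, not code from the article; the function name and the simple threshold activation are assumptions chosen for clarity.

        def unit_output(signals, weights, threshold=0.0):
            # Weight each incoming signal, sum the results, and decide
            # whether to transmit a signal of the unit's own.
            total = sum(s * w for s, w in zip(signals, weights))
            return 1 if total > threshold else 0

        # Three incoming signals: one tight (strong weight) and two
        # loose (weak weight) connections.
        print(unit_output([1, 1, 0], [0.9, 0.1, 0.4]))  # prints 1: the unit fires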
  • Limitations:
    There are certain inherent limits to what connectionist models can do, and these limits arise from one of the peculiar strengths of such systems: the fact that knowledge and interpretation are both embodied in one and the same network.

    In traditional artificial intelligence machines, the data structure that represents aspects of the world, and the program that interprets (and in a sense “understands”) the data, are separate and distinct.

    Knowledge is split between these two vehicles, the data and the procedures for looking at the data. That is not the case in a connectionist system, where a single physical device contains the data as well as the interpreter. As a result, everything the system must know about a person, an object or an event in the world must be represented explicitly in the network.
  • Neural Network and Artificial Intelligence (AI):
    Igor Aleksander articulated the difference between neural networks and artificial intelligence by indicating that AI is considered an outlet for a minority of computer scientists, whereas neural computing unites a very broad community, including physicists, statisticians, parallel processing experts, optical technologists, neurophysiologists and experimental biologists.

    The focus of this new paradigm is on the recognition by this diverse community that the brain “computes” in a different way from the conventional computer. The AI paradigm is based on the premise that an understanding of what the brain does represents a true understanding only if it can be explicitly expressed as a set of rules. These rules, in turn, can be run on a computer that subsequently performs artificial intelligence tasks.

    Neural computing is based on the premise that the brain, given sensors and a body, builds up its own hidden rules through what is usually called “experience”.

    In neural computing, the cellular structures within which such rules can grow and be executed are the focus of important study, as opposed to the AI concern of trying to extract the rules in order to run them on a computer.

    Based on these characteristics, neural computing can be defined as the study of cellular networks that have a propensity for storing experiential knowledge. Such systems bear a resemblance to the brain in the sense that knowledge is acquired through training rather than programming and is retained through changes in node functions.

    The knowledge takes the form of stable states or cycles of states in the operation of the net. A central property of such nets is to recall these states or cycles in response to the presentation of cues.
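
    The idea of stable states recalled from cues can be made concrete with a small associative memory. The sketch below uses a Hopfield-style network, one well-known model of this kind; the choice of model and all names in the code are illustrative assumptions rather than details from the article.

        import numpy as np

        def train(patterns):
            # Hebbian storage: each +/-1 pattern becomes a stable state
            # of the net, encoded in the connection weights.
            n = patterns.shape[1]
            W = patterns.T @ patterns / n
            np.fill_diagonal(W, 0.0)  # no unit connects to itself
            return W

        def recall(W, cue, steps=10):
            # Let the net settle from a partial or noisy cue toward
            # the nearest stored state.
            state = cue.copy()
            for _ in range(steps):
                state = np.where(W @ state >= 0, 1, -1)
            return state

        patterns = np.array([[1, -1, 1, -1, 1, -1],
                             [1, 1, 1, -1, -1, -1]])
        W = train(patterns)
        cue = np.array([1, -1, 1, -1, 1, 1])  # first pattern, one bit flipped
        print(recall(W, cue))                 # settles back to the first pattern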
  • Neural Network Properties:
    Alberto Perito, a leading connectionist, identified the possibility of learning as one of the most relevant properties of neural networks and connectionism. According to him, by learning, a neural network can discover regular patterns, and the relations among them, and organize itself to make those associations.

    This feature has two very important consequences: the ability to solve problems whose algorithms are very difficult to specify, and the capacity to extract statistical models and knowledge-based rules from large data sets. A small sketch of such learning appears at the end of this section.

    Another property in which neural networks are expected to bring improvement is processing speed, mainly through the massively parallel functioning of all the elements in the network. The goal of this property is to emulate the behaviour of the brain, as P. Treleaven, a connectionist, has pointed out. Stressing the complexity of emulating the brain, he noted that it can be considered a massively parallel computer with as many as 10 billion to 100 billion processing elements (neurons), each connected to up to 10,000 others.

    Like artificial neurons, biological neurons perform very simple computations, and the time an individual neuron needs to respond (without considering the transitions across neurons) is on the order of milliseconds. Yet the brain is able to solve difficult vision or speech problems in approximately half a second. These circumstances imply that such complicated tasks as speech and vision must be carried out in only about 100 processing steps; a conventional computer would need billions.
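
    A back-of-the-envelope check of the hundred-step figure, assuming a neuron response time of about five milliseconds (the article says only “milliseconds”, so the exact figure is an assumption):

        task_time_s = 0.5      # the brain solves a vision or speech task in ~0.5 s
        neuron_time_s = 0.005  # assumed ~5 ms per sequential neural step
        print(task_time_s / neuron_time_s)  # 100.0 sequential steps available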
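
    The sketch promised above shows the learning property in miniature: the classic perceptron rule adjusts weights after every mistake until the net discovers a simple regularity (logical AND). The rule, the task and all names are illustrative assumptions, not material from the article.

        def train_perceptron(examples, lr=0.1, epochs=20):
            # Nudge the weights a little after each mistake until the
            # net's responses match the training examples.
            w, b = [0.0, 0.0], 0.0
            for _ in range(epochs):
                for (x1, x2), target in examples:
                    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                    err = target - out
                    w[0] += lr * err * x1
                    w[1] += lr * err * x2
                    b += lr * err
            return w, b

        # The regularity to discover: respond 1 only when both inputs are 1.
        examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w, b = train_perceptron(examples)
        for (x1, x2), _ in examples:
            print((x1, x2), 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)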

In the next issue, Part 2 will look at the applications for neural nets and will discuss a connectionist method to retrieve information in hyperdocuments.


