This article was published in Direct Access for Canada's Information Systems Professionals. Direct Access is published 25 times a year by Laurentian Technomedia Inc., a unit of Laurentian Publishing Group.
Prepared by Dr. Mir F. Ali
In the last issue, Part 1 discussed the concepts of connectionist theory and Artificial Intelligence (AI). Part 2 discusses applications of neural nets, plus a connectionist method to retrieve information in hyperdocuments.
Applications selected for this article include
Interfaces, Signal and Image Processing, Hard
Learning, and Associative Learning.
Discrimination and classification represent the domain in which neural nets are most used in the company. They are usually compared with standard numerical methods. Neural networks are fed with data coming from a signal-processing stage and are therefore considered data analyzers. The following features are normally included in neural net applications:
- Passive and active sonar recognition.
- Passive sonar for transient noise analysis.
- Aerial acoustic identification.
- Identification and emission-mode recognition.
- Target and source recognition, echo filtering for radar.
- Identification in IR images.
- Identification of flight phases (helicopters).
- Identification of industrial parts.
It can happen that between a low-level signal-processing stage (numeric data) and a high-level decision-making stage (symbolic data) some parts are lacking. In such cases neural methods can fill the gap:
- Symbolic extraction from time-frequency sonar images.
- Shape description for sea mines.
- Interpretation of aerial scenes.
Signal and Image Processing:
Neural nets are used as low-level processors, dealing directly with the raw signal: image processing - compression (video equipment), contour extraction and noise reduction in satellite images; temporal regression analysis in financial markets; demultiplexing in the reconstruction of radar images.
The essence of much current work in connectionist systems relates to hard learning, which may be described as follows:
Certain stable patterns cannot be achieved in a net without the presence of intermediate nodes that are not clamped units. These are required because clamping alone would cause conflicts between the stable states, with later clamps modifying the logic set up by earlier ones. Learning is said to be hard because the function of the intermediate units is not distinctly stated by the described clamp patterns. This function is molded by some global training algorithm applied to all intermediate nodes, the object of which is to cause changes in the logic that supports the clamped patterns.
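The classic illustration of hard learning is the XOR, or parity, problem: the clamped input and output patterns alone do not determine what the intermediate units should do, so their function must be shaped by a global training algorithm. The sketch below is not drawn from the article; it is a minimal NumPy back-propagation example, with illustrative layer sizes and learning rate, showing hidden units acquiring the logic that supports the clamped patterns.

```python
import numpy as np

# XOR is the classic "hard learning" task: the clamped input/output patterns
# cannot be satisfied unless intermediate (hidden) units build an internal code.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # clamped inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # clamped outputs

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))          # input -> hidden weights (sizes are illustrative)
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))          # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0                              # illustrative learning rate
for _ in range(5000):                 # the "global training algorithm": back-propagation
    h = sigmoid(X @ W1 + b1)          # intermediate, unclamped units
    y = sigmoid(h @ W2 + b2)          # output units
    d_out = (y - T) * y * (1 - y)     # squared-error gradient at the output
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # typically close to [0, 1, 1, 0]
```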
A multilayered associative network is designed to produce a particular pattern on the output nodes whenever another particular pattern occurs on the input nodes. In general, the learning algorithms should allow arbitrary patterns on both the input and output nodes.
There are two types of associative learning - pattern association and auto-association. Pattern association builds up an association between one set of patterns and another. During training, selected patterns are presented to both the input and output nodes.
The contents of the memory of the PLN neurons are
modified so that whenever a particular pattern
reappears on the input nodes, the associated pattern
will appear on the output nodes. There is usually a
teacher input indicating the desired pattern
association during the training.
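The article does not detail how the PLN memories are modified. As a stand-in, the sketch below uses the classic Hebbian outer-product associative memory, with made-up bipolar patterns, to show the same idea: each input/output pair acts as the teacher during training, and presenting a stored input pattern afterwards recalls its associated output pattern.

```python
import numpy as np

# Hetero-associative memory (Hebbian outer-product rule), a minimal sketch --
# not the article's PLN networks, but the same pattern-association idea.
# Patterns are bipolar (+1 / -1); the pairs below are made up for illustration.
inputs  = np.array([[ 1, -1,  1, -1],
                    [-1, -1,  1,  1]])
outputs = np.array([[ 1,  1, -1],
                    [-1,  1,  1]])

# "Training": each (input, output) pair acts as the teacher and is
# imprinted into the weight matrix by an outer product.
W = np.zeros((inputs.shape[1], outputs.shape[1]))
for x, t in zip(inputs, outputs):
    W += np.outer(x, t)

# Recall: when a stored input pattern reappears, the associated
# output pattern appears on the output nodes (after thresholding).
for x, t in zip(inputs, outputs):
    recalled = np.sign(x @ W)
    print(recalled, "expected", t)
```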
Other applications include:
- Optimization - processor allocation.
- Control - last landing phase.
- Data fusion - multi-localization for radar sources.
- Associative memories - undersea source localization.
- Pattern matching.
A Connectionist Method to Retrieve Information in Hyperdocuments:
Hyperdocuments are nonlinear documents made of linked nodes. The content of each node is text, picture, sound or some mix of these in a multimedia hyperdocument. That makes a convenient and promising organization for rich classes of documents used in electronic encyclopedias, computer-aided learning, software documentation, multi-author editing, etc.
The inherent linked structure of the hyperdocument provides the native basis for the user interface: the user controls the displayed parts of subgraphs called browsers, and selects a node just by clicking on its icon. But this high degree of freedom has its drawbacks.
First, the user may get lost after some exploration; second, the information stored is split into many small units that the user must read in a given order to understand; and third, the actual needs of the user are not taken into account in the navigation support, so unskilled users are at a disadvantage.
One way to overcome these drawbacks is to use the attributes of nodes and links in the hyperdocument. Petri nets were proposed in 1989 as a foundation for an ordered browsing mechanism. In this way the user is compelled to pass through prerequisites before accessing a node with highly specialized content. But the rules are static, the graph is more complex, and the user's needs and specific background are still not taken into account.
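The article gives no algorithm for the Petri-net mechanism; the fragment below is only a hypothetical illustration of ordered browsing, in which a node with specialized content becomes reachable only once its prerequisite nodes have been visited.

```python
# Hypothetical sketch of ordered browsing: each node lists prerequisite
# nodes that must be read first (the Petri-net machinery itself is not shown).
prerequisites = {
    "intro": [],
    "neural_nets": ["intro"],
    "hard_learning": ["intro", "neural_nets"],   # highly specialized content
}

def can_access(node, visited):
    """A node is reachable only when all of its prerequisites were read."""
    return all(p in visited for p in prerequisites[node])

visited = {"intro"}
print(can_access("neural_nets", visited))    # True
print(can_access("hard_learning", visited))  # False: a prerequisite is missing
```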
F. Biennier, J.M. Pinon and M. Guivarch of INSA de Lyon, in an article on the subject, described an approach known as an augmented query system, which takes into account the distance between a Node Specialization Level and the Specialization Level Hoped by the User.
Nodes and tags (multimedia keywords) form a neural
network, together with the cells' activation rules
and hypertext links.
First, the information needs are defined. The answer must correspond to the query, has to be understandable by the user, and must be adapted to the time the user has. There are four parameters involved in this model: the definition of the user's aim by tags; the interest level related to the available time; the specialization level the user is looking for; and the path needed to guide the browsing.
A query is made of tags. Information retrieval systems are based on either Boolean or Vectorial models. A Boolean query is a Boolean expression of tags. In this model the system's answers cannot be ranked, and there is quite a big problem of noise and silence.
Adaptations of this model allow the user to assign
weights to the tags. The Vectorial model uses
weighted tags. Thus the system answers may be
ranked, but the query definition is quite difficult
for a non-specialist.
A Vectorial query system and an inductive process
are adopted in this model. The Vectorial query
system uses a friendly man/machine interface, in which the user defines, for each tag selected, a weight on a graduated scale. The choice of tags is also helped by the use of a thesaurus.
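As a concrete illustration of the Vectorial model (the tag names and weights below are invented), each node carries weighted tags, the user's query is a set of weighted tags chosen on the graduated scale, and the answers can be ranked with a similarity measure such as the cosine, which a Boolean query cannot provide.

```python
import math

# Vectorial query sketch: tags are weighted both in the query and in the
# nodes, so the answers can be ranked. All names and weights are invented.
def cosine(q, d):
    keys = set(q) | set(d)
    dot = sum(q.get(k, 0.0) * d.get(k, 0.0) for k in keys)
    nq = math.sqrt(sum(w * w for w in q.values()))
    nd = math.sqrt(sum(w * w for w in d.values()))
    return dot / (nq * nd) if nq and nd else 0.0

query = {"neural networks": 0.9, "information retrieval": 0.6}   # weighted tags

nodes = {
    "node_A": {"neural networks": 0.8, "signal processing": 0.5},
    "node_B": {"information retrieval": 0.9, "hypertext": 0.7},
    "node_C": {"neural networks": 0.6, "information retrieval": 0.6},
}

# Unlike a Boolean query, the answers come back ranked.
ranking = sorted(((n, cosine(query, tags)) for n, tags in nodes.items()),
                 key=lambda p: p[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.2f}")
```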
In fact, the problem is not only to find information
that fits the query, but also to find information
the user can understand. That is why the
Specialization Level Hoped by the User and the Node
Specialization Level are adopted in this model.
The author of the node defines the Node Specialization Level. For the first access, the user defines the specialization level he or she is looking for, but it can be modified dynamically as he or she reads other nodes.
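The article does not spell out how the distance between the Node Specialization Level and the Specialization Level Hoped by the User enters the ranking. One plausible reading, sketched below with invented numbers, is to penalize a node's query score by that distance and to nudge the user's level toward the levels of the nodes actually read.

```python
# Hypothetical sketch: penalize nodes whose specialization level is far from
# the level the user hopes for, and update the user's level as nodes are read.
def adjusted_score(query_score, node_level, user_level, penalty=0.2):
    # the penalty per level of distance is an invented parameter
    return query_score - penalty * abs(node_level - user_level)

user_level = 2                     # defined by the user at first access
for node_level, query_score in [(1, 0.7), (2, 0.6), (5, 0.9)]:
    print(adjusted_score(query_score, node_level, user_level))

# Reading a node shifts the user's level dynamically toward the node's level.
def update_user_level(user_level, node_level, rate=0.3):
    return user_level + rate * (node_level - user_level)

user_level = update_user_level(user_level, node_level=5)
print(user_level)                  # 2.9: the level drifts toward what was read
```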
Reading and browsing a hyperdocument supposes that the system gives the user an entry point and a path to guide his or her browsing activity. Building a path allows the prerequisites to be taken into account dynamically and the proximity relationships between nodes to be used.
The Neural Network:
A connectionist approach is used to implement this model. The neural network selected for this model uses bidirectional links between cells. Cells represent either tags or nodes of the hyperdocument. The weights of the links measure how strongly a node's content is described by a tag (and the reverse for the links from nodes to tags).
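The cells' activation rules are not given in the article. A common connectionist reading, sketched below with invented link weights, is spreading activation: the query tags are activated, activation flows over the weighted bidirectional links to the node cells and back, and the most strongly activated nodes are returned.

```python
import numpy as np

# Spreading-activation sketch over a bipartite tag/node network.
# The link weights (how strongly a tag describes a node) are invented.
tags  = ["neural networks", "hypertext", "retrieval"]
nodes = ["node_A", "node_B", "node_C"]

# W[i, j]: weight of the bidirectional link between tag i and node j.
W = np.array([[0.9, 0.0, 0.4],
              [0.0, 0.8, 0.5],
              [0.2, 0.7, 0.6]])

tag_act = np.array([1.0, 0.0, 0.5])   # activation injected by the query tags

# One pass tag -> node, then back node -> tag and forward again,
# so related nodes reinforce each other through shared tags.
node_act = W.T @ tag_act
tag_act = W @ node_act
node_act = W.T @ tag_act

for name, a in sorted(zip(nodes, node_act), key=lambda p: p[1], reverse=True):
    print(f"{name}: {a:.2f}")
```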
An epigenetic network that can model thesaurus relationships is used in this model. The weights of the new links are computed using the old network structure. The relationships between tags are sought through an analysis of the network structure, and specifically of shared contents. A tool is provided to help the designer with this analysis.
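One way to read "relationships between tags are sought ... on shared contents" is that two tags are related when they describe many of the same nodes. The sketch below, with an invented incidence matrix, measures this with a cosine over the tags' node-membership vectors - the kind of analysis such a tool could support.

```python
import numpy as np

# Which nodes each tag describes (invented incidence matrix: tags x nodes).
incidence = np.array([[1, 1, 0, 1],   # "neural networks"
                      [1, 1, 0, 0],   # "connectionism"
                      [0, 0, 1, 1]])  # "hypertext"
tag_names = ["neural networks", "connectionism", "hypertext"]

# Two tags are related when they share contents, i.e. describe the same nodes.
norms = np.linalg.norm(incidence, axis=1)
similarity = (incidence @ incidence.T) / np.outer(norms, norms)

for i in range(len(tag_names)):
    for j in range(i + 1, len(tag_names)):
        print(f"{tag_names[i]} ~ {tag_names[j]}: {similarity[i, j]:.2f}")
```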
In some sense, traditional bases of linear documents may be considered hyperdocuments, thanks to the links between documents or between parts of the same document.
That is why the hyperdocument approach may be used
in many organizations.
Igor Aleksander, in his book Neural Computing
Architectures, addressed the question: What is the
ultimate neural computing architecture of the future
likely to be? Neural computing of the future is not likely to be a replacement for conventional computing and AI programs, but rather is likely to form a complementary technology. It would border on the silly to create with difficulty neural computations that can be performed with ease through conventional computing. The key issue, however, is that the two methods must be able to exist under the same roof. So the ultimate
challenge for experts in computer architecture is to
exploit the two technologies within the box, while
presenting a single, flexible interface to the user.