Wanting to develop a game for fun, I looked around to see whether a neural
network solution could help and ease the development of the AI part of
the simulator. I started gathering everything I could find about
neural networks, and then I grew frustrated... At first I did not know what was frustrating me, and
my first quest was to discover an acceptable reason to justify the
frustration. One day, I realized that everyone talking about neural
networks was in fact referring to a nice and neat mathematical set of
formulae. This was so strongly implanted in people's minds that it never
occurred to them that the original purpose of bringing mathematics into
the neuronal model was to facilitate the simulation of a brain
cell, not to justify the implementation of nice mathematical formulae.
I do not come from a mathematical background, as I studied informatics
and applied epistemology. My original motivation was not to make use of
some mathematical function but merely to simulate what happens in the human brain.
This is mainly why I decided to start all over again from scratch with
only one purpose in mind: simulate creativity.
I started to think about how an informatics neuron could or should react. I
based my design on the biological world I had discovered and started by designing a didactical
behaviour; from there I adapted the neuron to respond more like a biological
one; then I slightly modified it to react in a more natural way (informatics
formulae replaced by interconnection types); the last
step is modifying the natural behaviour to fit the
characteristics of the computer and create a 'synthetic' form. A more
detailed explanation and an example are provided in the
Evaluation Procedure section.
The Neural Entity is not only a pool of neurons. We first choose
some neurons at random to be the 'inputs' for external information, then
we designate some neurons to be the answer. The Neural Entity will then start
to make and create connections between neurons until the 'output' neurons
receive the expected value. We then try another input set, until the
input pool is empty. We may see this as a trivial way to learn (as
used in back-propagation or Adaline). When we have a correct connection
net, we store it and associate it with an id or name (of
course, we keep only the inputs that are connected to the outputs). This
will be known as a neural process. At this stage, we have a classical
neural network: this is the creation of a theoretical net. We still have
to gauge it once it is connected to the real world.
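The procedure above can be sketched in a few lines. This is only my reading of it, a minimal illustration: the output rule (parity of the connected inputs) and all names are arbitrary choices made for the sketch, not the actual design.

```python
import random

def learn_process(patterns, n_inputs, max_tries=20000, seed=1):
    """Trivial connection search: draw random connection nets until the
    output matches the expected value for every input pattern.
    The output rule (parity of the connected inputs) is an arbitrary
    illustrative choice."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        # candidate net: which input neurons are wired to the output
        conns = [i for i in range(n_inputs) if rng.random() < 0.5]
        if all(sum(x[i] for i in conns) % 2 == y for x, y in patterns):
            return conns  # keep only the inputs actually connected
    return None  # no correct net found within the budget

# Three input neurons, although the answer only depends on the first:
# we deliberately offer more inputs than we really need.
patterns = [([0, 0, 0], 0), ([1, 0, 0], 1), ([0, 1, 0], 0), ([1, 1, 0], 1)]
net = learn_process(patterns, n_inputs=3)
```

Any net found here must wire in input 0 and leave out input 1; input 2 may or may not be connected, which mirrors the surplus of input neurons described below.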
The Neural Entity is a pool of neurons and interconnected neurons
(the number of neurons will vary over time, but we may consider 10,000 to be a
small starting entity; neurons will be added when more are required). All our
external information is connected to input neurons that are already, or will be,
connected to a neural process. In fact, we have more input neurons than we
really need. All the processes are then restored into the entity and the
outputs monitored. As we know, an output can be verified or even cross-checked.
We then need to perform these verifications regularly in order to
confirm the stability of our entity (a mechanism described in the
Learning Procedure section explains this).
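To make the moving parts concrete, here is a minimal container sketch. Everything in it (the class name, the method names, the parity convention for a process output) is an illustrative assumption, not the implementation itself:

```python
class NeuralEntity:
    """Illustrative container only: a pool of neuron activation values,
    plus the stored neural processes (name -> connection net).
    The parity rule for a process output is an arbitrary assumption."""

    def __init__(self, size=10000):
        self.values = [0.0] * size   # one activation slot per neuron
        self.processes = {}          # name -> list of connected inputs

    def grow(self, extra):
        # neurons are added when more are required
        self.values.extend([0.0] * extra)

    def store(self, name, connections):
        # remember a correct connection net under an id or name
        self.processes[name] = connections

    def verify(self, name, patterns):
        # cross-check a stored process against known input/output pairs,
        # to be run regularly to confirm the stability of the entity
        conns = self.processes[name]
        return all(sum(x[i] for i in conns) % 2 == y for x, y in patterns)

entity = NeuralEntity()
entity.store("xor01", [0, 1])
stable = entity.verify("xor01", [([0, 0], 0), ([1, 0], 1), ([1, 1], 0)])
```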
The AINN Base Design is an early
approach to a generalized neural network: it offers the Adaline, back-propagation
and Kohonen networks. If you are willing to follow that path, you
can modify, adapt and update the set (and you can always return a
more complete lib to the open community if you are willing to do your
part). When I started to learn more about the biological neuron, I
began a document collecting information and transforming it into an
informatics form. The neurons as described in the Personal
NN View document offer some large extensions to the informatician: a set
of interconnected neurons may provide the informatician with the option of
not only processing a requested correct answer but also dynamically modifying
the flow of information and deviating it onto a well-trained solution.
We first have to strictly define the meaning of a Neural Entity, as
we will not talk of neural networks anymore.
We will also have to specify the processes involved during an evaluation,
as well as the learning procedure.
We will have to provide the means to 'select' or drive the main flow
of information, allowing some dissipation of the information into the
whole set of neurons and not only the ones we are expecting to evaluate.
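One way to picture this dissipation, as a sketch under my own assumptions (the link structure and the leak rate are invented for illustration): each neuron leaks a fraction of its activation to its linked neighbours, so information spreads beyond the neurons under evaluation.

```python
def dissipate(values, links, rate=0.1):
    """One dissipation step (an assumed mechanism, not a fixed rule):
    each neuron leaks a fraction `rate` of its activation, split
    evenly among its linked neighbours."""
    out = list(values)
    for src, dsts in links.items():
        if not dsts:
            continue
        leak = values[src] * rate
        out[src] -= leak
        for d in dsts:
            out[d] += leak / len(dsts)
    return out

# One active neuron leaking into two neighbours we never asked to evaluate
after = dissipate([1.0, 0.0, 0.0], {0: [1, 2]}, rate=0.2)
```

Note that the total activation is conserved; the information is not lost, only spread.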
As we know, our mind is driven not only by the main purpose but also by
the last event... This is a residue of the previous operation that will
deviate the current flow a little. These residual souvenirs will remain
in the entity in the form of values and will affect the next flow of
information. We must also take into account the importance of an event by
simulating short/long-term memory - this is quite useful for a
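As a rough illustration of how such a residue could behave (the decay rates and the bias strength are arbitrary choices, not a fixed design): each operation leaves a trace that fades over time and slightly deviates the next flow, with important events fading more slowly.

```python
class Souvenir:
    """A residual souvenir as a decaying value; the decay rates and
    the bias strength are arbitrary illustrative choices."""

    def __init__(self):
        self.trace = 0.0
        self.decay = 0.5

    def record(self, value, important=False):
        # important events persist longer: long-term vs short-term memory
        self.decay = 0.95 if important else 0.5
        self.trace = value

    def bias(self, signal, strength=0.1):
        # the residue deviates the current flow a little, then fades
        out = signal + strength * self.trace
        self.trace *= self.decay
        return out

s = Souvenir()
s.record(1.0)          # the previous operation leaves a residue
first = s.bias(0.0)    # the next flow is deviated a little...
second = s.bias(0.0)   # ...and less so the time after
```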
In order to complete the design, we will have to simulate in the Entity
the flow of several processes, allowing dissipation and souvenirs within the
Entity. The Neural Entity will then be shown to work as a system allowing a
simulation of thoughts (later we will work out how to simulate human
behaviour, including stupidity or even madness).
From there we will have the opportunity to specify the implementation of
the callbacks in order to influence the transmission of information.
The last step will be to alter the normal flow with a noise function
that will simulate the concentration on one process while slightly altering
the others - I believe the Hadamard Transform could do the trick.
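To show how a Hadamard Transform might play that role, here is a small sketch (the damping scheme and the parameters are only one possible interpretation, not a settled choice): transform the activations, damp every spectral component except the one we concentrate on, and transform back, which slightly alters all the other components.

```python
def fwht(a):
    """Fast Walsh-Hadamard transform; length must be a power of two."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def concentrate(activations, focus, strength=0.05):
    """Damp every spectral component except the focused one, then
    transform back: concentration on one process slightly alters the
    others. The damping scheme is only one possible interpretation."""
    n = len(activations)
    spectrum = fwht(activations)
    spectrum = [v if k == focus else v * (1 - strength)
                for k, v in enumerate(spectrum)]
    # the Hadamard transform is its own inverse up to a factor of n
    return [v / n for v in fwht(spectrum)]

noisy = concentrate([1.0, 0.0, 0.0, 0.0], focus=0)
```

With a single active neuron, most of its activation survives while a small residue is smeared over the others, which is the kind of gentle disturbance the noise function is meant to produce.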