CompAnn—Technical Internals


Next: , Up: (dir)

CompAnn—Technical Internals

This manual documents how CompAnn generates its final output ANNs and how those are structured. Additionally, the algorithms employed by EvAnn are described here in detail.


Next: , Previous: Top, Up: Top

1 ANNs as Computer-Programs


Next: , Up: Programs

1.1 About Neural Networks

An ANN (Artificial Neural Network) is a system of a number of so-called neurons, each of which is, at any given point in time, either active or inactive—in effect a “1-bit memory”.

Usually one distinguishes between two types of ANNs: feed-forward ANNs and recurrent ANNs; the latter are a superset of the former and far more flexible. CompAnn uses only recurrent ANNs, and in the following I'll always talk about those.

Each neuron sends signals of a fixed strength to some other neurons whenever it is active (this strength, however, can be positive or negative). If the sum of all signals received from all neurons in the network is above some threshold level, the receiving neuron becomes active in the next round; otherwise it will be inactive. This level is represented by a fixed bias value that is added to the signal sum each time, and the altered sum is checked for being positive; the bias can of course be negative, too, and will in fact often be so.

One can make the activity of one neuron dependent on the activity of another by connecting them with a very strong signal while also subtracting this signal strength from the bias of the dependent neuron. This way the dependent neuron can only become active if the other neuron was active in the step before; as long as that neuron is active, the dependent neuron's behaviour is otherwise unaffected.
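
As an illustration, here is a minimal Python sketch of this update rule and of the dependency trick (the rule as I read it from the text: a neuron is active in the next step iff the sum of incoming signals from currently active neurons plus its bias is positive; the concrete weights are made up):

    def step(active, weights, bias):
        """active: list of bools; weights[j][i]: signal strength from j to i."""
        n = len(bias)
        return [bias[i] + sum(weights[j][i] for j in range(n) if active[j]) > 0
                for i in range(n)]

    # Dependency trick: neuron 1 gets a strong signal (+10) from neuron 0,
    # and those 10 are also subtracted from neuron 1's bias, so neuron 1
    # can only fire if neuron 0 was active in the step before.
    weights = [[0, 10],
               [0, 0]]
    bias = [-1, 1 - 10]   # on its own, neuron 1 would fire (bias +1)
    print(step([False, False], weights, bias))  # [False, False]: gated off
    print(step([True, False], weights, bias))   # [False, True]: enabled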

Summarizing, one can see that an ANN consisting of n neurons is some sort of “computer” with n bits of dynamic memory; the connections and bias values of the neurons represent the program being executed and are thus the ROM of this computer. The runtime state of such an ANN is determined exactly by the n bits describing the activity states of its neurons.


Next: , Previous: ANNs, Up: Programs

1.2 Code-Chain and Sequential Execution by Forwarding of Activity

CompAnn structures its generated ANNs as a set of commands which are executed in sequence; each such command has its own activator neuron. When this neuron gets activated, it is the command's turn and its action is initiated. After finishing, activity is forwarded to the activator neuron of the next command; this is achieved by connecting all activator neurons in a long chain. If a single command needs more than one turn (say k) for its execution, it will be represented by k neurons in this code-chain instead of by its activator only.

In the structure generated by CompAnn, the very first neuron of an ANN is the activator of the first command and thus initiates the whole program; when execution is finished, the last command forwards activity to the very last neuron. If this gets activated, the program tells the interpreter it has finished its execution.
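
A hedged sketch of such a code-chain: each activator depends on its predecessor, so a single pulse of activity travels down the chain, one command per turn (weights and biases are illustrative):

    n = 4
    weights = [[10 if j == i + 1 else 0 for j in range(n)] for i in range(n)]
    bias = [-5] * n    # a neuron fires only when its predecessor just did

    active = [True] + [False] * (n - 1)   # artificially start the program
    for t in range(n):
        print(t, active)
        active = [bias[i] + sum(weights[j][i] for j in range(n) if active[j]) > 0
                  for i in range(n)]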


Next: , Previous: Code-Chain, Up: Programs

1.3 Passing of Input-Parameters and Returning of Output-Values

Input parameters are passed to a program by artificially activating (or leaving inactive) the k neurons directly following the program-activator neuron (that is, neurons 2 to k+1) before starting it, passing it k bits of input. The input parameters themselves are filled in in sequence, and the bits of each input parameter are ordered little-endian.

Similarly, when the program finishes, the calculated output values are stored in the last l neurons directly before the program-end signalling neuron (neurons n-l to n-1). Again the bits are ordered little-endian, but the output values themselves are in reverse order: the logically first output value is physically stored in the very last neurons before the program-end marker.
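
For illustration, a small Python helper that packs integer arguments into such a little-endian activation pattern (the function name and bit widths are made up for the example):

    def encode_args(args, widths):
        """Pack each argument LSB-first, arguments in sequence."""
        bits = []
        for value, width in zip(args, widths):
            bits += [(value >> i) & 1 == 1 for i in range(width)]
        return bits

    # Two 4-bit arguments, 5 and 12, as they would occupy neurons 2..9:
    print(encode_args([5, 12], [4, 4]))
    # [True, False, True, False, False, False, True, True]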


Previous: Parameters, Up: Programs

1.4 Complex Interaction by Native Methods

For interaction between the running program and its environment that is more complex than argument and return value passing, so-called callbacks can be used. These are “native methods” that can be called from within the ANN and are implemented and executed by the runtime environment (the interpreter). They can be passed values from the ANN and can themselves return values back to the ANN.

Such a callback consists of an activator neuron, followed by neurons where the return values will be stored, followed by neurons that must contain the input arguments when the callback is activated. Bit order is little-endian here once again, so the format is very similar to how the program itself receives and returns values (see Parameters), although the positions of inputs and outputs are swapped.

If the ANN activates the callback's activator neuron, the runtime environment suspends execution of the ANN, interprets the input arguments, performs the action of the callback, sets the output neurons correspondingly and resumes execution of the ANN.


Next: , Previous: Programs, Up: Top

2 SmartNeuron—Some Tricks with Neurons


Next: , Up: SmartNeuron

2.1 A SmartNeuron

SmartNeurons are objects used internally by CompAnn for the generation of the final ANN. They are useful because, for instance, “constant neurons” (representing constants in the program) are always active or inactive and need not be actual neurons in the output, since their effect can be calculated into the constant bias values of the neurons they connect to; with SmartNeurons they can nevertheless be used internally like real neurons, and this folding step is hidden from the main logic constructing the network.

Internally, a SmartNeuron is just an interface with abstract methods that connect a “neuron” to others and insert it into the network being built. The SmartNeuron implementation decides, based on the type of neuron, how these actions are performed in detail and whether an actual neuron will be added to the network or some other trickery will be done.

Additionally, SmartNeurons can build “complex” connections; for instance, an inverse connection is one that sends a signal if and only if the sender neuron is inactive. Behind the scenes this is realized by a negative connection weight together with a positive bias value added to the receiving neuron: if the sender is active, the negative connection cancels the bias change; otherwise the bias change acts as a signal received while the sender is inactive.
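
Numerically, an inverse connection of strength w is the pair (weight -w, bias +w) on the receiver:

    w = 5
    def received_signal(sender_active):
        # weight -w from the sender plus the +w added to the receiver's bias
        return (-w if sender_active else 0) + w

    print(received_signal(True))   # 0: the bias change is cancelled
    print(received_signal(False))  # 5: acts like a signal of strength w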


Next: , Previous: SmartNeuron Basics, Up: SmartNeuron

2.2 Ordinary Neurons

The RealNeuron is a SmartNeuron simply representing an ordinary, real neuron that will actually be connected to others and inserted into the network. Each RealNeuron corresponds one-to-one to a neuron in the final network being output.


Next: , Previous: RealNeuron, Up: SmartNeuron

2.3 Constant Neurons

Constant neurons are at all times either active or inactive; because of this, they need not be present in the network as actual neurons. Simply adding their “connection weights” to the bias values of the neurons they connect to, if they are active, has the same effect. They are used, for instance, to represent constants used in the program.

This type of neuron connects itself to another one by simply adding the signal strength to the bias value of the receiving neuron if it is active, and by doing nothing if it is inactive. Insertion into the network is a dummy operation without any effect.
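
The following sketch shows how the interface described above might look; class and method names are hypothetical, as the real interface is not shown in this manual:

    class SmartNeuron:
        def connect_to(self, target, weight):
            raise NotImplementedError
        def insert_into(self, network):
            raise NotImplementedError

    class RealNeuron(SmartNeuron):
        def insert_into(self, network):
            self.index = network.add_neuron()         # actually added
        def connect_to(self, target, weight):
            target.add_connection(self.index, weight) # ordinary connection

    class ConstantNeuron(SmartNeuron):
        def __init__(self, active):
            self.active = active
        def insert_into(self, network):
            pass                                      # dummy operation
        def connect_to(self, target, weight):
            if self.active:
                target.add_bias(weight)               # folded into the bias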


Previous: ConstantNeuron, Up: SmartNeuron

2.4 Forwarding Neurons

This type of SmartNeuron is some sort of access key to one or more other neurons; connections made to it are simply “forwarded” immediately to these other neurons.

ForwardNeurons are useful, for instance, for goto-commands, directing incoming activity straight to the activator of the jump-target command. This way not a single time-cycle is wasted on the jump, yet they can be used internally as if they were actually present neurons.


Next: , Previous: SmartNeuron, Up: Top

3 Assembling SAM to ANNs


Next: , Up: Assembler

3.1 SAM—Introduction

CompAnn does not translate a program written in Anne directly into a neural network; instead it inserts an intermediate format in between: SAM. SAM can roughly be compared to the assembly languages of computers, which are the target of a high-level language compiler before being themselves translated to machine code. It may be even more accurate to compare SAM to a simple RTL (Register Transfer Language), which is used in compilers for the very reasons CompAnn uses SAM.

One useful aspect of SAM is that it relieves the compiler of the need to translate complex expressions like (a^b)&3 directly into neurons; it can split them up into sub-expressions first and translate those separately to ANN parts in the next step.
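
A sketch of this splitting step in Python; the tuple-based expression form and the three-address output are illustrative, not SAM's real syntax:

    def lower(expr, temps=None, code=None):
        """expr is a variable name, a constant, or an (op, left, right) tuple."""
        if temps is None:
            temps, code = [0], []
        if not isinstance(expr, tuple):
            return expr, code
        op, left, right = expr
        left, _ = lower(left, temps, code)
        right, _ = lower(right, temps, code)
        temps[0] += 1
        t = "t%d" % temps[0]
        code.append((t, op, left, right))
        return t, code

    _, code = lower(("&", ("^", "a", "b"), 3))   # the (a^b)&3 from the text
    for line in code:
        print(line)
    # ('t1', '^', 'a', 'b')
    # ('t2', '&', 't1', 3)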

Additionally, SAM is a good basis for compiler optimizations: the original program is already simplified to a small number of basic commands but still represented as a real program and not yet fully translated into an ANN, so some structural information is still present.


Next: , Previous: SAM, Up: Assembler

3.2 Variables and Arguments

Variables and input/output values are absolutely equivalent for SAM and therefore also for the generated ANNs. The only difference is that inputs are “variables” placed in the very first neurons, outputs are placed in the very last neurons, and ordinary variables are placed somewhere unspecified in between.

A variable of k bits is implemented by k neurons, each of whose activity status stores one of the bits. Each neuron is connected to itself to make the saved bit persist: an active neuron re-activates itself each turn, whereas an inactive one stays inactive.

The value of such a variable can be changed from outside by sending a positive/negative signal to the neurons that is much stronger than the one used for self-reactivation, overriding it.
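
In numbers, with an assumed self-connection of strength 1, bias 0 and an override signal of strength 5:

    def next_bit(active, external_signal):
        # active iff self-signal + external signal + bias (0) is positive
        return (1 if active else 0) + external_signal > 0

    print(next_bit(True, 0))    # True:  the stored one persists
    print(next_bit(False, 0))   # False: the stored zero persists
    print(next_bit(False, 5))   # True:  bit set from outside
    print(next_bit(True, -5))   # False: bit cleared from outside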


Next: , Previous: SAM Variables, Up: Assembler

3.3 Assignments and Calculations


Next: , Up: SAM Assignments

3.3.1 About Assignments in General

Assignments are—as commands—dependent on their activator neuron for being executed. Unfortunately it's only possible to make neurons, not connections, depend on activity/inactivity; one cannot connect the assigned value directly to the target neurons, as this would perturb those bits even when the assignment is not activated.

To solve this problem, each assignment first stores the value being assigned in a temporary variable consisting of k neurons that are not connected to themselves and that each depend on the assignment's activator neuron. This ensures that this variable contains the value to assign if and only if the assignment is active.

Now one can connect these intermediate neurons directly to the target neurons, activating the bits in the target that should be set to one. To clear the target first, the target neurons are additionally connected inversely to the assignment's activator neuron; this sets all bits to zero in the cycle before the correct ones are set to one.

The assignment itself thus happens another cycle later, so one additionally has to insert a one-iteration delay by adding one dummy neuron to the chain of activators; this neuron is activated by the assignment's activator and in turn activates the following command, delayed by one cycle just as we need.

How the intermediate memory is set depends on the flavour of assignment or calculation being executed.
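
The following is a hedged simulation of one direct 1-bit assignment with the timing described above (all weights are made up, and the clearing is modelled here as a strong negative connection from the activator):

    # Neurons: 0 A activator, 1 S source bit, 2 T temporary,
    #          3 D delay, 4 N next command, 5 G target bit.
    W = [[0] * 6 for _ in range(6)]
    b = [0] * 6
    W[0][2] = 10; W[1][2] = 10; b[2] = -15   # T = S and A
    W[0][5] = -10                            # A clears G first
    W[2][5] = 10                             # one cycle later T sets G
    W[5][5] = 1                              # G keeps its value
    W[1][1] = 1                              # S keeps its value
    W[0][3] = 10; b[3] = -5                  # D: the extra delay neuron
    W[3][4] = 10; b[4] = -5                  # D activates the next command

    active = [True, True, False, False, False, False]   # A fires, S = 1
    for t in range(4):
        print(t, active)
        active = [b[i] + sum(W[j][i] for j in range(6) if active[j]) > 0
                  for i in range(6)]
    # At t=2, G holds the copied bit and N (the next command) fires.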


Next: , Previous: SAM Assignments general, Up: SAM Assignments

3.3.2 Copying (direct assignments)

Direct assignments (of the form a=b, without any calculation needed) are very easy to implement on top of the general principle described above. The neurons of the source variable can be connected directly to the intermediate neurons; neither additional neurons nor additional cycles are needed.


Next: , Previous: SAM Assignments direct, Up: SAM Assignments

3.3.3 Bitwise Operations


Next: , Up: SAM Bitwise
3.3.3.1 Binary “And”

For a binary “and”, the temporary neurons are connected to the source neurons in such a way that a bit is set if both corresponding source bits are set.

For this, both source bits are connected to the target bit, and the target gets an additional negative bias value that effectively “neutralizes” one of those connections, so both really must be active at the same time.
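
For one target bit this is simply (weights 1 and 1, bias -1):

    def and_neuron(a, b):
        return (1 if a else 0) + (1 if b else 0) - 1 > 0

    for a in (False, True):
        for b in (False, True):
            print(a, b, and_neuron(a, b))   # active only for True, True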


Next: , Previous: SAM And, Up: SAM Bitwise
3.3.3.2 Binary “Or”

The binary “or” sets a target bit if at least one of the corresponding input bits is set. This is implemented by connecting both of them to the target neuron.


Next: , Previous: SAM Or, Up: SAM Bitwise
3.3.3.3 Binary Exclusive “Or”

The binary exclusive “or” `a^b' is calculated internally as `(a|b)&(~(a&b))'—that is, “a or b, but not a and b”.

To implement this, both an additional temporary memory (for the value of `a&b') and a third activator neuron are needed. This memory is connected as `a&b' (see SAM And) and made dependent on the activator.

The target bits are connected as `a|b' (see SAM Or), but they are also connected inversely to the new temporary memory, so they get activated only if not both inputs (`a&b') are active.
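
Put into numbers (the helper uses the “and” wiring above; the inverse connection from the helper is written here as a plain negative weight of 2, its bias part already folded in):

    def xor_target(a, b):
        helper = a + b - 1 > 0                   # a&b, one cycle earlier
        return a + b - (2 if helper else 0) > 0  # a|b, blocked by the helper

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor_target(a, b))
    # 0 0 False / 0 1 True / 1 0 True / 1 1 False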


Previous: SAM Xor, Up: SAM Bitwise
3.3.3.4 Binary Bitwise “Not”

The binary bitwise “not” is implemented by connecting the input bits inversely to the corresponding output bits.


Next: , Previous: SAM Bitwise, Up: SAM Assignments

3.3.4 Arithmetical Calculations


Next: , Up: SAM Arithmetics
3.3.4.1 Incrementing Numbers

Naively, one could see incrementation as a special case of addition and simply reuse the addition logic for it (see SAM Plus); but incrementing can be done in constant runtime, independent of the number of bits in the operand. And as it's probably a quite common operation, optimizing it is worth the effort.

The algorithm used is this: look for the least-significant zero among the bits of the number and negate all bits from that position (inclusive) downwards. This turns everything below that zero into zeros (where before everything was ones), and the zero itself becomes the first one. Everything above this position stays untouched.

The first step is to find this zero; for this operation a temporary array of n neurons is used, where n is the number of bits in the operand. In it, only the neuron at the position of the zero we're looking for will be activated: every input bit is connected inversely to its corresponding temporary bit and negatively-inversely to all more-significant temporary bits.

In the next iteration, each input bit is connected to the corresponding output bit to copy the operand's value. The temporary bits are connected to the outputs, too (turning the lowest zero into a one), and negatively to all less-significant output bits, clearing all ones there and making those zeros.

For the special case of the operand consisting only of ones, all output bits are made dependent on the temporary bits such that they get activated only if there's a one somewhere in this array: they are given a negative bias and then connected to each temporary bit. If there's no one anywhere because the operand consisted only of ones, the result will be all zeros; that's the expected output.
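
On plain Python lists (LSB first), the algorithm reads as follows:

    def increment(bits):
        """bits[0] is the least-significant bit."""
        out = list(bits)
        for i, bit in enumerate(bits):
            if not bit:
                out[i] = True                 # the lowest zero becomes a one
                for j in range(i):
                    out[j] = False            # all ones below become zeros
                return out
        return [False] * len(bits)            # all ones: wrap around to zero

    print(increment([True, True, False, True]))   # 11 -> 12: [F, F, T, T]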


Next: , Previous: SAM Inc, Up: SAM Arithmetics
3.3.4.2 Invert Numbers

CompAnn uses the two's complement format for negative integers; converting into this format (more precisely, negating a number) is a bitwise negation followed by an increment by one. The exact algorithm is: find the least-significant one; copy it and the zeros below it, and invert all more-significant bits.

This is obviously very similar to the incrementation algorithm (see SAM Inc), and that holds for the implementation, too, so I'll explain it only very briefly:

In a temporary array of neurons the lowest one is found; the connections for this are the inverse of those of the incrementation operation. The inputs are then connected inversely to the target bits; the one itself has to be copied and all bits below it have to be cleared, for which the temporary memory is connected to the target bits in the same way as for incrementation.
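
Again on plain Python lists, the negation algorithm looks like this:

    def negate(bits):
        """Two's complement negation, bits[0] least-significant."""
        out = list(bits)
        seen_one = False
        for i, bit in enumerate(bits):
            if seen_one:
                out[i] = not bit    # invert everything above the lowest one
            elif bit:
                seen_one = True     # copy the lowest one (and zeros below)
        return out

    print(negate([False, False, True, False]))   # 4 -> -4: [F, F, T, T]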


Previous: SAM Invert, Up: SAM Arithmetics
3.3.4.3 Addition

Addition works similarly to the way sums are usually calculated “by hand”; the result bits are calculated step by step from least- to most-significant.

Each such bit depends on the corresponding bits of both summands and—except for the least-significant bit—on the carry bit. The new carry bit has to be activated if at least two of those three bits are ones, while the result bit should be active if exactly one or exactly three are.

The new carry bit is easy to implement: it is simply connected to each of the three input bits and gets a negative bias (see the binary “and” operation, SAM And).

One iteration later, the result bit can be calculated, too. It is also connected to all three input bits, but additionally negatively and with double strength to the freshly calculated new carry bit. That way it will not get activated for exactly two active input bits.

If everything is connected that way, the neuron activities become stable after n+1 iterations (where n is the number of bits in the result) and the result is “finished”. It's not necessary to make the whole operation depend on any activator neurons; we just need a chain of neurons forwarding the activity so that it gets delayed long enough for the calculation to finish.
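
One adder stage in numbers (carry: weights 1 each, bias -1; result: weights 1 each plus -2 from the fresh carry):

    def adder_stage(a, b, carry_in):
        carry = a + b + carry_in - 1 > 0                     # at least two ones
        result = a + b + carry_in - (2 if carry else 0) > 0  # one or three ones
        return int(result), int(carry)

    for triple in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]:
        print(triple, adder_stage(*triple))
    # -> (0, 0), (1, 0), (0, 1), (1, 1)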

I believe it should be possible to work with only three carry neurons that are re-used in turn, independent of the number of bits in the summands. That's however not yet implemented.


Previous: SAM Arithmetics, Up: SAM Assignments

3.3.5 Comparison of Numbers


Next: , Up: SAM Comparisons
3.3.5.1 “Less than”

For a “less than” comparison, the sign of the difference of the two operands (in the correct order) is examined. A special neuron is connected in such a way that it gets activated exactly if this difference is positive—that is, if the comparison is true.

This is done by connecting each bit of the operands to this neuron with the strength of its own value: the bits of the first operand negatively, the bits of the second one positively. For instance, the kth-least-significant bits are connected with strength 2**(k-1), positive or negative depending on which operand they belong to.
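
As a check, the weighted sum for 3-bit operands (bits LSB first):

    def less_than(a_bits, b_bits):
        s = sum(2 ** k * (b - a) for k, (a, b) in enumerate(zip(a_bits, b_bits)))
        return s > 0    # the neuron fires iff b - a > 0, i.e. a < b

    print(less_than([1, 0, 1], [0, 1, 1]))   # 5 < 6: True
    print(less_than([0, 1, 1], [0, 1, 1]))   # 6 < 6: False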


Next: , Previous: SAM LT, Up: SAM Comparisons
3.3.5.2 “Less than or equal”

This is done similarly to “less than” (see SAM LT), but here the target neuron gets an additional bias of 1, so it will be activated for exact equality, too.


Next: , Previous: SAM LE, Up: SAM Comparisons
3.3.5.3 “Equal”

The ordinary equality comparison is implemented with the help of two neurons like those used for “less than” operations (see SAM LT). One is connected as `a<b' and one as `b<a'.

The result of the equality-comparison is then “neither neuron 1 nor neuron 2”, that is, “not a<b and not b<a”.


Previous: SAM Equal, Up: SAM Comparisons
3.3.5.4 “Not equal”

“Not equal” is done like a comparison for equality (see SAM Equal), but the result will be activated if one of the two “less-than neurons” is active.


Next: , Previous: SAM Assignments, Up: Assembler

3.4 Jumps in SAM


Next: , Up: Gotos

3.4.1 Unconditional Jumps

Unconditional (simple) jump instructions in SAM are implemented by simply connecting their activator neuron to the activator of the target command. That's exactly the same process already used each turn for activation forwarding anyway.

Jumps are implemented internally using ForwardNeurons (see ForwardNeuron); this means that in the final ANN they cost neither a neuron nor a turn of time!


Previous: SAM Goto, Up: Gotos

3.4.2 Conditional Jumps

Conditional jumps (the jump only happens if a condition is true) form the basis of the control structures of high-level languages, like if-, while- or for-commands. In computer assembly languages there's usually a whole set of different flavours of conditional jumps, like “Jump on Equal”, “Jump on not Equal”, “Jump on Zero”, “Jump on not Zero”...

Besides the activator neuron, such an instruction needs two other neurons: one for “if” (condition is true) and one for “else” (condition is false). The condition itself is the value of one bit, that is, the activity of some other neuron.

The if- and else-neurons are made dependent on the activator and additionally connected in such a way that, depending on the condition bit, either the if- or the else-neuron gets activated. This is done by connecting the condition bit directly to the if-neuron and inversely to the else-neuron.

Finally, the if-neuron is connected to the activator of the target command, whereas the else-neuron is connected to the command following the jump (so this command is activated if and only if the condition is false).
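
In numbers, with illustrative weights (10 for the dependency on the activator, 5 for the condition bit):

    def conditional_jump(activator, condition):
        a = 10 if activator else 0
        c = 5 if condition else 0
        if_neuron = a + c - 14 > 0          # needs activator AND condition
        else_neuron = a - c + 5 - 14 > 0    # inverse connection: -5/+5 pair
        return if_neuron, else_neuron

    print(conditional_jump(True, True))    # (True, False): jump taken
    print(conditional_jump(True, False))   # (False, True): fall through
    print(conditional_jump(False, True))   # (False, False): command inactive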


Next: , Previous: Gotos, Up: Assembler

3.5 Threads


Next: , Up: SAM Threads

3.5.1 Starting Threads

In SAM, threads are created by an asynchronous jump that splits the line of execution, continuing both with the following command and with the jump target.

On the level of neurons this can be realized very easily: the activator neuron of this jump is simply connected to the activators of both following commands, activating them both. This neuron can be implemented as a ForwardNeuron (see ForwardNeuron), which means that thread creation has no cost at all. It also means that several splits can be chained, again without any cost, resulting in three or more new threads; because of this, no additional command is needed for such complex splits, as they can be implemented for free by means of this simple asynchronous jump.


Next: , Previous: SAM Thread Start, Up: SAM Threads

3.5.2 Ending Threads

To end a thread, its line of execution is simply aborted by not forwarding the activity after the last command. For this, the activator of this last command is simply left unconnected, so no following command will be activated.

This simple method of ending threads is of course very cheap, but also not as safe and generally useful as proper re-joining of threads (see SAM Thread Date).


Previous: SAM Thread Die, Up: SAM Threads

3.5.3 Rejoining of Threads

The safest way to “get rid of” existing threads is to re-join them (date): this does not just kill one thread while another continues execution without interaction, but makes sure that both (or all) threads have reached the end of the multi-threading block before the code following it is executed. This is needed, for instance, if the command after the block of parallel execution must not run before all threads have finished their share of the work, but it is not clear at compile time which of the threads will take the longest.

A date is implemented in the network by creating one neuron for each thread that must be waited for; this neuron is connected to itself and receives a connection from the activator of the last command in its thread's code block, so it gets activated when the corresponding thread finishes. When all of these neurons are active, meaning that all threads have finished, execution resumes after the parallel block: for this, another neuron is used that depends on all of these marker neurons and that in turn activates the following command.
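
The resume neuron is a plain “all of n” gate: weights 1 from each marker, bias -(n-1), for instance:

    def resume_fires(markers):
        return sum(markers) - (len(markers) - 1) > 0

    print(resume_fires([True, True, False]))   # False: one thread still runs
    print(resume_fires([True, True, True]))    # True: all threads finished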

It's easy to see that this additional flexibility and safety comes at an additional cost in neurons and two cycles of runtime. So whenever possible, simply ending a thread (see SAM Thread Die) is cheaper and recommended.


Previous: SAM Threads, Up: Assembler

3.6 Performance-Costs

In this section I'll list all SAM language elements together with their costs in terms of neurons and runtime cycles, so you can better judge and compare them. Details about the implementations and how these costs arise can be found in the corresponding sections.

Operation         Neurons  Cycles  Comment
Program Overhead  2        1       Arguments count as variables.
Variable          n        -       n bits of memory.
Constant          0        -       Calculated into the network via ConstantNeurons.
Jump              0        0
Conditional Jump  3        1
Start Thread      0        0       As the jump.
End Thread        0        0
Date Thread       n+1      2       For n threads.
Assignment        n+1      2       For n target bits; the costs of the calculation are added.

For assignments, the costs of the calculation itself must be added; in the following, d is the number of bits of the target (that is, the generally used bit width):

Operation  Neurons  Cycles  Comment
a&b        0        0
a|b        0        0
a^b        d+1      1
~a         0        0
a<b        0        0       Constant, as d must be 1.
a<=b       0        0       See above.
a==b       2        1       See above.
a!=b       2        1       See above.
a#         d+1      1
-a         d+1      1       As incrementation.
a+b        3*d+1    d+1


Next: , Previous: Assembler, Up: Top

4 Compiler-Optimization on SAM-Code


Next: , Up: Optimization

4.1 Bitsharing

Because of the way the Anne compiler generates SAM code (using temporary variables to split up complex expressions), the code contains many variables that are used only over a very limited range of execution. Because these variables need neurons to hold their values but are unused most of the time, bitsharing is a good way to lower the number of neurons in the final network.

With this technique, memory neurons are not owned exclusively by the variable they back, but are shared among variables that are never live at the same time. This works via a pool of available memory neurons: variables request neurons from the pool and return them when they are no longer used, much like dynamic memory allocation; the difference is that this happens at compile time and the optimizer determines when a variable is no longer used, so the programmer does not have to think about it.
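
A sketch of such a compile-time pool, with illustrative names (the real allocator is not shown in this manual):

    class NeuronPool:
        def __init__(self):
            self.free, self.fresh = [], 0

        def request(self, k):
            taken = []
            for _ in range(k):
                if self.free:
                    taken.append(self.free.pop())   # reuse a returned neuron
                else:
                    taken.append(self.fresh)        # or allocate a new one
                    self.fresh += 1
            return taken

        def release(self, neurons):
            self.free.extend(neurons)

    pool = NeuronPool()
    a = pool.request(4)     # variable a is live first ...
    pool.release(a)         # ... and dies before
    b = pool.request(4)     # ... variable b starts: b reuses a's neurons
    print(sorted(a) == sorted(b))   # True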

To find the range of the program over which a variable is used, a basic algorithm is to take the span between the variable's first and last occurrence and to widen it along every jump that enters or leaves this span, until the range is stable.

This way, a variable's range is widened to the whole code block that could be executed between the first and the last occurrence of the variable. It's surely not the best possible algorithm, as it can generate “too large” ranges, which of course results in a less effective optimization.

However, as most SAM code is output of the Anne compiler, where variables have a scope, one can clip the calculated variable range to this scope, since the maximum range of a variable is of course its scope; each jump that would leave the scope can be ignored for the purposes of this optimization.

This insight makes the optimization a lot more effective: many variables declared inside large loop bodies do not need the whole block as their range (because of the back-jump at its end) and can therefore share their neurons with other variables of the same block, as their values need not persist from one loop iteration to the next.


Previous: Bitsharing, Up: Optimization

4.2 Compactor


Next: , Up: Compactor

4.2.1 About Compactor

The main idea behind Compactor is to let the compiler automatically parallelize as many lines of code as possible, since Anne's extraordinary ability to have real and unlimited multi-threading offers many opportunities for performance gains. Manually writing many very small parallel blocks would, however, hurt the readability of the code.

With the help of Compactor, the compiler tries to do this parallelization automatically (it works best for long blocks of assignments, that is, calculations, that partially depend on each other), so the original code stays untouched and clear; additionally, the programmer does not have to worry about getting the dependency analysis wrong himself and making the code buggy.


Next: , Previous: Compactor About, Up: Compactor

4.2.2 CompactTree

Jumps, conditional or unconditional, synchronous or asynchronous, “make Compactor's life harder”. Because of that, the SAM code is first transformed into a tree structure, the CompactTree. The defining property of this structure is that sibling nodes are not connected by jumps, which means that, if the data flow allows it, they can be safely parallelized. The Compactor algorithm can then be used recursively on this tree.

A critical command is a jump or a jump target; every other command (typically an assignment) is uncritical. The CompactTree consists of two types of nodes, bound and unbound ones; each node has a sequence of other nodes and/or SAM commands as children. Unbound nodes have only uncritical commands as descendants and can therefore be restructured (“compacted”), while bound nodes also have critical commands as descendants, which means the Compactor algorithm may only operate on their unbound children.

A simple algorithm to build such a CompactTree is to put the whole code into the root node and mark it as bound. Each sequence of uncritical commands can then be converted into a single unbound node having them as children, which can be parallelized afterwards; a sketch follows below. It's surely possible to develop more sophisticated, recursive algorithms, but even this simple one should be good for a start and will probably already handle most cases where Compactor can achieve speed-ups.

This algorithm is linear in the number of code “lines”, or commands.
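
A sketch of this simple construction (the command representation and the criticality test are illustrative):

    def build_compact_tree(commands, is_critical):
        children, run = [], []
        for cmd in commands:
            if is_critical(cmd):
                if run:
                    children.append(("unbound", run))
                    run = []
                children.append(cmd)          # critical commands stay direct
            else:
                run.append(cmd)               # collect a run of uncritical ones
        if run:
            children.append(("unbound", run))
        return ("bound", children)            # the root is marked bound

    code = ["x=1", "y=2", "label:", "x=x+y", "goto label"]
    print(build_compact_tree(code, lambda c: ":" in c or c.startswith("goto")))
    # ('bound', [('unbound', ['x=1', 'y=2']), 'label:',
    #            ('unbound', ['x=x+y']), 'goto label'])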


Previous: CompactTree, Up: Compactor

4.2.3 The Compactor-Algorithm

Once the CompactTree is generated, it is processed recursively; each node first processes its children recursively and afterwards tries to parallelize them, if the node itself is unbound.

To check whether it is allowed to parallelize two blocks of code, Compactor assumes that this is correct if

  1. there's no variable accessed by both blocks, or
  2. such an access is always read-only.

Data flow between child blocks can be represented as a graph where the variables and blocks are vertices and the variable accesses are directed edges connecting blocks to variables; the direction of an edge depends on whether the access is reading or writing.

Variables that are only read (that is, variable vertices that are source nodes) can be removed. If the graph then consists only of components with no interconnections, the corresponding blocks can be parallelized; the sketch below illustrates this check.
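
A sketch of this check: read-only variables are dropped, and blocks sharing any written variable are merged into one component (a small union-find; the access representation is made up):

    def parallel_groups(accesses):
        """accesses: per block, a list of (variable, 'r' or 'w') pairs."""
        written = {v for block in accesses for v, mode in block if mode == "w"}
        parent = list(range(len(accesses)))

        def find(i):
            while parent[i] != i:
                i = parent[i]
            return i

        owner = {}
        for i, block in enumerate(accesses):
            for v, mode in block:
                if v in written:              # read-only variables are ignored
                    if v in owner:
                        parent[find(i)] = find(owner[v])
                    else:
                        owner[v] = i

        groups = {}
        for i in range(len(accesses)):
            groups.setdefault(find(i), []).append(i)
        return list(groups.values())

    print(parallel_groups([[("a", "w")], [("a", "r")], [("b", "w")]]))
    # [[0, 1], [2]]: blocks 0 and 1 conflict, block 2 can run in parallel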

The difficulty is to find the best subsets of child blocks for parallelization: for a good result it can be necessary to split a block so that independent components emerge, or to reorder blocks; moving a block is possible if it does not depend on any line it is moved “across”.


Next: , Previous: Optimization, Up: Top

5 Compiling of Anne

As I believe the techniques described here are very similar to those already used in ordinary compilers, for instance when translating C to x86 assembly, I'll talk only very briefly about how the high-level language Anne is translated into the “network assembly” SAM.


Next: , Up: Compiler

5.1 Evaluation of Expressions

Complex expressions consisting of multiple operators, operands and sub-expressions have to be compiled into a sequence of SAM assignments (see SAM Assignments general).

This is done from the innermost to the outermost expression, building assignments to temporary variables and reusing their values for the outer expressions. The large number of variables needed by this process is one main reason for the usefulness of the bitsharing optimization technique (see Bitsharing).


Next: , Previous: Anne Expressions, Up: Compiler

5.2 Control Structures in Anne

For control structures, the compiler makes frequent and liberal use of jumps and of labels in the form of NOP commands. Because those two need neither space nor execution time, this is a good and often simple solution to many problems.


Next: , Up: Anne Control Structures

5.2.1 If-Then-Else-Conditionals in Anne

If-then-else constructs build upon conditional jumps (see SAM If Goto); at the start of the conditional block such a jump is generated, which simply skips the block if the condition is false.

If there is additionally an else-block, this command instead jumps to the start of the else-block, while at the end of the if-block an unconditional jump is placed to just after the else-block, where normal execution continues.
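
An illustrative lowering (the printed “instructions” are pseudo-SAM; the real syntax is not shown in this manual):

    def lower_if_else(condition, then_code, else_code):
        return (["if-not %s goto ELSE" % condition]
                + then_code
                + ["goto END", "ELSE:"]
                + else_code
                + ["END:"])

    print("\n".join(lower_if_else("c", ["x=1"], ["x=2"])))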


Previous: Anne If, Up: Anne Control Structures

5.2.2 While- and Do-While-Loops

While- and do-while-loops cover all the needs of procedural and structured programming; these constructs, too, build upon conditional jumps in SAM (see SAM If Goto).

At the beginning of a while-loop a conditional jump is generated that jumps past the end of its block if the loop condition is false, breaking the loop. To make this a loop, an unconditional jump is placed at the end of the loop body that jumps back to this condition check.

Do-while-loops can be implemented even more easily: just place a conditional jump at the end of the loop body that jumps back to its start if the condition is still true.
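
In the same pseudo-SAM style as above:

    def lower_while(condition, body):
        return (["LOOP:", "if-not %s goto END" % condition]
                + body + ["goto LOOP", "END:"])

    def lower_do_while(condition, body):
        return ["LOOP:"] + body + ["if %s goto LOOP" % condition]

    print("\n".join(lower_while("c", ["x=x+1"])))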


Previous: Anne Control Structures, Up: Compiler

5.3 Multi-Threading

The implementation of asynchronous and parallel execution in SAM is relatively simple: asynchronous statements create a new thread, which is terminated by a die at its end. For parallel execution with re-joining at the end, a date is used to wait for all execution branches (see SAM Thread Date).


Next: , Previous: Compiler, Up: Top

6 Evolutional Algorithms of EvAnn

This chapter describes the algorithms used by EvAnn for neuro-evolution; they are based on those of the ANNEvolve project.


Next: , Up: EvAnn

6.1 About Neuro-Evolution

Neuro-evolution is a technique for “improving ANNs automatically”, that is, without human interaction, just by spending computing time. As the name suggests, it is inspired both by evolution and by genetic processes.

The basic principle is this: a population of ANNs is generated; this can be done completely at random or based on already existing but suboptimal solutions, for instance ones compiled by CompAnn. The population is then evaluated, which means the fitness of each and every inhabitant is determined by some means depending on the problem being solved. The weakest ANNs are removed from the population, and the spaces freed by this process are filled with new ANNs derived from the fitter individuals (their “offspring”).

If this process is repeated long enough, the fitness of the population should generally improve over time.
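
The whole loop, sketched in Python (the fitness function, the breeding operator and the exact selection rules are simplified stand-ins; the real formulas follow in the next section):

    import random

    def evolve(population, fitness, breed, generations, death_fraction=0.3):
        for _ in range(generations):
            population.sort(key=fitness)                   # weakest first
            deaths = int(len(population) * death_fraction)
            survivors = population[deaths:]                # remove the weakest
            parents = random.choices(survivors, k=deaths)  # fitter ones breed
            population = survivors + [breed(p) for p in parents]
        return max(population, key=fitness)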


Next: , Previous: Neuroevolution, Up: EvAnn

6.2 Probability-Formulas

Random numbers play a major part in the algorithms described below; because of this, probabilities, and formulas to calculate the best probability for certain events, are important as well.

The formulas used, and the parameters they depend on, are in fact chosen rather arbitrarily in the hope that they are good choices, as they surely influence both the performance and the overall success of the evolution process. The parameters are:

PopulationSize
The number of individuals forming the population. Not directly used in the probability formulas, but nevertheless one of the most important parameters.
WeightsFactor
Determines the range of possible values for random choices of connection weights and bias values.
MutationProbability
The probability of a mutation happening when mating to produce offspring.
CrossOverProbability
Probability of a crossing over.
ConnectionProbability
The probability that two unconnected neurons get connected by a newly established connection during mutation.
DisconnectionProbability
The probability that two connected neurons get disconnected during mutation.
DeathFraction
The proportion of the population that will be removed and replaced by offspring each time.

Sometimes the fitness of the individual in question relative to the overall fitness of the population is also used. The overall fitness is described by the variables MinFitness, MaxFitness and AvgFitness for the minimum, maximum and average population fitness, as well as Range being (MaxFitness-MinFitness), LowRange being (AvgFitness-MinFitness) and HighRange being (MaxFitness-AvgFitness).

The probabilities for certain events and processes:

Random value
Random connection weights and bias values are drawn from a normal distribution around 0 with standard deviation WeightsFactor.
Mutation factor
Mutating a value means multiplying it by a mutation factor; this factor is drawn from a normal distribution with mean 1 and standard deviation 2.
Crossing-Over
Happens with probability CrossOverProbability. If it happens, it is a one-point crossing-over one third of the time and a two-point crossing-over two thirds of the time.
Mating
A given individual will be chosen to mate and produce offspring with the probability (Fitness-MinFitness)/Range.
Death
If the individual's fitness is below average: DeathFraction+(1-DeathFraction)*(AvgFitness-Fitness)/LowRange; otherwise: DeathFraction*(1-(Fitness-AvgFitness)/HighRange). (This formula is written out in the sketch below.)
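
The death probability formula, as a Python function:

    def death_probability(fitness, min_f, avg_f, max_f, death_fraction):
        if fitness < avg_f:
            low_range = avg_f - min_f
            return death_fraction + (1 - death_fraction) * (avg_f - fitness) / low_range
        high_range = max_f - avg_f
        return death_fraction * (1 - (fitness - avg_f) / high_range)

    print(death_probability(2.0, 1.0, 3.0, 5.0, 0.25))   # below average: 0.625
    print(death_probability(4.0, 1.0, 3.0, 5.0, 0.25))   # above average: 0.125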


Next: , Previous: EvAnn Probs, Up: EvAnn

6.3 Chromosomes

While networks are represented during execution by some object structure, they are converted to chromosome form for genetic processes.

This means serializing the network into a sequence of numbers: all neurons are described one by one, and each neuron is represented by its bias followed by its connection values to each other neuron in the network (or 0 where there is no connection). Thus, for a network with n neurons, each neuron is serialized as 1+n numbers.
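
As a sketch (matrix layout assumed; weights[i][j] is the connection from neuron i to neuron j, 0 meaning unconnected):

    def to_chromosome(biases, weights):
        chromosome = []
        for bias, row in zip(biases, weights):
            chromosome.append(bias)      # the neuron's bias first ...
            chromosome.extend(row)       # ... then its n connection values
        return chromosome

    print(to_chromosome([0.5, -1.0], [[0, 2.0], [0, 1.0]]))
    # [0.5, 0, 2.0, -1.0, 0, 1.0]  (n*(1+n) numbers for n = 2 neurons)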


Next: , Previous: Chromosomes, Up: EvAnn

6.4 Mutations

A mutation alters randomly chosen numbers of the chromosome. This is repeated as long as consecutive die-rolls with MutationProbability succeed, so an undetermined number of times.

If the chosen value is different from 0 (a connection exists there) and a die-roll with DisconnectionProbability is true, it is set to 0. If the value is 0 and a die-roll with ConnectionProbability is true, it is set to a random value.

Otherwise the value is multiplied by a random mutation factor.
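
The three cases, sketched in Python (parameter handling is illustrative):

    import random

    def mutate(chromosome, p_mutation, p_disconnect, p_connect, weights_factor):
        while random.random() < p_mutation:    # repeat an undetermined time
            i = random.randrange(len(chromosome))
            if chromosome[i] != 0 and random.random() < p_disconnect:
                chromosome[i] = 0                                # disconnect
            elif chromosome[i] == 0 and random.random() < p_connect:
                chromosome[i] = random.gauss(0, weights_factor)  # connect
            else:
                chromosome[i] *= random.gauss(1, 2)              # mutate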


Next: , Previous: Mutations, Up: EvAnn

6.5 Crossing-Over

A crossing-over combines two chromosomes by “merging” them. It can happen in one or two points, but is only possible for chromosomes of the same length (ANNs with the same number of neurons).

For a one-point crossing-over, the parts of both chromosomes after a chosen point are swapped; for a two-point crossing-over, the part between two chosen points is swapped.
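
Both variants on Python lists, swapping in place:

    import random

    def cross_over(a, b):
        if random.random() < 1 / 3:                        # one point
            p = random.randrange(1, len(a))
            a[p:], b[p:] = b[p:], a[p:]
        else:                                              # two points
            p, q = sorted(random.sample(range(1, len(a)), 2))
            a[p:q], b[p:q] = b[p:q], a[p:q]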


Previous: Crossing-Over, Up: EvAnn

6.6 Offspring

To replace the removed individuals, parents are chosen pairwise and produce offspring pairwise, too. This happens by copying both parents, mutating the children and rolling for a crossing-over.


Previous: EvAnn, Up: Top

Appendix A GNU Free Documentation License

		GNU Free Documentation License
		  Version 1.2, November 2002


 Copyright (C) 2000,2001,2002  Free Software Foundation, Inc.
     51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.


0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other
functional and useful document "free" in the sense of freedom: to
assure everyone the effective freedom to copy and redistribute it,
with or without modifying it, either commercially or noncommercially.
Secondarily, this License preserves for the author and publisher a way
to get credit for their work, while not being considered responsible
for modifications made by others.

This License is a kind of "copyleft", which means that derivative
works of the document must themselves be free in the same sense.  It
complements the GNU General Public License, which is a copyleft
license designed for free software.

We have designed this License in order to use it for manuals for free
software, because free software needs free documentation: a free
program should come with manuals providing the same freedoms that the
software does.  But this License is not limited to software manuals;
it can be used for any textual work, regardless of subject matter or
whether it is published as a printed book.  We recommend this License
principally for works whose purpose is instruction or reference.


1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that
contains a notice placed by the copyright holder saying it can be
distributed under the terms of this License.  Such a notice grants a
world-wide, royalty-free license, unlimited in duration, to use that
work under the conditions stated herein.  The "Document", below,
refers to any such manual or work.  Any member of the public is a
licensee, and is addressed as "you".  You accept the license if you
copy, modify or distribute the work in a way requiring permission
under copyright law.

A "Modified Version" of the Document means any work containing the
Document or a portion of it, either copied verbatim, or with
modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of
the Document that deals exclusively with the relationship of the
publishers or authors of the Document to the Document's overall subject
(or to related matters) and contains nothing that could fall directly
within that overall subject.  (Thus, if the Document is in part a
textbook of mathematics, a Secondary Section may not explain any
mathematics.)  The relationship could be a matter of historical
connection with the subject or with related matters, or of legal,
commercial, philosophical, ethical or political position regarding
them.

The "Invariant Sections" are certain Secondary Sections whose titles
are designated, as being those of Invariant Sections, in the notice
that says that the Document is released under this License.  If a
section does not fit the above definition of Secondary then it is not
allowed to be designated as Invariant.  The Document may contain zero
Invariant Sections.  If the Document does not identify any Invariant
Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed,
as Front-Cover Texts or Back-Cover Texts, in the notice that says that
the Document is released under this License.  A Front-Cover Text may
be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy,
represented in a format whose specification is available to the
general public, that is suitable for revising the document
straightforwardly with generic text editors or (for images composed of
pixels) generic paint programs or (for drawings) some widely available
drawing editor, and that is suitable for input to text formatters or
for automatic translation to a variety of formats suitable for input
to text formatters.  A copy made in an otherwise Transparent file
format whose markup, or absence of markup, has been arranged to thwart
or discourage subsequent modification by readers is not Transparent.
An image format is not Transparent if used for any substantial amount
of text.  A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain
ASCII without markup, Texinfo input format, LaTeX input format, SGML
or XML using a publicly available DTD, and standard-conforming simple
HTML, PostScript or PDF designed for human modification.  Examples of
transparent image formats include PNG, XCF and JPG.  Opaque formats
include proprietary formats that can be read and edited only by
proprietary word processors, SGML or XML for which the DTD and/or
processing tools are not generally available, and the
machine-generated HTML, PostScript or PDF produced by some word
processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself,
plus such following pages as are needed to hold, legibly, the material
this License requires to appear in the title page.  For works in
formats which do not have any title page as such, "Title Page" means
the text near the most prominent appearance of the work's title,
preceding the beginning of the body of the text.

A section "Entitled XYZ" means a named subunit of the Document whose
title either is precisely XYZ or contains XYZ in parentheses following
text that translates XYZ in another language.  (Here XYZ stands for a
specific section name mentioned below, such as "Acknowledgements",
"Dedications", "Endorsements", or "History".)  To "Preserve the Title"
of such a section when you modify the Document means that it remains a
section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which
states that this License applies to the Document.  These Warranty
Disclaimers are considered to be included by reference in this
License, but only as regards disclaiming warranties: any other
implication that these Warranty Disclaimers may have is void and has
no effect on the meaning of this License.


2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either
commercially or noncommercially, provided that this License, the
copyright notices, and the license notice saying this License applies
to the Document are reproduced in all copies, and that you add no other
conditions whatsoever to those of this License.  You may not use
technical measures to obstruct or control the reading or further
copying of the copies you make or distribute.  However, you may accept
compensation in exchange for copies.  If you distribute a large enough
number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and
you may publicly display copies.


3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have
printed covers) of the Document, numbering more than 100, and the
Document's license notice requires Cover Texts, you must enclose the
copies in covers that carry, clearly and legibly, all these Cover
Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
the back cover.  Both covers must also clearly and legibly identify
you as the publisher of these copies.  The front cover must present
the full title with all words of the title equally prominent and
visible.  You may add other material on the covers in addition.
Copying with changes limited to the covers, as long as they preserve
the title of the Document and satisfy these conditions, can be treated
as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit
legibly, you should put the first ones listed (as many as fit
reasonably) on the actual cover, and continue the rest onto adjacent
pages.

If you publish or distribute Opaque copies of the Document numbering
more than 100, you must either include a machine-readable Transparent
copy along with each Opaque copy, or state in or with each Opaque copy
a computer-network location from which the general network-using
public has access to download using public-standard network protocols
a complete Transparent copy of the Document, free of added material.
If you use the latter option, you must take reasonably prudent steps,
when you begin distribution of Opaque copies in quantity, to ensure
that this Transparent copy will remain thus accessible at the stated
location until at least one year after the last time you distribute an
Opaque copy (directly or through your agents or retailers) of that
edition to the public.

It is requested, but not required, that you contact the authors of the
Document well before redistributing any large number of copies, to give
them a chance to provide you with an updated version of the Document.


4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under
the conditions of sections 2 and 3 above, provided that you release
the Modified Version under precisely this License, with the Modified
Version filling the role of the Document, thus licensing distribution
and modification of the Modified Version to whoever possesses a copy
of it.  In addition, you must do these things in the Modified Version:

A. Use in the Title Page (and on the covers, if any) a title distinct
   from that of the Document, and from those of previous versions
   (which should, if there were any, be listed in the History section
   of the Document).  You may use the same title as a previous version
   if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities
   responsible for authorship of the modifications in the Modified
   Version, together with at least five of the principal authors of the
   Document (all of its principal authors, if it has fewer than five),
   unless they release you from this requirement.
C. State on the Title page the name of the publisher of the
   Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications
   adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice
   giving the public permission to use the Modified Version under the
   terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections
   and required Cover Texts given in the Document's license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title, and add
   to it an item stating at least the title, year, new authors, and
   publisher of the Modified Version as given on the Title Page.  If
   there is no section Entitled "History" in the Document, create one
   stating the title, year, authors, and publisher of the Document as
   given on its Title Page, then add an item describing the Modified
   Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for
   public access to a Transparent copy of the Document, and likewise
   the network locations given in the Document for previous versions
   it was based on.  These may be placed in the "History" section.
   You may omit a network location for a work that was published at
   least four years before the Document itself, or if the original
   publisher of the version it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications",
   Preserve the Title of the section, and preserve in the section all
   the substance and tone of each of the contributor acknowledgements
   and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document,
   unaltered in their text and in their titles.  Section numbers
   or the equivalent are not considered part of the section titles.
M. Delete any section Entitled "Endorsements".  Such a section
   may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled "Endorsements"
   or to conflict in title with any Invariant Section.
O. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or
appendices that qualify as Secondary Sections and contain no material
copied from the Document, you may at your option designate some or all
of these sections as invariant.  To do this, add their titles to the
list of Invariant Sections in the Modified Version's license notice.
These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains
nothing but endorsements of your Modified Version by various
parties--for example, statements of peer review or that the text has
been approved by an organization as the authoritative definition of a
standard.

You may add a passage of up to five words as a Front-Cover Text, and a
passage of up to 25 words as a Back-Cover Text, to the end of the list
of Cover Texts in the Modified Version.  Only one passage of
Front-Cover Text and one of Back-Cover Text may be added by (or
through arrangements made by) any one entity.  If the Document already
includes a cover text for the same cover, previously added by you or
by arrangement made by the same entity you are acting on behalf of,
you may not add another; but you may replace the old one, on explicit
permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License
give permission to use their names for publicity for or to assert or
imply endorsement of any Modified Version.


5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this
License, under the terms defined in section 4 above for modified
versions, provided that you include in the combination all of the
Invariant Sections of all of the original documents, unmodified, and
list them all as Invariant Sections of your combined work in its
license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and
multiple identical Invariant Sections may be replaced with a single
copy.  If there are multiple Invariant Sections with the same name but
different contents, make the title of each such section unique by
adding at the end of it, in parentheses, the name of the original
author or publisher of that section if known, or else a unique number.
Make the same adjustment to the section titles in the list of
Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History"
in the various original documents, forming one section Entitled
"History"; likewise combine any sections Entitled "Acknowledgements",
and any sections Entitled "Dedications".  You must delete all sections
Entitled "Endorsements".


6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents
released under this License, and replace the individual copies of this
License in the various documents with a single copy that is included in
the collection, provided that you follow the rules of this License for
verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute
it individually under this License, provided you insert a copy of this
License into the extracted document, and follow this License in all
other respects regarding verbatim copying of that document.


7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate
and independent documents or works, in or on a volume of a storage or
distribution medium, is called an "aggregate" if the copyright
resulting from the compilation is not used to limit the legal rights
of the compilation's users beyond what the individual works permit.
When the Document is included in an aggregate, this License does not
apply to the other works in the aggregate which are not themselves
derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these
copies of the Document, then if the Document is less than one half of
the entire aggregate, the Document's Cover Texts may be placed on
covers that bracket the Document within the aggregate, or the
electronic equivalent of covers if the Document is in electronic form.
Otherwise they must appear on printed covers that bracket the whole
aggregate.


8. TRANSLATION

Translation is considered a kind of modification, so you may
distribute translations of the Document under the terms of section 4.
Replacing Invariant Sections with translations requires special
permission from their copyright holders, but you may include
translations of some or all Invariant Sections in addition to the
original versions of these Invariant Sections.  You may include a
translation of this License, and all the license notices in the
Document, and any Warranty Disclaimers, provided that you also include
the original English version of this License and the original versions
of those notices and disclaimers.  In case of a disagreement between
the translation and the original version of this License or a notice
or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements",
"Dedications", or "History", the requirement (section 4) to Preserve
its Title (section 1) will typically require changing the actual
title.


9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except
as expressly provided for under this License.  Any other attempt to
copy, modify, sublicense or distribute the Document is void, and will
automatically terminate your rights under this License.  However,
parties who have received copies, or rights, from you under this
License will not have their licenses terminated so long as such
parties remain in full compliance.


10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions
of the GNU Free Documentation License from time to time.  Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.  See
http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number.
If the Document specifies that a particular numbered version of this
License "or any later version" applies to it, you have the option of
following the terms and conditions either of that specified version or
of any later version that has been published (not as a draft) by the
Free Software Foundation.  If the Document does not specify a version
number of this License, you may choose any version ever published (not
as a draft) by the Free Software Foundation.


ADDENDUM: How to use this License for your documents

To use this License in a document you have written, include a copy of
the License in the document and put the following copyright and
license notices just after the title page:

    Copyright (c)  YEAR  YOUR NAME.
    Permission is granted to copy, distribute and/or modify this document
    under the terms of the GNU Free Documentation License, Version 1.2
    or any later version published by the Free Software Foundation;
    with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
    A copy of the license is included in the section entitled "GNU
    Free Documentation License".

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,
replace the "with...Texts." line with this:

    with the Invariant Sections being LIST THEIR TITLES, with the
    Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.

If you have Invariant Sections without Cover Texts, or some other
combination of the three, merge those two alternatives to suit the
situation.

If your document contains nontrivial examples of program code, we
recommend releasing these examples in parallel under your choice of
free software license, such as the GNU General Public License,
to permit their use in free software.