
Author Topic: ERL  (Read 4628 times)


lolz123

  • Sr. Member
  • ****
  • Posts: 260
ERL
« on: July 01, 2014, 06:54:01 am »
Hello!

I have yet another AI project that uses SFML to share with you!
It is still in very early stages, but I thought I should post early, in case some of you want to help with the development  ;)

ERL stands for Evolved Reinforcement Learner. It is an attempt at creating large-scale artificial general intelligence. ERL follows a different philosophy from existing AGI approaches: instead of trying to study the brain and re-create it, it uses evolution to evolve a brain from scratch, one that is hopefully better suited to computer simulation.

ERL essentially sets up a "brain sandbox" in which vast numbers of different learning algorithms and architectures can exist. It then uses evolution to find the individual that maximizes cumulative reward on a set of experiments. In other words, it evolves a reinforcement learning (reward-seeking) agent, hence the name.
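To make the outer loop concrete, here is a toy sketch of that evolve-for-reward idea in Python. Everything here is a hypothetical stand-in: the "genome" is just a list of floats and the "experiment" rewards genomes whose values sum close to a target, whereas ERL's real genomes encode whole learning architectures.

```python
import random

def cumulative_reward(genome, target=10.0):
    """Toy experiment: reward is higher the closer the genome's sum is to the target."""
    return -abs(sum(genome) - target)

def mutate(genome, rate=0.1):
    """Perturb every gene with small Gaussian noise."""
    return [g + random.gauss(0.0, rate) for g in genome]

def evolve(pop_size=20, genome_len=5, generations=200, seed=42):
    random.seed(seed)
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate every individual on the experiment set
        scored = sorted(population, key=cumulative_reward, reverse=True)
        # Keep the best half unchanged (elitism), refill with mutated copies
        survivors = scored[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=cumulative_reward)

best = evolve()
print(round(cumulative_reward(best), 3))
```

The same generational loop works no matter what the genome encodes, which is why ERL can plug entire learning rules into the genome instead of plain numbers.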

In order to speed things up, ERL uses OpenCL for executing the brain in parallel.

ERL uses SFML (or will use, it is still early on  ::) ) to display information on the AI itself, a plot of the gathered reward, and the experiments that are being run. Depending on how complicated it ends up getting, we may use something like SFGUI to help out with the interface. If any of you SFML coders out there would like to lend a hand and help develop the visualization system, please let me know!

ERL was conceived in an AGI discussion group of around 70 members. If you would like to join the group, PM me! (We use Slack for chat.)

For more technical details on how ERL works as well as access to the source, see the GitHub project: https://github.com/222464/ERL (the readme has a more in-depth description of how ERL works)

The following features have been implemented so far:
  • OpenCL framework
  • Compositional pattern producing network generation, evolution, and CL code generation
  • Brain Genotype and corresponding OpenCL kernel compiler
  • CMake support
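For readers unfamiliar with the second bullet: a compositional pattern producing network (CPPN) is a small graph of nodes, each applying one function (sin, Gaussian, sigmoid, ...) to a weighted sum of its inputs. A minimal hand-rolled sketch in Python; the node functions and weights here are made up by hand, whereas in ERL they would come from NEAT evolution:

```python
import math

# Activation functions a CPPN node may use
FUNCTIONS = {
    "sin": math.sin,
    "gauss": lambda x: math.exp(-x * x),
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "identity": lambda x: x,
}

def evaluate_cppn(nodes, x, y):
    """Evaluate a feed-forward CPPN on coordinates (x, y).

    `nodes` is a list of (function_name, [(source_index, weight), ...]) pairs,
    evaluated in order; source indices 0 and 1 refer to the inputs x and y.
    """
    values = [x, y]
    for func_name, connections in nodes:
        total = sum(values[src] * w for src, w in connections)
        values.append(FUNCTIONS[func_name](total))
    return values[-1]  # the last node is the output

# Example network: output = sigmoid(2*sin(3*x) + gauss(y))
net = [
    ("sin", [(0, 3.0)]),               # node 2
    ("gauss", [(1, 1.0)]),             # node 3
    ("sigmoid", [(2, 2.0), (3, 1.0)]), # node 4 (output)
]
print(evaluate_cppn(net, 0.5, 0.0))
```

Because the whole network is just nested function composition, it is straightforward to walk the same graph and emit OpenCL source instead of evaluating it, which is what the CL code generation bullet refers to.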

ERL uses OpenCL meta-programming to support genetic programming (evolved code). Here is some example generated code:

/*
ERL

Generated OpenCL kernel
*/


// Dimensions of field
constant int fieldWidth = 10;
constant int fieldHeight = 10;
constant float fieldWidthInv = 0.100000;
constant float fieldHeightInv = 0.100000;

// Connection offsets
constant char2 offsets[25] = {
        (char2)(-2, -2), (char2)(-2, -1), (char2)(-2, 0), (char2)(-2, 1), (char2)(-2, 2),
        (char2)(-1, -2), (char2)(-1, -1), (char2)(-1, 0), (char2)(-1, 1), (char2)(-1, 2),
        (char2)(0, -2), (char2)(0, -1), (char2)(0, 0), (char2)(0, 1), (char2)(0, 2),
        (char2)(1, -2), (char2)(1, -1), (char2)(1, 0), (char2)(1, 1), (char2)(1, 2),
        (char2)(2, -2), (char2)(2, -1), (char2)(2, 0), (char2)(2, 1), (char2)(2, 2)
};

// Connection update rule
void connectionRule(float input0, float input1, float input2, float* output3) {
        *output3 = sin(-3.295338 * input0 + 5.266167 * input1 + -3.366515 * input2 + 1.680196);
}

// Sigmoid helper used by the activation rule
float sigmoid(float x) {
        return 1.0f / (1.0f + exp(-x));
}

// Activation update rule
void activationRule(float input0, float input1, float input2, float* output3) {
        *output3 = sigmoid(-3.536474 * input0 + 4.128592 * input1 + 2.613249 * input2 + 3.062753);
}

// Data sizes
constant int nodeAndConnectionsSize = 78;
constant int connectionSize = 3;
constant int nodeSize = 3;

// The kernel
kernel void nodeUpdate(global const float* source, global float* destination, read_only image2d_t randomImage, float2 randomSeed) {
        int nodeIndex = get_global_id(0);
        int nodeStartOffset = nodeIndex * nodeAndConnectionsSize;
        int connectionsStartOffset = nodeStartOffset + nodeSize;
        int2 nodePosition = (int2)(nodeIndex % fieldWidth, nodeIndex / fieldWidth);
        float2 normalizedCoords = convert_float2(nodePosition) * (float2)(fieldWidthInv, fieldHeightInv);

        // Update connections
        float responseSum0 = 0;

        for (int ci = 0; ci < 25; ci++) { // 25 = number of connection offsets
                int2 connectionNodePosition = nodePosition + convert_int2(offsets[ci]);

                // Wrap the coordinates around
                connectionNodePosition.x = connectionNodePosition.x % fieldWidth;
                connectionNodePosition.y = connectionNodePosition.y % fieldHeight;
                connectionNodePosition.x = connectionNodePosition.x < 0 ? connectionNodePosition.x + fieldWidth : connectionNodePosition.x;
                connectionNodePosition.y = connectionNodePosition.y < 0 ? connectionNodePosition.y + fieldHeight : connectionNodePosition.y;

                int connectionNodeIndex = connectionNodePosition.x + connectionNodePosition.y * fieldWidth;
                int connectionNodeStartOffset = connectionNodeIndex * nodeAndConnectionsSize;
                int connectionStartOffset = connectionsStartOffset + ci * connectionSize;

                float response0;

                connectionRule(source[connectionStartOffset + 0], source[connectionStartOffset + 1], read_imagef(randomImage, connectionNodePosition + nodePosition).x, &response0);

                // Add response to sum and assign to destination buffer
                destination[connectionNodeStartOffset + 0] = response0;

                // Accumulate response
                responseSum0 += response0;

                // Assign recurrent values to destination buffer
        }

        float output0;

        activationRule(responseSum0, source[nodeStartOffset + 1], read_imagef(randomImage, nodePosition + (int2)(-1, -1)).x, &output0);

        // Assign to destination buffer
        destination[nodeStartOffset + 0] = output0;

        // Assign recurrent values to destination buffer
}
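Two details of that generated kernel are easy to miss: the connection coordinates wrap around a toroidal field, and the kernel reads from `source` while writing to `destination`, so the host can ping-pong the two buffers between steps. A plain-Python sketch of both ideas (hypothetical host-side code, not part of ERL):

```python
FIELD_WIDTH, FIELD_HEIGHT = 10, 10

def wrap(x, y):
    """Wrap a 2D position onto the toroidal node field, mirroring the
    modulo-and-shift logic in the generated kernel."""
    x %= FIELD_WIDTH
    y %= FIELD_HEIGHT
    # In C, % can return negative values, hence the kernel's extra
    # "add fieldWidth if negative" step; Python's % is already non-negative.
    return x, y

def step(source):
    """One field update: read only `source`, write only `destination`,
    so every node sees a consistent snapshot of the previous state."""
    destination = [0.0] * len(source)
    for index in range(len(source)):
        x, y = index % FIELD_WIDTH, index // FIELD_WIDTH
        # e.g. read the neighbour at offset (-2, -2), wrapped toroidally
        nx, ny = wrap(x - 2, y - 2)
        destination[index] = source[nx + ny * FIELD_WIDTH]
    return destination  # the host then swaps source and destination

print(wrap(-2, 11))  # prints (8, 1)
```

On the GPU the swap is just exchanging which buffer is bound as `source` and which as `destination` between kernel launches, so no data is copied.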
 
Have you heard about the new Cray super computer?  It’s so fast, it executes an infinite loop in 6 seconds.

josh123

  • Newbie
  • *
  • Posts: 17
Re: ERL
« Reply #1 on: July 05, 2014, 10:21:37 pm »
The combination of OpenCL and evolutionary machine learning algorithms seems interesting, but I'm not exactly sure I understand what ERL does differently from existing evolutionary algorithms. Are you saying that ERL searches for the correct algorithm and/or topology that solves a given learning task like vision or auditory recognition? I'm about to begin working for a professor who developed HyperNEAT (evolving CPPNs to find optimal neural networks). Is ERL much different (other than the reinforcement learning part)?

This is gonna sound a bit weird, but we went to the same high school. I've since switched mainly to AI rather than video games, but I still like SFML for creating simulations. You may remember that I told you SDL 2 was about to come out around 2012, and you thought the project was dead. I told you it would eventually be released.  8)

Anyway, have you applied ERL to any cool learning tasks other than the three you listed on GitHub?

lolz123

  • Sr. Member
  • ****
  • Posts: 260
Re: ERL
« Reply #2 on: July 06, 2014, 01:06:05 am »
Quote
Are you saying that ERL searches for the correct algorithm and/or topology that solves a given learning task like vision or auditory recognition?
Yes. ERL doesn't just evolve a network, it evolves the rules that govern the dynamics of the network. So it is not like NEAT or HyperNEAT, whose networks are static once evolved. ERL actually uses CPPNs evolved by NEAT under the hood (not HyperNEAT, which is inappropriate for this task).
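To illustrate the difference: with NEAT/HyperNEAT the evolved artifact is a fixed network, while ERL evolves the rules that keep rewriting the network while the agent runs. A toy Python illustration; the reward-modulated Hebbian rule here is a hand-written stand-in for whatever evolution would actually produce:

```python
import math

# Static network (NEAT-style result): weights never change after evolution.
def static_forward(weights, inputs):
    return math.tanh(sum(w * i for w, i in zip(weights, inputs)))

# ERL-style result: an *update rule* that keeps modifying the weights
# while the agent runs (here a reward-modulated Hebbian step).
def evolved_update_rule(weight, pre, post, reward):
    return weight + 0.1 * reward * pre * post

def plastic_forward(weights, inputs, reward):
    """Compute the output, then apply the evolved rule to every weight."""
    output = math.tanh(sum(w * i for w, i in zip(weights, inputs)))
    new_weights = [evolved_update_rule(w, i, output, reward)
                   for w, i in zip(weights, inputs)]
    return output, new_weights

w = [0.5, -0.3]
out1, w = plastic_forward(w, [1.0, 1.0], reward=1.0)
out2, w = plastic_forward(w, [1.0, 1.0], reward=1.0)
print(out1 != out2)  # prints True: the network changed itself between calls
```

The static network gives the same answer for the same input forever; the plastic one keeps adapting, which is what "evolving the rules that govern the dynamics" buys you.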

Quote
This is gonna sound a bit weird, but we went to the same high school
I think I remember. I am glad to see that you have become interested in AI as well!

Quote
Anyway have you applied ERL to do any cool learning tasks other the three you listed on github?
So far it hasn't been applied to anything, since it isn't done yet ;)

Quote
I told you it would eventually be released
Yep, you were right! It looked dead at the time though, you have to admit ;)

josh123

  • Newbie
  • *
  • Posts: 17
Re: ERL
« Reply #3 on: July 06, 2014, 03:26:36 am »
Pretty cool, I'll keep checking out this project. I'm considering using SFML as well in a research project I will be working on (a sort of reinforcement learning with an autoencoder). Maybe ERL can inform our research, or the other way around. Regardless, it's an interesting project.

lolz123

  • Sr. Member
  • ****
  • Posts: 260
Re: ERL
« Reply #4 on: July 06, 2014, 04:27:50 am »
Do you mean this? http://eplex.cs.ucf.edu/papers/pugh_alife14.pdf

I have a working version of it, with some enhancements, if you want to check it out.

josh123

  • Newbie
  • *
  • Posts: 17
Re: ERL
« Reply #5 on: July 06, 2014, 03:14:40 pm »
I guess you are interested in our research or similar projects ;D. Yeah, that would be cool to see your implementation. I'm surprised reading the paper was enough to create an implementation; they are usually somewhat vague. Studying the source is usually better. EDIT: I found it.
« Last Edit: July 06, 2014, 03:17:37 pm by josh123 »

 
