Acorn Arcade forums: Programming: Artificial intelligence evolution
 
  Artificial intelligence evolution
  (09:49 21/11/2001)
  Guy (14:07 21/11/2001)
    ToiletDuck (15:37 21/11/2001)
      andrew (10:54 22/11/2001)
        Guy (13:58 15/6/2002)
          andrew (15:03 26/11/2001)
            Loris (14:53 28/11/2001)
              Guy (13:58 15/6/2002)
                ToiletDuck (13:11 29/11/2001)
                  andrew (13:59 29/11/2001)
                    Phlamethrower (15:21 29/11/2001)
                    Loris (18:10 29/11/2001)
                Loris (13:58 15/6/2002)
                  Guy (16:10 30/11/2001)
                    Loris (15:21 1/12/2001)
                      Guy (17:29 3/12/2001)
                        Loris (12:50 6/12/2001)
                          Guy (15:15 6/12/2001)
                            Guy (16:52 6/12/2001)
                            Loris (13:58 15/6/2002)
                              Guy (14:05 12/12/2001)
      Phlamethrower (13:58 15/6/2002)
    andrew (10:13 22/11/2001)
  monkeyson (14:02 23/11/2001)
  johnstlr (13:58 15/6/2002)
    andrew (10:06 21/11/2001)
      johnstlr (10:24 21/11/2001)
 
andrew Message #4855, posted at 09:49, 21/11/2001
Unregistered user One area of programming that interests me is whether it is possible to ever create a program that would evolve in a similar way to life and gain complexity largely independently of the user.
How could you even go about doing this?
For this to happen there would need to be a huge number of variables for the program entity to interact with, and they would have to change and pose a challenge to the entity.
I know nothing about AI theory but what do others think?
 
andrew Message #4857, posted at 10:06, 21/11/2001, in reply to message #4856
Unregistered user So from what you're saying there would have to be much feedback to the entity you create from the environment, in order for the entity to learn.
I'd be interested in tackling this myself from the start and seeing how far it could be taken. Presumably I'd soon find out ;-)
 
johnstlr Message #4858, posted at 10:24, 21/11/2001, in reply to message #4857
Unregistered user
So from what you're saying there would have to be much feedback to the entity you create from the environment, in order for the entity to learn.

Yes, because without feedback or stimulus there's nothing to learn about.


I'd be interested in tackling this myself from the start and seeing how far it could be taken. Presumably I'd soon find out ;-)

Try

http://library.thinkquest.org/2705/

 
Guy Message #4859, posted at 14:07, 21/11/2001, in reply to message #4855
Unregistered user There was a computer game released a while back which embodied groundbreaking AI features, much to the author's surprise (he hadn't found out that it couldn't be done, so he did it anyway). IIRC it is called Creatures.

Somebody runs autonomous communities of beasties, which evolve and periodically swap "genes" with other communities - each community on its own host, with hopes of boxing the things in tightly enough to release it on the internet (don't know if that stage is reached yet).

Wish I knew some urls for you.
hth anyway

 
Mark Quint Message #4860, posted by ToiletDuck at 15:37, 21/11/2001, in reply to message #4859
Ooh ducky!Quack Quack
Posts: 1016
I think the drawback with that approach is that in that case and other games the artificial intelligence can only really be "emulated" from what we already know, so we're already telling it too much information, which it should be developing and evolving itself.
In theory I suppose if you could just give an artificial 'thing' the meaning of life then it'd probably be sorted, although once again we would be telling it plainly from a human point of view. ?:/
 
andrew Message #4862, posted at 10:13, 22/11/2001, in reply to message #4859
Unregistered user
There was a computer game released a while back which embodied groundbreaking AI features, much to the author's surprise (he hadn't found out that it couldn't be done, so he did it anyway). IIRC it is called Creatures.

Somebody runs autonomous communities of beasties, which evolve and periodically swap "genes" with other communities - each community on its own host, with hopes of boxing the things in tightly enough to release it on the internet (don't know if that stage is reached yet).

Wish I knew some urls for you.
hth anyway

Yes, that's helpful. I think it has to be modelled on evolution theories and taken from there.

 
andrew Message #4863, posted at 10:54, 22/11/2001, in reply to message #4860
Unregistered user
I think the drawback with that approach is that in that case and other games the artificial intelligence can only really be "emulated" from what we already know, so we're already telling it too much information, which it should be developing and evolving itself.
In theory I suppose if you could just give an artificial 'thing' the meaning of life then it'd probably be sorted, although once again we would be telling it plainly from a human point of view. ?:/

I'm not sure there are many other ways you could go about it. I read on somebody's site, from a flipcode link, that Darwin's natural selection theory is important:
most individuals have more than one offspring
there is variation among offspring
there are limited resources in the environment
and so on (I can't remember all of them).
These would make an important starting point, although I think thousands or millions of individuals would have to be represented to gain evolution. But my point was really: how could the program be made to evolve, e.g. how could a program begin to create new subroutines as it learnt things?
In terms of being an intelligence to interact with the programmer, there would have to be constant feedback, so realistically, unless you've got years to spare, you would have to allow intelligence to evolve /amongst/ individuals in the computer's memory.
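The selection conditions listed above can be sketched as a toy genetic algorithm. Everything here (the fitness rule, population sizes, mutation size) is invented purely for illustration, not taken from any real artificial-life system:

```python
# A toy genetic algorithm illustrating the conditions above. All names
# and numbers are hypothetical stand-ins.
import random

random.seed(0)               # make the run repeatable

POPULATION_CAP = 50          # "limited resources in the environment"
MUTATION_SIZE = 0.1

def fitness(genome):
    # Arbitrary stand-in for "using the environment well": genomes
    # whose genes sum closest to 100 score highest.
    return -abs(sum(genome) - 100)

def reproduce(parent):
    # "most individuals have more than one offspring" and
    # "there is variation among offspring"
    return [[g + random.uniform(-MUTATION_SIZE, MUTATION_SIZE) for g in parent]
            for _ in range(2)]

population = [[random.uniform(0, 1) for _ in range(20)] for _ in range(10)]
start = max(fitness(g) for g in population)

for generation in range(100):
    offspring = [child for parent in population for child in reproduce(parent)]
    # Limited resources: only the fittest survive, up to the cap.
    population = sorted(offspring, key=fitness, reverse=True)[:POPULATION_CAP]

best_fitness = max(fitness(g) for g in population)
```

No individual "learns" anything here; the population as a whole drifts towards higher fitness, which is the distinction the post is groping at between evolving a program and having a program grow new subroutines.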

 
monkeyson Message #4865, posted at 14:02, 23/11/2001, in reply to message #4855
Unregistered user I'm doing a module on it currently:

http://www.comp.leeds.ac.uk/seth/ar35/

 
andrew Message #4866, posted at 15:03, 26/11/2001, in reply to message #4864
Unregistered user

The code is usually highly modular. These 'genes' can then be randomly modified, and the best ones picked for the next generation. Such genetic algorithms have even been used successfully to refine the design of jet engines.

Would it be a good idea to pick these properties of individuals, or just to create a set of rules whereby those with the properties that could best use the environment to reproduce would survive?
If you took this concept from square one, what would these basic properties be, and what would the properties of the environment be?
Would you allow the environment to be altered by the creatures in any way?
There must be a finite limit on the complexity of the system, as opposed to nature, as you simply couldn't introduce all the possible variables in the environment that nature had when life began to evolve.

 
Loris Message #4867, posted at 14:53, 28/11/2001, in reply to message #4866
Unregistered user Hi folks,
this is my first post from a new ID; I seem to have forgotten not only my password but also my old username. Duh.

These topics are both very interesting to me.
I make a strong distinction between AI (artificial intelligence) and AL (artificial life).
I believe it to be possible to have either one without necessarily the other.

Would it be a good idea to pick these properties of individuals, or just to create a set of rules whereby those with the properties that could best use the environment to reproduce would survive?
If you took this concept from square one, what would these basic properties be, and what would the properties of the environment be?
Would you allow the environment to be altered by the creatures in any way?
There must be a finite limit on the complexity of the system, as opposed to nature, as you simply couldn't introduce all the possible variables in the environment that nature had when life began to evolve.

Regarding AL, I think it should be possible to create a complex and interesting world, given only enough memory and processor power.
While it would be possible to introduce some complexity into the world, I think most of the interest would come from the interaction between different artificial organisms.
Contrary to what other people have said above, I don't think you would need millions of organisms for evolution to occur. For asexual species only very few organisms would be required, although for interesting developments more would be necessary. For sexual species, including higher organisms like tigers, it is generally held that around 500 individuals is the minimum population size necessary to avoid deleterious bottlenecks leading to extinction.
Generating interesting creatures depends on the code by which creatures are defined - their genetic code. A very interesting topic I won't go into here...
Except to say that IMHO for 'real' AL it should be read (or interpreted) through a function rather than a set of hardwired attributes.

I am also interested in AI, particularly neural nets. To me these appear to have two problems; one is pragmatic, the other theoretical.
Firstly, for a powerful consciousness a significant amount of memory and processing power appears necessary.
More interestingly, the main problem as I see it is: how do you train your neural network?
There are several methods people have used for this, with differing degrees of success and biological plausibility.
1) Back propagation
If you know what the output you want is, you can modify the connections in such a way that you will get that result next time.
Works if you have a set of training data.
No biological explanation or equivalent.
I don't like this method really - it is cheating.
2) Modification and selection
After each failed test, make a change and see if the network performs better.
The correct behaviour is evolved. Can be considered a hybrid of artificial life and intelligence.
Doesn't generally work too quickly.
3) Reward and punishment
This is the method I'm thinking of using.
You might think it really fits elsewhere, but never mind.
With some training simulation (i.e. in a continuously proceeding time-scheme),
reward for suitable behaviour, punish for unsuitable behaviour.
Rewarding involves increasing the strength of connections firing strongly and decreasing the strength of weakly firing ones. Punishment is the reverse.
I consider this the most realistic, but should probably point out that I'm not an expert in the field, haven't even built a neural net yet, and this might not work. YMMV.
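The reward/punishment update in (3) could be sketched roughly as follows, for a single-layer net of connection strengths. The sizes, learning rate and update rule are all invented for illustration; as the post itself warns, this is untested:

```python
# Sketch of reward/punishment training: strengthen connections that
# fired strongly when rewarded, weaken them when punished.
# Hypothetical parameters throughout; not a proven training recipe.
import random

random.seed(1)

N_IN, N_OUT = 4, 2
LEARN_RATE = 0.05

# Connection strengths from each input node to each output node.
weights = [[random.uniform(-1.0, 1.0) for _ in range(N_IN)]
           for _ in range(N_OUT)]

def activate(inputs):
    # Each output node sums its weighted inputs.
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

def train(inputs, reward):
    # reward > 0: amplify whatever just fired (suitable behaviour);
    # reward < 0: the reverse (punishment), as described above.
    outputs = activate(inputs)
    for i, out in enumerate(outputs):
        for j, x in enumerate(inputs):
            weights[i][j] += LEARN_RATE * reward * out * x

stimulus = [1.0, 0.0, 0.5, 0.0]
before = activate(stimulus)
train(stimulus, reward=1.0)     # the behaviour was deemed suitable
after = activate(stimulus)
```

With a negative reward the same update shrinks the responses instead, which matches the "punishment is the reverse" description.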

E&OE.
Hope I didn't tread on anyone's toes.
Yours,
Tony

 
Mark Quint Message #4869, posted by ToiletDuck at 13:11, 29/11/2001, in reply to message #4868
Ooh ducky!Quack Quack
Posts: 1016
Hrm.
I'd say it's a two-way thing - you get a mutation somewhere that alters a property, then comes 'competition', where the most successful 'thing' will survive and reproduce, allowing for more mutations...
As opposed to trial-and-error, you'd need to set a range of rules that restricted the 'thing' to a certain environment, and when it broke a rule, it 'dies'.
 
andrew Message #4870, posted at 13:59, 29/11/2001, in reply to message #4869
Unregistered user In terms of AL, then, selection would mean using the environment in the best way to enable survival and reproduction.
The question is - what would the properties of the entities and the environment be?
E.g. in a very simple system, entities have the ability to reproduce given that they can find limited food. There must be another property they have, however, which they can vary to compete with the other organisms - and wouldn't this eventually lead to one winner, as opposed to a healthy population of artificial lifeforms?
 
Phlamethrower Message #4871, posted at 15:21, 29/11/2001, in reply to message #4870
Unregistered user How about Lemmings?

A simple 2D environment designed by you, and inhabited by little critters that have to try and stay alive, while constantly keeping on the move.

This means they'd have two goals - to keep alive, and to keep moving. Giving each lemming a simple neural net (Or similar fuzzy logic function) would give them the brains they need to survive.

Traps for the lemmings could be simple things like long falls and water, or more complex things like crushing ceilings and roaming animals. This should hopefully lead to some kind of urge to stay in a group, for protection against the roaming animals.

If you added skills which the lemmings could randomly learn (e.g. bridge building), then they could influence the environment and build up some kind of ants'-nest-like colony. In fact, you could have it based around something like an ants' nest - food is on the outside, and safety from predators is on the inside.

Hmm, I might have a shot at this myself...
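A very rough sketch of the sort of world described, with the suggested neural net stubbed out as a single inherited tendency. Every name and number here is a placeholder:

```python
# Toy lemming world: critters must keep moving, but moving can walk
# them into traps. The "brain" is stubbed out as one inherited number;
# a real attempt would substitute a small neural net as suggested above.
import random

random.seed(2)

WIDTH = 20
TRAPS = {5, 13}      # hypothetical hazard squares (long falls, water...)
STEPS = 50

class Lemming:
    def __init__(self):
        self.pos = 0
        self.alive = True
        # Placeholder brain: probability of stepping forward each tick.
        self.move_prob = random.random()

    def tick(self):
        if self.alive and random.random() < self.move_prob:
            self.pos = (self.pos + 1) % WIDTH
            if self.pos in TRAPS:
                self.alive = False   # walked into a trap

colony = [Lemming() for _ in range(30)]
for _ in range(STEPS):
    for lem in colony:
        lem.tick()

survivors = [lem for lem in colony if lem.alive]
```

Selection would then come from letting the survivors breed, mutating move_prob (or the net weights) between generations, so the two goals - stay alive, keep moving - pull against each other.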

 
Loris Message #4872, posted at 18:10, 29/11/2001, in reply to message #4870
Unregistered user
In terms of AL, then, selection would mean using the environment in the best way to enable survival and reproduction.
The question is - what would the properties of the entities and the environment be?
E.g. in a very simple system, entities have the ability to reproduce given that they can find limited food. There must be another property they have, however, which they can vary to compete with the other organisms - and wouldn't this eventually lead to one winner, as opposed to a healthy population of artificial lifeforms?

Very interesting points.
Regarding Artificial Life:
I too would favour a very simplistic environment.
However... one thing which promotes diversification (and hence speciation) is a varied environment. This could be as simple as a wide open space, a space with a few small barriers in it, a complex maze (but probably with no long dead-ends), etc. An experiment I remember described showed that while in a tray of flour only one species (of flour beetle) could stably survive, the introduction of some glass tubes allowed the co-existence of two species.

The interest in this microcosm comes from the interaction between different creatures, and it is from this that the driving force for evolution can be maintained. This is known as an 'arms race'.
Thus there may evolve predators which prey on the other creatures, adaptations to avoid predators etc.

However, it is also clear that the evolutionary scheme must be open-ended. For this to be the case it is simply not acceptable to have a series of specified variables which you call genes. Given this, I'd suggest copying nature and giving each organism a 'genome' specified by one or more linear (or potentially circular!) strings of information.
This then requires some method of interpretation by which the full phenotype (every apparent thing about that organism) can be derived. Deciding on a scheme which will provide interesting developments is perhaps the most difficult part, and one which I'd like to discuss further, if anyone is interested. Such a genetic scheme need not bear any resemblance to that of biological organisms. In fact I think it would be best to cut to the chase and have a fairly direct process of interpretation.
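The "linear genome read through an interpretation function" idea might be sketched like this. The two-letter codon scheme and the traits are made up purely for illustration:

```python
# A genome is just a string; the phenotype is *derived* from it by an
# interpretation function, rather than stored as hardwired attributes.
# The codon scheme below is invented for this sketch.
import random

random.seed(3)

ALPHABET = "ABCD"

def interpret(genome):
    phenotype = {"speed": 1.0, "size": 1.0}
    for i in range(0, len(genome) - 1, 2):
        codon = genome[i:i + 2]
        if codon == "AB":
            phenotype["speed"] *= 1.1
        elif codon == "CD":
            phenotype["size"] *= 1.1
        # any other codon decodes to nothing ("junk" sequence)
    return phenotype

def point_mutate(genome):
    # Open-ended: every possible string is still a readable genome.
    i = random.randrange(len(genome))
    return genome[:i] + random.choice(ALPHABET) + genome[i + 1:]

parent = "ABCDAACD"          # decodes to speed 1.1, size 1.21
phenotype = interpret(parent)
child = point_mutate(parent)
```

Because interpretation is open-ended, mutations can create, destroy or duplicate 'genes' without the program needing a fixed list of traits, which is what distinguishes this from a set of specified variables.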

 
Guy Message #4874, posted at 16:10, 30/11/2001, in reply to message #4873
Unregistered user
How do you distinguish the boundary between AI and AL? Aren't evolution and trial-and-error learning very similar algorithms?
Being as it's Friday afternoon, I'll try and clarify my point (yes, I know I should have got it right first time, so no need for this post, but there - a little trial-and-error is needed to help evolve my abilities).

Memetics is the study of ideas as evolutionary entities - a meme is the ideal (in the sense of ideas, not perfection) equivalent of a gene.

Both algorithms involve creating memes which improve with successive generations, here expressed as code. How we go about creating the memes and their environment is going to be pretty much the same, whether the code memes make up an intelligence or indirectly represent life-forms. They are still memes and obey the rules of memetics. True, AL goes for quantity whereas AI goes for quality, but this difference is trivial - isn't it?

 
Loris Message #4875, posted at 15:21, 1/12/2001, in reply to message #4874
Unregistered user I'm going to reply to this bit by bit..

Memetics is the study of ideas as evolutionary entities - a meme is the ideal (in the sense of ideas, not perfection) equivalent of a gene.

I'm not aware of 'memetics' as a fully fledged discipline as you describe. I thought memetics referred to the study of memory.
'Memes' as I know them were described by Richard Dawkins in one chapter of one of his books - "The Blind Watchmaker" perhaps? IIRC he made it clear that it was an analogy only, albeit one with some interest.

Both algorithms involve creating memes which improve with successive generations, here expressed as code.

By both algorithms you mean evolution by natural selection and the theory of memes?
While I certainly appreciate meme theory, I'm having some trouble comprehending your points. But I think I now understand.

The theory of memes is a concept of how some ideas 'survive' by spreading between people, and others don't. It doesn't say anything about the actual intelligence itself. In particular, it says nothing about how ideas are encoded. Of course, if we knew how to encode and decode ideas we could build an intelligence. But this is a decidedly tricky problem, and to my mind at least it is not likely to involve a linear encoding method (like biological genes). In real-life brains, I believe it to involve connection strengths (amongst other variables) within networks of neurones.

How we go about creating the memes and their environment is going to be pretty much the same, whether the code memes make up an intelligence or indirectly represent life-forms. they are still memes and obey the rules of memetics. True, AL goes for quantity whereas AI goes for quality, but this difference is trivial - isn't it?

If you agree with what I say in the preceding paragraph, that isn't really true. Memes are ideas, OK, but the meme theory was put forward as a comment about the communication of ideas. It doesn't impact on the representation of ideas internal to the brain.

 
Guy Message #4876, posted at 17:29, 3/12/2001, in reply to message #4875
Unregistered user
I'm not aware of 'memetics' as a fully fledged discipline as you describe. I thought memetics referred to the study of memory.
'Memes' as I know them were described by Richard Dawkins in one chapter of one of his books - "The Blind Watchmaker" perhaps? IIRC he made it clear that it was an analogy only, albeit one with some interest.
Richard Dawkins' throwaway analogy did indeed spawn a new science of memetics - try typing "memetics" into a search engine, or read Susan Blackmore's The Meme Machine. Any piece of information may be regarded as a meme which reproduces, evolves and/or dies over time. The human mind may be regarded as just a collection of memes (memeplex) inhabiting the brain.

Both algorithms involve creating memes which improve with successive generations, here expressed as code.
By both algorithms you mean evolution by natural selection and the theory of memes?
Actually, I meant the algorithms used to create AI and AL.

My point is that at a given level of sophistication, whether you sit down to write AL software or AI software, the code will be pretty similar. Just as the backdrop to AL entities is the host environment, so the backdrop to AI ideas is the "brain substrate". An AI is judged by its ideas, not its substrate, just as an AL environment is judged by its life forms, not its environment. Nevertheless, both the environment and the brain are essential to the genetic/memetic inhabitants.
Memetics is one way of formalising that commonality - because AL and AI are just information in a box, they are memeplexes, and pretty much the same memeplex at that.

it is not likely to involve a linear encoding method (like biological genes). In real-life brains, I believe it to involve connection strengths (amongst other variables) within networks of neurones.
Don't you need both? - the genes are used to evolve better sets of reinforced connections, and maybe better ways of reinforcing connections (ie better ways of learning).
meme theory ... doesn't impact on the representation of ideas internal to the brain.
I refer you back to Susan Blackmore, whose "meme machine" is indeed the human brain.
 
Loris Message #4877, posted at 12:50, 6/12/2001, in reply to message #4876
Unregistered user Hope I get the quoting system right here - it is a bit fiddly isn't it?

I'm not aware of 'memetics' as a fully fledged discipline as you describe. I thought memetics referred to the study of memory.
'Memes' as I know them were described by Richard Dawkins in one chapter of one of his books - "The Blind Watchmaker" perhaps? IIRC he made it clear that it was an analogy only, albeit one with some interest.

as an aside, I should point out here I was thinking of 'mnemonics' which are aids to memory or a system for improving memory. Not really a scientific discipline, an Asimov short story notwithstanding. (Duh.)

Richard Dawkins' throwaway analogy did indeed spawn a new science of memetics - try typing "memetics" into a search engine, or read Susan Blackmore's The Meme Machine. Any piece of information may be regarded as a meme which reproduces, evolves and/or dies over time. The human mind may be regarded as just a collection of memes (memeplex) inhabiting the brain.

I've just put "memetics" into google, and it does indeed come up with a few pointers, the first one being a peer reviewed electronic journal. I've not read the book you mention, although I'll look out for it. Please understand that I'm not criticizing meme theory, indeed I believe it to be true. However this doesn't affect my point that memetics regards the transmission of ideas, whereas AI involves the generation of ideas. I'll use 'memetics' instead of 'meme theory' henceforth in this forum.

Both algorithms involve creating memes which improve with successive generations, here expressed as code.
By both algorithms you mean evolution by natural selection and the theory of memes?
Actually, I meant the algorithms used to create AI and AL.
You mean: "both AI and AL involve transmission of information"? (Let's not confuse genes and memes.) I don't agree entirely. Dawkins suggested memes for the transfer of ideas between people (or consciousnesses). AI need not necessarily have this. However, a truly conscious AI would presumably evolve ideas internally. But this is not really the 'spirit' of memetics, because this does away with the point. The idea was that memes can be 'selfish', just like genes. An idea can be wrong, but still spread widely because of its properties. Without communication, this cannot be the case. Ideas can be right or wrong, but they don't propagate better because of anything they are.

My point is that at a given level of sophistication, whether you sit down to write AL software or AI software, the code will be pretty similar.

Sorry to cut in here but I'd deny this.

Just as the backdrop to AL entities is the host environment, so the backdrop to AI ideas is the "brain substrate". An AI is judged by its ideas not its substrate, just as an AL environment is judged by its life forms not its environment. nevertheless, both the environment and the brain are essential to the genetic/memetic inhabitants.

I think that is all well and good, but irrelevant to your point (for the reason I gave above).

Memetics is one way of formalising that commonality - because AL and AI are just information in a box, they are memeplexes, and pretty much the same memeplex at that.
it is not likely to involve a linear encoding method (like biological genes). In real-life brains, I believe it to involve connection strengths (amongst other variables) within networks of neurones.

Don't you need both? - the genes are used to evolve better sets of reinforced connections, and maybe better ways of reinforcing connections (ie better ways of learning).

That is the difference. I don't see why I'd need to encode genes if I was constructing an electronic brain. Trying to evolve better electronic brains, maybe. A variation on the learning process (reportedly not an efficient one) maybe. But I don't think they are necessary for intelligence. I fully intend to just use node connection strengths, and pleasure/pain for learning.

meme theory ... doesn't impact on the representation of ideas internal to the brain.
I refer you back to Susan Blackmore, whose "meme machine" is indeed the human brain.

I will try and read this at some point; it sounds interesting.
 
Guy Message #4878, posted at 15:15, 6/12/2001, in reply to message #4877
Unregistered user
Hope I get the quoting system right here - it is a bit fiddly isn't it?
:/
as an aside, I should point out here I was thinking of 'mnemonics' which are aids to memory or a system for improving memory.
I sympathise. I can never remember mnemonics.

... my point that memetics regards the transmission of ideas, whereas AI involves the generation of ideas.
Memetics *does* include generation (even if Dawkins' original analogy didn't): this is because it is based on the full Darwinian evolutionary process, which includes reproduction - and transmission is the meme's main reproduction mechanism.

AIUI, this rather changes your subsequent comments.

I don't see why I'd need to encode genes if I was constructing an electronic brain. Trying to evolve better electronic brains, maybe. A variation on the learning process (reportedly not an efficient one) maybe. But I don't think they are necessary for intelligence. I fully intend to just use node connection strengths, and pleasure/pain for learning.
Some theories hold that memetic evolution *defines* the intelligent learning process (as opposed to reward/punishment 'blind' learning) - and how can you construct an intelligent brain without it having evolved its ideas in just such an intelligent learning process?
I am not convinced that blind learning is the only kind there is.
 
Guy Message #4879, posted at 16:52, 6/12/2001, in reply to message #4878
Unregistered user
... my point that memetics regards the transmission of ideas, whereas AI involves the generation of ideas.
Memetics *does* include generation (even if Dawkins' original analogy didn't): this is because it is based on the full Darwinian evolutionary process, which includes reproduction - and transmission is the meme's main reproduction mechanism.
I meant to say, and generation of a new copy (which may or may not mutate) is the first step in transmission.
 
Guy Message #4881, posted at 14:05, 12/12/2001, in reply to message #4880
Unregistered user The Meme Machine is better argued than my postings, so I won't answer your points individually. I'll just say that you seem to be using Dawkins' original idea as a guide. Don't - memes are more sophisticated and can indeed do the things you doubt.

(I notice there is an Edit message button)
Not when I disable cookies - TIB then doesn't recognise me so omits the button.

What does AIUI mean?
As I Understand It. Sorry for the geekology.
 
Loris Message #4880, posted at 13:58, 15/6/2002, in reply to message #4878
Unregistered user
... my point that memetics regards the transmission of ideas, whereas AI involves the generation of ideas.
Memetics *does* include generation (even if Dawkins' original analogy didn't): this is because it is based on the full Darwinian evolutionary process, which includes reproduction - and transmission is the meme's main reproduction mechanism.
**I meant to say, and generation of a new copy (which may or may not mutate) is the first step in transmission.** - Message transferred by Tony. (I notice there is an Edit message button)

Are you saying that this occurs inside one individual? This seems a bit odd to me, as it would suggest that new ideas are caused by duplication of old ideas (with modification). Thus, to develop an idea, memes would breed inside the brain to create more and more. In my observations of my own thinking process this does not seem to be how it works. It seems that the development of ideas is done by extending (adding corollaries, comments or links to other ideas) rather than by duplicating and modifying.

AIUI, this rather changes your subsequent comments.

What does AIUI mean?
and not if I deny it. wink

You don't seem to have addressed the points which you snipped, so I'll repeat them here:

Dawkins suggested memes for the transfer of ideas between people (or consciousnesses). AI need not necessarily have this. However, a truly conscious AI would presumably evolve ideas internally. But this is not really the 'spirit' of memetics, because this does away with the point. The idea was that memes can be 'selfish', just like genes. An idea can be wrong, but still spread widely because of its properties. Without communication, this cannot be the case. Ideas can be right or wrong, but they don't propagate better because of anything they are.
To this perhaps I could add that if memes functioned internally, it would be very easy for a 'viral' form to be created which would take over the whole brain with only copies of itself!

***

I don't see why I'd need to encode genes if I was constructing an electronic brain. Trying to evolve better electronic brains, maybe. A variation on the learning process (reportedly not an efficient one) maybe. But I don't think they are necessary for intelligence. I fully intend to just use node connection strengths, and pleasure/pain for learning.

I should point out that anything I make for the foreseeable future will probably not be deemed conscious. But it might exhibit learning behaviour.

Some theories hold that memetic evolution *defines* the intelligent learning process (as opposed to reward/punishment 'blind' learning) - and how can you construct an intelligent brain without it having evolved its ideas in just such an intelligent learning process?

You seem to have missed my point here. My claim is that memes don't require genes; I hold it is theoretically possible to create a sapient AI entity without genetic evolution. This doesn't say anything about how it develops ideas.

I am not convinced that blind learning is the only kind there is.

If you are looking for forsight, Darwinian evolution is not the way to go!
Regarding reward/punishment learning:
It seems that people develop through a stage of soley this very early on, and from then on also use deduction.
All the things people do that I can think of are goal oriented. It is obtaining pleasure (and avoiding pain), of whatever sort, that is aimed for. But the type of reward may be very sophisticated. Maybe rewards are in layers, with each higher level feeding through to the one(s) below. At the bottom would be physical requirements - food+drink, physical comfort and biological imperatives like sex. Above this would be making other people happy, being polite, honourable etc. An example of a level above that might be solving basic maths puzzles. And above that? Ways of solving puzzles.
Sorry this para goes on; I got carried away.

Learning through being taught (with occasional mistakes) is what memes are about; while this is certainly important in human civilisation, it is not sufficient for consciousness. If anything could only learn in this way and not deduce anything, I'd say it probably wasn't conscious. This is why I can't see memes as a basis for consciousness.
For what it is worth, the same goes for basic reward/punishment learning. But I can see such training being the basis for the development of higher learning skills.

 
Guy Message #4868, posted at 13:58, 15/6/2002, in reply to message #4867
Unregistered user
Would it be a good idea to pick these properties of individuals or just to create a set of rules whereby those with the properties that could best use the environment to reproduce would survive?
For general AL I think creating rules is a much better idea.

Would you allow the environment to be altered by the creatures in anyway?
It'd be nice to. Not sure how. Would there be a danger of recursive definition of life form, in the sense that Gaia is an environment that has gained at least some characteristics of a life form?
There must be a finite limit on the complexity of the system, as opposed to nature, as you simply couldn't introduce all the possible variables in the environment that nature had when life began to evolve.
OTOH you could evolve it a lot faster, so maybe it'd catch up and get even more complex.
Hope I didn't tread on anyone's toes.
More like passed over my head wink
How do you distinguish the boundary between AI and AL? Aren't evolution and trial-and-error learning very similar algorithms?
 
Loris Message #4873, posted at 13:58, 15/6/2002, in reply to message #4868
Unregistered user
How do you distinguish the boundary between AI and AL? Aren't evolution and trial-and-error learning very similar algorithms?

I hope people don't mind my posting a second reply...

Evolution as a word basically just means change. Suns evolve, as do individual people, Pokémon cool etc. However it seems to have appropriated the full sense of Darwinian evolution: evolution by natural selection.
I hope I can take it that most people know and accept this, but in brief:
If a population contains individuals with different attributes, then they may show different fitnesses (ability to produce viable offspring over their lifetime). Over time the 'most fit' (those who produce the most surviving offspring) will take over the population. For this to be true the differences must be inherited. If any form of mating occurs the differences must be discrete (digital rather than analogue), and mutations must occur if evolution is to continue.
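
In code, those ingredients boil down to something like this toy Python sketch (treating the heritable attribute itself as the fitness is an arbitrary assumption, purely for illustration):

```python
import random

# Toy natural selection: one heritable numeric attribute per individual,
# fitness-proportional reproduction, occasional discrete mutation.
# Using the attribute itself as fitness is an arbitrary assumption.

def step(population, mutation_rate=0.05):
    """One generation: fitter individuals leave more offspring."""
    parents = random.choices(population, weights=population, k=len(population))
    return [max(1, p + random.choice([-1, 1]))  # discrete mutation
            if random.random() < mutation_rate else p
            for p in parents]

pop = [random.randint(1, 10) for _ in range(100)]
for _ in range(50):
    pop = step(pop)
# The 'most fit' (largest) values tend to take over the population.
```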

Given all of that, we could make varieties of artificial life in the computer, and if the simulation were really big, perhaps eventually artificial intelligence might evolve. But that would be some way off.

Now, by artificial intelligence I guess we are really talking about artificial consciousness as an ideal end stage. What has been done so far using neural nets is the learning of responses.
I hope you would also agree that AI may not require AL. We could make a conscious being which did not artificially evolve by natural selection - or indeed any kind of selection. Certainly, I'd agree it would be possible to codify a neural network as some huge 'genetic sequence', then apply selection in an attempt to evolve intelligence (or at least some behaviour). This has been successfully tried for some tasks.
But this isn't how our brains develop. People don't start with all the knowledge of their parents (or half from each, mixed up!) and some mutant thoughts.
You start off with very little knowledge, then undergo learning. In a single neural network, (it appears that) knowledge and behaviour are developed by modifying the strengths of the connections between neurones.
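
To illustrate 'modifying the strengths of the connections', here is a single artificial neurone learning logical AND via the classic delta rule (a standard textbook rule I'm using as a stand-in; real neurones are of course more complicated):

```python
# A single artificial neurone learns AND by adjusting its connection
# strengths (classic delta/perceptron rule; purely illustrative).

def output(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def train(examples, rate=0.2, epochs=50):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - output(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(AND)  # afterwards the neurone gets all four cases right
```

The network starts knowing nothing (all weights zero) and ends up 'knowing' AND, with no genetic sequence in sight.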

Does this make any sense?

 
Guy Message #4864, posted at 13:58, 15/6/2002, in reply to message #4863
Unregistered user
Yes, Darwinian evolution is fundamental:
- replication
- variation
- selection

Doesn't matter so much how you implement each of these, but the whole thing has to be allowed to snowball for many generations.

The code is usually highly modular; the modules act as 'genes' which can then be randomly modified, and the best ones picked for the next generation. Such genetic algorithms have even been used successfully to refine the design of jet engines.
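
The replicate/vary/select loop itself is only a few lines. This toy Python version optimises a made-up fitness function (genes close to 1.0 are 'good'); a real application would substitute its own gene representation and objective:

```python
import random

# Minimal genetic algorithm. The 'design' is a list of numbers and the
# fitness function is invented for illustration.

def fitness(genes):
    return -sum((g - 1.0) ** 2 for g in genes)

def evolve(pop_size=30, gene_count=5, generations=150):
    pop = [[random.uniform(-5, 5) for _ in range(gene_count)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = pop[: pop_size // 2]                              # selection
        pop = best + [[g + random.gauss(0, 0.1) for g in random.choice(best)]
                      for _ in range(pop_size - len(best))]      # replication + variation
    return max(pop, key=fitness)

champion = evolve()  # genes drift towards 1.0 over the generations
```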

And yes, as a minimum several hundred beasties must compete over dozens or hundreds of generations. The broader the ground rules, the more generations are needed to get anywhere recognisable.

The amount of human knowledge embedded in the thing must be carefully judged, especially in the ground rules - too little and the rules will not achieve a workable relationship, too much and you will be confined to refining jet engines or whatever.

Creatures came out some years ago, no doubt other games have 'evolved' the concept further shock)

I don't think constant interaction with the programmer is necessary, but there will be key moments when it is vital. Sony's Aibo needs careful attention when new, but can play or sleep happily for ages once it has "grown up".

 
Phlamethrower Message #4861, posted at 13:58, 15/6/2002, in reply to message #4860
Unregistered user
I read somewhere that the AI in Creatures was actually quite limited... they just try something, and unless you punish them for it they'll keep doing it. Once you've punished them, they won't ever do it again.

This was on a Black & White site, so might be a bit biased...

But the AI in B&W is (from what I understand) a mix of loads of different techniques. One feature it is missing though is an actual model of the game world - the creatures have no idea what the side effects of their actions are, they just know that when they see a hungry villager they have been taught to feed it (Or throw it around a bit wink)

I found a demo of an old PC game that had you controlling an AI-like form... I can't remember what the game was called, but the creature (a robot with range-finder type sensors) was called Mendel, and you interacted with it by clicking on different sides of its body. The aim was to try and guide it through a 3D maze of deadly traps, and the game came with several preset brains at different stages of development - from a 'newborn' up to one several hours old which is quite capable of walking round without falling off ledges or anything. Also if it gets killed, it's replaced by one with a slightly modified brain, so if you kill it too much it turns neurotic.

*Searches through Windows folder for uninstall leftovers*

Found it - the game's called Galapagos. Quite a good example of how to do stuff like that, IMHO.

A quick search comes up with this site:

http://pc.hotgames.com/games/galapa/download.htm

Neither EA nor Anark is listing Galapagos any more, so the demo seems like your best bet if you want a look at it.

Of course if you want *proper* evolution, you'd need a colony of several million AIs that get random mutations and are basically in a match for survival of the fittest.

Also I read somewhere about using Conway's game of Life to model AI - you can set up certain patterns which respond to stimulus, although like with a neural net you'd probably need millions of cells for it to do anything useful. Unfortunately the website I found it on seems to have just died unhappy
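
For reference, one generation of Life is only a few lines of Python; the patterns that 'respond to stimulus' are built out of nothing more than this rule:

```python
from collections import Counter

# One generation of Conway's Game of Life. A dead cell is born with
# exactly three live neighbours; a live cell survives with two or three.

def life_step(live):
    """live: set of (x, y) live cells. Returns the next generation."""
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}  # oscillates with period 2
```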

Anyway, that's my 2p wink

 
johnstlr Message #4856, posted at 13:58, 15/6/2002, in reply to message #4855
Unregistered user
One area of programming that interests me is whether it is possible to ever create a program that would evolve in a similar way to life and gain complexity largely independently of the user.
How could you even go about doing this?

Well some microwaves use a neural net to control the software which decides how long to cook your food for when you set it to do it automatically. These nets had to learn how to do this - they've just been frozen so they don't continue learning.

Some work at Lancaster University "evolved" custom network protocols from microprotocols (ie protocol elements such as flow control, point to point transmission, reliability, ordering) based on user requirements. This was achieved by determining a "fitness function" which described the protocol requirements. The program then created a series of protocols, tested them against the fitness function and used the best to generate a new generation of protocols.
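
I don't know the Lancaster code itself, but the loop described reduces to something like this toy sketch (the feature names, costs and requirements here are all invented for illustration):

```python
import random

# Toy version of evolving a protocol from microprotocols. A 'protocol'
# is a set of features; the fitness function (invented here) rewards
# required features and penalises overhead.

FEATURES = ["flow_control", "reliability", "ordering", "multicast"]
REQUIRED = {"reliability", "multicast"}   # assumed user requirements
COST = {"flow_control": 2, "reliability": 3, "ordering": 2, "multicast": 1}

def fitness(protocol):
    missing = len(REQUIRED - protocol)
    overhead = sum(COST[f] for f in protocol)
    return -10 * missing - overhead

def mutate(protocol):
    return protocol ^ {random.choice(FEATURES)}   # toggle one feature

def evolve(generations=40, pop_size=20):
    pop = [{f for f in FEATURES if random.random() < 0.5}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = pop[: pop_size // 2]
        pop = best + [mutate(random.choice(best))
                      for _ in range(pop_size - len(best))]
    return max(pop, key=fitness)

best = evolve()  # settles on the required features with minimal overhead
```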

I'm a big advocate of this approach. Think about it: from a relatively small number of microprotocols you could, in theory, evolve a custom protocol for every situation. No more need to standardise protocols, and optimal network performance at all times.

Unfortunately the code to do this took over a day to generate a reliable multicast protocol so we're not quite at realtime protocol generation yet. cool

This is more genetic programming than true AI though.


For this to happen there would need to be a huge number of variables for the program entity to interact with, and they would have to change and pose a challenge to the entity?
I know nothing about AI theory but what do others think?

I don't know too much more either. I'm pretty sure you'll find a wealth of information by searching with google though.

 
