[MD] Transhumanism

Ian Glendinning ian.glendinning at gmail.com
Wed Jun 16 01:37:05 PDT 2010


Krim, need to unpick this ...

> [Krimel]
> I take Kurzweil's basic point to be that humans will develop AIs that will
> help them develop smarter AIs. That has already happened BTW. Eventually AIs
> get to be smart enough to develop smarter AIs without help, or AI+. Those AIs
> can then develop geometrically smarter and smarter AI++s. That is the
> "singularity" as I understand it.

First.
"I take Kurzweils basic point to be that humans will develop AIs that
will help them develop smarter AIs. That has already happened BTW."
Yes clearly, no brainer.

Second.
"Eventually AIs get to be smart enough to develop smarter AIs without
help or AI+."
I get that this is his point, but now we are simply debating "smart" ...
intelligence, intellect, etc. No one (and nothing) gets smarter without
help: without the static patterns around them, patterns interconnected
with many more things than sheer processing power. A very smart human
doesn't get smarter using only their brain in isolation ... at whatever
speed an AGI might improve, it is unlikely to get smarter in isolation
either. It needs to evolve, and the speed of evolution has to do with
genetic and memetic reproduction rates over the generations, not with
processing power in the current generation.
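
(A toy way to put that in numbers, a sketch of my own with purely
illustrative names and figures: if capability compounds only when a
generation turns over, the exponent is set by the reproduction rate,
not by within-generation processing power.)

    def capability_after(years, gen_time_years, gain_per_gen):
        """Capability after `years`, if each completed generation
        multiplies it by `gain_per_gen`, and a generation takes
        `gen_time_years` of real-world interaction to play out."""
        generations = years / gen_time_years
        return gain_per_gen ** generations

    # Same per-generation gain; only the generational turnover differs.
    print(capability_after(30, gen_time_years=10, gain_per_gen=2.0))  # 3 gens -> 8x
    print(capability_after(30, gen_time_years=1,  gain_per_gen=2.0))  # 30 gens -> ~1e9x

Double the processing power within a generation and neither line changes;
halve the generation time and growth explodes. That is the sense in which
reproduction rate, not raw compute, sets the pace.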

A smart AGI that fails to recognize the value of humans is like a
smart human failing to value their eco-environment ... it won't last
long. The limiting factors are far more than processing capabilities,
even at the intellectual level.

There are interpretations of "the singularity" that involve human-AI
symbiosis, uploading or otherwise, and those seem much more credible to me.
You seem to be living out your literary character, Krimel :-)

Fewer sceptics at the 2010 event, I see.
Ian


