I know there are a lot of people excited for the singularity. I would be too, if I thought it to be plausible. By all means prove me wrong and achieve it. I’d love to be wrong on this one.
For those who don’t know: the singularity is the point at which we can create an AI smarter than us, to the extent that it can improve itself significantly faster than we can.
I highly recommend you go read about it, as that paltry explanation doesn’t really do the subject justice.
This is why I think it’s not going to happen:
- Humans create an AI
- That AI must be able to improve itself somehow
- Whilst increased speed is initially sufficient, ultimately the AI will need to improve its intelligence and thus improve the amount it can improve itself
- All improvements are sufficient to overcome any and all physical and/or technical limitations
- This process can continue for sufficient iterations, in a reasonable time frame, for a seemingly infinite technical advance (as t -> infinity)
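The last two steps hinge on an assumption about growth rates, which is easy to sketch numerically. The toy model below is entirely my own illustration (the `improve` functions and numbers are hypothetical, not drawn from any singularity literature): it treats “intelligence” as a bare number and compares a compounding regime, where each gain is proportional to the current level, against a diminishing-returns regime, where each improvement is harder than the last.

```python
# Toy model of recursive self-improvement. "Intelligence" is just a number;
# each generation's gain depends on the current level via an improvement
# function. This is a hypothetical sketch, not a claim about real AI.

def run(improve, generations=50, start=1.0):
    """Iterate level -> level + improve(level) and return the trajectory."""
    level = start
    history = [level]
    for _ in range(generations):
        level += improve(level)
        history.append(level)
    return history

# Regime 1: each gain is proportional to current intelligence (the
# singularity assumption) -- growth compounds exponentially.
compounding = run(lambda x: 0.1 * x)

# Regime 2: each improvement is harder than the last (e.g. physical or
# technical limits bite) -- growth flattens out instead of exploding.
diminishing = run(lambda x: 1.0 / x)

print(f"compounding after 50 steps: {compounding[-1]:.1f}")
print(f"diminishing after 50 steps: {diminishing[-1]:.1f}")
```

The point of the sketch is that the same “AI improves itself each generation” story produces wildly different outcomes depending on which regime holds, and nothing in the standard argument tells us which one nature will hand us.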
Let me break this down:
Humans create an AI
Sure. There are no definitions here. There are no problems here.
That AI must be able to improve itself somehow
We do have evolving algorithms and learning algorithms that get better at their jobs. So, again, technically there are no problems here.
Whilst increased speed is initially sufficient, ultimately the AI will need to improve its intelligence and thus improve the amount it can improve itself
I think it’s quite conceivable that an algorithm will be able to make itself “faster”. Self-optimisation, even designing custom faster hardware, is reasonably plausible. However, the AI will have to be able to “improve” itself beyond our initial tinkering; otherwise it will forever be limited to the level of its human designers.
We do not have any significant AI that can spew out another AI of greater intelligence. The issue here is that it has taken us hundreds of years of progress, and we’re not even at the point where we can design something EQUAL to our brains. You’re assuming that the best thing we can conceive of could do better than us, and assuming that for (ostensibly) infinitely many iterations. Which seems unlikely. Even if you assume the iterations will end at some point, none of the steps before are trivial.
All improvements are sufficient to overcome any and all physical and/or technical limitations
Each necessary improvement may require a tech shift, like floppy disks to CDs to DVDs. Perhaps the AI needs new hardware and new tech. It needs to have that available, or be able to design, produce and build it. There would have to be NO limitations on what it could feasibly DO. Which is not exactly a simple proposition. How can we provide the AI with something we know nothing about? It doesn’t seem very “singularity”-y if we have to get involved every now and then. Life has a way of throwing curve balls at us, and I see no reason why that shouldn’t happen to an AI too. The unknown here is an unwieldy beast.
This process can continue for sufficient iterations, in a reasonable time frame, for a seemingly infinite technical advance (as t -> infinity)
Deep Thought took seven and a half million years to deduce the answer to “Life, the Universe and Everything”.
Even if we assume all the above is “possible”, the time it could take for all of this to happen is unquantifiable. We have no reliable way of knowing how long these stages might take; by definition we can’t really predict what’s going to happen, so surely any time estimates are off. It seems more likely to me that improvement will be slow. It’s taken us hundreds of years to get to where we are now, and we haven’t even got to the first round of automated improvements yet.
None of these are concrete proofs against the occurrence of the singularity (if such a thing could exist). But given that most of the real requirements are ridiculously far off, I don’t think we really need to talk about this for another 100 years. I think technologists are just being hopeful and optimistic that some ultimate AI will come and do all their work for them. It’s not an unattractive proposition, but the inherent belief and circular arguments border on religious reasoning.
Once a sceptic always a sceptic, I guess.
The other thing that has surprised me about all this is the reliance on human intelligence. Most ideas seem to be based on simulating human brains at super-speeds, as if that will suddenly step up and spawn something wonderful.
I don’t think human intelligence is really worth replicating. I’d hope we can come up with something a little better than that (remember point 3 from above). Super-fast humans will just be even quicker to jump to conclusions.