Toby Smith

Posts for April 2013

The best comment syntax

24th April 2013

I’m not going to talk about hierarchical comments here. I just wanted to write about a little nugget that I’ve found very useful, so that you can use it too!

//*///

This is valid in JavaScript, and I’m sure you could do something similar in C and C++ too.

Why is this cool?

function GetSomeValue() {
    //*///
    var a = CHUNK OF CALCULATION
    return a;
    //*///
    var b = CHUNK OF CALCULATION
    return b;
}

With the deletion of the first character of the first “//*///” you can comment out the first return. (Imagine this was a substantial chunk of early outs).
The second instance will act as the closing comment for it, without having to add one. It’s useful if you know you’re going to be flicking bits of code on/off for a brief while.
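For illustration, here’s a minimal sketch of that toggled state, with simple placeholder values standing in for CHUNK OF CALCULATION (those values are mine, not part of the example above):

function GetSomeValue() {
    /*///  deleting the leading "/" turned this marker into an opening block comment
    var a = 1 + 2;    // placeholder for CHUNK OF CALCULATION
    return a;
    //*///  the embedded */ closes the block comment; the leading // keeps this line a comment either way
    var b = 3 + 4;    // placeholder for CHUNK OF CALCULATION
    return b;
}

Restore the deleted “/” and both markers go back to being plain line comments, so the first return is live again.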

This comment syntax also allows you to add inline comments afterwards:

function GetSomeValue() {
    /*/// Inline comments are valid here!
    var a = CHUNK OF CALCULATION
    return a;
    //*/// Also valid here!
    var b = CHUNK OF CALCULATION
    return b;
}
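Put the leading slash back so both markers read //*/// and the code is all active again, while the inline comments remain valid (again with hypothetical placeholder values in place of CHUNK OF CALCULATION):

function GetSomeValue() {
    //*/// Inline comments are valid here!
    var a = 1 + 2;    // placeholder for CHUNK OF CALCULATION
    return a;
    //*/// Also valid here!
    var b = 3 + 4;    // placeholder for CHUNK OF CALCULATION
    return b;
}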

All in all they’re pretty nifty.
Personally I think I’d go for something like:
#{
#}

#{ being equivalent to /*///, and #} being equivalent to //*///.

You could even extend this so that:
#{{
#}}
would also be valid comment markers, and wouldn’t be closed by #}.
This would mean that you could plop multi-line comments in various places, and not have to remove them if you want to comment out whole chunks of code later!

function GetSomeValue() {
    return 4;
    #{{ TESTING ASSUMPTION
    #{ This code does some calculation
    about various things!
    #}
    var a = CHUNK OF CALCULATION
    return a;
    #} Might want to block off this code later
    var b = CHUNK OF CALCULATION
    return b;
    #}}
}

I know it’s not the cleverest chunk of code in the world, or that it actually does anything… or that it’s really “code”. But it might save you some time!

Argument against The Singularity

18th April 2013

I know there are a lot of people excited for the singularity. I would be too, if I thought it to be plausible. By all means prove me wrong and achieve it. I’d love to be wrong on this one.

For those that don’t know: The singularity is the point at which we can create AI that is smarter than us, to the extent that it can significantly improve itself faster than we can.
I highly recommend you go read about it, as that paltry explanation doesn’t really do the subject justice.

This is why I think it’s not going to happen:

  1. Humans create an AI
  2. That AI must be able to improve itself somehow
  3. Whilst increased speed is initially sufficient, ultimately the AI will need to improve its intelligence and thus improve the amount by which it can improve itself
  4. All improvements are sufficient to overcome any and all physical and/or technical limitations
  5. This process can continue for sufficient iterations, in a reasonable time frame, for a seemingly infinite technical advance (as t -> infinity)

Let me break this down:

Humans create an AI

Sure. There are no definitions here. There are no problems here.

That AI must be able to improve itself somehow

We do have evolving algorithms and learning algorithms that get better at their jobs. So, again, technically there are no problems here.

Whilst increased speed is initially sufficient, ultimately the AI will need to improve its intelligence and thus improve the amount by which it can improve itself

I think it’s quite conceivable that an algorithm will be able to make itself “faster”. Self-optimisations, even designing custom faster hardware, are reasonably plausible. However, the AI will have to be able to “improve” itself beyond our initial tinkering. Otherwise it will forever be limited to the level of the human designers.
We do not have any significant AI that can spew out another AI (of greater intelligence). The issue here is that it has taken us so many hundreds of years, and we’re not even at the point where we can design something EQUAL to our brains. You’re assuming that the best thing we can conceive of could do better than us, and assuming that for (ostensibly) infinitely many iterations, which seems unlikely. Even if you assume that the iterations will end at some point, all the steps before are not trivial.

All improvements are sufficient to overcome any and all physical and/or technical limitations

Each necessary improvement may require a tech shift, like floppy disks to CDs to DVDs. Perhaps the AI needs new hardware and new tech. It needs to have that available, or be able to design, produce and build it. There would have to be NO limitations on what it could feasibly DO, which is not exactly a simple proposition. How can we provide the AI with something we know nothing about? It doesn’t seem very “singularity”-y if we have to get involved every now and then. Life has a way of throwing curve balls at us. I see no reason why that shouldn’t happen to an AI too. The unknown here is an unwieldy beast.

This process can continue for sufficient iterations, in a reasonable time frame, for a seemingly infinite technical advance (as t -> infinity)

Deep Thought took 7 and a half million years to deduce the answer to “Life, the Universe and Everything”.
Even if we assume all the above is “possible”, the time it could take for all of this to happen is an unquantifiable value. We have no reliable way of knowing how long these stages might take. By definition we can’t really predict what’s going to happen, so surely any time estimates are off. It seems more likely to me that improvement will be slow. It’s taken us hundreds of years to get to where we are now, and we haven’t even got to the first round of automated improvements yet.

None of these are concrete proofs against the occurrence of the singularity (if such a thing could exist). But given that most of the real requirements are ridiculously far off, I don’t think we really need to talk about this for another 100 years. I think technologists are just being hopeful and optimistic that some ultimate AI will come and do all their work for them. It’s not an unattractive proposition. But the inherent belief and circular arguments border on religious reasoning.
Once a sceptic always a sceptic, I guess.

The other thing that has surprised me about all this is the reliance on human intelligence. Most ideas seem to be based on simulating human brains at super-speeds, as if that will suddenly step up and spawn something wonderful.
I don’t think human intelligence is really worth replicating. I’d hope we can come up with something a little better than that (remember point 3 from above). Super-fast humans will just be even quicker to jump to conclusions.
