Toby Smith

The best comment syntax

24th April 2013

I’m not going to talk about hierarchical comments here. I just wanted to write about a little nugget that I’ve found very useful, so that you can use it too!

//*///

This is valid in JavaScript, and I’m sure you could do something similar in C and C++ too.

Why is this cool?

function GetSomeValue() {
//*///
var a = CHUNK OF CALCULATION
return a;
//*///
var b = CHUNK OF CALCULATION
return b;
}

With the deletion of the first character of the first “//*///” you can comment out the first return. (Imagine this was a substantial chunk of early outs).
The second instance will act as the closing comment for it, without having to add one. It’s useful if you know you’re going to be flicking bits of code on/off for a brief while.
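
To make that concrete, here’s a runnable version (the little calculations are just stand-ins of mine for the chunks above); deleting the first ‘/’ of the first marker flips which return runs:

function GetSomeValue() {
//*/// delete the first '/' of this line to comment out the branch below
var a = 1 + 2;
return a;
//*/// ...and this line then closes that comment for you
var b = 3 * 4;
return b;
}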

This comment syntax also allows you to add inline comments afterwards:

function GetSomeValue() {
/*/// Inline comments are valid here!
var a = CHUNK OF CALCULATION
return a;
//*/// Also valid here!
var b = CHUNK OF CALCULATION
return b;
}

All in all they’re pretty nifty.
Personally I think I’d go for something like:
#{
#}

#{ being equivalent to /*///.
and
#} being equivalent to //*///

You could even extend this so that:
#{{
#}}
would also be valid comments, and wouldn’t be closed by #}.
This would mean that you could plop multi-line comments in various places, and not have to remove them if you want to comment out chunks of code later!

function GetSomeValue() {
return 4;
#{{ TESTING ASSUMPTION
#{ This code does some calculation
about various things!
#}
var a = CHUNK OF CALCULATION
return a;
#} Might want to block off this code later
var b = CHUNK OF CALCULATION
return b;
#}}
}

I know it’s not the cleverest chunk of code in the world. It doesn’t actually do anything… and it isn’t really “code”. But it might save you some time!

Argument against The Singularity

18th April 2013

I know there are a lot of people excited for the singularity. I would be too, if I thought it to be plausible. By all means prove me wrong and achieve it. I’d love to be wrong on this one.

For those that don’t know: the singularity is the point at which we can create an AI that is smarter than us, to the extent that it can significantly improve itself faster than we can.
I highly recommend you go and read about it, as that paltry explanation doesn’t really do the subject justice.

This is why I think it’s not going to happen:

  1. Humans create an AI
  2. That AI must be able to improve itself somehow
  3. Whilst increased speed is initially sufficient, ultimately the AI will need to improve its intelligence and thus improve the amount it can improve itself
  4. All improvements are sufficient to overcome any and all physical and/or technical limitations
  5. This process can continue for sufficient iterations, in a reasonable time frame, for a seemingly infinite technical advance (as t -> infinity)

Let me break this down:

Humans create an AI

Sure. There are no definitions here. There are no problems here.

That AI must be able to improve itself somehow

We do have evolving algorithms and learning algorithms that get better at their jobs. So, again, technically there are no problems here.

Whilst increased speed is initially sufficient, ultimately the AI will need to improve its intelligence and thus improve the amount it can improve itself

I think it’s quite conceivable that an algorithm will be able to make itself “faster”. Self-optimisation, even designing custom, faster hardware, is reasonably plausible. However, the AI will have to be able to “improve” itself beyond our initial tinkering, otherwise it will forever be limited to the level of its human designers.
We do not have any significant AI that can spew out another AI (of greater intelligence). The issue here is that it has taken us many hundreds of years, and we’re not even at the point where we can design something EQUAL to our brains. You’re assuming that the best thing we can conceive of could do better than us, and assuming that for (ostensibly) infinite iterations, which seems unlikely. Even if you assume that the iterations will end at some point, all the steps before are not trivial.

All improvements are sufficient to overcome any and all physical and/or technical limitations

Each necessary improvement may require a tech shift, like floppy disks to CDs to DVDs. Perhaps the AI needs new hardware and new tech. It needs to have that available, or be able to design, produce and build it. There would have to be NO limitations in what it could feasibly DO, which is not exactly a simple proposition. How can we provide the AI with something we know nothing about? It doesn’t seem very “singularity”-y if we have to get involved every now and then. Life has a way of throwing curve balls at us, and I see no reason why that shouldn’t happen to an AI too. The unknown here is an unwieldy beast.

This process can continue for sufficient iterations, in a reasonable time frame, for a seemingly infinite technical advance (as t -> infinity)

Deep Thought took seven and a half million years to deduce the answer to “Life, the Universe and Everything”.
Even if we assume all the above is “possible”, the time it could take for all of this to happen is an unquantifiable value. We have no reliable way of knowing how long these stages might take. By definition we can’t really predict what’s going to happen, so surely any time estimates are off. It seems more likely to me that improvement will be slow. It’s taken us hundreds of years to get to where we are now, and we haven’t even got to the first round of automated improvements yet.

None of these are concrete proofs against the occurrence of the singularity (if such a thing could exist). But given that most of the real requirements are ridiculously far off, I don’t think we really need to talk about this for another 100 years. I think technologists are just being hopeful and optimistic that some ultimate AI will come and do all their work for them. It’s not an unattractive proposition, but the inherent belief and circular arguments border on religious reasoning.
Once a sceptic always a sceptic, I guess.

The other thing that has surprised me about all this is the reliance on human intelligence. Most ideas seem to be based on simulating human brains at super-speed, as if that will suddenly step up and spawn something wonderful.
I really don’t think human intelligence is worth replicating. I’d hope we can come up with something a little better than that (remember point 3 from above). Super-fast humans will just be even quicker to jump to conclusions.

Javascript Performance

20th March 2013

I’ve become quite enamoured with using jsPerf to benchmark the efficiency of code.

Two of the more interesting ones I’ve done (so far) are:

A comparison of various array concatenation (appending one to the end of another) techniques

I look at: arr.concat, arr.push.apply(a,b), jQuery’s merge and a couple of looped approaches.
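
For reference, those candidates look roughly like this (a sketch from memory rather than the exact jsPerf test cases; each line is one independent candidate, not meant to be run back to back):

var a = [1, 2], b = [3, 4];          // example arrays

var c = a.concat(b);                 // builds a new array; a and b are untouched
Array.prototype.push.apply(a, b);    // appends b onto a in place
$.merge(a, b);                       // jQuery's in-place merge (only if jQuery is loaded)
for (var i = 0; i < b.length; i++) { // a plain loop
  a.push(b[i]);
}
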
The winner by a landslide:

while (b.length) {
  a.push(b.pop());   // note: this empties b, and b's elements end up in a in reverse order
}

Even if you have to go through a few layers to access it (on the Chrome and Firefox versions I tested). Awesome!

The other was more frustrating.
A top-n sorting algorithm
Say, for instance, you want the top n from an array. You’d think it might be more efficient not to sort the whole array and instead just maintain the top few.
You’d like to think that wouldn’t you.

If you’re not using Google’s V8 engine, then you’d be right! They seem to have uber-charged their sorting, which is awesome, if somewhat irritating given the time wasted on custom sorting methods. 🙁
I tested it with some custom Insertion Sort and Binary Sort methods. I’m not sure a divide-and-conquer approach would be relevant here, but it might be worth a look.
It’s also worth considering the size of your n compared with the size of your arrays. I wonder whether things would change if they got a bit more hairy (n = 5; array.length = 10,000,000).
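
For reference, the two approaches being compared look something like this for numeric arrays (a rough sketch of the idea, not the actual jsPerf code; the function names are mine):

// Lean on the engine's built-in sort, then take the first n
function topNBySort(arr, n) {
  return arr.slice().sort(function (x, y) { return y - x; }).slice(0, n);
}

// Only ever maintain the top n, insertion-sort style
function topNByInsertion(arr, n) {
  var top = [];
  for (var i = 0; i < arr.length; i++) {
    var v = arr[i];
    if (top.length < n || v > top[top.length - 1]) {
      var j = top.length;
      while (j > 0 && top[j - 1] < v) { j--; }  // find where v belongs (buffer is kept descending)
      top.splice(j, 0, v);
      if (top.length > n) { top.pop(); }        // keep the buffer at size n
    }
  }
  return top;
}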

Volume of a Martini Glass

18th January 2011

Question, how far up the side of a martini glass constitutes half the volume?

We can model the glass as a cone.
Volume of a cone:
v = \frac{\pi h r^2}{3}

If we introduce a scale factor “\alpha” to denote the change: the height affects the radius (r) linearly (similar triangles), so h \rightarrow \alpha h and r \rightarrow \alpha r.

To find the effect of halving the volume on the height:
\frac{1}{2}\cdot \frac{\pi h r^2}{3} = \frac{\pi h r^2 \alpha^3}{3}

Thus:
\frac{1}{2} = \alpha^3
so:
\alpha = 2^{-\frac{1}{3}}\approx0.79

So (if my maths is correct) you need to fill the glass to approximately 0.79 of the way up the side for half the volume.
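
A quick numerical sanity check of that (in JavaScript; the glass dimensions are arbitrary):

// Volume of a cone with height h and rim radius r
function coneVolume(h, r) {
  return Math.PI * h * r * r / 3;
}

var h = 10, r = 4;                              // an arbitrary conical glass
var alpha = Math.pow(2, -1 / 3);                // ~0.7937
console.log(coneVolume(h, r) / 2);              // half the full volume
console.log(coneVolume(alpha * h, alpha * r));  // volume when filled to alpha of the height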

..and who said maths wasn’t useful.

Cylinder between two points (OpenGL C++)

4th December 2010

Whilst working on one of my projects this year for uni I was looking for some code to draw a cylinder between two points (using OpenGL). There were a couple of solutions out there, but they weren’t that great.

I couldn’t find one that worked reliably and simply (without lots of different if statements trying to catch different cases). Anyway, after a bit of thought I knocked this one out. It’s actually a lot simpler than you’d think… which is probably why people haven’t bothered to post it.

Anyway…enjoy some pseudo code:

Vector3D a, b;   // the two points you want to draw the cylinder between

// gluCylinder draws along the z axis by default
Vector3D z = Vector3D(0, 0, 1);
// Get the vector between the two points (direction and length of the cylinder)
Vector3D p = (a - b);
// Get the CROSS product (the axis of rotation)
Vector3D t = CROSS_PRODUCT(z, p);

// Get the angle between z and p, in degrees. LENGTH is the magnitude of the vector.
// (If p happens to be parallel to z the cross product is zero, so you may still want to catch that one case.)
double angle = 180 / PI * acos(DOT_PRODUCT(z, p) / p.LENGTH());

glTranslated(b.x, b.y, b.z);
glRotated(angle, t.x, t.y, t.z);

gluQuadricOrientation(YourQuadric, GLU_OUTSIDE);
gluCylinder(YourQuadric, RADIUS, RADIUS, p.LENGTH(), SEGS1, SEGS2);

Hope that helps someone out there.

Measuring Stress of Material

15th November 2010

I swear, if God hadn’t invented cakes, someone would be dead by now.
….and I’m all out of cakes.
