The Death of the Separate Search Bar

Over the past few years, browser design has tended more and more towards minimalism.  This is in general a good thing – the more the browser gets out of my way and lets me just look at the web, the better.  I am, however, increasingly frustrated by one change that seems to have taken hold as a “standard” feature of browsers: the unified search and URL bar.

The Good

I can see where browser makers were coming from when they made this change.  They were clearly thinking something along these lines:

  • Novice users don’t know what a URL is.  What they want is to type something in and get to it quickly; they shouldn’t have to decide which box to type their something into.
  • I can also see a very thin argument that the extra press of the tab key to reach the search box costs an advanced user a small amount of time.  Personally, I do not believe there’s any significant gain here at all, as pressing tab after opening a new tab is easily embedded in a user’s muscle memory.  The actual time spent doing this is a tiny fraction of a second.

Beyond these two points though, I don’t really see any reason why the search and URL bar being unified is beneficial.

The Bad

There are unfortunately several drawbacks to the unified bar:

  • First and foremost is the behaviour when I typo a URL.  What I end up with is a bunch of search results loaded across my connection, and a fairly clunky UI experience while this happens.  I don’t know of a single browser that doesn’t get a little choppy in the UI department as a page loads, or mess about with the contents of the URL bar so that you can’t really edit it (as your changes are likely to be overwritten).  Loading a bunch of content when I typo something was not a good idea when my ISP’s DNS server did it for me 10 years ago, and it’s still not a good idea now, even if the page loads faster.  Instead, I simply want to see a “hey, that page doesn’t exist” error, and I want to see it fast.
  • Second is the behaviour of the prediction of what I want to type in the URL/search bar.  With separate bars, I could reliably predict that if I typed “new” into the URL bar, I could press down and then return to get to “news.bbc.co.uk”.  If I typed “new” into the search bar, I could predict that pressing down would give me some sane Google predict results.  With a unified bar, I can do neither of these; I must instead study the results to find the thing I actually want.  There seem to be three approaches to showing the likely things you might want to load:
    1. Show Google predict results first – this makes it impossible for me to quickly access results from my history.
    2. Show results from my history first – this makes it very hard to use Google predict.
    3. Show a best guess at what I might want first – this makes the behaviour impossible for me to predict, and hence very slow to use.

Conclusions

Combined, these two serious negative impacts more than offset, I believe, the gain of not having to press tab to reach the search bar.  The unified bar approach requires the user to think more, to read more, and to deal with unpredictability.  Worse, it causes significant delays whenever the browser guesses wrong and loads all that extra content.

I can still see a strong argument for novice users to be given one bar that covers all functionality, but for even slightly advanced users, the unified search and URL bar is a terrible bit of UI design.

Developers and “bad” code

I’m quickly realising that “bad” code is usually a vast exaggeration.

Listening to many developers talk to me about code they consider to be bad, it is becoming clear to me that what is meant by “bad” is not necessarily what the word implies. Instead, many developers seem to mean “I don’t understand this code”. Worse, it’s often not that they don’t understand, but that they don’t want to understand. Here are a couple of examples to explain:

A developer recently told me that he found writing LINQ one-liners to be bad practice*. After quizzing him a little, he cited several examples of LINQ that he did not instantly understand. After writing imperative versions of the same code, I at least came to the conclusion that the LINQ one-liners were in fact clearer than the procedural code. I relayed this to the developer in question, and indeed, he admitted that he did not find any of the procedural versions easy to understand either. In short, this developer baulked at the idea that code had been compressed into one line, and did not consider that this denser description might be easier to understand than the alternative. The sole reason was that he was not used to working with LINQ, and did not have his brain prepared for understanding LINQ statements.
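To make this concrete, here’s a hypothetical example of the kind of contrast at stake – sketched in Haskell rather than in C#’s LINQ (the code from the actual conversation isn’t reproduced here), but the flavour is the same: sum the squares of the even numbers in a list.

-- The "one liner": a declarative description of the result.
sumEvenSquares :: [Int] -> Int
sumEvenSquares xs = sum [x * x | x <- xs, even x]

-- The imperative-style equivalent, as an explicit accumulating loop.
sumEvenSquares' :: [Int] -> Int
sumEvenSquares' = go 0
  where
    go acc []     = acc
    go acc (x:xs)
      | even x    = go (acc + x * x) xs
      | otherwise = go acc xs

Whether the first form is clearer is exactly the judgement in question – but it is certainly not bad merely for being one line.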

Another developer recently told me that his colleague had written some bad code. He cited several reasons why he considered the code to be bad which all seemed quite reasonable – indeed, I was convinced that the code was poor. On consulting the other developer though, my opinion changed. The second developer explained that he too felt that the code was ugly from that point of view, but that if he had implemented it another way, it would have been more ugly from another point of view. His arguments were enough to sway me that his code was not bad, instead, he’d just thought about the problem from another angle. Of course, one could make an argument that none of the devs (including myself) had thought for long enough about this code. There probably was a solution that solved both sides of the argument neatly, but at some point, we have to write code and produce a working product. The key point here though is that the original complainant had not understood the problem fully, and had therefore declared the code to be “bad” prematurely.

The key to both these problems was a lack of understanding. A developer’s job is to wrap their head around a problem fast and to understand it from all angles; in these two cases, I’m not convinced that happened. In future, I’m going to treat developers telling me about “bad” code with a large pinch of salt. Instead of assuming that the code is actually bad, I will assume that the developer in question has simply not yet understood the reasoning behind it.

A second piece of this puzzle is developers’ hunt for perfection. Few developers will ever tell you that they consider any piece of code to be good – including code they themselves have written. For any given piece of their code, a developer will typically list several things that could be improved, often in conflicting ways. This contributes heavily to the lack of understanding: a new developer on a piece of code must not only understand the original developer’s reasons for designing it a certain way, but also where and why the original developer made ugly implementation decisions. It may simply be that there hasn’t been time yet to clean up the problems, or that there’s a trade-off involved. The fact remains, though, that this adds yet another variable to the lack of understanding.

Ultimately, what this boils down to is that no developer is happy unless they have their head well and truly wrapped around a problem – and when first starting on another developer’s code, that is not the case. The result is that all too often code is declared to be “ugly”, “bad”, “messy”, or any number of other derogatory terms, when what is typically meant is “I don’t understand why this developer did this”.

My gut feeling is that this overuse actually lessens the impact of terms like “terrible” code. Such terms should be reserved for code that is actually erroneous, or that is inefficient to the point of being in a complexity class it clearly does not need to be in. So please, developers, stop using these terms to brand all code that you read. Instead, offer constructive criticism of the code, and try to understand why it was written that way in the first place.

* LINQ is a functional-programming-inspired API that allows developers to write clear, concise “queries” to extract data, instead of complex nested loops.

The iPhone Gaming Fallacy

Recently I’ve had a lot of discussions about how good the iPhone/iPod touch is for playing games on. Most people contend that, unlike a traditional handheld console, the iPhone is limited by its control mechanism – that is to say, there are no physical buttons. I don’t agree: I see the iPhone’s control mechanism as something that makes it different, not something that makes it inferior. The reason for this is very simple, and can be seen by splitting games up by genre. I’m going to look at several game genres, and at which platforms play them well.

First Person Shooters

FPS games need the ability to turn fast and to perform lots of interesting actions. In reality, the most important requirement here is being able to spin on the spot. This is something that mice are *incredibly* good at: they provide the ability to move extremely precisely to any point, at any speed you require. For that reason, along with there being over a hundred buttons on a keyboard, PCs have to take the crown in this department. But I was talking about handhelds – which of those takes the crown here? An analogue stick doesn’t give you the fast precision of a mouse, while a touch screen can simply let you tap where you want to turn to. I’ve not yet seen any FPS games implemented this way though; they all try to simulate an analogue stick for some silly reason. On the other hand, traditional controls have plenty of spare buttons to use for fire, jump etc. A touch screen has no such luxury. For this reason, I’m going to give FPSes to the traditional handheld.

  1. Desktop PCs
  2. Traditional handhelds
  3. Touchscreens

Racing Simulations

Racing simulations require a smooth, analogue input that mirrors a steering wheel well. There really is no contest here: with their analogue sticks, traditional handhelds have the perfect control mechanism! PCs similarly gain the perfect control mechanism, as long as you attach a steering wheel.

  1. Traditional handhelds
  2. Desktop PCs
  3. Touchscreens

At this point, things aren’t looking too good for the poor old iPhone, but let’s carry on with some more game genres.

Role-playing games

Controlling a character in a role-playing game is done quickly and easily with an analogue stick, though selecting enemies to fight can often be a chore. With a touch screen, we can tap where our character should go, and we can tap on enemies and actions to have a punch-up. This one’s close, but it’s got to go to the touchscreen. A side note though – the PC, with its combination of keyboard and mouse, can do this better still.

  1. Desktop PCs
  2. Touchscreens
  3. Traditional handhelds

Strategy

Strategy games require you to be able to pick units quickly and give orders out fast. That means being able to select something on the play area near instantly, and then direct it somewhere else on the play area similarly quickly. The touchscreen is a clear winner here: you can simply tap units, and drag/re-tap them where they must go. With a traditional handheld, we must sit pushing buttons repeatedly to select the right area of the screen. With a PC, we at least have a mouse with which we can quickly point at the relevant units and move them.

  1. Touchscreens
  2. Desktop PCs
  3. Traditional handhelds

New Genres

The iPhone seems to have spawned a whole new genre of game – the line-drawing game. Be it Flight Control or 33rd Division, all of these games involve lots of things moving about the screen while you draw out lines to control where they go.

Conclusions

That’s by no means an exhaustive list of game genres. What we’ve hopefully seen, though, is that the iPhone is not an awful platform for gaming. It doesn’t do so well in some genres that traditional handhelds excel at; on the other hand, it does extremely well in other genres, and has even spawned whole new genres designed specifically for its input mechanism.

Obj-C’s type system is too strong

That’s rather a surprising title, isn’t it! Objective-C has one of the weakest type systems of any language. What I’m going to demonstrate, though, is that with the addition of Objective-C’s “block” construct (really closures by a special name), Objective-C’s type system is now not only too weak for my tastes, but also too strong to do useful things!

In short, Objective-C’s type system is broken: not only does it allow lots of incorrect programs that many type systems disallow, it also disallows a fair number of correct programs that it shouldn’t.

Blocks

Objective-C gained a really useful feature lately – the closure. We can define a closure like so:

// Define a closure that multiplies its argument by a captured variable 'a'.
- (void)myClosureDefiningMethod
{
    int a = 5;
    int (^timesA)(int x) = ^(int x) { return x * a; };
}

The syntax isn’t the prettiest in the world, but it mirrors C function pointer syntax, so it’s not all bad.

Higher Order Programming

The ability to create functions on the fly like this is really powerful – so much so that whole languages (like Haskell) base their entire programming style on doing this kind of thing a lot. Let’s, then, turn to Haskell for inspiration about the kinds of things we might want to do with blocks.

The standard Haskell library (the Prelude) defines some stunningly simple things using this technique, and the lovely thing is that they turn out to be quite useful. Let’s look at const, for example:

const :: a -> b -> a
const x y = x

So, we pass const an argument, and what we get back is a new function that ignores its own argument and returns our original one. It’s dead simple, but mega useful.
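To see it in action, here’s a quick GHCi session:

ghci> map (const 0) [1, 2, 3]
[0,0,0]
ghci> const "always this" undefined
"always this"

Note that the second argument is never even looked at – laziness means the undefined is harmless.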

Let’s try to define the same function with Obj-C closures then:

(a (^)(b ignore))constantly(a ret)
{
    return ^(b ignore){ return ret; };
}

This looks great! We have our const function – but wait, I’ve cheated. I haven’t properly defined the return type of the closure, or the type of constantly’s argument. What I want to be able to say, in typical C weak-typing fashion, is “any type at all”. This, although it wouldn’t specify the type very strongly, would at least allow me to use the function. Unfortunately, neither C nor Obj-C has such a type. The closest you can reasonably get is void *, and that won’t admit a whole swathe of useful types like BOOL, int, float etc.

The App Store Approval Process

I’ve recently been doing a chunk of iPhone development, and have had a chance to experience the App Store approval process for myself. I’m going to make one thing very, very clear: either things have got a lot better, or all the press hype is exactly that – hype.

Submission One

On my first submission of SimpleGPS, it took 6 days to reach “In Review” status, and the app was promptly rejected about 2 days later. Apple had taken fairly reasonable exception to part of my marketing material – specifically, the claim that SimpleGPS could find your location without an internet connection, as this is only possible on the iPhone 3G and 3GS at present.

Submission Two

I fixed my marketing material to address Apple’s concern and went ahead with my second submission. After a similar length of time, I got another rejection email, this time noting that the app did not work in aeroplane mode. I queried this rejection on the grounds that my marketing material clearly stated that you needed a good GPS lock, which the reviewers plainly didn’t have, as the GPS in their unit was turned off. Within a day I had a response acknowledging this and restarting the review of my application. Three days later, SimpleGPS was in the store!

Submission Three

After some feedback from my users, I had a simple update ready, and submitted it shortly before Christmas. This approval was the slowest I have experienced, taking a whole 12 days to get through the process. To be fair though, in the middle of this they had a Christmas break!

Submission Four

It’s unknown whether Apple spent this time improving their systems or merely catching up on the backlog, but something has improved. I submitted a second minor update to SimpleGPS on the 29th of December, and expected to wait a week or so before checking back. After another developer noted that he had just had his app accepted in one day, I checked back and discovered that my update had also been dealt with.

Conclusion

Firstly, Apple’s approval process doesn’t seem to be needlessly slow, and the speed of response appears to have improved drastically post-Christmas. Secondly, the feedback I’ve had from Apple has been fair and reasonable. Thirdly, Apple have responded quickly to the questions I’ve had, and have even reversed decisions after discussion with me. The media hype about how awful this process is seems to me to be bull.

40 year old bug fixed

I have it on good authority that, in Snow Leopard, Apple have fixed one of the longest-running bugs in Mac OS – one shared, for that matter, by other computer OSes. This is the mother of all long-running bugs: it’s been around for 40-odd years.

Which bug am I talking about? For all this time, these OSes have been reporting 1,024 bytes as 1kB, 1,048,576 bytes as 1MB, 1,073,741,824 bytes as 1GB, and so on. This is an obvious inaccuracy: the SI prefix definitions (which have been around since the 1950s) very specifically say that 1,000 of anything is 1 kilo-anything, 1,000,000 of anything is 1 mega-anything, and 1,000,000,000 of anything is 1 giga-anything.

But where did such an obvious bug come from, and why did it persist for so long? Way back, when binary computers were first being created, shifting a binary number right by 10 places was a very fast operation, which meant that division by 1,024 was extremely cheap. Division proper, meanwhile, was one of the most expensive operations on a CPU, if it was available at all – often, one would need to simulate it with a bunch of other mathematical operations to home in on the correct value. The result was that in those days, we were prepared to accept being off by 24 measly bytes in every kilobyte, in exchange for a large chunk of efficiency.
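To illustrate the trick (a minimal Haskell sketch – the real thing would of course have been a single machine instruction):

import Data.Bits (shiftR)

-- "Kilobytes" computed the cheap way: a 10-place right shift,
-- i.e. division by 1,024 rather than by 1,000.
cheapKB :: Int -> Int
cheapKB bytes = bytes `shiftR` 10

-- cheapKB 1000000 == 976, where the SI definitions demand 1000.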

Of course, over time it became common knowledge that 1,024 bytes was 1 kilobyte – the fact that this was incorrect was by the by – and so it became common knowledge that a megabyte was obviously 1,048,576 bytes, and so on. The issue these days, of course, is that when you go out and buy a 1TB disk, the hard drive maker has accurately reported the disk’s size as 1,000,000,000,000 bytes, while your OS reports its size as only 931GB, because it divided by 1,073,741,824 for each gigabyte instead of 1,000,000,000.

So what’s changed? Well, yay! Snow Leopard now correctly divides by powers of 1,000 instead of powers of 1,024!

As an afterthought, by the way, there are names for the more-convenient-to-divide-by binary prefixes. They’re kibi, mebi, gibi, tebi, pebi, exbi, zebi and yobi, which are shortened to Ki, Mi, Gi, Ti, Pi, Ei, Zi and Yi.

Exponentiation Types

A pair of colleagues of mine and I have been staring at an interesting riddle, which I’m guessing exists in the literature somewhere. One of them pointed out that we have sum types, where a + b is the type containing all the values in a and all the values in b, and we have product types, where a * b is the type containing all the pairs of a value of a with a value of b. What we don’t have, though, are exponentiation types. The riddle then – what is the type a^b?

Bart realised that this type is b -> a. The type contains all functions that map bs onto as. This has some rather nice mathematical properties. We know from our early maths a couple of rules about exponents:

a^b * a^c = a^(b+c)

This gives us a rather nice isomorphism: (b -> a, c -> a) is equivalent to (b + c) -> a. That is, if we have one function that produces an a from bs, another that gives us an a from cs, we can write a function that gives us as, given either a b or a c, and vice versa.
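We can even write down the two directions of this isomorphism in Haskell, using Either as the sum type (a small sketch using only the Prelude):

-- (b -> a, c -> a)  is isomorphic to  Either b c -> a
to :: (b -> a, c -> a) -> (Either b c -> a)
to (f, g) = either f g

from :: (Either b c -> a) -> (b -> a, c -> a)
from h = (h . Left, h . Right)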

Secondly, and perhaps even nicer:

(a^b)^c = a^(b*c)

This gives us a different isomorphism: c -> b -> a is equivalent to (b,c) -> a. Woohoo, we have currying!
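The witnesses here are (up to the order of the two arguments) the Prelude’s own curry and uncurry, sketched with primed names to avoid clashing with the originals:

-- (b, c) -> a  is isomorphic to  b -> c -> a
curry' :: ((b, c) -> a) -> b -> c -> a
curry' f x y = f (x, y)

uncurry' :: (b -> c -> a) -> ((b, c) -> a)
uncurry' f (x, y) = f x y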

This seems very close to the Curry–Howard isomorphism, but not quite there. Does anyone know who has discovered this already?

LED Lighting

We just replaced our entire hallway lighting with LEDs – in total, that’s 8 halogen bulbs gone. I have to say, I’m fairly impressed: for a first-generation technology it’s not perfect, but it does work well.

The good, the bad and the ugly

The new bulbs aren’t as bright as the old halogens. Having said that, we bought some of the cheapest LED lights there are – little 1W babies – and it’s possible to get ones that are much brighter.

The new bulbs also give off a slightly cooler light than the old ones, but not as cold as I expected; they’re entirely acceptable in the hallway.

Some maths

The new bulbs cost €5 each and have a lifetime of 50,000 hours. As I said, they’re 1W bulbs, so each is going to use about 50kWh over its life.

The old bulbs also cost about €5 each, had a lifetime of 750 hours, and were 25W bulbs. Over the lifetime of one LED light, I would have to buy 67 of them, and they would use 1,250kWh.

Electricity costs about €0.15 per kWh at the moment, and I can only imagine that will go up. Let’s make a conservative estimate that over the next 50 years, the average price will be €0.20 per kWh.

That puts the price of an LED light over the next 50 years at €15 – €5 for the bulb and €10 for the electricity. The halogen bulbs, meanwhile, cost €585 – €335 for the bulbs and €250 for the electricity.
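For the sceptical, here’s that arithmetic for a single light fitting as a tiny Haskell sketch (prices in euros, energy in kWh):

-- 50,000 hours of light from one fitting.
ledCost, halogenCost :: Double
ledCost     = 5      + 1  * 50000 / 1000 * 0.20  -- one bulb + 50kWh     = 15.0
halogenCost = 67 * 5 + 25 * 50000 / 1000 * 0.20  -- 67 bulbs + 1,250kWh  = 585.0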

I knew that LEDs cost less over time, but honestly, I had no idea the saving was that large.

Collecting Non-Memory Resources

A Problem

Let us consider a small problem: we would like to use a Haskell program to manage resources that are not just memory. For the sake of argument, we will consider GPU resources. This can be done reasonably straightforwardly by using the IO monad to essentially write an imperative program that manages the resources. But doesn’t this defeat the point of functional programming? We lose so many of the benefits we normally get: we no longer get to describe only the result of our program, we have to describe how to get to it too. Not only that, but we’ve lost our wonderful garbage collection system, which allows us to easily avoid all of those nasty segfaults we see in unmanaged languages. So, the problem today is: how do we extend the Haskell garbage collector (preferably without playing with the runtime or compiler) so that it can manage all these resources?

An attempt

Let’s consider just one small subset of GPU resources – shaders. What we would like in our Haskell program is a pure value that represents the shader, which we can call on at a later date. We’d like a function that takes our shader source and produces this pure value, and we’d like the resources on the GPU to be collected when the value is no longer reachable.

import System.IO.Unsafe (unsafePerformIO)
import System.Mem.Weak (addFinalizer)

compile :: String -> String -> Shader
compile vertexShdrSrc fragShdrSrc = s
  where
    -- Tie the knot: the result, s, is passed to doCompile so that the
    -- finalizer can be attached to the very value we are returning.
    s = doCompile s vertexShdrSrc fragShdrSrc

{-# NOINLINE doCompile #-}
doCompile :: Shader -> String -> String -> Shader
doCompile s vertexShdrSrc fragShdrSrc =
  unsafePerformIO $ do
    -- setUpShader and tearDownShader stand in for the real GPU bindings.
    shader <- setUpShader vertexShdrSrc fragShdrSrc  -- set up our fancy pants shader stuff here
    addFinalizer s (tearDownShader shader)           -- remove the resources from the GPU here
    return shader

What we hope will happen is that we return our shader, s, with a finalizer attached to it. When the garbage collector collects s, it will also collect the resources off the GPU. This all looks rather good, so let’s try using it:

myShader :: Shader
myShader =
  compile "some vertex shader source"
          "some fragment shader source"

The result of evaluating myShader is a constant use of s: the definition of this constant is looked up and replaces it, so myShader is now defined as the right-hand side of s. Unfortunately, there’s now nothing that points at s itself, so s is garbage collected, its finalizer runs, and all our resources are removed from the graphics card – even though myShader itself is still live.

Conclusion

We’ve tried to find a way of getting automated collection of non-memory resources, but ultimately we’ve not quite got there. I don’t see a way forward from this point, and would love to hear other people’s input on how this sort of management can be done.

Cabal’s default install location

Cabal’s default install location is somewhat controversial – many people seem to like the default of user installs, while many others would prefer that it matched all other software and installed globally. The assumption amongst the community at the moment is that “most” people want user installs. I wanted to find out whether they’re right. If you’re a Haskeller, please vote – it’ll take a lot less time than voting for a new logo 🙂