
Symbian was not quite as great as you describe it there. Asynchronous/message-passing-based, all right, but with a non-scaling message-queue model: servers couldn't scale horizontally because there was only one request queue, so you always had multiple clients hammering one single-threaded server. The ability to scale I/O on Symbian (via shared memory/paging) came too late to benefit Nokia (who was extremely tardy at adopting newer Symbian releases ...), and no multi-core ARM-based phone ever ran Symbian; it only got multicore support in v9 (the last) anyway. Did I mention it didn't have an ARM64 port either (press releases notwithstanding)?
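(To make that serialisation point concrete, here's a sketch from memory using the later CServer2/CSession2 API; the class names and function codes are invented:)

  #include <e32base.h>

  // Every client request, from every client thread, lands in the server
  // thread's single queue and is handled one at a time in ServiceL().
  // That's the serialisation point: one single-threaded server, N clients.
  class CMySession : public CSession2 {
      void ServiceL(const RMessage2& aMessage) {
          switch (aMessage.Function()) {
          case 0:  // invented function code: do the work synchronously
              aMessage.Complete(KErrNone);
              break;
          default:
              aMessage.Complete(KErrNotSupported);
          }
      }
  };

  class CMyServer : public CServer2 {
  public:
      CMyServer() : CServer2(EPriorityStandard) {}
      CSession2* NewSessionL(const TVersion&, const RMessage2&) const {
          return new (ELeave) CMySession();
      }
  };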

"Symbian C++" used to be a great search term ... for clubkiness horror stories. Or at least warnings that "don't expect this to be your pal's C++". Nevermind the kernel actually used a more crude form (a C++ "standard library" for kernel code is ... challenging). Definitely not "easy to read".

Symbian (and Nokia) recognised this to a degree; hence the POSIX environment and the Qt purchase. Symbian 9 was a great step towards scaling it, but it only got into a phone ... with the last Symbian phone ever made. Bit of a pity - it could never really show what might have been.

They were steamrollered. Things just happened too fast for non-Unix-kernel-based systems to take on the then-imminent multicore and 64-bit ARM. Never mind other misses, like keyboard vs. touchscreen (Symbian UIQ was rather nice to use, but Nokia's S60 was not), or, in Nokia's case, not going all-in on Maemo/MeeGo.

(Yes, been there - only at its tail end, and I only stayed as long as I did because in 2009 the job market wasn't great. Nonetheless, I remember a few things: great, sad and stupid. Symbian had all of that and then some. And definitely, Nokia also had a few "Varros".)



After rereading my posting below, it's a lot of long-winded blah. My short answer is that, IMO, a huge portion of the technical problems were recognised and attempts were made to solve them, but it was impossible because the business failed to adjust its own strategy to something that was achievable.

[Here followeth the guff]

IMO the asynchronous API on Symbian was elegant and simple, whereas on Linux it's a confusing, messy disaster: to make async keywords work in a modern runtime, one has to use libraries like libuv to fake bits of it or to insulate you from the many half-useful event and polling APIs. I thought the Symbian file server had threads, but I admit I'm not a great expert on any of that. I do think that you could have fixed those problems and every app would have continued working, but better; whereas on Linux you can do whatever you want and most software still won't benefit from it.
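(The uniformity was the nice part. Sketching from memory, with the file name as a placeholder: every async API took a TRequestStatus, and there was exactly one way to wait on one, directly or via an active object:)

  #include <f32file.h>

  // One async primitive for everything: no epoll/select/aio/signalfd zoo.
  void ReadSomethingL(RFs& aFs) {
      RFile file;
      User::LeaveIfError(file.Open(aFs, _L("c:\\data.txt"), EFileRead));
      CleanupClosePushL(file);       // close the handle even if we leave
      TBuf8<512> buf;
      TRequestStatus status;
      file.Read(buf, status);        // issue the request; returns at once
      User::WaitForRequest(status);  // or hand the status to a CActive
      User::LeaveIfError(status.Int());
      CleanupStack::PopAndDestroy(&file);
  }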

Symbian C++ was just C++ with an active object model to handle asynchronous events, plus the insistence that a program must not crash just because it runs out of memory (which made two-phase constructors a thing, and caused the reimplementation of operator new to return NULL on allocation failure rather than raise an exception). Those two constraints alone made it a nightmare. One has to ask oneself why it was so critical not to fail when memory allocation fails: how many programs can realistically continue to do useful things when there's no memory for them?
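(From memory, the canonical two-phase pattern looked roughly like this; CExample and its members are invented, but the NewL/ConstructL/CleanupStack idiom is the real one, and new (ELeave) is the variant that leaves instead of returning NULL:)

  #include <e32base.h>

  class CExample : public CBase {
  public:
      static CExample* NewL() {
          // First phase: the C++ constructor must not fail.
          // Plain new returned NULL on OOM; new (ELeave) leaves instead.
          CExample* self = new (ELeave) CExample();
          CleanupStack::PushL(self);   // protect self if ConstructL leaves
          self->ConstructL();          // second phase: anything that can fail
          CleanupStack::Pop(self);
          return self;
      }
      ~CExample() { delete iBuffer; }
  private:
      CExample() {}                    // trivial, cannot leave
      void ConstructL() {
          iBuffer = HBufC::NewL(256);  // may leave with KErrNoMemory
      }
      HBufC* iBuffer;
  };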

The Active Object thing, which turned your program into one big series of event handlers, was a misery IMO. The whole platform was crying out for coroutines.
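(For anyone who never saw one, a minimal active object looked something like this, written from memory against the classic e32 API; CTimeout is invented. Note the shape: issue request, SetActive(), get called back in RunL(). Any multi-step operation becomes a hand-written state machine across RunL() calls - exactly the thing coroutines would turn back into straight-line code:)

  #include <e32base.h>

  class CTimeout : public CActive {
  public:
      static CTimeout* NewL() {
          CTimeout* self = new (ELeave) CTimeout();
          CleanupStack::PushL(self);
          User::LeaveIfError(self->iTimer.CreateLocal());
          CleanupStack::Pop(self);
          return self;
      }
      ~CTimeout() {
          Cancel();                       // always cancel before destruction
          iTimer.Close();
      }
      void After(TTimeIntervalMicroSeconds32 aDelay) {
          iTimer.After(iStatus, aDelay);  // issue the async request
          SetActive();                    // tell the scheduler we're waiting
      }
  private:
      CTimeout() : CActive(EPriorityStandard) {
          CActiveScheduler::Add(this);
      }
      void RunL() {
          // The scheduler calls this on completion. To chain another
          // step you issue the next request and SetActive() again,
          // tracking where you are by hand.
      }
      void DoCancel() {
          iTimer.Cancel();
      }
      RTimer iTimer;
  };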

I never looked at Apple's approach to memory-allocation failures, but I think they probably just decided to add enough memory to the phone, and boo-hoo if something went wrong. They saved state and let you restore it, so it looked almost as if the thing hadn't crashed.

All the other things you mention are true, but there are root causes, and one of them is the attempt to develop too much software for too many hardware types, leading to too many bugs, and then architecting the build system such that it could not scale.

I was on the team that rewrote the build system (a huge mistake, BTW, because it took too long). The Nokia build produced 85 GB of output. After a lot of effort we got it down to 12 hours, with a huge build cluster of hundreds of machines, and on a good day when nothing broke. A large part of that build was the same code getting recompiled with different options for alternate phone models. Most of the build could only be compiled with RVCT, and compiler updates were very risky because all compilers have bugs, and on a large codebase they get exposed somewhere or other. Compare with Android, where a 16-core machine can build everything on the order of an hour or even less, perhaps - and most of that build is compiling Java to bytecode, so it never has to be done more than once.

I have forgotten a lot, but IIRC we really crushed the compilation and linking phases, whereas packaging was done in a stupid way: into huge packages that are nothing like Linux ones, and a couple of the largest of these tied up one cluster node for a long time.

In fact the Linux packaging system itself offered the possibility of NOT recompiling the whole system repeatedly, though I worked on OBS after this and total rebuilds were more common than one might wish to believe. I still think it would have been better than the lunacy we were engaged in.

The clusters were still overburdened because there wasn't just one build to run on them: older versions of the OS still had to be rebuilt for updates, for example. Adding language support also required a rebuild. IMO, if one wants to consider design mistakes, this and the use of the DLL format rather than .so were the biggest unrecognised balls-ups.

DLLs are very unpleasant for binary compatibility, because changing the number of virtual methods in a class can silently break every client built against the old layout, as sketched below. The schemes for getting around this ("freezing" the exported ordinals) were horrible and just seemed to be unreliable.
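(A plain C++ sketch of the failure mode; the class is invented and standard vtable layout is assumed:)

  // v1 and v2 of a class exported from a DLL:
  namespace V1 {
      class CEngine {
      public:
          virtual void Start();   // vtable slot 0
          virtual void Stop();    // vtable slot 1
      };
  }

  namespace V2 {
      class CEngine {
      public:
          virtual void Start();   // slot 0
          virtual void Pause();   // slot 1 - takes Stop()'s old slot
          virtual void Stop();    // now slot 2
      };
  }
  // An app built against V1 calls Stop() through vtable slot 1; run it
  // against the V2 DLL and that indirect call lands in Pause() instead.
  // Freezing the .def file pinned exported function ordinals, but
  // nothing pinned vtable layout, so rules like "only append virtuals"
  // had to be policed by hand.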

The rest of the technical problems - almost all of them - were very adversely affected by this constipated system, and it existed because of two decisions:

  1) To make a lot of models, with different hardware and more/less RAM in them.
  2) To reduce component count - saving 50c here and there - which had worked extremely well for Nokia in the past, before StrongARM and similar chips came out and created a big jump in ARM performance.



