One thing I think is missing is an understanding of why there is such a top-down push for timelines: because saying "we aren't sure when this feature will be delivered" makes sales people look like they don't know what they are talking about. Which.... well.
They would much rather confidently repeat a date that is totally unfounded rubbish which will have to be rolled back later, because then they can blame the engineering team for not delivering to their estimate.
I'm a dev, not a salesperson, but let's be realistic. A company tells you "yeah we're interested in signing at $1M/yr, but we really need this feature, when will you have it by?", to which saying "eh we don't know - it'll be done when it's done" will lead to the company saying "ok well reach out when you have it, we can talk again then" (or just "eh ok then not a good fit sorry bye"), and in the meantime they'll go shopping around and may end up signing with someone else.
Having a promised date lets you keep the opportunity going and in some cases can even let you sign them there and then - you sign them under the condition that feature X will be in the app by date Y. That's waaaay better for business, even if it's tougher for engineers.
“Sign up and pay at least part of it now and we’ll prioritize the feature”.
I’ve seen enough instances of work being done for a specific customer that doesn’t then result in the customer signing up (or - once they see they can postpone signing the big contract by continuing to ask for “just one more crucial feature”, they continue to do so) to ever fall for this again.
Why do that if your competitor already has it? I'd just go talk to the competitor instead. If you aren't able to ballpark when the feature will be done, why should I trust you will once I pay part of the price?
Because you have other benefits, so we'd really like to switch over to you, but we can't unless you support this dealbreaker feature that your competitor we're currently using has.
Just to consider the opposite viewpoint, I sometimes wonder if it's not better that they do churn in that case.
Assuming the sales team is doing their job properly, there are other prospects who may not need that feature, and not ramming the feature in under time constraints will lead to a much better product.
Eventually, their feature will be built, and it will have taken the time that it needed, so they'll probably churn back anyway, because the product from the vendor they did get to ram their feature in is probably not very good.
I understand the intuition, but it's a misunderstanding of how software sales operates. There's no tradeoff between prospects who need new features and prospects who don't, because salespeople love that second category and you'll have no problem hiring as many as you need to handle all of them.
Unless it's the first time they're hearing about it, when a customer asks about a feature, sales should have done their homework and checked with the team doing the work to get a rough estimate, instead of pulling a number out of their behinds.
In Australia, an SDE + overhead costs say $1500 / work day, so 4 engineers for a month is about $100k. The money has to be allocated from budgets and planned for etc. Dev effort affects the financial viability and competitiveness of projects.
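To make the back-of-envelope arithmetic explicit (the ~21 working days per month is my assumption; the comment rounds the result down):

```python
# Back-of-envelope cost of a feature, using the figures from the comment above.
# Assumption (not from the source): ~21 working days in a calendar month.
DAILY_COST = 1500          # AUD per engineer-day, including overhead
engineers = 4
work_days = 21

monthly_burn = DAILY_COST * engineers * work_days
print(f"${monthly_burn:,}")  # prints $126,000 — "about $100k" rounds this down
```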
I feel like many employees have a kind of blind spot around this? Like for most other situations, money is a thing to be thought about and carefully accounted for, BUT in the specific case where it's their own days of effort, those don't feel like money.
Also, the software itself presumably has some impact or outcome and quite often dates can matter for that. Especially if there are external commitments.
The only approach that genuinely works for software development is to treat it as a "bet". There are never any guarantees in software development.
1. Think about what product/system you want built.
2. Think about how much you're willing to invest to get it (time and money).
3. Cap your time and money spend based on (2).
4. Let the team start building and demo progress regularly to get a sense of whether they'll actually be able to deliver a good enough version of (1) within time/budget.
If it's not going well, kill the project (there needs to be some provision in the contract/agreement/etc. for this). If it's going well, keep it going.
The exact same way you'd treat any other investment decision.
In the real world, if you've got $100k, you could choose to invest all of it into project A, or all into project B, or perhaps start both and kill whichever one isn't looking promising.
You'd need to weigh that against the potential returns you'd get from investing all or part of that money into equities, bonds, or just keeping it in cash.
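The bet framing above can be sketched as a toy expected-value comparison — every probability and payoff here is an illustrative assumption, not anything from the thread:

```python
# Toy expected-value comparison for the "software as a bet" framing.
# All probabilities and payoffs are illustrative assumptions.
options = {
    "project_A":  {"p_success": 0.4, "payoff": 400_000, "cost": 100_000},
    "project_B":  {"p_success": 0.6, "payoff": 250_000, "cost": 100_000},
    "index_fund": {"p_success": 1.0, "payoff": 107_000, "cost": 100_000},  # ~7%/yr
}

def expected_return(opt):
    # Expected payoff minus the capped spend; a failed bet loses the stake.
    return opt["p_success"] * opt["payoff"] - opt["cost"]

# Rank the bets; capping spend per step (2)-(3) bounds the downside of each.
for name, opt in sorted(options.items(), key=lambda kv: -expected_return(kv[1])):
    print(f"{name}: EV = ${expected_return(opt):,.0f}")
```

The regular demos in step (4) are what let you update these numbers over time and kill the bet early if the expected value turns negative.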
You mean… by making forward-looking estimates of cost, time-to-value, and return? (Even if it's implicit, not documented, vibes-based?)
When devs refuse to estimate, it just pushes the estimating up the org chart. Execs still have to commit resources and do sequencing. They’ll just do it with less information.
Doesn't this ignore the glaring difference between a plumbing task and a software task? That is, level of uncertainty and specification. I'm sure there are some, but I can't think of any ambiguous plumbing requirements on the level of what is typical from the median software shop.
Sorry, I edited the plumbing reference out of my comment because I saw a sibling post that made a similar point.
I agree there is less uncertainty in plumbing - but not none. My brother runs a plumbing company and they do lose money on jobs sometimes, even with considerable margin. Also when I've needed to get n quotes, the variation was usually considerable.
I think one big situational difference is that my brother is to some extent "on the hook" for quotes (variations / exclusions / assumptions aside) and the consequences are fairly direct.
Whereas as an employee giving an estimate to another department, hey you do your best but there are realistically zero consequences for being wrong. Like maybe there is some reputational cost? But either me or that manager is likely to be gone in a few years, and anyway, it's all the company's money...
I bet if SWEs were seeing anywhere near that 1.5k per day they’d be more inclined to pay attention.
But when you get paid less than half that it doesn’t feel like a problem to worry about. At 300/day of take-home pay, one more day here or there really isn’t going to make a difference.
But it's the reality of engineering. If reality is unacceptable, that's not reality's problem.
But the problem is, the sales world has its own reality. The reality there is that "we don't know when" really is unacceptable, and "unacceptable" takes the form of lost sales and lost money.
So we have these two realities that do not fit well together. How do we make them fit? In almost every company I've been in, the answer is, badly.
The only way estimates can be real is if the company has done enough things that are like the work in question. Then you can make realistic (rough) estimates of unknown work. But even then, if you assign work that we know how to do to a team that doesn't know how to do it, your estimates are bogus.
I don't know that it's the reality of engineering. (Edit: in fact there are some comments for this post providing counterexamples, an interesting one is the hardware world).
It's what we software engineers like to tell ourselves because it cuts us slack and shifts the blame to others for budget and time overruns. But maybe it's also our fault and we can do better?
And then there's the usual argument that "it's not like physical engineering, software is always about building something new" - but that's only true for a minority of projects. Most projects that fail or overrun their limits are pretty vanilla, minor variations of existing stuff. Sometimes just deploying a packaged software product with minor tweaks for your company (and you must know this often tends to fail or overrun deadlines too, amazingly).
I know another "engineering" area where overruns are unacceptable to me and I don't cut people slack (in the sense it's me who complains): home building/renovation contractors. I know I'm infuriated whenever they pull deadlines out of their asses, and then never meet them for no clear reason. I know I'm upset when they stumble over the slightest setbacks, and they always fail to plan for them (e.g. "we didn't expect this pipe to run through here", even though they've done countless renovations... everything is always a surprise to them). I know I'm infuriated when they adopt the attitude of "it'll be done when it's done" (though usually they simply lie about upfront deadlines/budgets).
Maybe that's how others see us from outside software engineering. We always blame others, we never give realistic deadlines, we always act surprised with setbacks.
Part of it is absolutely our fault; part of it is the industry.
In the electronics world, when you need <common functionality>, you can find an off-the-shelf part that fits your requirements, fit that part in and it'll work. When you need logic in a hardware device, nobody's rolling their own CPU from discrete parts - they just take the cheapest microcontroller fitting the requirements.
In the software world we don't seem to have this concept of building blocks for common functionality even decades into the industry. Most software projects are some flavor of CRUD app with custom logic operating on the CRUD'ed objects. You'd think all the complexity would be in the custom logic, but actually it's at best 50-50 and at worst most of the complexity is in the whole CRUD bullshit and not what happens to the object once it's CRUD'ed.
How come in 2026 there's still no way to have an off-the-shelf component I can buy to do "I have a table of objects on the server, and I want to expose this as a UI to the client"? Why do I still see people writing this by hand in React/$JS-framework-of-the-day and messing around with things like OpenAPI and/or writing serializers/deserializers by hand? I swear most of the work I see in the web development space is the minutiae of client/server communication.
I think there are several reasons:
* overengineering/resume-driven development: even if there were an off-the-shelf component to do the task, people would probably avoid it and prefer to bullshit around reimplementing a (worse) solution. That's already the case where people are using React/SPAs/etc for views that do not need any interactivity and could just be an HTML form.
* political choices influencing tech selection: more often than not some tech or service provider is selected based on political reasons and not technical, and then the engineering challenge becomes as to how to shoehorn this ill-fitting part into our solution.
* aversion to paid software: hardware engineers are expected and allowed to select parts that cost money. I've never been on a software project where we had an explicit budget for licensing software. Reaching for paid software became the last-resort option I'd have to fight for and burn political points on, while spending 10x the cost building a (shittier) replica in-house was considered fine.
Due to the last point there's also little incentive for software providers to build and sell such components, so the market is quite small and not competitive, with the (very few) competitors having their own dealbreakers. Firebase will give you the instant database and UI, but then you're forever tied to paying them rent. You can't just license the server component and install it in-house like you can buy an FPGA.
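For illustration, here is a toy sketch (stdlib Python, all names hypothetical) of the kind of reusable building block the comments above wish existed — a generic store with serialization for free, instead of hand-rolled CRUD plumbing per object type:

```python
# Toy sketch of a reusable CRUD building block (hypothetical, stdlib only):
# a generic in-memory store that any dataclass can plug into, instead of
# hand-writing the same create/read/update/delete plumbing per object type.
from dataclasses import dataclass, asdict, replace
from itertools import count

class CrudStore:
    def __init__(self):
        self._rows, self._ids = {}, count(1)

    def create(self, obj):
        obj_id = next(self._ids)
        self._rows[obj_id] = obj
        return obj_id

    def read(self, obj_id):
        return self._rows[obj_id]

    def update(self, obj_id, **changes):
        # dataclasses.replace gives us immutable-style updates for free.
        self._rows[obj_id] = replace(self._rows[obj_id], **changes)

    def delete(self, obj_id):
        del self._rows[obj_id]

    def to_json_rows(self):
        # The serializer also comes for free -- no hand-written (de)serializers.
        return {i: asdict(o) for i, o in self._rows.items()}

@dataclass(frozen=True)
class Invoice:          # hypothetical domain object
    customer: str
    total: float

store = CrudStore()
inv_id = store.create(Invoice("ACME", 99.5))
store.update(inv_id, total=120.0)
print(store.to_json_rows())
```

Of course, the hard parts the thread complains about (persistence, auth, the client UI, the wire protocol) are exactly what this toy skips - which is the point: that is the layer nobody sells as a licensable part.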
If you hired someone to do some work on your house, and they refused to give an estimate, would you be happy?
If you had a deadline - say thanksgiving or something - and you asked “will the work be done by then” and the answer was “I’m not going to tell you” would you hire the person?
The no estimates movement has been incredibly damaging for Software Engineering.
If work on a house was specified like a typical software project, no builder would even return your call.
"I'd like to have my roof reshingled, but with glass tiles and it should be in the basement, and once you are half way I'll change my mind on everything and btw, I'm replacing your crew every three days".
Sure, for roofing jobs or other large repairs, that’s true. But for remodeling it’s pretty different.
When I’ve engaged with a contractor for remodeling, I usually have some vague idea like “we should do something about this porch and deck and we’d like it to look nice.”
The contractor then talks to you about _requirements_, _options_, and _costs_. They then charge for architectural plans and the option to proceed with a budget and rough timeline.
Then they discover problems (perhaps “legacy construction”) and the scope creeps a bit.
And often the timeline slips by weeks or months for no discernible reason.
Which sounds exactly like a lot of software projects. But half of your house is torn up so you can’t easily cut scope.
Painting a wall has no “if then else”. You don’t need to test to see if the wall has been painted.
I guess a fair analogy would be if the homeowner just said “Make my home great and easy to use” by Thanksgiving without too many details, and between now and Thanksgiving refined this vision continuously, like literally changing the color choice halfway through or after fully painting a wall… then it's really hard to commit.
If a homeowner has a very specific list of things with no on-the-job adjustments, then usually you can estimate (most home contract work).
All software requests are somewhere in between the former and the latter, most often leaning towards the former scenario.
When there are huge unknowns, such as in the case of a remodel where who knows what you might find once the drywall is removed, then yes. I happily worked with a contractor on a basement renovation with no estimate for this exact reason.
If it’s something where they have fewer unknowns and more control and lots of experience building the same thing, then I would expect an estimate: building a deck, re-roofing a house, etc
For any slightly complicated project on a house the estimate assumes everything goes right, which everyone knows it probably won't. It's just a starting point, not a commitment.
Definitely so. Most business people that I've worked with do understand that. And provided problems are communicated early enough can manage expectations.
Where I've seen issues is when there is a big disconnect and they don't hear about problems until it's way too late.
These are just bad contractors. I used to work for a remodeling company. We came in under time on the vast majority of projects because the guy who ran the company knew what he was doing and built slack into the schedule.
Sales gets fired (or not paid) for missing their estimates (quotas, forecasts) and often has little empathy for engineering being unable to estimate accurately.
Really interesting topic. (I’m actually somewhere in between sales and dev - doing Req. Engineering, Concepts and planning).
Personally I consider it more important to constantly narrow down any uncertainties over time, than having an initial estimate that holds. The closer it gets to any deadline, the less uncertainty I want (need) to have because the less options remain to react to changes.
And frankly, this usually not only applies to estimates but also to things that these estimates rely upon. The longer the timeline, the more room for circumstances and requirements to change.
How else are you going to liquidity-stalk that company you left with some options or even shares?
I take my first cup of coffee with a little tea-leaf reading based on the activity of the CEO and my former coworkers. If you ever see more than 5 connections reacting/liking the same thing you know that HR or marketing sent out an email about it.
I feel some genuine grief about what GTK has become.
It started out as a toolkit for application development and leaned heavily into the needs of the C developer who was writing an application with a GUI. It was really a breath of fresh air to us crusties who started out with Xaw and Motif. That's the GTK I want to remember.
What it is now is (IMO) mostly a product of the economics of free software development. There's not a lot of bread out there to build a great, free, developer experience for Linux apps. Paid GTK development is just in service of improving the desktop platform that the big vendors ship. This leads to more abstraction breaks between the toolkit, the desktop, and the theme, because nobody cares as long as all the core desktop apps work. "Third party" app developers, who used to be the only audience for GTK, are a distant second place. The third party DX is only good if you follow a cookie-cutter app template.
I switched my long-term personal projects from GTK2 to Dear ImGui, which IMO is the only UI toolkit going that actually prioritizes developer experience. Porting from GTK2 to GTK3 would have been almost as much work since I depended on Clutter (which was at one point a key element of the platform, but got dropped/deprecated -- maybe its corporate sponsor folded? not sure).
I am still skeptical about the value of LLMs as coding helpers in 2025. I have not dedicated myself to an "AI first" workflow so maybe I am just doing it wrong.
The most positive metaphor I have heard about why LLM coding assistance is so great is that it's like having a hard-working junior dev that does whatever you want and doesn't waste time reading HN. You still have to check the work, there will be some bad decisions in there, the code maybe isn't that great, but you can tell it to generate tests so you know it is functional.
OK, let's say I accept that 100% (I personally haven't seen evidence that LLM assistance is really even up to that level, but for the sake of argument). My experience as a senior dev is that adding juniors to a team slows down progress and makes the outcome worse. You only do it because that's how you train and mentor juniors to be able to work independently. You are investing in the team every time you review a junior's code, give them advice, answer their questions about what is going on.
With an LLM coding assistant, all the instruction and review you give it is just wasted effort. It makes you slower overall and you spend a lot of time explaining code and managing/directing something that not only doesn't care but doesn't even have the ability to remember what you said for the next project. And the code you get out, in my experience at least, is pretty crap.
I get that it's a different and, to some, interesting way of programming-by-specification, but as far as I can tell the hype about how much faster and better you can code with an AI sidekick is just that -- hype. Maybe that will be wrong next year, maybe it's wrong now with state-of-the-art tools, but I still can't help thinking that the fundamental problem, that all the effort you spend on "mentoring" an LLM is just flushed down the toilet, means that your long term team health will suffer.
> And the code you get out, in my experience at least, is pretty crap
I think that betrays a fundamental misunderstanding of how AI is changing the goalposts in coding.
Software engineering has operated under a fundamental assumption that code quality is important.
But why do we value the "quality" of code?
* It's easier for other developers (including your future self) to understand, and easier to document.
* Easier to change when requirements change
* More efficient with resources, performs better (cpu/network/disk)
* Easier to develop tests if it's properly structured
AI coding upends a lot of that, because all of those goals presume a human will, at some point, interact with that code in the future.
But the whole purpose of coding in the first place is to have a running executable that does what we want it to do.
The more we focus on the requirements, and on guiding AI to write tests to prove those requirements are fulfilled, the less we have to actually care about the 'quality' of the code it produces. Code quality isn't a requirement; it's a vestigial artifact of human involvement in communicating with the machine.
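A minimal sketch of that requirements-first stance: the assertions are the spec, and the implementation (here a hypothetical `slugify`, imagined as machine-generated) is replaceable as long as they keep passing:

```python
# Requirements-as-tests: assert what the program must do, not how the code looks.
# `slugify` is a hypothetical stand-in for any AI-generated implementation.
import re

def slugify(title):
    # Imagine this body was machine-generated; we never review its "quality",
    # only whether the spec below holds.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The "spec" lives in the assertions; swap the body freely as long as they pass.
assert slugify("Hello, World!") == "hello-world"
assert slugify("FOO_bar 42") == "foo-bar-42"
```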
I don't know how they get their sources, but it would be nice if it was directly from official documentation (and not random Stack Overflow answers), and if those guides were, I don't know, more machine-readable? (That's not a passive-aggressive use of question marks, I'm genuinely just guessing here.)
I don't know that the hardware is dead yet. They got a cash infusion last year and there are occasional hardware updates in their Discord. It's just a slow process with 1-2 engineers total working on the many different hardware and software and firmware elements of the overall product.
Yeah - last update on the web page was, what, December? I think they're going to get outrun by the rest of the market. "Walking dead", possibly. If I can get NUC+XReals+some sort of integrated desktop then they'd need something really compelling to make their headset worthwhile at the price they're aiming at.
There's no hurt feelings like the hurt feelings of a junior engineer, who has spent the last year kvetching about how much they hate working on legacy junk, hearing someone else refer to one of THEIR projects as "legacy junk".
Any code that's old enough to have its first birthday party is "legacy", which means that "legacy" is a completely useless category. Anyone calling anything "legacy" is generally just showing their own lack of experience.
1 year old code is not legacy. Legacy is a useful category. COBOL is legacy. Maybe it is unclear exactly where to draw the line but if that were a valid reason to discard conceptual categories we wouldn't have any.
Heh, so true in many cases. I had rewritten a utility that scanned a directory and moved files to S3 from Perl to Go, and eventually a different team took over the code. A bug popped up and they were not confident enough to update the "legacy code." I could not help but chuckle to myself: this was like a few hundred lines of well-organized, fully unit- and integration-testable code with a readme, a runbook, Grafana/Prometheus metrics, and structured logs aggregated in Splunk. And it had been running for like 2 years. They just didn't want to even attempt the fix, and in fact pushed it off indefinitely.
IMO legacy code is code that has lots of if statements and special cases sprinkled in over the years to enable new features quickly and get out immediate hotfixes for bugs without doing full refactors and/or data migrations to address the root causes. Even with 100% test coverage it’s a pain to build new features in because there are so many paths to think through, and instead of single sources of truth each part of the app assumes every other part is working in very specific ways.
An alternative definition, legacy code is any code where there’s no one left on the team who has been working on it for years and intuitively knows the pitfalls. Then everyone’s scared to touch it or make big refactors, which actually leads to those small special cases being added instead.
I ran face-first into this effect at a successful startup where I started as employee number 9.
When everyone can sit around one big table, you don't have to consciously polish your "brand" all the time -- most people have direct experience with you and base their opinions on that. You do good work and you will have a good reputation. If you have a conflict with someone who is a jackass or have a project that fails to launch, people know enough about the context to judge pretty fairly.
When there are hundreds of people on the engineering team, especially in a remote-heavy workforce, most people don't have direct experience with you and can only base their opinion of you on what they hear from others, i.e. your reputation. This goes for peers as much as leadership.
You have to be aware of how an org changes over time, and how things that were once not important are now essential skills for success.. and decide if any new essentials are skills that you are interested in developing.
You can still buy bikes like that. There are plenty of people still making frame sets that will work with standard drive train components, standard sized stems, and plain ol handlebars in a variety of shapes. And they will build a bike for you.
I bought a Rivendell about 10 years ago and it's probably my last bike. Is a steel frame heavier than carbon? Yes, a bit, but I don't have to throw it away after a crash, it rides like a dream, and the weight difference is less than the extra "water bottles" I carry around my midsection. Most of the weight of the bike+rider (which is what you have to haul around) is the rider, not the bike, and the frame is just a fraction of the weight of the bike!
Even though new bikes are getting more and more proprietary, I don't foresee a time when I can't buy a new Shimano cassette or other replaceable parts.
It does seem like a complete bike that is under $1100 or so today will be less repairable than the bike I got in 2008 for $600 (less than $900 in 2024 dollars).
In some ways yes, in other ways no. Shimano has been on their forced-obsolescence train for 30 years. They don’t make hoods for my old 8spd levers. If I want to not deal with ratty old tape over sticky ancient hoods, I need to drop $130 on new Claris levers and $25 on a new fd because the pull ratio changed, then another $20 on new bar tape.
You should check AliExpress for those. You might be able to find some knock-offs. AE is actually really good for things like this. The other place to check is Ebay, in case someone is selling NOS (new old stock).
I’ve tried both. They don’t make 8spd-era hoods; they do have some 9spd clones. NOS hasn’t existed for years, and when it comes up people ask $100 for a set of hoods.
I wonder if any of the companies making 9spd clones would be interested in making 8spd clones. It might not be worth it for them, but probably wouldn't hurt to ask, especially if people are actually getting $100 for a set when they dig up some NOS.
I tried that too. Talc only works for the first time you grip the hoods. If you then take your hands off and regrip you are back to where you started getting black crap everywhere. Tape has been the best solution if a bit ugly.
I worked for an Intel spinoff whose CEO was a former high-level Intel exec from the 1990-2010 era. Internal goss attributed much of Intel's decision to stay out of the iPhone to him... there was a supposed quote that went something like "we make chips for computers, not g*d** telephones!"
As the tale went, he was sent out to this doomed-from-birth spinoff as a "sunset cruise" to basically force him into retirement (for this bad decision) without the bad publicity of a public head-chopping.
When I think back to that time period (serving tables, T9 texting in my apron on my Blackberry Pearl lol) I remember the touch screen being a tough learning curve for the majority of people.
The first iPhone was also gigantic, hideous, couldn't send pictures - something even a cheap $20 Samsung from the carrier could do - and it also didn't sell very well. People were more into "the Google phone", the Sidekick, or the latest Razr. I think it wasn't til the 3GS came out with a ton of marketing push that it started to gain popularity, and it ended up having more to do with the App Store than the hardware - people did not like those touch screens for the first several years of smartphones. They came out at the height of the texting-and-ringtone era, we were pretty set in our ways, and it took years to change that behavior.
I think the App Store resonated a lot more with people back then rather than the iPhone as a device. MySpace was still around, the Bush-era recession had everyone looking for a side hustle. Most young/ambitious people I was around in "tech" (which was effectively HTML-based SEO and WordPress design) had a Blackberry and a side business. This kid I worked with became "rich" from an iPhone app that just combined other iPhone apps haha. Loved that time period. Graffitio!
Would have been very hard to predict the success of the iPhone, even as I was already entering orders for customers on a fully touch screen Aloha point-of-sale long before iPhone.
You’re right. The first iPhone was so bad that “the Google phone” was delayed by many months as they scrambled to completely redesign their launch phone to be multitouch instead of keyboard-based.
The razr2 sold 5M units and the sidekick sold 3M to iPhone 1’s 6.1M.
The n95 did outsell the iPhone with 10M units but Nokia had a massively more mature sales pipeline whereas Apple had to build out carrier relations. It also shipped before the iPhone was even announced which gave it time to accumulate sales.
Everyone in the space, though, recognized how big it was, because carriers were going out of their way to try to get it on their network (since at the time Apple was doing one carrier per country). Apple got lucky that AT&T bought Cingular, which made the iPhone accessible to many, many more people.
3GS’s 37 million units was because Apple had 2 years to build up manufacturing capacity and carrier sales channels to match demand for what had become clearly a smartphone revolution.
The big thing I remember from that era is that the iPhone was the first phone with an unlimited data plan, which Apple/Jobs beat AT&T into submission to get. Until then you had to worry about every last byte you used on the dinky carrier-grade apps and the lousy WAP websites.
The removal of this worry, and the fact that you actually had a real browser that could open real websites, were the two main features that to me seemed like a huge leap forward.
It's also clear that the carrier wasn't ready for it. People with the original iPhone would get entire boxes mailed to them for their Cingular statement, itemizing every data transaction, but then all flat-rate charged.
I had an original iPhone and did not get such statements. As I recall, while Cingular was indeed not ready to handle unlimited data for customers, it wasn't really a problem for the first iPhone since it wasn't 3G. Once the 3G dropped, it was a problem, since people were actually able to consume a large amount of data.
> The n95 did outsell the iPhone with 10M units but Nokia had a massively more mature sales pipeline whereas Apple had to build out carrier relations.
Am I remembering correctly that originally the iPhone only supported AT&T? My family all was on a shared Verizon plan at the time, and I have a vague recollection of the fact that it wasn't an option for us, but I'm not positive I remember correctly. Nowadays, the idea of a phone not being possible to purchase for a given network seems silly, but I feel like it was a thing at the time.
Which is partly why the Motorola(?) Droid phones got so popular.
Verizon couldn’t have the iPhone, so they pushed that as their equivalent with a huge marketing campaign. Always felt to me like that single-handedly pushed Android into the public consciousness.
Like without that maybe Android would still be huge, but it wouldn't have gotten the name recognition. Or at least not so fast, and instead it would have been more of an implementation detail in most people's minds.
Sprint tried to compete with the Palm Pre which was neat but had issues. Then Verizon got the Pre and Sprint’s last shot at relevance died.
Yeah it was only available on AT&T and you couldn't even send pictures (MMS). Hindsight is 20/20 - the people in this thread are overly glamorizing Apple and smartphones in general now because it's easy after something was already successful (the number of influencers on this post is vomit-inducing). In reality smartphones were a tough sell at first, people didn't like the touchscreen at first, it took a ton of marketing push to get people used to them that took years. I was 21 years old when iPhone came out and remember it perfectly clear - also just google sales data. iPhone was selling single digit millions in the first years, nowhere near the 100+ million Razrs sold, or the amount Nokia or RIM were selling.
"But they were new to the space!" "It was still a good start!" - that's peripheral.
According to a quick search the 2006 Razr sold 50 million units (130 million total) but the first iPhone only sold 1.4 million when it released in 2007. iPhone was nowhere near Razr sales. Your comment is a bit disingenuous because that was the less popular Razr after flip phones were on the way out.
Nokia and Blackberry were selling a lot more than both brands at the time. Blackberry alone had 20%+ market share when the Pearl was released. Nokia sold the most phones by far.
You’re comparing unrelated things and completely discounting that Apple was a completely new entrant into the space vs entrenched players that had already established sales channels and carrier relations and were selling globally, while Apple started US-only.
Apple sold over 6M units of the first iPhone, unless you’re counting only the 1.4M sold in the first quarter after launch. I was comparing it to phones released at a similar time, and claiming that the Sidekick was somehow more successful is straight up laughable regardless of how you look at it.
The first iPhone defined what the smartphone category should be. Google took heed, which saved Android. Nokia and Blackberry did not, and you can tell where they’re at now.
> Apple was a completely new entrant into the space vs entrenched players that had already established sales channels
So? If anything it supports the point that iPhone wasn't an immediate success. It wasn't as successful as the Razr flip phone was before it, not for a while. The original point was just that it would have been difficult to predict the success of iPhone even after it released, because it didn't do that well at first.
> In its first week, Apple had sold 270,000 iPhones domestically.[47] Apple sold the one millionth iPhone 74 days after the release.[48] Apple reported in January 2008 that four million were sold.[49] As of Q4 2007, strong iPhone sales put Apple no. 2 in U.S. smartphone vendors, behind Research In Motion and ahead of all Windows Mobile vendors.[50]
> As of October 2007, the iPhone was the fourth best-selling handset in the U.S., trailing the Motorola RAZR V3, the LG Chocolate, and the LG VX8300.[51]
I’d say being the #2 smartphone vendor with your first model and beating all Windows Mobile vendors is a fantastic first step and a clear indicator it would be extremely successful, in addition to being #4 across all US handsets. 4M were sold in the first 6 months. You’re simply misremembering how big it was and how well it sold.
The 3G version the next year sold 4x as many units (25M). You could say it’s because 3G was such a huge upgrade, or other software things like MMS and the App Store, but the first iPhone was in 6 countries while the 3G was in 22 and got to 70. The 3GS did outdo that growth, since it shipped 5M more units than would be explained just by being available in more countries (80 vs 70).
Could you predict it would end up dominating the smartphone market even as that market ate up more of the legacy feature-phone market? Maybe that’s harder, but predicting the iPhone’s success wasn’t all that hard. The lack of any competition that could really keep up was another indicator. Android was a pretty big failure for a few years, until enough of the feature set reached parity and became table stakes so that people felt comfortable using it (or because Android was available in a lower price segment Apple wasn’t competing in).
It's not like Apple was some scrappy startup going up against giants, when their profits had been in the billions for years prior to the iPhone, which by the way followed the MacBook launch in 2006.
> strong iPhone sales put Apple ... behind Research In Motion
Are you aware that RIM is Blackberry? This is what I've been saying, iPhone was behind the others like Blackberry, Razr, Nokia, etc.
> As of October 2007, the iPhone was the fourth best-selling handset in the U.S., trailing the Motorola RAZR V3, the LG Chocolate, and the LG VX8300.
Yep. 1 million is a lot, but Razr was selling 10s of millions, Blackberry and Nokia were selling more than that. But yeah I guess iPhone had the super cheap carrier phones beat, but they weren't meant to be high end products.
> The 3G version the next year sold 4x as many
Yeah that's when it started to take over. Especially when the 3GS / $99 AT&T plan came out, everyone got it by then.
Predicting hits: what about Bluetooth headsets? They were forcing those on us around the same time period we're talking about, but only a few geeky dads and cheesy business guys used them; eventually they weren't really sold anywhere, and comedians were referencing them by that point. Having lived through that era, I would never have predicted the success of AirPods. As I said in my post above, it wasn't like we didn't have touchscreen devices. I don't recall people being as receptive to the touchscreen as you do; it seemed like most people outside the Apple cult initially hated it.
> It's not like Apple was some scrappy startup going up against giants, when their profits had been in the billions for years prior to the iPhone, which by the way followed the MacBook launch in 2006.
First, I think you may be misremembering your Apple history. The iPhone was very much Apple’s first product release after the iPod success that rescued the company. While they weren’t the failing company they were in 2001 (as in near-bankrupt failing), they were by no means a behemoth: in 2006 they made $2B in profit on $10B of revenue, while Nokia made $7B in profit. Pointing at the MacBook suggests you don’t realize how insignificant those sales were to Apple’s revenue (and how that’s even more true today, even though they’ve actually grown their market share in that segment).
> Are you aware that RIM is Blackberry? This is what I've been saying, iPhone was behind the others like Blackberry, Razr, Nokia, etc.
Except not. The Razr isn’t a smartphone, and the iPhone outsold any smartphone Nokia made. By 2007, when Apple shipped the iPhone, RIM had been shipping the Blackberry for about 8 years and had completely taken over the enterprise segment, which until Apple cracked it was considered the only place smartphones could be successful - and it was expected to perennially remain Blackberry’s to lose. To have a competitor who’d never done cellular or smartphones before take #2 from the get-go is huge, considering how different the sales channel and regulatory environment looked from the traditional consumer electronics Apple had engaged in until then. Think about it - one huge innovation was that you could buy the phone directly from them through the Apple Store, and they took care of the carrier onboarding experience. No one else attempted (or even could attempt) to do that.
You’re simply misremembering, or trying to paint a weird picture that the first iPhone was this niche device no one wanted. That’s literally not true. It’s inherently impossible to enter a mature market and become #1 overnight. That Apple came in at #2 is really astounding; everyone was paying attention to it, and Google literally hit pause on their launch for a year to completely redesign their OS because they saw it as the future.
> Having lived through that era I would have never predicted the success of Airpods.
Well I worked at Apple before they launched AirPods and got to see an exec demo of them. From the first instant I knew they were going to be a hit. Did I know it was going to be a multi billion dollar business by itself? If I’d done the math on it I probably could have worked it out just from estimating an attach rate. I think you shouldn’t extrapolate your inability to predict hits to others and say no one saw things coming.
As for touch screens, we actually didn’t have capacitive touch screens. All the smartphones to date had been resistive and the introduction of multitouch that capacitive enabled as well as better scan rates made a huge difference. I think you’re outing your viewpoint when you’re discounting people who were enthusiastic about the iPhone as members of a cult even now without allowing for the possibility that maybe they see something you don’t. Same kind of reasoning happened with the iPod too and I made the same mistake thinking they wouldn’t be big and it was this weird Apple thing and the UX seemed weird until they fixed their strategy to open it up to Windows users. My excuse was that I was still a teenager so I didn’t have sufficient perspective. Btw not everything Apple touches is gold immediately. I think they’re going to struggle with Vision Pro. I think they did a bunch of novel interesting UX innovations but the “killer product” hasn’t been built in that space yet and Meta is a much savvier opponent than they have ever had to face to date in a new product line.
> MacBook belies that you seem to not realize how insignificant those sales were to Apple’s revenue
Unless we look at the actual data (i.e. Mac revenue was always bigger and was actually growing at a very fast pace, unlike iPod, by the time the iPhone came out)
I think you're making some important reasoning mistakes. First, look at it as a percentage of annual revenue (numbers from ChatGPT so may be off somewhere):
2001 (iPod initial launch at end of year): 4.1B/5.36B, 76% Mac, <1% iPod
2002: 4.3B/5.74B, 74% Mac, 2.5% iPod
2003: 4.9B/6.21B, 79% Mac, 20% iPod
2004 (Windows support added late 2003): 5.3B/8.28B, 64% Mac, 21.7% iPod
When I say it's "insignificant" I don't mean to say that Apple could have cancelled it and it wouldn't have mattered. Mac still remains a meaningful pillar of their product lineup even though it only contributes ~10% of revenues.
What you have to do is consider that Apple leadership views it as an ecosystem. Mac by itself isn't a lucrative or really important business. However, its importance is that it makes sure a customer in their ecosystem always has an Apple product to buy when they need something. Importantly, if they have an iPhone they're more likely to buy a Mac, and if they have a Mac they're more likely to buy an iPhone (& now Watch, AirPods, etc). The refresh rates for these are also different enough that you're likely to remain stuck there by default once you get into the ecosystem, because it's just an easier experience.
What I'm saying is that the strategic focus and resources were not really on Mac, because Apple leadership did not see growth there by itself except as an attachment to the iPod. You can see in the % numbers where iPod took over from Mac as contributing a huge portion of their bottom line as soon as they made it generally available, and that Mac sales themselves only started going up like crazy once the iPod became available to everyone. Similarly, once the iPhone comes out we see it crazily cannibalizing iPod sales. At that point the iPod strategically barely got any attention. They didn't cancel it until 2022 because it was still bringing in significant revenue (+ a form factor Apple didn't have a replacement for until they got the Watch). Additionally, the overall laptop market has been shrinking even as Apple has been growing, which is why their market share in laptops is so large even though it's comparatively such a small product for them.
So while the revenue from Mac was important from a "keep working on this" perspective & "ecosystem play", the vast majority of resources, focus, and energy were definitely thrown at iPod & then iPhone because of how much bigger the opportunity was and that even for Mac iPod and iPhone were the flywheel engines driving growth in those spaces.
If you're taking "insignificant" as the cancellation point for Mac, I think it would be that they succeed in their pitch that the Vision lineup is a Mac replacement. If they manage to succeed in that product line, Mac won't be much longer for this world.
He shows real data, you use ChatGPT, modify the timeline plus admit the numbers are off - yet you say he's making the mistakes?
High level: You're trying to make the case that iPhones were immediately more successful than the conventional market leaders like Razr and Blackberry when that's not the case by far - it wasn't true for several years.
When someone shows that's wrong with numbers (e.g. 130 million Razrs sold in 2 years vs iPhone's 6 million in 2 years) you say something like "yeah but they were new to the space!" But that's totally peripheral, and counter to your original claim.
You'll never convince those of us who were 18-24 years old when the iPhone released that that's what happened. You obviously don't have a clue - you were probably a child or out of the country at the time, because you're using ChatGPT to pull up (false) info the rest of us know intuitively.
Other awful takes:
> Macbook sales were insignificant
False.
> Mac by itself isn't a lucrative or really important business.
Lol. Saying $10 billion a year is not lucrative is crazy. Saying that 30%+ market share on the laptop market is not important is crazy.
> Once the iPhone comes out we see it crazily cannibalizing iPod sales. At that point strategically the iPod barely got any attention.
Nobody ever compared iPhone to iPod - we were talking about feature phones of the day like Razr, Blackberry, Nokia, etc.
To be fair using only Q4 figures was a mistake, since iPod sales were always the highest in Q1 because of the holiday season (not as noticeable for Macs).
> ChatGPT to pull up (false) info we all know intuitively.
I summed some of those years from Apple's quarterly reports (annoyingly, they didn't seem to report FY sales by segment...) and they are more or less similar:
2006: Mac $7,375M (49.01%), iPod+iPhone $7,676M (50.99%)
2007: Mac $10,314M (55.02%), iPod+iPhone $8,428M (44.98%)
2008: Mac $14,276M (56.48%), iPod+iPhone $10,997M (43.52%)
2009: Mac $13,824M (43.93%), iPod+iPhone $17,657M (56.07%)
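Those shares can be rechecked mechanically from the segment figures themselves (a quick sketch; it assumes the numbers above are in $M and that each percentage is a share of Mac + iPod/iPhone only, not of Apple's total revenue):

```python
# Segment revenue in $M, as quoted above (summed from Apple's quarterly reports)
segments = {
    2006: {"Mac": 7375, "iPod+iPhone": 7676},
    2007: {"Mac": 10314, "iPod+iPhone": 8428},
    2008: {"Mac": 14276, "iPod+iPhone": 10997},
    2009: {"Mac": 13824, "iPod+iPhone": 17657},
}

for year, revs in segments.items():
    total = sum(revs.values())  # share is of these two segments combined
    for name, rev in revs.items():
        print(f"{year} {name}: ${rev:,}M ({rev / total:.2%})")
```

This reproduces the quoted percentages to within a few hundredths of a point (small differences come from rounding in the source figures).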
I don’t think identity is relevant, but since you’ve made it an issue: I was in fact an intern at Apple working on the first iPhone when it launched, and I continued working on mobile phones for a long time after (e.g. worked full time on WebOS with a bunch of iPhone veterans, since a large contingent of them started it). You may want to be careful about the blind assumptions you’re making, and degrading into ad hominem attacks is beneath the standards for this site.
As for the ChatGPT dig, for what it’s worth I spot checked various numbers that they were consistent with other sources and Apple’s official Q4 numbers. If you can point out specific issues I’d be happy to correct. I called it out simply in case someone wanted to triple check the numbers. But again, please refrain from baseless attacks and point out actual errors in the facts presented if any.
I pointed out that Apple on initial launch was the #2 provider, outselling ALL Windows Mobile manufacturers including Nokia. They were widely recognized as completely reinventing the smartphone market even at the time, which you can tell because Google had an “oh shit” moment with Android and completely rethought the OS to build it around multitouch. I don’t know what else to tell you. This is based on reporting and direct anecdotal conversations I have had with Google and Apple coworkers who related that history to me contemporaneously.
The successful Razr version was released a year before, and Motorola was a very mature cell phone company that introduced phones worldwide right away, similar to how Apple ships phones today - and it was not a smartphone. It’s also important to remember that it was actually the first “Apple” phone, since it was integrated into the iTunes experience (i.e. if you wanted iPod + cell phone). So if anything, a feature phone at the same price point as Apple’s, a year prior, selling like hotcakes only proves it was clear the iPhone was going to be a big deal.
> Lol. Saying $10 billion a year is not lucrative is crazy
It is crazy but that’s less than 3% of 2023 revenue (in 2023 it’s still about 10% at 30B but down 27% from 2022).
It’s a strategic product to build the Apple ecosystem, but tactically it’s not where their money, sales, and focus are. That’s why you see them still making Apple TVs for a fraction of Mac revenue (Watch + AirPods + TV + HomePod is $9B, and the majority of that is going to be Watch and AirPods).
Also, if you actually followed the space you’d know the reason Apple has a significant portion of the laptop market is that the market itself has stagnated and shrunk, because smartphones and tablets have eaten it. In the same period since 2007 that Mac revenue has taken to double, Apple's overall revenue has 10x'd. I gave context from 2001 to show that Mac sales were stagnating and were not at all a way Apple could survive, and once they knew how big iPod was, they knew their future was not the Mac. This, by the way, is straight from the horse's mouth - Steve Jobs was one of the presenters for interns that year.
As for the strategic hypotheses, keep saying it’s conjecture all you want, but please be aware I’ve had conversations with people within the company, including some senior leaders. So while it may be wrong, I suspect my conjectures have a slightly larger chance of being correct than those of someone who remembers the iPhone as some “also-ran” phone no one could predict would be that big - forgetting all the lines at stores when it launched and the constant nonstop press it had for years even post launch.
I’ll reiterate - if you have issues with my facts or conjecture, please actually point out specifically which facts are wrong or provide news articles or analysis contradicting what I’ve stated. Your recollection of how big a moment iPhone was in popular perception and in the tech industry is wrong or you weren’t paying proper attention or were in the wrong community that was on the periphery of everything happening.
> I was in fact an intern at Apple working on the first iPhone when it launched
> I’ve had conversations with people who were within the company including some senior leaders
There we go, why didn't you just say that in a disclaimer up front? At one point I cursed the Apple influencers and was mainly referring to you :D
You had a very different experience than the mainstream since you worked at Apple when it launched.
> This by the way is straight from the horses mouth - Steve Jobs was one of presenters for interns that year.
Sounds like you were doing keg stands in the Apple Koolaid while most of us were still pirating Windows Vista off LimeWire and changing discs at red lights! You haven't lived unless you had a 6-disc changer (in your trunk for some reason).
Yeah, picking Q4 in my previous comment wasn't fair (Apple didn't seem to release FY revenue by segment, which is a bit annoying, and iPod sales were generally much higher in Q1).
No argument about what happened after the iPhone (specifically 3G + App Store) came out, but I'm not sure I fully agree with:
> What I'm saying is that the strategic focus and resources was not really on Mac because Apple leadership did not see growth there by itself unless it was as an attachment to the iPod.
By 2007, iPod's market share was ~72% in the US. The MP3 player market was pretty saturated and there was very little growth left (especially with increasing competition from (feature) phones). On the other hand, Apple only had <5% of the US PC market by 2006, so there was a lot of space to grow, especially in the laptop market, if they started releasing more competitive products (by ditching PowerPC).
If we look at Mac vs. iPod + iPhone revenue as a share of Apple's total revenue around those years:
2007: Mac $10,314M (42.94%), iPod+iPhone $8,428M (35.09%)
2008: Mac $14,276M (43.92%), iPod+iPhone $10,997M (33.82%)
Mac sales were actually growing faster even if we combine iPhone and iPod sales (which probably meant that a lot of people switched to other phones/mp3 players instead of buying an iPhone at least initially)
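That growth comparison follows directly from the 2007/2008 figures quoted above (a quick sketch, assuming the figures are in $M as quoted):

```python
# Revenue in $M from the figures quoted above
mac = {2007: 10314, 2008: 14276}
handheld = {2007: 8428, 2008: 10997}  # iPod + iPhone combined

mac_growth = mac[2008] / mac[2007] - 1              # 14276/10314 - 1, about 38%
handheld_growth = handheld[2008] / handheld[2007] - 1  # 10997/8428 - 1, about 30%

print(f"Mac growth 2007→2008: {mac_growth:.1%}")
print(f"iPod+iPhone growth 2007→2008: {handheld_growth:.1%}")
```

So on these numbers the Mac did grow roughly 8 points faster year over year, consistent with the claim above.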
I think that's mainly related to the PowerPC-to-Intel transition. It's not clear the iPod really had a huge impact on Mac sales, since Mac's market share started growing much faster when the iPod had already peaked. Then, after iPhone sales started accelerating, Mac sales growth rates began declining, which would imply that the handheld and PC segments aren't necessarily that related.
If Apple hadn't released the iPhone, the Mac probably would have done just fine on its own, and iPod sales would have remained stagnant or declined significantly (e.g. worldwide they weren't doing that well compared to Sony Ericsson's phone sales in 2006-2007, who IIRC leaned heavily into MP3/media in those years). It was pretty obvious that phones/smartphones were the future regardless of what Apple did (the transition would just have been quite a bit slower without them).
Of course (compared to you) I really have no clue what the internal sentiment inside Apple was a at the time so I'm just commenting on the market as a whole.
My point about iPod sales was that Mac was fairly stagnant before and after in gross revenue. Once iPods were made compatible with windows, you see Mac sales start to grow again. The strategic story was that iPod was your gateway drug into the Apple ecosystem - whether they knew it would be that way I don’t know. I suspect it took them by surprise and they weren’t sure at first if Windows support was needed or growth would continue on its own.
As for iPod saturation, I don’t know if that’s actually true and I’m too lazy to look up the numbers as to when it happened. I’ll point out that two things are important when thinking about this: market share and market size. You don’t have saturation until you have stopped growing. Owning a constant 70% of a market that’s growing consistently each year is not saturation - was the portable music market stagnant by 2007? Maybe. But you can’t tell that just from market share. According to ChatGPT, growth started slowing down in 2008, but that’s already post-Razr and post-iPhone, when it became an obvious calculus of “do I want an old-tech iPod or a general-purpose computer and cell phone in my pocket?” Nanos and Shuffles kept selling well because cell phones didn’t have an appropriate answer for that use case for a very, very long time (too much growth elsewhere to bother with it).
The transition to Intel was in 2006. I’m skeptical that was a motivating reason for a lot of switchers; the numbers show more growth correlation with iPod Windows support and the iPhone, which makes sense for the attachment theory, whereas I don’t see the Intel transition making a big dent in the numbers. It’s important to remember that consumers don’t make decisions based on what CPU is in a machine; even technical people wouldn’t, since even in the tech community today it’s framed as Windows vs Linux vs Mac and not Intel vs AMD, unless you are building a PC from parts (and Apple isn’t in that segment).
I’m not sure why you say the iPod had peaked at the time of the Windows transition, which was in 2004. 2007, when growth had started slowing, is when the iPhone kept up the growth. I agree that phones being the future was going to be obvious, and the Razr was likely their first dip of a toe to estimate how big the market for an iPhone would be, since they had a deal with Motorola for iTunes support.
As for Apple’s computer market share, the reason it’s so high is that the overall market is stagnant while Apple is managing to eke out growth here and there. Same as with smartphones, where Apple has 30% of market share but 98% of the profit; they are much more streamlined than their competitors. Expect to see their market share grow once cell phone sales stagnate, although the story there is more complicated, since they play in a wider range of economic households, whereas Mac products are (as they’ve always been) in the higher-end segments (the lower end being dominated by Chromebooks or no laptop at all). I see no indication of Mac growth stopping pre- or post-iPhone - it only stopped more recently, and it makes sense it would take that long, because Apple is not immune to the reality in that space even though they defied it for so long (and, as for many companies, COVID spiked sales since kids needed to do school remotely, so the recent stagnation could just be a temporary anomaly and a reversion to pre-pandemic numbers).
It’s genuinely hard to say what would have happened if Apple had not released the iPhone. Certainly Apple had stabilized as a company by 2001. But Mac sales had stagnated in the years prior to iPod and without iPhone as another engine it’s unclear what would have happened to Mac sales after that especially as the overall laptop and computer segments started collapsing (but this happened much much later after the smartphone revolution). Their music business may have helped a bit but a huge part of growth there itself was due to iPhone as well. These are reinforcing effects that are really hard to tease out. It’s likely it would have kept going since it was financially more stable but it would likely be at least a 10x smaller company.
Thank you for engaging factually and thoughtfully with the analysis.
> Owning a constant 70% of a market that’s growing consistently each year is not saturation - was the portable music market stagnant by 2007?
iPod sales grew by 18% in 2006 and 17% in 2007, compared to 140% in 2005 and 373% in 2004. So yeah, they weren't technically stagnant, but growth had slowed down significantly. Even if we look at combined iPod and iPhone sales, they grew slower than Nokia and Sony's phones in the same period (Motorola had peaked in 2006). That didn't change until the 3G came out in 2008 and the "high-end feature phone" market collapsed by 2009.
> It’s important to remember that consumers don’t make decisions on what CPU is in a machine
Yes but IIRC G5 had pretty awful performance per watt and Apple never put it into any laptops. They were stuck with G4 which wasn't really competitive with x86 by 2006.
> 2007 which is when growth had started slowing is when the iPhone kept up the growth.
It didn't initially though, not until mid-2008, if we compare to how fast Mac sales grew in the same period. Between 2006 and 2009 Apple's laptop sales increased by almost 3x or so, while even if we add up iPod + iPhone sales they "only" grew by 2x. 2007 to 2008 especially was relatively bad, since iPod + iPhone sales only went up by 1.3x. MacBooks were the fastest-growing product/segment between mid-2006 and mid-2008.
To be fair, I'm mostly nitpicking at this point, since I do agree with your long-term analysis more or less. But since Mac sales grew fastest during the period when iPod sales were relatively stagnant and iPhone sales were still very low (2007 to mid-2008, before the 3G; the original iPhone wasn't a particularly good smartphone - it had a large multitouch screen and that's about it), it's not that obvious to me that iPod/iPhone sales were driving Mac sales that much.
Of course, that specific period was pretty unique. The laptop market was growing very fast, and Apple was finally offering devices that were incredibly competitive with other laptops at the time. It doesn't change the big picture too much (most growth was coming from iPod sales before that and iPhone/iPad afterwards).
To be honest, I think you're badly misremembering that era. People were calling the iPhone "the Jesus Phone" after the original keynote announcement, the lines on launch day were around the block, and upon release, there were definitely tons of flame wars around physical vs. touchscreen keyboards but there was a widespread consensus that the iPhone's predictive typing correction was pretty good and that the touchscreen was miles ahead of any similar-equipped phone (many of which were still resistive, ugh!).
It definitely didn't get mainstream popularity until the 3GS/4 and the App Store, but people were definitely interested from day 1. Don't forget that early builds of Android looked much more like a Blackberry clone until the iPhone was announced, and then Google immediately scrapped everything and rewrote it from the ground up to be iPhone-like.
It just proves that no matter what you make and how innovative a product it is, someone will come along a decade later and claim it was nothing but marketing fluff.
Gruber said it best (to paraphrase): Apple does things that are derided at launch, and then eventually becomes so commonplace that people think it was obvious.
The iPhone is that. I used to deride it at launch when I had my Sony P1, but it was truly a revolution. Anyone denying its success looking back, even as a v1, is living in a bubble.
> I remember the touch screen being a tough learning curve for the majority of people. people did not like those touch screens for the first several years of smartphones. They came out at the height of texting and ringtone era, and we were pretty set in our ways, and it took years to change that behavior.
Touch is probably the single most intuitive and easy to use interface ever created for computing. It is so obvious even a 5 year old child understands it without being taught (as both my kids did). Very very very few people found touch to be a significant learning curve at all so I'm not sure where that idea comes from.
There was definitely a whisper campaign from the tech talking heads that people wanted keyboards and touch screens were not the future... but that was mostly from people who never used it or competitors who had nothing to compete with.
> The first iPhone was also gigantic, hideous, couldn't send pictures - something even a cheap $20 Samsung from the carrier could do
It wasn't much larger than competing "smart" phones at the time, and the large size was in fact a huge selling point: a bigger screen to see more content in apps and on websites.
MMS support was missing for sure but was also rolled out as a software update to all existing customers for free. The first time AFAIK that ever happened in the cell phone game. Prior to that (and still in Android land) updates require carrier cooperation and manufacturers did not hand out free features - buy a new phone for that.
> and it also didn't sell very well. People were more into "The Google phone", the Sidekiq, or the latest Razr.
I would say it sold quite well and exceeded expectations. In the iPhone announcement Apple said they'd like to sell 10 million of them by 2008 and sold 13 million. Sales roughly doubled the next year (2009). And the next (2010). And the next (2011). And the next (2012).
> I think the App Store resonated a lot more with people back then rather than the iPhone as a device.
I don't agree it resonated more than the device at that time but it was certainly an extremely important milestone. Apps, especially games, definitely drove a lot of adoption.
> It definitely didn't get mainstream popularity until the 3GS/4 and the App Store
Yeah, that's all I was saying. I've used iPhone exclusively for 15 years and am not anti-Apple, but people cannot handle nuance in online forums, so they come out swinging if you share any experience that doesn't 1:1 reflect the company's brand marketing.
Second reason it didn't take off faster was that iPhone was typically a carrier exclusive, and in most markets, the iPhone carrier was typically one of the smaller carriers. So iPhone wasn't available to most mobile phone users in a given market, unless they went to the trouble of switching carriers.
If you were around when the first decent iPhone released (2008) you'd remember that everyone had one for $99 on AT&T, where non smartphones were still like ~$300, so I'd say you have it backwards.
Maybe you weren't old enough, or are from another country. Everybody remembers this, it was a great idea on their part as it got everyone using iPhones.
> I think the App Store resonated a lot more with people back then rather than the iPhone as a device
You are forgetting: there was no App Store when the iPhone launched. Apple was originally against the idea of apps. The App Store launched a year later, with the iPhone 3G. Yet the iPhone was wildly popular not just from the day it launched, but from the day Apple first demoed it.
I think what caught people's attention with the iPhone is that it was one of the first phones where you could easily browse websites in a way that didn't feel like a gimmick.
Other phones had web browsers, but they only really worked on special mobile versions of websites. They were also slow and painful to use even on those mobile optimised websites.
Then the iPhone came along, and Apple had somehow managed to squeeze a full desktop web browser onto it. It did a half-decent job of reformatting desktop-only websites to fit on the screen. When pages didn't reformat, the new touch-based panning and pinch-to-zoom gestures still let you experience them with ease.
And Apple managed to make the whole OS feel responsive, despite using the exact same hardware as their competitors.
> Think it wasn't til the 3GS came out with a ton of marketing push that it started to gain popularity, and it ended up having more to do with the App Store than the hardware - people did not like those touch screens for the first several years of smartphones. They came out at the height of texting and ringtone era, and we were pretty set in our ways, and it took years to change that behavior.
This was not true per my memory. Literally everyone I knew in their 20s and 30s in NYC was switching to the iPhone 3G as of summer 2008. You had to wait in line to buy it for months.
Unlimited 3G + GPS + touchscreen browser was what made it explode. Broadband internet, in your pocket, with mapping capabilities. It felt like the possibilities were limitless, with new apps and functionality being discovered and released daily.
Yeah, it definitely felt like Intel was in a tough spot at the time. The article frames it as Intel being nervous about the narrow margins on ARM chips, but even if that narrative was false where Apple was concerned, Apple's competitors at the time would have assumed that if Intel entered that space, it gave Apple a good price. So just the perception of Intel selling a cheap chip to Apple could have hurt Intel's fat profit margins. Apple was selling phones for roughly the price Intel was selling chips for - so maybe Apple could justify the higher cost, who knows - but the margin Intel was perceived to be willing to live with would have dramatically affected its negotiations with every other customer as well.
The point here is that there is tons of money to be made in manufacturing those chips that Intel lost out on by ignoring mobile. That “Apple makes their own chips” does not alone mean Intel could not have profited from this — as indeed TSMC, ASML, and ARM are.
This analysis misses a major point. The nice thing about making mobile SoCs is that the yields are more forgiving. If you have 1000mm^2 of silicon that you're slicing into 10mm^2 dies, each defect (of which there will be many on early leading nodes) only costs you a 10mm^2 chip instead of a 50mm^2 one. And because you're making them in volume, you get many, many cycles to improve your process before you try to make bigger chips with it, i.e. CPUs/GPUs. And because Apple wanted the increased performance/battery life, they reserved capacity from TSMC on these leading nodes in advance, helping to finance their development while providing a guaranteed customer that would buy in volume. This gave TSMC a huge advantage over Intel that materialized around 2018.
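The yield argument above can be sketched numerically with the standard Poisson yield model, where a die's chance of being defect-free is exp(-defect_density * die_area). The defect density here is a made-up illustrative number, not real fab data, and the 1000mm^2 / 10mm^2 / 50mm^2 figures are taken from the comment:

```python
import math

# Poisson yield model: P(die is good) = exp(-D * A),
# with D = defect density, A = die area.
defect_density = 0.005   # defects per mm^2 (hypothetical early-node value)
wafer_area = 1000.0      # mm^2 of usable silicon, as in the comment

for die_area in (10.0, 50.0):
    candidates = wafer_area / die_area
    yield_per_die = math.exp(-defect_density * die_area)
    good_dies = candidates * yield_per_die
    print(f"{die_area:>4.0f} mm^2 dies: {candidates:.0f} candidates, "
          f"{yield_per_die:.1%} yield each, ~{good_dies:.1f} good dies")
```

With these (illustrative) numbers, the small dies come out around 95% yield each versus roughly 78% for the large ones, so the same wafer delivers noticeably more good silicon when cut into small mobile-sized dies - which is the forgiving-yield point the comment is making.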
That is actually a good point - the fab process would turn out to become another weak point of Intel (besides the missing ARM moat). There is no way Apple would watch TSMC's leadership year after year and still stick with Intel fabs.
Apple hates being beholden to one company. It’s bitten them many times in the past.
I suspect if anyone else could keep up with TSMC Apple would still dual source.
But between their lead and the benefits of being such a big customer they get first crack at the cutting edge stuff, single sourcing manufacturing probably makes the most sense.
And their interests are aligned, unlike their dependencies on MS/Adobe/Motorola/IBM/Intel.
Well, volume is an issue. Even a low-margin part with very high volume can help you afford to build leading-edge fabs and keep process technology leadership.
Apple can only do that now, because of the billions they have made on the iPhone, plain and simple.
And, more importantly, they were well geared to do that since the beginning of computing.
The reason there's no money in SoCs is that they're mostly made on the other side of the planet from their intended final users.
If Intel were really 'leading edge', they'd have made desk-side custom fabrication a thing in makerspaces already - such that I, as a computer user, could print 10 or 20 or X little chips for my own specific purposes, non-mass-market.
This would be a truly revolutionary adventure from a 'grandfather of computing' style company.
Alas, the x86 is, indeed, everywhere. Grandfather Intel has a massive garden.
If only the SoC battles were truly localized, a real computing revolution could happen (before it's too late).
You should have a locally-built device in your hand.