A dozen or so well-resourced tech titans in China are no doubt asking themselves this same question right now.
Of course, it takes quite some time for a fab to go from an idea to mass production. Even in China. Expect prices to drop 2-3 years from now when all the new capacity comes online?
At that point, it'll be the opposite problem as more capacity than demand will be available. These new fabs won't be able to pay for themselves. Every tic receives a tok.
According to my research, these machines can etch around 150 wafers per hour and each wafer can fit around 50 top-of-the-line GPUs. This means we can produce around 7500 AI chips per hour. Sell them for $1k a piece. That's $7.5 million per hour in revenue. Run the thing for 3 days and we recover costs.
I'm sure there's more involved but that sounds like a pretty good ROI to me.
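Spelling out that arithmetic (all inputs are the commenter's own assumptions, not industry data):

```python
# Back-of-the-envelope math from the comment above.
# Every figure here is an assumption taken from that comment.
wafers_per_hour = 150    # assumed litho throughput
gpus_per_wafer = 50      # assumed good dies per wafer
price_per_gpu = 1_000    # assumed selling price, USD

revenue_per_hour = wafers_per_hour * gpus_per_wafer * price_per_gpu
print(revenue_per_hour)        # 7500000 USD/hour

# "Run the thing for 3 days and we recover costs" implies a cost of roughly:
print(revenue_per_hour * 72)   # 540000000 USD over 3 days
```

Which is why the replies below focus on the hidden assumptions (yield, multiple passes, packaging) rather than the multiplication.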
Rent a warehouse in one of the non Han dominated areas of China, where you can use all you want from the city's drinking water supply and pump all your used chemicals into the nearby river. Make sure to totally automate your robotic production line so you don't need to employ any locals.
The catch is that even if you started today with plenty of money (billions of dollars!) and could hire the right experts as you need them (a big if!), there would still be a couple of years between today and producing 150 wafers per hour. So the question isn't what the math looks like today, it's what the math will look like in 2 years - and if you could answer that, why didn't you start two years ago so you could get the current prices?
A photolithography machine doesn't etch anything (well, some EUV machines do it as an unwanted side effect because of plasma generation), it just patterns some resist material. Etching is happening elsewhere. Also, keep in mind, you'll need to do multiple passes through a photolithography machine to pattern different steps of the process - it's not a single pass thing.
Especially when the plan is to just run them in a random rented commercial warehouse.
I drive by a large fab most days of the week. A few breweries I like are down the street from a few small boutique fabs. I got to play with some experimental fab equipment in college. These aren't just some quickly thrown together spaces in any random warehouse.
And it's also ignoring the wafer manufacturing process, and having the right supply chain to receive and handle these ultra-clean discs without introducing lots of gunk into your space.
Yeah good point, the clean room aspect of it is vital - when you're fabricating at the nano scale, a single speck of dust is a giant boulder ruining your lithography.
Keep in mind that every wafer makes multiple trips around the fab, and on each trip it visits multiple machines. Broadly, one trip lays down one layer, and you may need 80-100 layers (although I guess DRAM will be fewer). Each layer must be aligned to nanometer precision with previous layers, otherwise the wafer is junk.
Then as others have said, once you finish the wafer, you still need to slice it, test the dies, and then package them.
Plus all the other stuff....
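The alignment point compounds brutally. A minimal sketch with hypothetical numbers: if each of n layers independently succeeds, overall wafer yield is the per-layer yield raised to the nth power.

```python
# Illustrative only - real fab yield models are far more involved.
def wafer_yield(per_layer_yield: float, layers: int) -> float:
    """Fraction of wafers surviving all layers, assuming independence."""
    return per_layer_yield ** layers

# Even 99.9% per layer across 90 layers scraps ~9% of wafers:
print(round(wafer_yield(0.999, 90), 3))  # 0.914
```

Which is one reason nobody hits good yields on day one in a rented warehouse.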
You'll need billions in investment, not millions - good luck!
Would you really need ASML machines to do DDR5 RAM? Honest question, but I figured there was competition for the non-bleeding edge - perhaps naively so.
As someone who knows next to nothing about this space, why can China not build their own machines? Is ASML the only company making those machines? If so, why? Is it a matter of patents, or is the knowledge required for this so specialized only they've built it up?
They can - if they are willing to invest a lot of money over several years. The US got nuclear bombs in a few years during WWII with this thinking, and China (or anyone else) could too. This problem might be harder than a bomb, but the point remains: all it takes is a willingness to invest.
Of course, the problem is we don't see what would be missed by making this investment. If you put extra people on solving this problem, that means fewer people curing cancer or whatever. (China has a lot of people, but not unlimited.)
Designing such an advanced lithography machine is probably a Manhattan Project scale task. China is indeed trying to make progress and they will probably get there someday but for now ASML is the only company in the world that knows how to build these or anything remotely close.
Right, the most famous example is Zeiss which is the only company in the world that builds the high-precision mirrors needed for the most advanced ASML machines. I’m not sure if those are subject to export bans too, but even if they are, I think China could eventually figure out how to build them. It’s just a matter of a huge amount of R&D that would need to be done first.
Yes. ASML is the only company making these machines. And it's both: they own thousands of patents, and they're also the only ones with the institutional knowledge required to build them anyway.
While I’ve seen plenty of swollen and deformed phone batteries, I’ve never personally seen one that has burned. Obviously it’s happened in the past with certain phone/battery models, but I’d imagine it’s actually very rare nowadays?
On the other hand, I have seen cheap 18650s spontaneously start smoking even when they weren’t plugged in to anything…
The recalled aircraft include the latest A320neo model, some of which are basically brand new. Why would they be using flight computers from before 2002? Why is an old report from 2008, relating to a completely different aircraft type (A330), relevant to the A320 issue today?
> Why would they be using flight computers from before 2002?
Because getting a new one certified is extremely expensive. And designing an aircraft with a new type certificate is unpopular with the airlines. Since pilots are locked into a single type at a time, a mixed fleet is less efficient.
Having a pilot switch type is very expensive, in the 50-100k per pilot range. And it comes with operational restrictions: you can't pair a newly trained (on type) captain with a newly trained first officer, so you need to manage all of this.
I think you're confusing a type certificate (certifying the airworthiness of the aircraft type) with a type rating, which certifies the pilot is qualified to operate that type.
Significant internal hardware changes might indeed require re-certification, but it generally wouldn't mean that pilots need to re-qualify or get a new type rating.
No I meant designing a new aircraft with a new type certificate instead of creating the A320neo generation on the same type certificate. The parent comment wondered why Airbus would keep the old computers around, I tried to explain why they keep a lot of things the same and only incrementally add variants. Adding a variant allows them to be flown with the same type rating or with only differences training (that's what EASA calls it, not sure about the US term) which is much less costly.
Asking from ignorance: shouldn't the computer design be an implementation detail to the captain, with the interface used by the pilots staying the same for that type of airplane? I understand physical changes in the design need retraining, but the computer?
Ideally you would not change the computer at all so your type certificate doesn't change. If you have to (or for commercial reasons really want to) make a change you would try very hard to keep that the same type certificate or at most a variant of the same type certificate. If you can do that then it will be flown with the same type rating and you avoid all the crew training cost issues.
But to do that you'll still have to prove that the changes don't change any of the aircraft characteristics. And that's not just the normal handling but also any failure modes. Which is an expensive thing to do, so Airbus would normally not do this unless there is a strong reason to do it.
The crew is also trained on a lot of knowledge about the systems behind the interface, so they can figure out what might be wrong in case of problems. That doesn't include the software architecture itself, but it does include a lot of information on how redundancy between the systems works and what happens when one system's output is invalid. For example, how the failover logic works in case of a flight control computer failure, or how it responds to losing certain inputs. And how that affects automation capabilities, like: no autoland when X fails, no autopilot and degradation to alternate control law when Y fails, further degradation if X and Z fail at the same time. Sometimes also per "side", as not all computers are connected to all sensors.
The computer change can't change any of that without requiring retraining.
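To make the shape of that training knowledge concrete, here's a toy sketch - entirely hypothetical, not actual Airbus logic, with X/Y/Z standing in for the unnamed computers above - of a failure-to-capability mapping of the kind crews memorize:

```python
# Toy illustration only: maps a set of failed computers ("X", "Y", "Z",
# all hypothetical) to the automation capabilities that remain.
def capabilities(failed: set[str]) -> set[str]:
    caps = {"autoland", "autopilot", "normal_law"}
    if "X" in failed:
        caps.discard("autoland")       # no autoland when X fails
    if "Y" in failed:
        caps.discard("autopilot")      # no autopilot...
        caps.discard("normal_law")     # ...and degrade to alternate law
    if {"X", "Z"} <= failed:
        caps.clear()                   # further degradation if X and Z fail
    return caps

print(sorted(capabilities({"X"})))  # ['autopilot', 'normal_law']
```

If a new computer changed any row of a table like this, the type-rating math above stops working.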
1. I don't think adding robustness necessarily requires changing how systems are presented to the flight crew.
2. Bigger changes than this are made all the time under the same type certificate. Many planes went from steam gauges to glass cockpits. A320 added a new fuel tank with transfer valves and transfer logic and new failure modes, and has completely changed control law over the type. etc.
Since the new versions of the same ADIRU have EDAC, have been used on planes since 2002, and the EDAC variant has been swapped in whenever an old one was returned for repairs, I don't think this is the reason. I think the reason is that they had 3 ADIRUs, and even if one got wonky, the algorithm on the ELAC flight computer would have to make the correct decision. It did not make the correct decision. The ELAC is the one being updated in this case.
> Why would they be using flight computers from before 2002?
Why would you assume they're not? I don't know about aircraft specifically, but there's plenty of hardware that uses components older than that. Microchip still makes 8051 clones 45 years after the 8051 was released.
From a pure safety point of view, it's easier to deal with older, but well-understood products, only updating them if it's an actual safety issue. The alternative is having to deal with many generations of tech, as well as permutations with other components, that could get infinitely complicated. On top of that, it's extremely time consuming and expensive to certify new components.
There's a reason the airlines and manufacturers hem and haw about new models until the economics overwhelmingly make it worthwhile, and even then it can still be a shitshow. The MCAS issue is case in point of how introducing new tech can cause unexpected issues (made worse by Boeing's internal culture).
The 787 Dreamliner is also a good example of how hard it is. By all accounts it is a success, but it had some serious teething problems and still has some concerns about the long-term wear and tear of the composite materials (though a lot of its problems weren't necessarily the application of new tech, but Boeing's simultaneous desire to overcomplicate the manufacturing pipeline via outsourcing and spreading out manufacturing).
The linked report details why the spike happened in the first place on the ADIRU (produced by Northrop Grumman). The recalled controller is the ELAC, which comes from Thales. The problem chain was that despite the ADIRU spiking, the ELAC should not have reacted the way it did. So they are fixing it in the ELAC.
For the same reasons that Honeywell is building new devices with AMD 29050 CPUs today[1] - by sticking with the same CPU ISA, they can avoid recertifying a portion of the software stack.
[1] Honeywell actually bought a full license from AMD and operates a fabless team that ensures they have stock and, if necessary, updates the chip.
Because the problem isn't just this. It's that the flight controller did not properly decide what to do when the data spiked because of this issue as well.
> “Hydro-energy exist, but it's fairly built out so stable non-fossil power needs to be nuclear, or wind/sun + storage”
Interconnectors also exist (and more are planned), which means, for example, that Norway can buy wind energy from the UK when it’s cheap and abundant, in preference to using stored energy from their hydro lakes.
That way they effectively get more out of existing hydro lakes, which in Norway is already a very significant storage capacity.
There aren't going to be any more interconnectors built from Norway anytime soon.
Electricity became a lot more expensive in Norway after building several interconnectors to the UK and mainland Europe - importing high prices from the failed energy politics of the UK and Germany, which both have among the most expensive electricity in the world.
This has been a huge debate, and the general consensus seems to be that joining ACER and building interconnectors to mainland Europe was a big mistake.
About 90% of Norway's 40 GW energy production (mostly hydro) is state owned. By exporting energy and thereby getting other countries to pay, the money literally goes to the Norwegian people - not directly into their bank accounts, but into government budgets, which means they later pay less in taxes.
Norwegian power generation is sized for the domestic market, so tax income from selling excess is marginal at best. The power bills however have indeed crept quite a way up. This was especially noticeable in the first winter of the Russian invasion, when the Nordics had to subsidize the bill that suddenly dropped on short-sighted German energy policy.
Germany benefits a lot from the open market. If countries introduced a rule to export only their excess energy, Germany would be cooked, because prices would skyrocket for them - not 2x or 3x, but way more. Luckily for them, they can make strategic mistakes and get away with it by making others pay for them.
"Excess energy" is not a static value. It dynamically depends on price point. Which depends on demand and supply which both depend on price. That dynamic (and circular) interplay is at the core of why economics as a discipline exists in the first place.
Example: the Netherlands had one of the biggest gas reserves around. It's 2/3 or 3/4 empty now, and extraction has stopped or is stopping because it causes earthquakes. But the income from exporting the excess gas was used for socialist policies. Now that that income is gone, and expenses for gas have gone way up (also due to reliance on cheap Russian gas), people are feeling it in their bank accounts directly, and the socialist policies are being dismantled one by one.
Only if you compare apples with oranges. Scandinavian taxes are considered high, but they include things like child care, health care, and public transportation infrastructure. For people participating in all of those services, the take-home pay (as a percentage of gross income) ends up no less than in presumably low-tax countries.
Do they actually pay less in taxes because of this? I’m not arguing. That is great and I would appreciate if you could provide a source for me to read.
We do not, but there's a social consensus about the value people get from this taxation level. However, the excess power price, which is not a domestic supply/demand outcome, is a lot harder to sell.
There are government subsidies for consumers - either a fixed price or a price cap on electricity in Norway - as a political response to the increase. This would be harder to pull off if most of the profits from export didn't land in the public sector (either as taxes or via government-owned energy companies).
Electricity prices don't go up because you have access to expensive power; they go up because you don't have enough cheap power, so you have to buy the expensive power.
It seems like Norway just wouldn't have power if they weren't connected to other sources, not that they'd have more cheap power.
Electricity prices go up when you have access to customers who are willing to pay more. If grid connections to other regions are limited, people in regions with a lot of cheap generation (such as Norway) pay low prices. But if you add grid connections without increasing generation capacity, prices start equalizing between regions, as every power company tries to sell to the highest bidder.
Norway could power itself fully with domestic hydro. But it chose not to, as the power companies make more money by importing foreign power when it's cheap and exporting hydro when it's not.
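The mechanism here is marginal pricing. A toy uniform-price auction (illustrative numbers, not real Nordic market data) shows why adding export demand raises the price locals pay even though the cheap hydro still exists:

```python
# Toy merit-order dispatch: the clearing price is the most expensive
# offer needed to meet demand; everyone pays that price.
def clearing_price(offers, demand):
    """offers: list of (price_per_MWh, capacity_MWh); demand in MWh."""
    served = 0
    for price, capacity in sorted(offers):  # cheapest first
        served += capacity
        if served >= demand:
            return price
    raise ValueError("not enough supply")

# Hypothetical offers: cheap hydro, mid-price wind, expensive gas.
offers = [(10, 100), (30, 50), (80, 50)]
print(clearing_price(offers, demand=90))   # 10 - domestic demand only
print(clearing_price(offers, demand=130))  # 30 - plus 40 MWh of exports
```

The interconnector doesn't make hydro more expensive to run; it just lets the marginal unit be set by someone willing to pay more.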
Washington state has the same problem to a lesser degree. California pays more for cheap Washington hydro, which causes the costs to go up for us, although I guess not as drastic as Norway since our electricity is still considered cheap.
> "Norway could power itself fully with domestic hydro."
We have events where we cannot cover the load from domestic production - typically in winter, when water freezes.
> It seems like Norway just wouldn't have power if they weren't connected to other sources, not that they'd have more cheap power.
This is not the case as Norway and neighbouring Sweden have plentiful hydro. It's especially valuable as it can be regulated to complement wind/solar fluctuations, essentially replacing storage.
Obviously the presumably large amount of money spent to interconnect could have been spent adding local production and storage. It would be a waste of money if there was a reasonable path to local energy independence that was neglected.
A significant proportion of Norway's domestic energy production is hydro, which comes with its own "built-in" storage up to the capacity of the reservoirs, so Norway already has very significant storage capacity.
Estimates suggest a storage capacity of 87 TWh in hydro reservoirs, compared to production in recent years of between 146 and 157 TWh, and a theoretical production capacity of ~309 TWh (I don't know the basis for that - I'd imagine peak production at all plants, but I doubt that could ever happen in reality, so the 146-157 TWh based on real production is the better figure...)
Compare that to Norwegian electricity consumption of 124 TWh in 2020.
Of course, since so much of Norway's total electricity production is hydro, large storage is necessary, as the hydro supply is highly seasonal - driven by things like the amount of melting snow in the mountains in spring.
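Putting the figures from this comment side by side (TWh values taken from the text above; the months figure is just a rough ratio):

```python
# Numbers quoted in the comment above, in TWh.
storage_twh = 87              # estimated hydro reservoir storage
consumption_twh = 124         # Norwegian consumption in 2020
production_range_twh = (146, 157)  # recent annual production

# Reservoirs hold on the order of 8 months of national consumption:
print(round(storage_twh / consumption_twh * 12, 1))  # 8.4
```

That ratio is why Norwegian hydro is so attractive as de facto storage for neighbouring wind-heavy grids.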
They have too much cheap power, so they decided to sell it. But the fact that they have a buyer who pays more than locals means they no longer have to sell to locals at a low price.
Though it being state owned makes it weird - you'd think the state would keep rates lower for the people.
It’s basic supply and demand. By linking to other grids, you increase demand, since there are now more customers for your supply. What they have (comparatively) less of is supply, since the supply in those markets is shite compared to what Sweden and Norway have for their local demand.
Prices went up in Norway because the UK had even higher prices than Norway.
Having these interconnections is good for producers in Norway and consumers in the UK, but very bad for consumers in Norway.
> Importing high prices from the failed energy politics of UK
Remember that it's the market price, not the consumer price.
The spot price for UK electricity is still quite competitive in the winter, just not reliable.
The other thing to note is that peak demand in the UK comes at a different time than peak demand further north, which means there is benefit to both countries from this.
> Importing high prices from the failed energy politics of UK and Germany which both have among the most expensive electricity in the world.
Look at it the other way: producers of electricity previously could only sell for cheap at home, and now they can export and make more money. That's good!
Even Southern England cannot get enough wind energy from Scotland to fully utilise wind farms because transmission capacity is insufficient. I would imagine a transmission line to Norway will be even more expensive than to England.
But they are building such a link, because it'll make/save more money than it costs.
Imagine how many doom and gloom headlines we'd have avoided if these two massive construction projects could have been sync'd up perfectly or if we had a national press that could do anything other than try to scare people with big numbers.
This must have been around the same time (1993 or so) when many organisations were upgrading old coax 10Base2 network equipment to modern 10BaseT (and eventually 100BaseT). My friends and I, strongly motivated by the incentive of being able to play multiplayer DOOM, managed to source some free ISA 10Base2 Ethernet cards, coax cable, and T-connectors from someone's dad. The only thing we were missing was the terminators, which you could make yourself by cutting a coax cable and soldering a resistor between the conductors... a fun introduction to LAN technology for us!
The average strike price for offshore wind in AR6 came in at £59.90/MWh. That's pretty cheap, and much cheaper than any new nuclear. Hinkley Point C's strike price is £92.50/MWh. (note: strike prices are always quoted based on 2012 currency, and get adjusted for inflation)
You can't really compare strike prices to spot prices on the wholesale market precisely because there's so much supply under CfD contracts, which distorts the wholesale market. When supply is abundant, the wholesale price plummets and even goes negative, yet suppliers still want to generate because they get the CfD price. When supply is constrained (eg: cold snaps in winter with little wind), the spot price can surge to £1000/MWh.
In 2024 money, offshore was £102/MWh and onshore £89/MWh. AR7 estimates are >10% higher. Those prices were not high enough for Hornsea 4, which cancelled the contract (with a big write-down for the entire project) after being awarded it.
Yes, like I said, UK CfD strike prices (both nuclear and wind) are always quoted in 2012 prices.
But even adjusting for inflation, offshore wind's £59.90 is a fraction of the retail price that UK consumers and most businesses pay for electricity. There's plenty of margin left for the middlemen (regulator, grid operator, distribution network operator, electricity retailer, etc).
... and Hinkley Point C's £92.50 is £133.79 today, and could be £160+ by the time it actually starts generating in (maybe?) 2031.
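The 2012-money adjustment is just a multiplication. A small sketch, where the inflation factor is inferred from this thread's own numbers (£92.50 → £133.79), not an official CPI figure:

```python
# Adjust a CfD strike price quoted in 2012 money to today's money.
def adjust(strike_2012: float, inflation_factor: float) -> float:
    return strike_2012 * inflation_factor

# Factor implied by the comment above, not an official statistic:
FACTOR_2012_TO_NOW = 133.79 / 92.50   # ~1.446

print(round(adjust(92.50, FACTOR_2012_TO_NOW), 2))  # 133.79 (Hinkley Point C)
print(round(adjust(59.90, FACTOR_2012_TO_NOW), 2))  # ~86.64 (AR6 offshore wind)
```

Even after that adjustment, the AR6 offshore wind price stays well under Hinkley's.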
> ”AWS has so many services at this point and it feels like so many of them overlap too.”
Yep. I’ve also always found it frustrating how so many of them have names like “Snowball”, “Kinesis”, “Beanstalk”, “Fargate”, “Aurora”, etc, which don’t give you any real clue what they do.
Much of the French high speed rail network (LGV Atlantique, LGV Rhône-Alpes, LGV Nord, LGV Méditerranée, LGV Est, etc) was inaugurated in the 1990s and 2000s.
This alone would have been responsible for a big increase in French rail traffic.
UK rail infrastructure certainly received improvements during this era too, but besides HS1 it was mostly just renewing and upgrading existing tracks. Nothing like the thousands of km of new high speed rail that was built in France!
If you just look at France it may be tempting to conclude it must be the TGV, but that's the same methodological problem: rail traffic grew by a comparable amount across most of Europe in the same period.
Also TGV is nice, but it doesn't explain why TER and Intercité (the two slower-speed regional networks) also experienced an influx of passengers during the period.
It's not TGV, it's not privatization and it is not Wiedervereinigung, it's a broader trend.
> "Also TGV is nice, but it doesn't explain why TER and Intercité (the two slower-speed regional networks) also experienced an influx of passengers during the period."
I don't entirely disagree with you, but rail lines don't function in isolation. If you introduce a new high speed main line, there is a network effect: connecting services will also see a boost in traffic from people travelling on other lines to reach the new one.