Hacker News | datadeft's comments

Is this suffering from the same problems as Redis when trying to horizontally scale?


I guess yes.


The biggest problem I have with using AI for software engineering is that it is absolutely amazing at generating the skeleton of your code, boilerplate really, but it sucks at anything creative. I have tried the reasoning models as well, but all of them give you subpar solutions when it comes to handling a creative challenge.

For example: what would be the best strategy to download 1000s of URLs using async in Rust? It gives you OK solutions, but the final solution came from the Rust forum (the answer was written a year ago), which I assume made its way into the model.
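The strategy usually boils down to bounded concurrency: fire off the downloads concurrently but cap how many are in flight at once. Here is a hedged sketch in Python's asyncio for brevity (a Rust version would typically use tokio and reqwest); `fetch` is a stub standing in for a real HTTP call:

```python
import asyncio

# Stub standing in for a real HTTP fetch (hypothetical; a real
# implementation would use aiohttp in Python or reqwest in Rust).
async def fetch(url: str) -> int:
    await asyncio.sleep(0)  # stands in for network I/O
    return len(url)

async def fetch_all(urls, limit=50):
    # A semaphore caps how many downloads run at once, which is
    # the crux of the "download 1000s of URLs" strategy.
    sem = asyncio.Semaphore(limit)

    async def bounded(url):
        async with sem:
            return url, await fetch(url)

    # gather preserves input order in its results.
    return await asyncio.gather(*(bounded(u) for u in urls))

urls = [f"https://example.com/{i}" for i in range(1000)]
results = asyncio.run(fetch_all(urls))
print(len(results))  # 1000
```

The semaphore is the part the one-shot model answers tend to miss: without it, 1000s of simultaneous connections exhaust sockets or trip rate limits.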

There is also the verbosity problem. Claude, without the concise flag on, generates roughly 10x the required amount of code to solve a problem.

Maybe I am prompting incorrectly and somehow I could get the right answers from these models but at this stage I use these as a boilerplate generator and the actual creative problem solving remains on the human side.


Personally I've found that you need to define the strategy yourself, or in a separate prompt, and then use a chain-of-thought approach to get to a good solution. Using the example you gave:

  Hey Chat,
  Write me some basic rust code to download a url. I'd like to pass the url as a string argument to the file
Then test it and expand:

  Hey Chat,
  I'd like to pass a list of urls to this script and fetch them one by one. Can you update the code to accept a list of urls from a file?

Test and expand, and offer some words of encouragement:

  Great work chat, you're really in the zone today!

  The downloads are taking a bit too long, can you change the code so the downloads are asynchronous. Use the native/library/some-other-pattern for the async parts.

Test and expand...


Whew, that's a lot to type out, and you have to provide words of encouragement? Wouldn't it make more sense to do a simple search engine query for an HTTP library, then write some code yourself and provide that for context when doing more complicated things like async?

I really fail to see the usefulness in typing out long winded prompts then waiting for information to stream in. And repeat...


A few options.

1. Use TTS and have an LLM clean it up.

2. Use a collection of prompt templates.


I meant VTT, not TTS


I'm going the exact opposite way. I provide all important details in the prompt and when I see that the LLM understood something wrong, I start over and add the needed information to the prompt. So the LLM either gets it on the first prompt, or I write the code myself. When I get the "Yes, you are right ..." or "now I see..." crap, I throw everything away, because I know that the LLM will only find shit "solutions".


This is actually a great approach. Essentially you're using time travel to prevent misunderstandings, which prevents the context from getting clogged up with garbage.


This is the best approach and avoids long context windows that get the LLM confused


I have heard a few times that "being nice" to LLMs sometimes improves their output quality. I find this hard to believe, but happy to hear your experience.

Examples include things like, referring to LLM nicely ("my dear"), saying "please" and asking nicely, or thanking.

Do these actually work?


Well, consider its training data. I could easily see questions on sites like Stack Overflow having better-quality answers when the original question is asked nicely. I'm not sure if it's a real effect or not, but I could see how it could be. A rudely asked question will draw a lot of flame-war responses.


I'm not sure encouragement itself is the performance enhancer; it's more that you're communicating that the model has the right "vibe" of what your end goal is.


I used to do the "hey chat" thing all the time, out of habit and from when I thought the language model was something more like AI in a movie than what it is. I am sure it makes no difference beyond the user acting differently, and possibly asking better questions if they think they are talking to a person. Now, to me, it looks completely ridiculous.


I find it really bad for bootstrapping projects such as picking dependencies from rapidly evolving ecosystems or understanding the more esoteric constraints like sqlite's concurrency model.

I'd argue you need to bootstrap and configure your project then allow only narrow access and problems to the llm to write code for - individual functions where your prompt includes the signature, individual tests, etc. Anything else and you really need to invest time in the code review lest they re-configure some of your code in a drastic way.

LLMs are useful but they do not replace procedure.


I agree completely with all you said however Claude solved a problem I had recently in a pretty surprising way.

So I’m not very experienced with Docker and can just about make a Docker Compose file.

I wanted to setup cron as a container in order to run something on a volume shared with another container.

I googled “docker compose cron” and must have found a dozen cron images. I set one up and it worked great on X86 and then failed on ARM because the image didn’t have an ARM build. This is a recurring theme with Docker and ARM but not relevant here I guess.

Anyway, after going through those dozen or so images all of which don’t work on ARM I gave up and sent the Compose file to Claude and asked it to suggest something.

It suggested simply using the Alpine base image and adding an entry to its crontab, and it works perfectly fine.
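For readers wondering what that looks like, here is a hedged sketch of such a Compose service; the image tag, schedule, volume name, and logged command are placeholders, not the parent's actual file:

```yaml
services:
  cron:
    image: alpine:3.20   # multi-arch base, so it runs on x86 and ARM alike
    volumes:
      - shared-data:/data
    # BusyBox crond ships with the Alpine base image, so no third-party
    # cron image is needed: write the schedule, then run crond in the foreground.
    command: >
      sh -c "echo '*/5 * * * * date >> /data/heartbeat.log' > /etc/crontabs/root
             && crond -f -l 2"

volumes:
  shared-data:
```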

This may well be a skill issue, but it had never occurred to me that cron is still available like that.

Three pages of Google results and not a single result anywhere suggesting I should just do it that way.

Of course this is also partly because Google search is mostly shit these days.


Maybe you would have figured it out if you thought a bit more deeply about what you wanted to achieve.

You want to schedule things. What is the basic tool we use for scheduling on Linux? Cron. Do you need to install it separately? No, it usually comes with most Linux images. What is your container, functionally speaking? A working Linux system. So you can run scripts on it. Lots of these scripts run binaries that come with Linux. Is there a cron binary available? Try using that.

Of course, hindsight is 20/20 but breaking objectives down to their basic core can be helpful.


With respect, the core issue here is that you lacked a basic understanding of Linux, and this is precisely the problem that many people, including myself, have with LLMs. They are powerful and useful tools, but if you don't understand the fundamentals of what you're trying to accomplish, you're not going to have any idea whether you're going about that task in the correct manner, let alone an optimal one.


Honestly we are headed towards a disturbing height of inefficiency in software. Look at software today, 1000x less efficient than what we had in the 90s. Do businesses care? No, they focus on value. The average user is too stupid to care, even though all their RAM is being sucked up and their computer feels like shit.

The only thing that's keeping us from that hell is the "correct" part. The code is not going to be properly tested or consistent, making it impractical for anything substantial right now.


For Claude, set up a custom prompt which should have whatever you want + this:

"IMPORTANT: Do not overkill. Do not get distracted. Stay focused on the objective."


As I understand it, 'reasoning' is a very misleading term. As far as I can tell, AI reasoning is a step to evaluate the chosen probabilities. So maybe you get fewer hallucinations, but it still doesn't make AI smart.


Yeah, "reasoning" just tells the AI to take an extra planning step.

In my experience, before "reasoning" became an option, if you ask it a question that takes a decent amount of thinking to solve, but also tell the model "Just give me the answer", you're FAR more likely to get an incorrect answer.

So "reasoning" just tells the model to first come up with a plan to solve a problem before actually solving it. It generates its own context for coming up with a more complete solution.

"Planning" would be a more accurate term for what LLMs are doing.


What I also notice is that they very easily get stuck on a specific approach to solving a problem. One prompt that has been amazing for this is:

> Act as if you're an outside observer to this chat so far.

This really helps in a lot of these cases.


Like, dropping this in the middle of the conversation to force the model out of a "local minimum"? Or restarting the chat with that prompt? I'm curious how you use it to make it more effective.


Yeah, exactly: forcing it out of a "local minimum" is a neat way to describe it. I drop this in the middle of the conversation sometimes. Works wonders. You just have to tell it it's stuck in a loop and it will suddenly pretend (?) to be self-aware.


That’s a cool tip; I usually just give up and start a new chat.


I find them very good for debugging also


The amount of issues I have seen in the last ~20 years caused by "off by one" type of errors in Excel is insane.

Few examples:

- incorrect schedule for the electricity grid for an entire country

- incorrect assessment of airport use for an airline (causing a few million USD in lost revenue)

- incorrect financial position assessment for a mine (resulting in an incorrect decision to optimize the wrong business process; not sure how much they lost)

Making illegal states unrepresentable is a concept that benefits programming languages and business processes alike.
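As a toy illustration of that principle (the names and ranges here are hypothetical, and a statically typed language would enforce this even more strongly): a schedule slot that simply cannot exist in the inverted or out-of-range states behind classic off-by-one errors.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Slot:
    """A half-open [start, end) interval of hours in a day."""
    start: int
    end: int

    def __post_init__(self):
        # Inverted, empty, or out-of-range slots cannot be constructed at all.
        if not (0 <= self.start < self.end <= 24):
            raise ValueError(f"invalid slot: [{self.start}, {self.end})")

    def hours(self) -> int:
        return self.end - self.start

assert Slot(9, 17).hours() == 8   # a normal working day
```

Every function that receives a `Slot` can then skip revalidating it, which is the whole payoff of the technique.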


You’re only looking at one side of it. Consider how much value Excel adds, and how many use cases it enables. It’s incalculable. It’s always easy to poke at Excel’s issues, but errors are almost always a skill issue.

From a financial perspective, there are many ways you could enforce checks to ensure the model is balanced; it just takes time. Data entry can be an issue, but you can automate that too. The Deloitte report says the health department would benefit from adopting a gigantic ERP system, but you could get 90% of the benefit by employing a couple of people who really know what they’re doing.

You could say that Excel shouldn’t allow those things to happen, but the flexibility is precisely what makes it so valuable.

The people writing the reports are consultants. Consultants recommend things that benefit consultants. In their case, a multi-year process and tens of millions of dollars spent installing ERP software is a windfall for the vendors, not the companies.


Absolutely, kudos to them for following KISS, despite the legions of people who have probably told them they’re doing it wrong.


Yep. I was going to say the same thing. Installing some $10M+ software costs the company more than just the install and ongoing licensing costs. You have to have an entire new IT team to manage it and staff to understand and so on. There are probably just as many chances of having a big issue.


That assumes that Better Excel would somehow not look like Excel and would lose its benefits of being easy. My assumption is that Better Excel will still look like Excel, in the same way that Better iOS will also look like iOS.


> but errors are almost always a skill issue

Sure, but to compare it to tech, this is why we build tooling like linters and typed languages, so that we don’t lose millions of dollars because we needed the average developer to implement a feature. I imagine in health care it’s even more skewed, because a SW eng can see the compiler output and make changes, but the person entering the data isn’t (for example) the nurse that’s using the needles to be accounted for.


There are indeed tools like this available for Excel. And there are also whole practices within the big accountancy firms, and independent firms which focus on providing "audit" of Excel files like this, which are used commonly where there are financial firms involved in a project with big money at stake (lenders, financial investors, etc). But in my experience these tools and services aren't often used for the spreadsheets which are used for day to day corporate-type management (I would count government as part of that).


To say nothing of the many crises that were avoided because somebody was able to pull up a local Excel file despite a central service being down, or go through their historical files and sniff out a discrepancy. Workflows like that are often impossible, or require coordinating with a vendor in another time zone, or a bunch of slow cross-team communication and ticket submissions, under the sort of minimum-effort "we'll run a DB in the cloud, slap on a JS front end built with a bunch of questionable but flashy libraries, and ignore all the edge cases because I'll be working somewhere else in 2yr" solutions that the median software developer hive mind will implement if left to its own devices.


Do you have evidence that _any_ other software would not be prone to these kinds of errors, taking into account that at the scale of dozens of billions of dollars, the numbers you mentioned seem like rounding errors (except the first one, which would deserve a reference)?


Don't forget Reinhart-Rogoff, which ties into the other thread on academic fraud/nonreproduction, and arguably did huge amounts of economic damage: https://www.bbc.co.uk/news/magazine-22223190


IMO unit tests are roughly the software equivalent of double-entry bookkeeping.
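A toy version of the analogy (hypothetical helper, not a real accounting API): the double-entry invariant is a cross-check you can assert mechanically, exactly like a unit test's assertion.

```python
def balanced(entries):
    """Double-entry invariant: debits (negative) and credits (positive)
    across a transaction must sum to zero."""
    return sum(amount for _, amount in entries) == 0

txn = [("cash", -100), ("inventory", +100)]
assert balanced(txn)                      # the books balance
assert not balanced(txn + [("typo", 1)])  # an off-by-one is caught immediately
```

In both cases you record the same fact twice, by two routes, and any disagreement flags an error.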


This is a very good analogy. I will steal it.


>The amount of issues I have seen in the last ~20 years caused by "off by one" type of errors in Excel is insane.

Generally those aren't the fault of Excel, but the fault of someone doing something dumb in Excel. People do dumb stuff in every program, and it's nearly impossible to prevent it.


Imagine how many errors occurred before we had digital spreadsheet technology.

That said, I’ve always thought there was room for a product that sits somewhere between Excel and full-blown custom software, one that provides some of the controls we need while still being buildable by someone with low/average technical skills.


FileMaker seems to cover this (admittedly my interactions with it were in high school, which was last millennium); it's a database app, but there are a lot of "make your own UI" parts in it, so you can create custom UIs (and wizards) for your use cases.

I suppose MS Access offers this too, although FM feels more user-friendly, MS Access felt like Internet Explorer 4, where an error dialog would pop-up for every little JavaScript error (disclaimer: this opinion is from 25 years ago).


No you're right on the money in my opinion. I have very fond memories of spinning up apps in MS Access that were quick and dirty but extremely powerful, and able to be maintained/modified by folks that were not professionally trained software engineers. I think there's a missing area in the market for something exactly like that again.


The hive mind really hated (hates?) MS Access and I don't know why.

I built a mini ERP for my father's company when I was 16. Started out as an excel spreadsheet, but then added inventory, accounting, and printing contracts and reports.

He kept using it until retirement and was very happy with it. I could learn & apply SQL. Win-win.


I've done that over the last several years, building an order-tracking/accounting system for myself in Excel. (Inventory and fulfillment is handled by commercial software.) I'm mulling over whether to move to a "real" database.

I'm on a Mac so can't move to Access. I've thought about FileMaker, but am considering Panorama X because unlike FileMaker it reportedly allows undoing almost anything, while FileMaker is like the typical database in a record commit not being undoable.

(Yes, I know that not allowing such is good practice, which is why I am not a database admin.)

I've heard good things about Panorama X, and it has a spreadsheet-like UI. However, I've used Excel enough to know that I haven't tapped more than a small fraction of its ability, especially things like Power Query. As much as I loathe VBA, what if the cost of moving to a "real" database isn't the up-front cost or conversion time, but the longer-term inflexibility of Panorama (and, pretty much, anything else in my price range) compared to the beast that is Excel?


These days I default to Google Sheets for almost anything.

App Script is quite powerful and easier to write and understand than VBA imho.


FileMaker is great. We created a monster at our university to keep track of research grants and it worked flawlessly for almost 20 years.

Until a consultant replaced it with an 'enterprise' solution that cost 10000x more to run and maintain.

Sometimes a well-designed system, however simple it might be, is all that is required.


I've not used it, but I thought Airtable occupied this spot. It seems to have a few open-source clones if you want to run it locally. I don't know if they are any good.


Microsoft also has Power BI which occupies this "database which looks like a spreadsheet" space. But it's not well known.


> Apple today announced M3 Ultra, the highest-performing chip it has ever created

I thought it was only a few weeks ago that the M4 Max came out.


It is one of the tools I use as well, and I pay for it. It makes life so much easier. At work we have to test a lot of country-dependent settings, and with Tailscale and Mullvad it is very simple. I can also access my home network easily.


> I can also access my home network easily.

expand plz


So I have this small form-factor box (it is hardly bigger than a few mobile phones stacked on each other) and I run Tailscale on it as well (also as an exit node). I installed Tailscale on my MBP and mobile phones too. Now it forms a network where, regardless of where I am in the world, I can choose my home server as an exit node and also access the Samba shares and the other devices (RPi 3 and RPi 4).


Imagine a network of tailscale droplets that allow for agents to exit via a tailscale node in their locale and do work - then funnel data back to master wherever...

(Like, if I wanted to crawl an area in Germany but needed an exit tracert that originated there: a Tailscale droplet that could be connected to, perform [object], and openly pipe data back to master?)


The evolution theory of origin of life just got a kick in the wrong spot?

It is kind of weird how defensive that crew is. It is very much possible that life on Earth has extraterrestrial origins. I do not see why somebody would try to discard this idea.


Evolution is a description of what we see happening in animals over time. That's all. This is about the formation of life.


"Historically, ideas on the origins of life have been mingled with evolutionary explanations. Darwin avoided discussing the origin of the very first species in public although he acknowledged the possibility that life originated by natural causes. Some of his followers adopted this materialistic position and advocated some sort of spontaneous generation in the distant past. Nevertheless, Pasteur’s experiments were a major obstacle for scientific acceptance of the sudden emergence of life. The scientific study of the origin of life, established in the 1920s, required abandoning the idea of a unique chance event and considering a view of life emerging as the result of a long evolutionary process."

https://evolution-outreach.biomedcentral.com/articles/10.100...


That's nice, but there's still no reason to use the same word for things that reproduce and for the origins of the building blocks of things that reproduce. Might as well say the grocery store and the celery you bought are both stew. Or that shopping is cooking.


I think “how did non-living matter become alive” and “how do populations of organisms change over time” are two very different questions.


I don't think you understand what "very much possible" means. It isn't a synonym for "I very much believe."

We have actual evidence of evolution as a real and active process and can (and have) studied and mapped that process across species and across time - including in humans - and we find absolutely no evidence of nor the necessity for extraterrestrial influence anywhere.

And even if some flavor of it is assumed true for the sake of argument, that still wouldn't somehow negate evolution. It's entirely possible for life to have an extraterrestrial origin and to have evolved on Earth after that origin, having first evolved somewhere else.


I am not saying it would negate evolution. I am saying the origin of life could be extraterrestrial, with life then going through an evolutionary process.


This finding invalidates the idea that Bennu was seeded with molecules by biological life: biological life would have created chiral molecules, but the mix on Bennu is racemic, which suggests that the molecules were created by run of the mill geological processes and simply wouldn't require extraterrestrial seeding.

Tldr the finding is that abiogenesis may be easier to get to than previously thought.


> Tldr the finding is that abiogenesis may be easier to get to than previously thought.

It doesn't really mean that, though. We already knew synthesis of simple amino acids was pretty easy. Urey-Miller did that decades ago. Making the easy part of a multistep process easier doesn't make the whole process much easier.

IMO, the rate-limiting step for OoL is later, when by some means the enormous complexity barrier is reached between abiotic stuff like this and the simplest self-replicating system capable of evolution. Of the latter, the simplest we know (the most stripped down cell) still contains billions of atoms.


The Urey-Miller experiments did it, but not without investigator interference to separate the product from the toxic byproduct. The experiments have aged poorly and offer no solutions to abiogenesis.


How separated were these asteroidal amino acids from toxic byproducts?


I wouldn't discard the idea, but OTOH, life *had to happen somewhere first*. That first clearly happened, and without extraterrestrial seeding. So that life either evolved, or ... because of the chemical makeup of the universe ... life is inevitable given the necessary conditions. This latter is a simpler and sufficient model.


Uh, no? And I think you’re confusing abiogenesis with the process of evolution - they are different processes.


> An electric bike or scooter can be brought indoors to charge.

Just look at this:

https://www.newsflare.com/video/704902/e-bike-battery-explod...

There are a lot of videos like this that show the devastation caused by charging electric bikes or scooters indoors.


It is kind of weird to have employees without an office. How could that happen?


People who work "in the field" don't need a home office. I know a hydrologist who now has to go to the office, when much of his work is travelling around sampling groundwater. Another person I know works at OSHA, where she also travels to various work sites, but now has to go into the home office. It's a ridiculous waste of travel to have to go to the home office only to then leave to go somewhere else.


Everyone did this during COVID. Either got rid of office space or didn't add any to keep up with demand, because N% of staff was now remote.

I saw this at Google before I left in 2021. Doesn't surprise me that gov't has the same problem. Desks that could only be reserved/booked, not owned. Insufficient desks if everyone had to come back. They clearly didn't see WFH as temporary, even though RTO was clearly the long term plan.

Other bigcorps are the same from what I hear. Facilities got all messed up.


That’s how remote works. People don’t have a physical office because they work from home or another location.


> That’s how remote works

DOGE is “targeting remote-work arrangements” [1].

[1] https://www.wsj.com/opinion/dont-let-doge-kill-remote-work-h...


Yes, hence the article.


with WFH it doesn't seem very weird to me? why have the central office if the work is getting done anyway


I have WFHed for 20 years but never had a job where we did not have an office.


The US-government is interspersed with oligarchs who want to diminish the ability of the government to actually act so that they can argue why everything should be privatized.


Most people do not understand this and think that we can really talk about nuclear vs solar, when we actually need to talk about an energy mix, where you pick one anchor (for example nuclear or solar) and the rest depends on this choice.

Then you can calculate the price.


To add, it's worth reading the Levelized Full System Costs of Electricity paper: https://iaee2021online.org/download/contribution/fullpaper/1...

Energy mix is key: the cost of 100% dependency on intermittent renewables is extremely high.

Going for 95% dependency on intermittent renewables with the remainder being filled in by low-cost dispatchable generation halves system costs (see table 6, pg. 21).


So you've managed to cherry pick the one study showing nuclear power in any kind of possible light. Typical.

You do know that the study is only applicable to running your off-grid cabin from a sole source and battery storage based on 2020 costs. The study also assumes 100% uptime for nuclear power.

It does not deal with demand shifts, it does not deal with transmission, it does not deal with backup power.

It also manages to find a nuclear LFSCOE of $106/MWh, even though it doesn't adapt to peaks or breakdowns, when Hinkley Point C sits at $170/MWh running at full tilt for 35 years.

Whenever we do quality research on the subject the results end up being that nuclear power is horrifically expensive.

See for example:

Take the recent study on Denmark, which found that nuclear power needs to come down 85% in cost to be competitive with renewables when looking at total system costs for a fully decarbonized grid, due to both options requiring flexibility to meet the grid load.

> Focusing on the case of Denmark, this article investigates a future fully sector-coupled energy system in a carbon-neutral society and compares the operation and costs of renewables and nuclear-based energy systems.

> The study finds that investments in flexibility in the electricity supply are needed in both systems due to the constant production pattern of nuclear and the variability of renewable energy sources.

> However, the scenario with high nuclear implementation is 1.2 billion EUR more expensive annually compared to a scenario only based on renewables, with all systems completely balancing supply and demand across all energy sectors in every hour.

> For nuclear power to be cost competitive with renewables an investment cost of 1.55 MEUR/MW must be achieved, which is substantially below any cost projection for nuclear power.

https://www.sciencedirect.com/science/article/pii/S030626192...

Or the same for Australia, if you want a sunnier locale, finding that renewables end up with a grid costing less than half of "best case nth of a kind nuclear power":

https://www.csiro.au/-/media/Energy/GenCost/GenCost2024-25Co...


I think you misinterpreted my comment. I'm advocating for an energy mix where the majority of energy is supplied by intermittent renewables with a small amount of low-cost (i.e. not nuclear) dispatchable generation. This avoids the extortionate "last mile" of costs when utilising 100% intermittent renewables.

The Clean Power 2030 report from NESO came to a similar conclusion. See Insights from our clean power sensitivities, pg 53: https://www.neso.energy/document/346651/download


95% renewables would be incredible, I’ll take it :)


Thanks a lot, this is a must read indeed.


The major point here is that nuclear is a controllable energy source while solar is not, so comparing only the price is an apples-to-oranges comparison. Most energy consumers need energy at a fixed rate, with all physical metrics within a tight margin. To produce that with solar energy alone is impossible.

This means you have to build other energy sources into the grid, like gas turbines, to be able to control the grid. So if you really want to compare energy prices, you have to look at the TCO.

https://www.reddit.com/r/energy/comments/11q58pe/price_trend...


Solar + batteries is still cheaper than nuclear


In what timeframe?

Solar panels last 20-25 years. Nuclear power plants last 50+ years and use a fraction of the space that solar does. It is hard to believe that the TCO is lower; usually people just look at the short-term price and compare that. Batteries are a whole different can of worms: super toxic, and you need a high volume of them because the energy density is much lower.

https://energyeducation.ca/encyclopedia/Energy_density


In the timeframe of the lifetime of the installation: (total cost of the whole project + total cost of fuel and maintenance) / (kWh generated over the lifetime of the project).
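Spelled out in code, that back-of-the-envelope formula is a couple of lines. A hedged sketch with made-up placeholder numbers; a real LCOE calculation would also discount future costs and generation:

```python
def lcoe(capex, annual_opex, annual_kwh, years):
    """Levelized cost of electricity: lifetime costs / lifetime energy.
    Ignores discounting for simplicity; all inputs below are placeholders."""
    return (capex + annual_opex * years) / (annual_kwh * years)

# e.g. capex 1000, opex 10/yr, output 100 kWh/yr, 10-year lifetime:
print(lcoe(1000, 10, 100, 10))  # 1.1 per kWh
```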


Again, comparing apples to oranges.

If over 50 years we need to build a single nuclear plant while you have to build the solar capacity twice, I doubt that solar comes out cheaper.


It literally does come out cheaper. It’s referred to as the LCOE - you can look at the data yourself


Space isn't really an issue. There's many places that are no good for building but are great for solar, like mountainsides.

Besides, if the Netherlands can have solar then Italy can too. It's much less densely populated.

And nuclear lasts 50+ years with constant maintenance, which is really expensive.


> no good for building but are great for solar, like mountainsides.

And what's the cost of building on a mountainside, and how much is maintenance?

> Besides, if the Netherlands can have solar then Italy can too

How much solar are they building in comparison to other sources?

> And nuclear lasts 50+ years with constant maintenance, which is really expensive.

Unlike solar panels built on mountainsides which are not suitable for other types of buildings?


Maintaining solar panels will always be way, way, waaay cheaper than maintaining a nuclear reactor and safely disposing of the waste.

Batteries would need to be cheaper; that is all that's needed for Italy. In the south, they have sun all year round.


> Maintaining solar panels will be always way, way, waaay cheaper,

Citation needed. Actually, we do not know, and we are still figuring this out.

Few examples:

- https://www.youtube.com/watch?v=wpJKM65tsCo

- https://www.msn.com/en-us/video/weather/worlds-largest-float...

Batteries would need to be safer, less toxic, less prone to being mined by children in Africa, and orders of magnitude more energy dense.

https://www.npr.org/sections/goatsandsoda/2023/02/01/1152893...


See how you decided to completely ignore my question. But I do like that, on top of that, you're admitting batteries are not cheap either.


See how I am still free to ignore any question you asked and answer just what I want?

Which would be: there is not really a need for stationary batteries to be expensive.


> See how I am still free to ignore any question you asked and answer just what I want?

It's a very common tactic by renewables maximalists, and I'm very familiar with it


It is a common trait of internet commentators, who are not paid for your education.

"And what's the cost of building on a mountainside, and how much is maintenance?"

Why should I feel it as my duty to answer that specific question to you? Seriously curious.


Because context matters, and the context of the comment you replied to was literally "oh, we should just build solar panels on mountainsides which are not good for other types of building": https://news.ycombinator.com/item?id=43254135

So the conversation, in context, is literally this:

---

Someone: we should build panels on mountainsides

Me: how much more expensive will building and maintaining them will be?

You: Maintaining solar panels will be always way, way, waaay cheaper than maintaining nuclear. Also batteries will need to become cheaper

Me: Erm... That doesn't answer my question, and on top of that you're admitting batteries are not cheap either

---

But, again, this is on par with what I expect in such discussions


The lifetime difference is a standard talking point that sounds good if you don't understand economics but doesn't make a significant difference. It's the latest attempt to avoid having to acknowledge the completely bizarre costs of new nuclear built power through bad math.

CSIRO with GenCost included it in this year's report.

Because capital loses so much value over 80 years (60 years + construction time), the only people who refer to the potential lifespan are people who don't understand economics. In this, we of course forget that the average nuclear power plant was in operation for 26 years before it closed.

Table 2.1:

https://www.csiro.au/-/media/Energy/GenCost/GenCost2024-25Co...

The difference a completely absurd lifespan makes is a 10% cost reduction. When each plant requires tens of billions in subsidies a 10% cost reduction is still... tens of billions in subsidies.


And in years 0-10, solar makes infinitely more power than a nuclear plant.


Absolutely. And we can finally invent FTL and fly back in time to stop nuclear to begin with. ¯\_(ツ)_/¯

