OpenCL was released in 2009, so AMD has had plenty of time to push and drive that standard. But OpenCL offered a worse developer experience than CUDA, and AMD wasn't up to the task on the hardware side, so it made little real sense to go for OpenCL.
With volume-based fees you only have one type of user. Costs and benefits are predictable, and it's clear who you're providing the service to (the writers), so the company's goals are better aligned with those of its clients: attract more people to the platform. It's the writer who is tasked with deciding how to monetize those subscribers.
On the other hand, taking revenue from paid subscriptions means they have two types of users, free and paying. Even among paying customers there will be different classes, as not all subscriptions cost the same. This means the company has an incentive to implement dark patterns that convert users from free to paid, or from lower tiers to higher tiers, even when that is not in the best interest of the writer.
But there are 3 parties here: writer, reader and platform, for want of a better word.
It needs to reward all of them to the point that it is sustainable.
I don't frequent Substack much so maybe it's not apparent to me, but given its business model, surely promoting paid subscriptions should be expected? Is that a dark pattern? It certainly feels low on the spectrum compared with most online marketplaces.
I feel like this analogy is not very apt. The main problem with AI-generated images and videos is that, with every improvement, it becomes harder and harder to distinguish what's real from what's not. That's not something that happened with literacy, the printing press, or computers.
Think about it: the saturation of content on the Internet has become so bad that people have a hard time knowing what's true or not, to the point that we're again having outbreaks of preventable diseases such as measles because people can't tell real scientific information from misinformation. Imagine what will happen when anyone can create an image of whatever they want that looks just like any other photo, or, worse, a video. We are not at all equipped to deal with that. We are risking a lot just for the ability to spend massive amounts of compute on generating images. It's not curing cancer, not solving world hunger, not making space travel free, no: it's generating images.
It definitely is easier without AI. Before, if you saw a photo you could be fairly confident that most of it was real (yes, photo manipulation exists, but you can't really create a photo out of nothing). Videos were even more trustworthy (and yes, I know there are some amazing 3D renders out there, but they're not really accessible). With these technologies, and the rate at which they're improving, I feel like that's going out of the window. Not to mention that the more generated content there is, the easier it is for something fake to slip by.
To be honest, I don't know if I would have said that. I think that employees in "creative" positions (i.e., where you're not serving customers at a bar and need to keep the shop open) are paid for the output they produce. Whether that output is produced in 2, 6 or 8 hours is not really their problem. Looking at it this way, they would have happily paid you 100% of your salary for the same output you're now giving them at a 20% discount.
I work in academia and this was a tenure-track position. These are quite rare, and I have the feeling that the only people who get them are those who can do 100% of the work in 50% of the time anyway (which amounts to 200% of the work of a regular 40-hour job). I just wanted to make absolutely sure that I will have a limit of 6 hours a day and can take care of my body afterwards. Sitting for 8 hours is just not doable anymore at close to 40, at least not for me.
One time I tried to gain weight just to check if I was able to, since I have been unable to gain any significant weight in a variety of situations (doing sport or not, young or not so young, better or worse diet). I ate a calorie-surplus diet (I don't remember the exact numbers, it was a few years ago), which meant eating more than usual but not too much either. I was counting calories, weighing everything, the whole drill; meanwhile I wasn't doing any exercise other than walking/biking to places. Again, I don't recall the numbers, but I gained maybe 10% of what I was supposed to gain. I've also seen how eating exactly the same as other people (sometimes more) results in the other person gaining weight and me losing it, despite the differences in exercise being fairly irrelevant.
> I wonder what the biggest hindrances are for software development to become this way.
Context and complexity. Jumping into a codebase and suggesting changes that add value to the company usually requires context: what the company is doing, who the users are, what the product is used for, etc. Not to mention that a lot of codebases are complex, and knowing where and how to make changes is not something you can do straight away.
> There are such obvious first steps we could take without a complete understanding of cause if we really cared.
That's the issue: a lot of people just don't care or refuse to believe it. Questions like the one in the parent comment are starting to seriously annoy me because they demand an unachievable level of evidence before doing anything, a level that's never asked of any other kind of intervention (interest rate hikes, for example).
Everyone leaves out a large cohort: people who believe it but don't want the solutions you're selling. We typically put forth constructive solutions like nuclear, which get shouted down or laughed out of the room instantly.
Not really, I don't distinguish between the different ways to solve it. I wish that were the discussion, but I feel like any solution that has any cost of any kind just gets ignored, because a lot of people don't feel this is serious enough to do anything serious about, no matter whether it's reducing consumption, a massive switch to nuclear, or whatever.
An easier first step before nuclear is reducing our power requirements. This one gets laughed out of the room both because we don't think people will accept it and because there's a direct correlation between GDP growth and energy consumption: we don't care enough about energy use to give up GDP.
Jevons' paradox is a thought experiment focused on increased efficiency, not decreased demand. History is full of examples where a decrease in demand didn't lead to an increase in ridiculous uses for the technology.
Unfortunately it's hard to tell the difference between people suggesting expensive and slow solutions because they're ill informed about the alternatives and the people suggesting expensive and slow solutions because they don't want to solve the problem.
And practically there's little difference, especially if the former are getting their information from the latter.
I think people are fine with solving the problem, positive about it even (right-wingers love solar panels, after all). They're just not fine with being forced to sacrifice to solve it.
Without going into specifics, a large reason for that may be that among those who say this there are a large number who loudly push unrealistic ideas while masquerading them as constructive. Some may be honestly ignorant about it, but it is also unfortunately quite an effective way to derail any discussion and delay progress. See, for example, any suggestion about real-world hyperloops.
I don't think it's as easy as "simulating cellular processes". I feel like HN sometimes tends to severely underestimate the difficulty of problems in other fields. Just off the top of my head, I can see several hard problems here:
- How precise do you want the simulation to be? Do you want physical processes fully simulated? You'd need that for an accurate simulation of molecular interactions, but full physical simulation is computationally expensive even for single, simple molecules.
- Even simplified simulations are hard. Simulating just the shape of DNA strands is computationally very expensive (usually done with Monte Carlo simulations).
- How many cells do you simulate? The body has ~37 trillion cells. Even if it took only one processor cycle to simulate a cell, you'd need 9,250 4 GHz processors just to touch every cell once per second (see the back-of-envelope sketch after this list). That should give an idea of how hard it would be to simulate just an organ.
- How do you take into account interactions? There's a lot of difficulty in understanding how drugs affect the whole human system. Lots of trials show promising results against in-vitro cells and then fail spectacularly in animal models. There's a lot influencing how drugs work and single cellular processes are just a small part of it. The body is incredibly complex.
- How do you validate the models? It's not like we can go into a cell and see where the molecules are. We don't have enough visibility into actual cellular processes to build such complex models to a sufficient degree of accuracy.
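To make the processor-count bullet concrete, here is the back-of-envelope arithmetic as a tiny Python sketch; the 37 trillion figure is the usual estimate, and "one cycle per cell" is wildly optimistic, since any real per-cell model would need far more than a single cycle:

```python
# Back-of-envelope: cores needed to spend a single 4 GHz cycle
# on every cell of a human body once per second.
cells = 37e12      # ~37 trillion cells in a human body (common estimate)
clock_hz = 4e9     # cycles per second of one 4 GHz core
cores_needed = cells / clock_hz
print(f"{cores_needed:,.0f} cores")  # -> 9,250 cores for one cycle per cell per second
```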
Compared to even a single human cell, a monstrous language model is a trivial thing.
All of those things you say are correct, but scientists are doing it anyway[^1]. In that paper, you will see there are ways to "compress" the amount of computation needed.
Five years ago, I would have said that we were further away from having a computer correctly interpret a joke (i.e., I would have agreed with [2]) than from simulating E. coli. The thing is, people went and did (both) anyway. But they did one more than the other. Research effort and capital flowed, and now we have LLMs. IMO, a big reason for this paradox is that there is no stigma in wanting to make a computer smarter, and everybody started playing with computer code and data, and due to the huge amount of effort, there have been results. But when it comes to the very things that keep us alive, we are not so eager to play with computer code. That, I think, has less to do with the complexity of the subject and more with a certain moral disposition...which is the thing I find perplexing.
> But when it comes to the very things that keep us alive, we are not so eager to play with computer code. That, I think, has less to do with the complexity of the subject and more with a certain moral disposition...which is the thing I find perplexing.
Honestly, this is the first time I've heard of moral stigma having anything to do with research. Again, I think you underestimate how much harder it is to simulate biological processes than to build LLMs. Even in the paper you've linked, you'll see the massive number of simplifications they had to make: they're using a minimal cell, not all metabolites are included, multimeric proteins are left out or replaced, spatial distributions are simplified, the simulation timescale is below 10μs, they don't simulate reactive processes, they don't say how much time they needed to perform the simulation... Don't get me wrong, it's a massive achievement. But the amount of computation needed just to simulate a single cell is absolutely massive, let alone simulating multiple cells in a system. Considering how much it would cost, it's no wonder other avenues are explored first.
I honestly don't get the criticisms of the Prometheus + Grafana stack.
> A full-blown time-series database (with gigabytes of rolling on-disk data).
Prometheus has a setting that lets you limit the space used by the database. I'm not sure, however, how one can do monitoring without a time-series database.
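Concretely, I'm thinking of the TSDB retention flags; the values here are just examples, not recommendations:

```sh
# Cap retention by age and/or by on-disk size (whichever is hit first)
prometheus \
  --storage.tsdb.retention.time=15d \
  --storage.tsdb.retention.size=10GB
```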
> Several Go binaries dozens of megabytes each, also consuming runtime resources.
Compared to most monitoring tools I've tested, the Prometheus exporters are usually fairly lightweight relative to the number of metrics they generate. And a few dozen megabytes of binaries doesn't seem like much when we're usually talking about disks measured in gigabytes...
> Lengthy configuration files and lengthy argument lists to said binaries.
Configuration files: yes, if you want to change all the defaults. Argument lists: not really. In practice, a Docker deployment of Grafana + Prometheus is about 20 lines of docker-compose.yml, something like the sketch below. If you install them from system packages, the configuration files come with defaults.
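As a rough sketch of what I mean (unpinned image tags, no auth, volume names made up for the example; not a production setup):

```yaml
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
volumes:
  prometheus-data:
  grafana-data:
```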
By the way, I'm not sure that configuring a FastCGI server will be easier than configuring a Docker compose file...
> Systems continuously talking to each other over the network (even when nobody is looking at any dashboard), periodically pulling metrics from nodes into Prometheus, which in turn runs all sorts of consolidation routines on that data. A constant source of noise in otherwise idling systems.
Not necessarily. Systems talk to each other over the network only if you configure them to do so. You can always install Prometheus + Grafana on every node if you don't want central monitoring, and then you have no network noise.
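For example, a node-local Prometheus that only scrapes its own node_exporter never leaves the machine; a minimal prometheus.yml for that could look like this (the job name is arbitrary, 9100 is node_exporter's default port):

```yaml
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']  # local node_exporter only, no cross-node traffic
```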
> A mind-boggingly complex web front-end (Grafana) with its own database, tons of JavaScript running in my browser, and role-based access control over multiple users.
Grafana, complex? I think dragging and dropping panels, with query builders that don't even require you to know the query language, is far better than defining graphs in shell scripts.
> A bespoke query language to pull metrics into dashboards, and lots of specialized knowledge in how to build useful dashboards. It is all meant to be intuitive, but man, is it complicated!
Again, this is not a problem of the stack. Building useful dashboards is complicated no matter what tool you use.
> maintenance: ongoing upgrades & migrations
Not really. Both Prometheus and Grafana are usually very stable, and you don't need to upgrade if you don't want to. I have a monitoring stack built with them in my homelab that I haven't updated in two years, and it still works. Of course I don't get the new shiny features, but it works.
To me, it seems the author is conflating the complexity of the tool with the complexity of monitoring itself. Yes, monitoring is hard. Knowing which metrics to show, which to pull, and how to retain them is hard. Knowing how to present those metrics to users is also hard. But this tool doesn't solve that. In the end, I don't know how useful it is to build a custom tool that collects very limited metrics on top of other ancient, limited, buggy tools (SNMP, RRD, FastCGI...) and is missing even basic UX features like being able to zoom or pan on charts.