eterevsky's comments | Hacker News

It's good advice, if you're watching on a reference monitor in a dark room.


James Cameron is one of the few who do this.


But the high-FPS version is only available in cinemas.


Avatar is really only worth watching in 3D in theaters anyway; the story is nothing special if you have to sit through 3.5 hours of it at home.


I think whether a text is written with the help of AI is not the main issue. The real issue is that for texts like police reports, a human still has to take full responsibility for the contents. If we preserve this understanding, then the question of which texts are generated by AI becomes moot.


Sadly, the justice system is a place where responsibility does not happen. It is not a system where you make one mistake and go to prison. Instead, everyone but the victims of the system is protected and colluded with. The more you punish the victims, the better you make out.


By the end of your paragraph, you decided that criminals are victims.


Everyone the policing system interacts with is not only a convicted criminal, but convicted justly, yeah? That's what you actually believe?


No, that's not what I believe, I didn't say any of those words.


The person you replied to didn't say "criminals are victims", either, so comprehending your post requires some inference.

Feel free to clarify what you did mean, it's a lot more helpful than insisting on what you didn't mean.


It's not helpful. You put a lot of words into my mouth; I deny them. I already made my point in my own words. I don't have to deny every position you make up for me.

If you read his comment, he refers to everyone going through the system as victims of the colluding judges and LEO. Almost none of them are. The victims are the people whom they committed crimes against, of course.

This isn't my position, it's just the language we use to describe reality.


You're putting words into the other user's mouth, though, by assuming they mean "everyone" and not just "the innocent proportion of accused people".

So this seems like a good place to take your own advice, right?


No, I responded to their comment as-is. It's you putting words in someone's mouth again, not I.

The way you can tell what they mean is this line: "More you punish the victims better you make out." Nobody in America thinks that judges make out for punishing actually-innocent people. That's not what "victims" means here.


They didn't say "judges". I think the American justice system has a grandiose disregard for its impacts on the falsely accused.

You're doing the thing again, even now.


I agree, I think it does as well. That's not what the guy said, though.

I think I'm just reading what his comment says, but maybe I am doing the same thing as you. I'm just better at it. I got his position correct, you got it wrong.

You told me what I believe, you got that wrong, too.


I didn't tell you, I asked :)

From my POV you seem to be making a lot of inferences and then declaring them correct, based on some information about the original intent that I must not be privy to?


I agree. A programmer has to take responsibility for the generated code they push, and so do police officers for the reports they file. Using a keyboard does not absolve you of typos; it's your responsibility to proofread and correct. This is no different, just a lot more advanced.

Of course, the problem is also that the police often operate without any real oversight and cover up more misconduct than the workers in an under-rug-sweeping factory. But that's another issue.


> But that's another issue.

...is it?

It seems to me that the growth of professional police as an institution which bears increased responsibility for public safety, along with an ever-growing set of tools that can be used to defer responsibility (see: it's not murder if it's done with a stun gun, regardless of how predictable these deaths are), is actually precisely the same issue.

Let's stop allowing the state to hide behind tooling, and all be approximately equally responsible for public safety.


Yes. Allowing officers to blame AI creates a major accountability gap. Per e.g. the EU AI Act’s logic, if a human "edits" a draft, they must be held responsible and do not need to disclose the use of AI.

To ensure safety, those offerings must use premarket red teaming to eliminate biases in summarization. However, ethical safety also requires post-market monitoring, which is impossible if logs aren't preserved. Rather than focusing on individual cases, I think we must demand systemic oversight in general and access for independent research (not only focusing on a specific technology).


It should be treated kind of the same as writing a report after a glass of wine. Probably no one really cares but "sorry that doesn't count because I was intoxicated when I wrote that bit" isn't going to fly.


> for texts like police reports

If what you mean is, "texts upon which the singular violence of the state is legitimately imposed", then a simple solution (and I believe, on sufficiently long time scales, the happily inevitable one) is to abolish police.

I can't fathom, in an age where we have ubiquitous cameras as eyewitnesses and the instant communications capability to declare emergencies and request aid from nearby humans, that we need an exclusive entity whose job it is to advance safety in our communities. It's so, so, so much more trouble than it's worth.


I don’t understand the urgency to replace human work with AI. Why is every organization so eager to skip the "AI as an assistant" step? There are already massive productivity gains in using AI to create the draft of a report; it makes little economic sense to have it produce the final version, given the risk. Maybe it's just plain laziness? Same with developers: why is every organization trying to leapfrog from humans writing all the code to humans not even reading the generated code?


Not everyone is in a hurry to replace people with bots; that's a hyperbolic construct.

But to try to answer some of what I think you're trying to ask about: The bot can be useful. It can be better at writing a coherent collection of paragraphs or subroutines than Alice or Bill might be, and it costs a lot less to employ than either of them do.

Meanwhile: The bot never complains to HR because someone looked at them sideways. The bot [almost!] never calls in sick; the bot can work nearly 24/7. The bot never slips and falls in the parking lot. The bot never promises to be on-duty while they vacation out-of-state with a VPN or uses a mouse-jiggler to screw up the metrics while they sleep off last night's bender.

The bot mostly just follows instructions.

There are lots of things the bot doesn't get right. Like, the stuff it produces may be full of hallucinations and false conclusions that need to be reviewed, corrected, and outright excised.

But there's lots of Bills and Alices in the world who are even worse, and the bot is a lot easier and cheaper to deal with than they are.

That said: When it comes to legal matters that put a real person's life and freedom in jeopardy, then there should be no bot involved.

If a person in a position of power (such as a police officer) can't write a meaningful and coherent report on their own, then I might suggest that this person shouldn't ever have a job where producing written reports is part of the work. There's probably something else they're good at that they can do instead (the world needs ditchdiggers, too).

Neither the presence nor absence of a bot can save the rest of us from the impact of their illiteracy.


And the bot doesn't bear any responsibility.


Because the biggest cost at a lot of orgs is staff. At your typical software shop it's comical: the salary costs tower over all the others like LeBron James gazing down at ants. The moment you go from productivity gains to staff reduction, you start making real money. Any amount of money for a machine that can fully replace a human process.


In all seriousness, XSLT looked stillborn even 25 years ago when it was introduced.


Agree. It always seemed like a strange and poorly conceived technology to me.


It was just castrated DSSSL.


If they are replacing a fixed cosmological constant with a model of variable dark energy, doesn't that introduce extra parameters describing the evolution of dark energy over time? If so, wouldn't it lead to overfitting? Can overfitting alone explain the better match of the new model to the data?
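To make the worry concrete, here is a minimal sketch of how one might check whether a better fit just reflects extra parameters, using a simple AIC comparison. This is only an illustration, not the method used in the actual analyses (those rely on more careful Bayesian model comparison), and the chi-squared values and parameter counts below are entirely hypothetical:

    # Hypothetical best-fit chi-squared values and free-parameter counts;
    # real values would come from the actual cosmological fits.
    chi2_lcdm = 1540.0   # LambdaCDM: fixed cosmological constant
    k_lcdm = 6
    chi2_w0wa = 1528.0   # w0waCDM: dark-energy equation of state evolves
    k_w0wa = 8           # two extra parameters (w0, wa)

    # Akaike information criterion: AIC = chi^2 + 2k.
    # The evolving-dark-energy model only "wins" if its improvement in fit
    # outweighs the penalty for the extra parameters it introduces.
    aic_lcdm = chi2_lcdm + 2 * k_lcdm
    aic_w0wa = chi2_w0wa + 2 * k_w0wa

    print(f"AIC LambdaCDM: {aic_lcdm:.1f}")
    print(f"AIC w0waCDM:   {aic_w0wa:.1f}")
    print("Preferred:", "w0waCDM" if aic_w0wa < aic_lcdm else "LambdaCDM")

So a lower chi-squared alone isn't enough; the question is whether the improvement survives the penalty for the added parameters.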


Probably not for a 32” monitor, but I think 8K would be noticeably better for a 43” one.


The plan is to improve AI agents from their current ~intern level to the level of a good engineer.


They are not intern level.

Even if it could perform at a similar level to an intern at a programming task, it lacks a great deal of the other attributes that a human brings to the table, including how they integrate into a team of other agents (human or otherwise). I won't bother listing them, as we are all humans.

I think the hype is missing the forest for the trees, and I think exactly this multi-agent dynamic might be where the trees start to fall down in front of us. That, and the currently insurmountable issues of context and coherence over long time horizons.


My impression is that Copilot acts a lot like one of my former coworkers, who struggled with:

-Being a parent to a small child and the associated sleep deprivation.

-His reluctance to read documentation.

-There being a language barrier between him and the project owners. Emphasis here, as the LLM acts like someone who speaks through a particularly good translation service, but otherwise doesn't understand the language spoken.


The real missing the forest for the trees is thinking that software and the way users will use computers is going to remain static.

Software today is written to accommodate every possible need of every possible user, and then a bunch of unneeded selling-point features on top of that: massive, sprawling codebases made to deliver one-size-fits-all utility.

I don't need 3 million LOC Excel 365 to keep track of who is working on the floor on what day this week. Gemini 2.5 can write an applet that does that perfectly in 10 minutes.


I don't know. I guess it depends on what you classify as change. I don't really view software as having changed all that much since around the mid-70s, when HLLs began to become more popular. What programmers do today and what they did back then would be easily recognizable to both groups if we had time machines.

I don't see how AI really changes things all that much. It's got the same scalability issues that low-code/no-code solutions have always had, and those go way back. The main difference is that you can use natural language, but I don't see that as being inherently better than, say, drawing a picture using some flowcharting tools in a low-code platform. You just reintroduce the problem natural languages have always had, and the reason we didn't choose them in the first place: they are not strict enough and need lots of context.

Giving an AI very specific sentences to define my project in natural language, and making sure it has lots of context, begins to look an awful lot like pseudocode to me. So as you learn to approach using AI in such a way that it produces what you want, you naturally get closer and closer to just specifying the code.

What HAS indisputably changed is the cost of hardware, which has driven accessibility and caused more consumer-facing software to be made.


I don't believe it will remain static; in fact, it's done nothing but change every year for my entire career.

I do like the idea of smaller programs fitting smaller needs being easy to access for everyone, and in my post history you would see me advocate for bringing software wages down so that even small businesses can have software capabilities in house. Software has so much to give to society outside of big VC flips and tech monoliths. Maybe AI is how we get there in the end.

But I think that supplanting humans with an AI workforce in the very near future might be stretching the projection of its capabilities too far. LLMs will be augmenting how businesses operate from now and into the future, but I am seeing clear roadblocks that make an autonomous AI agent unviable, and they seem to be fundamental limitations of LLMs, e.g. continuity and context. Recent advances seem to come from supplemental systems that try to patch those limitations. That suggests these limits are tricky, and until a new approach shows up, that is what drives my lack of faith in an AI-agent revolution.

But it is clear to me that I could be wrong, and it could be a spectacular miscalculation. Maybe the robots will make me eat my hat.


Seems like that is taking a very long time, on top of some very grandiose promises being delivered today.


I look back over the past 2-3 years and am pretty amazed at how quickly change and progress have been made. The promises are indeed large, but the speed of progress has been fast. Not defending the promises, but "taking a very long time" does not seem to be an accurate representation.


I feel like we've made barely any progress. It's still good at the things ChatGPT was originally good at, and bad at the things it was bad at. There's some small incremental refinement, but it doesn't really represent a qualitative jump like ChatGPT originally was. I don't see AI replacing actual humans without another step jump like that.


As a non-programmer non-software engineer, the programs I can write with modern SOTA models are at least 5x larger than the ones GPT-4 could make.

LLMs are like bumpers on bowling lanes. Pro bowlers don't get much utility from them. Total noobs are getting more and more strikes as these "smart" bumpers get better and better at guiding their ball.


> The promises are indeed large but the speed of progress has been fast

And at the same time, absurdly slow? ChatGPT is almost 3 years old, and AI still has pretty much no positive economic impact.


There is the huge blind spot where tech workers think LLMs are being made primarily to either assist them or replace them.

Nobody seems to consider that LLMs are democratizing programming, and allowing regular people to build programs that make their work more efficient. I can tell you that at my old school manufacturing company, where we have no programmers and no tech workers, LLMs have been a boon for creating automation to bridge gaps and even to forgo paid software solutions.

This is where the change LLMs will bring will come from. Not from helping an expert dev write boilerplate 30% faster.


Low code/no code/visual programming has been around forever. They all had issues. LLMs will also have the same issues and cost even more.


I'm not aware of any that you speak/type plain English to.


You never heard of COBOL? Its original premise was that you could use something resembling English to write programs.


Saying “AI has no economic impact” ignores reality. The financials of the major players clearly show otherwise: both B2C and B2B applications are already profitable and proven. While APIs are still more experimental, and it’s unclear how much value businesses can ultimately extract from them, to claim there’s no economic impact is willful blindness. AGI may be far off, but companies are already figuring out value on the consumer side and, more slowly, on the API side.


The financials are all inflated by perception of future impact. This includes the current subscriptions as businesses are attempting to use AI to some economic benefit, but it's not all going to work out to be useful.

It will take some time for whatever reality is to actually show truthfully in the financials. When VC money stops subsidising datacentre costs, and businesses have to weigh the full price against real value provided, that is when we will see the reality of the situation.

I am content to be wrong either way, but my personal prediction is that if gains in model competence slow down around now, businesses will not be replacing humans en masse, and the value provided will be notable but not world-changing as expected.


OpenAI alone is on track to generate as much revenue as Asus or US Steel this year ($10-$15 billion). I don't know how you can say AI has had no positive economic impact.


That is not even one month of a big tech company's revenue; it is a globally negligible impact. Three years of talking about AI changing the world, ~$10B of revenue, and no ecosystem making money around it besides friends and VCs pumping and dumping LLM wrappers.


There's a pretty wide gulf between being one of the most important companies in the global marketplace as Microsoft, Apple, and Amazon are and "having no economic impact".

I agree that most of the AI companies describe themselves and their products in hyperbolic terms. But that doesn't mean we need to counter that with equally absurd opposing hyperbole.


There is no hyperbole. I think AI will change the world in the next 10 years, but compare it to the iPhone, for example: 3 years in, the economic impact was much, much bigger, and that was just one brand of smartphone.


Revenue, not profit.

If it costs them even just one more dollar than that revenue number to provide that service (spoiler, it does), then you could say AI has had no positive economic impact.

Considering we know they’re being subsidized by obscene amounts of investment money just like all other frontier model providers, it seems pretty clear it’s still a negative economic impact, regardless of the revenue number.


And what is their burn rate? Everyone fails to mention the amount they are spending for this return.


I guess it probably depends on what you are doing. Outside of layers on top of these things (tooling), I personally haven't seen much progress.


What a time we live in. I guess it depends how pessimistic you are.


To their point, there hasn’t been any huge breakthrough in this field since the “attention is all you need” paper. Not really any major improvements to model architecture, as far as I am aware. (Admittedly, this is a new field of study to me.) I believe one hope is to develop better methods for self-supervised learning; I am not sure of the progress there. Most practical improvements have been on the hardware and tooling side (GPUs and, e.g., pytorch).

Don’t get me wrong: the current models are already powerful and useful. However, there is still a lot of reason to remain skeptical of an imminent explosion in intelligence from these models.


You’re totally right that there hasn’t been a fundamental architectural leap like “attention is all you need”; that was a generational shift. But I’d argue that what we’ve seen since is a compounding of scale, optimization, and integration that has changed the practical capabilities quite dramatically, even if it doesn’t look flashy in an academic sense. The models are qualitatively different at the frontier: more steerable, more multimodal, and increasingly able to reason across context. It might not feel like a revolution on paper, but the impact in real-world workflows is adding up quickly. Perhaps all of that can be put in the bucket of “tooling”, but from my perspective there have still been quite large leaps, looking at cost differences alone.

For some reason my pessimism meter goes off when I see single-sentence arguments like “change has been slow”. Thanks for bringing the conversation back.


I'm all for flashy in the academic sense, because we can let engineers sort out the practical aspects, especially by combining flashy academic approaches. The flaws of the LLM architecture could be predicted from the original paper; no amount of engineering can compensate for that.


[flagged]


Feel free to share resources, but I am speaking purely in terms of practicality related to my day to day.


> I look back over the past 2-3 years and am pretty amazed with how quick change and progress have been made.

Now look at the past year specifically, and only at the models themselves, and you'll quickly realize that there's been very little real progress recently. Claude 3.5 Sonnet was released 11 months ago and the current SOTA models are only marginally better in terms of pure performance in real world tasks.

The tooling around them has clearly improved a lot, and neat tricks such as reasoning have been introduced to help models tackle more complex problems, but the underlying transformer architecture is already being pushed to its limits and it shows.

Unless some new revolutionary architecture shows up out of nowhere and sets a new standard, I firmly believe that we'll be stuck at the current junior level for a while, regardless of how much Altman & co. insist that AGI is just two more weeks away.


Third AI Winter from overpromise/underdeliver when?


Third? It’ll be the tenth or so.


You are really underselling interns. They learn from a single correction, sometimes even without a correction, all by themselves. Their ability to integrate previous experience in the context of new problems is far, far above what I've ever seen in LLMs.


Yes, but they were supposed to be PhD-level 5 years ago, if you listen to sama et al.


Especially ironic considering he's neither a developer nor a PhD. He's the smooth talking "MBA idea guy looking for a technical cofounder" type that's frequently decried on HN.


Without handholding (aka being used as a tool by a competent programmer instead of as an independent “agent”), they’re currently significantly worse than an intern.


This looks much worse than an intern. This feels like a good engineer who has brain damage.

When you look at it from afar, it looks potentially good, but as you start looking into it for real, you start realizing that none of it makes any sense. Then you make simple suggestions, and it does something that looks like what you asked for, yet completely misses the point.

An intern, no matter how bad they are, can only waste so much time and energy.

This makes wasting time and introducing mind-bogglingly stupid bugs infinitely scalable.


The plan went from AI being a force multiplier to AI being a resource-hungry beast that has to be fed in the hope that it's good enough to justify its hunger.


I mean, I think this is a _lot_ worse than an intern. An intern isn't constantly going to make PRs with failing CI, for a start.


I plan to be a billionaire


This article seems to argue from the way scientific discoveries are made by humans. It seems to me that its gist is similar to an article from the 80s claiming that computers will never play good chess, or an article from the 2000s claiming the same for Go.

The general shape of these arguments is: "Playing chess/Go well, or making scientific discoveries, requires a specific way of strategic thinking or the ability to form the right hypotheses. Computers don't do this, ergo they won't be able to play chess or make scientific discoveries."

I don't think this is a very good frame of reasoning. A scientific question can take one of the following shapes:

- (Mathematical) Here's a mathematical statement. Prove either it or its negation.

- (Fundamental natural science) Here are the results of the observations. What is the simplest possible model that explains all of them?

- (Engineering) We need to do X. What's an efficient way of doing it?

All of these questions could be solved in a "human" way, but it is also possible to train AIs to approach them without going through the same process as the human scientists.


> but it is also possible to train AIs to approach them without going through the same process as the human scientists

With chess the answer was more or less to brute-force the problem space, but will that work with math and science? Is there a way to widely explore the problem space with AI, especially in a way that goes beyond or even against the contents of its training data? I don't know the answer, but that seems to be the crucial question here.


If we have reached a local maximum at something, the only way to progress is to make a big leap that brings you out of it. The trouble is that most such big leaps are unsuccessful. For every case like the one you are describing, there are probably hundreds or thousands of people who tried it and ended up with something worse than the status quo.


ZIP codes are a simple approximation that does the job well enough in most cases.

The alternatives that the author suggests are much more complicated, both in terms of the implementation and in terms of convincing the user to give you their full address.

