SauntSolaire's comments | Hacker News

Are they actually as simple to deploy as Excel? My guess would be that most Streamlit apps never make it further than the computer they're written on.

If you have the right tooling (e.g. Streamlit.io) then yes, literally.

To 'deploy' an Excel file, I go to Office 365, create my Excel file, and hit share.

To 'deploy' a Streamlit app, I go to Streamlit, write my single-file Python code (it can be a one-liner), and hit share.
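For example, a complete app really can be this small (a minimal sketch; the file name and widgets are just placeholders I picked, nothing Streamlit forces on you):

    # app.py - a minimal Streamlit app (illustrative)
    import streamlit as st

    st.title("Hello from Streamlit")
    name = st.text_input("Your name", "world")  # simple interactive widget
    st.write(f"Hello, {name}!")                 # renders the result in the page

Run it locally with "streamlit run app.py", or push it to a public GitHub repo and deploy it from Streamlit Community Cloud (share.streamlit.io) in a couple of clicks.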


The obvious solution in this scenario is... to just buy a different hammer.

And in the case of AI, either review its output, or simply don't use it. No one has a gun to your head forcing you to use this product (and poorly at that).

It's quite telling that, even in this basic hypothetical, your first instinct is to gesture vaguely in the direction of governmental action, rather than expect any agency at the level of the individual.


>It's quite telling that, even in this basic hypothetical, your first instinct is to gesture vaguely in the direction of governmental action, rather than expect any agency at the level of the individual.

When "individuals" (which is a funny way to refer to the global generative AI zeitgeist currently in full binge-mode that is encouraging and enabling this kind of behavior) refuse to regulate themselves, they have to be encouraged through external pressures to do so. Industry is so far up it's own ass wrt AI that all it can see is shit, there is no chance in hell that they will self-regulate. They gladly and indiscriminately slurp up the digital effluent that is currently sliding out the colon of the generative AI super-organism.

And, of course, these "individuals" are more than happy to share the consequences with the rest of the world without sharing too much of the corn that they're digging out of the shit. It behooves the rest of the world to protect its self-interest: to minimize the consequences of foolish and irresponsible generative AI usage and to make sure it gets its fair share of the semi-digested golden kernels.


What an absurd set of equivalences to make regarding a scientist's relationship to their own work.

If an engineer provided this line of excuse to me, I wouldn't let them anywhere near a product again - a complete abdication of personal and professional responsibility.


> Many more people do, however, want to take shortcuts to get their work done with the least amount of effort possible.

Yes, and they are the ones responsible for the poor quality of work that results from that.


> It's not like these are new issues.

Exactly, that's why not verifying the output is even less defensible now than it ever has been - especially for professional scientists who are responsible for the quality of their own work.


Yes, that's what it means to be a professional, you take responsibility for the quality of your work.

Well, then what does this say about the LLM engineers at literally any AI company in existence, if they are delivering AI that is unreliable? Surely, they must take responsibility for the quality of their work and not blame it on something else.

I feel like what "unreliable" means depends on how well you understand LLMs. I use them in my professional work, and they're reliable in the sense that I'm always getting tokens back from them; I don't think my local models have failed even once at doing just that. And this is the product that is being sold.

Some people take that to mean that responses from LLMs are (by human standards) "always correct" and "based on knowledge", but this is a misunderstanding of how LLMs work. They don't know "correct", nor do they have "knowledge"; they have tokens that come after tokens, and that's about it.
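To make the "tokens that come after tokens" point concrete, here is roughly what that looks like with a small local model (a sketch; the Hugging Face transformers library, the gpt2 checkpoint, and the prompt are all arbitrary choices of mine):

    # Sketch: a causal LM just predicts next tokens; nothing here checks truth.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")            # arbitrary small model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=5, do_sample=False)  # greedy continuation
    print(tok.decode(out[0]))  # you reliably get tokens back; whether they're correct is a separate question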


> they're reliable in the sense that I'm always getting tokens back from them

This is not what you are being sold, though. They are not selling you "tokens". Check their marketing articles and you will not see the word "token" or any synonym in any of their headings or subheadings. You are being sold these abilities:

- “Generate reports, draft emails, summarize meetings, and complete projects.”

- “Automate repetitive tasks, like converting screenshots or dashboards into presentations … rearranging meetings … updating spreadsheets with new financial data while retaining the same formatting.”

- "Support-type automation: e.g. customer support agents that can summarize incoming messages, detect sentiment, route tickets to the right team."

- "For enterprise workflows: via Gemini Enterprise — allowing firms to connect internal data sources (e.g. CRM, BI, SharePoint, Salesforce, SAP) and build custom AI agents that can: answer complex questions, carry out tasks, iterate deliverables — effectively automating internal processes."

These are taken straight from their websites. The idea that you are JUST being sold tokens is as hilariously fictional as any company selling you their app was actually just selling you patterns of pixels on your screen.


it’s not “some people”, it’s practically everyone that doesn’t understand how these tools work, and even some people that do.

Lawyers are ruining their careers by citing hallucinated cases. Researchers are writing papers with hallucinated references. Programmers are taking down production by not verifying AI code.

Humans were made to do things, not to verify things. Verifying something is 10x harder than doing it right. AI in the hands of humans is a foot rocket launcher.


> it’s not “some people”, it’s practically everyone that doesn’t understand how these tools work, and even some people that do.

Again, true for most things. A lot of people are terrible drivers, terrible judges of their own character, and terrible recreational drug users. Does that mean we need to remove all those things that can be misused?

I'd much rather push back on shoddy work no matter the source. I don't care if the citations are from a robot or a human: if they suck, then you suck, because you're presenting this as your work. I don't care if your paralegal actually wrote the document; be responsible for the work you supposedly do.

> Humans were made to do things, not to verify things.

I'm glad you seemingly have some grand idea of what humans were meant to do; I certainly wouldn't claim I do, but then I'm also not religious. For me, humans do what humans do, and while we didn't use to mostly sit down and consume so much food and other things, now we do.


>A lot of people are terrible drivers, terrible judges of their own character, and terrible recreational drug users. Does that mean we need to remove all those things that can be misused?

Uhh, yes??? We have completely reshaped our cities so that cars can thrive in them at the expense of people. We have laws and exams and enforcement all to prevent cars from being driven by irresponsible people.

And most drugs are literally illegal! The ones that aren't are highly regulated!

If your argument is that AI is like heroin then I agree, let’s ban it and arrest anyone making it.


People need to be responsible for things they put their name on. End of story. No AI company claims their models are perfect and don't hallucinate. But paper authors should at least verify every single character they submit.

>No AI company claims their models are perfect and don’t hallucinate

You can't have it both ways. Either AIs are worth billions BECAUSE they can run mostly unsupervised, or they are not. This is exactly like the AI driving system in Autopilot: sold as autonomous, but reality doesn't live up to it.


Yes, but they don’t. So clearly AI is a foot gun. What are we doing about it?

It's a shame the slop generators don't ever have to take responsibility for the trash they've produced.

That's beside the point. While there may be many reasonable critiques of AI, none of them reduce the responsibilities of the scientist.

Yeah, this is a prime example of what I'm talking about. AIs produce trash and it's everyone else's problem to deal with.

Yes, it's the scientist's problem to deal with - that's the choice they made when they decided to use AI for their work. Again, this is what responsibility means.

This inspires me to make horrible products and shift the blame to the end user for the product being horrible in the first place. I can't take any blame for anything because I didn't force them to use it.

>While there may be many reasonable critiques of AI

But you just said we weren’t supposed to criticize the purveyors of AI or the tools themselves.


No, I merely said that the scientist is the one responsible for the quality of their own work. Any critiques you may have for the tools which they use don't lessen this responsibility.

>No, I merely said that the scientist is the one responsible for the quality of their own work.

No, you expressed unqualified agreement with a comment containing

“And yet, we’re not supposed to criticize the tool or its makers?”

>Any critiques you may have for the tools which they use don't lessen this responsibility.

People don’t exist or act in a vacuum. That a scientist is responsible for the quality of their work doesn’t mean that a spectrometer manufacturer - one that advertises specs its machines can’t match, and that induces universities through discounts and/or dubious advertising claims to push their labs to replace their existing spectrometers with new ones which have many bizarre and unexpected behaviors, including but not limited to sometimes just fabricating spurious readings - has made no contribution to the problem of bad results.


You can criticize the tool or its makers, but not as a means to lessen the responsibility of the professional using it (the rest of the quoted comment). I agree with the GP, it's not a valid excuse for the scientist's poor quality of work.

I just substantially edited the comment you replied to.

The scientist has (at the very least) a basic responsibility to perform due diligence. We can argue back and forth over what constitutes appropriate due diligence, but, with regard to the scientist under discussion, I think we'd be better suited discussing what constitutes negligence.

The entire thread is people missing this simple point.

If your brain isn't able to work at the same rate as a healthy person, what's the argument for why grades shouldn't reflect that?

Put another way, if my brain works at a slower rate than the genius in my class, is it then unfair if my grades don't match theirs?

In general these seem like reasonable differences to consider when hiring someone for a job.


You phrase it like the police can force you to run from them.


I would love to see more comprehensive stats to answer this question, rather than relying on case studies you have to go back over one hundred years to find.


> over one hundred years

Look, I know I'm old and it feels like it but 1980 is absolutely not one hundred years ago.

> I would love to see more comprehensive stats to answer this question

Have some more recent California examples (between 1994 when they created the law and 2012 when it was loosened): "[...] given life sentences for offenses including stealing one dollar in loose change from a parked car, possessing less than a gram of narcotics, and attempting to break into a soup kitchen."[0]

[0] https://law.stanford.edu/three-strikes-project/three-strikes...


1912 is over one hundred years ago, which is obviously what I was referring to.

My point is you're just pulling out a few incidents, and not even very many at that. I would like to see real stats on the subject, but it seems you're working under the "plural of anecdote is data" theory.


From my pseudo-ivory tower viewpoint it seems like the concept of 3 strikes has some validity but with totally the wrong response.

If someone is convicted three times of stealing in a year, even if it's like $1, clearly something is not working here between this person and the system. It's a pipe dream, but it would be nice if we could have some kind of board you could refer cases like that to, with the mission statement of "figure out exactly what is going on here" and powers to take actions that involve things other than prisons.

Alas.


> convicted three times of stealing in a year [...] clearly something is not working here between this person and the system.

Yep, it's definitely a "this person needs some kind of help" signifier.

I can see the logic of "three top-line serious felonies" -> much more severe punishment (even though, I believe, more severe punishment doesn't actually tend to reduce recidivism but I guess if you get life without parole, that's not a huge issue) - if someone commits three distinct murders[0], obviously there's a problem with letting them loose in polite society.

> powers to take actions that involve things other than prisons.

I think various places have tried things like that and (IIRC) they tend to work out well - people get reintegrated into society, they don't reoffend, etc. - but all it takes is one agitator (right wing paper or politician looking for cheap points) to bring up the "soft on crime" angle and it all goes out the window.

[0] obvs. without justification - if they've killed in self-defence three times, that's different than three unprovoked straight out murders, but you'd still want some kind of "look, maybe don't go places where you end up in fights etc." conversation.


You're literally just making up scenarios in your head.


No they're not, people have irrational reactions to things all the time, especially under stress. Getting startled, panicking, and fleeing is definitely one of those.

People will confess to crimes they didn't commit if the police are persuasive enough; that's why such confessions are inadmissible as evidence.

