Hacker News | CSSer's comments

I'm curious how many people would want a second opinion (from a human) if they're presented with a bad discovery from a radiological exam and are then told it was fully automated.

I have to admit if my life were on the line I might be that Karen.


A bad discovery probably means your exam will be read by someone qualified, like the surgeon/doctor tasked with correcting it.

False negatives are far more problematic.


Ah, you're right. Something else I'm curious about with these systems is how they'll affect difficulty level. If AI handles the majority of easy cases, and radiologists are already at capacity, will they crack if the only cases they evaluate are moderately to extraordinarily difficult?


The problem is, you don't know beforehand if it's a hard case or not.

A hard-to-spot tumor is an easy high-confidence negative for an AI.


Let's look at mammography, since it is one of the easier imaging exams to evaluate. Studies have shown that AI can successfully identify more than 50% of cases as "normal," requiring no human to view them. If a group started using that, its interpreted-case volume would drop by half, and abnormal findings would make up roughly twice as large a share of what remains.

Generalizing to CT of the abdomen and pelvis and other studies: assuming AI can identify a subpopulation of normal scans that do not have to be seen by a radiologist, the volume of work will decline, but the percentage of complicated cases will go up. Easy, normal cases will no longer supplement radiologist income the way they have in the past.

Of course, all this depends on who owns the AI that identifies normal studies. Hospitals, or even PACS companies, would certainly love to own that and capture the income from interpreting the normal studies. AI software has been slow to be adopted, largely because cases still have to be seen by a radiologist and the malpractice issue has not been resolved. Expect rapid changes in the field once malpractice solutions exist.
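The arithmetic behind that triage effect can be sketched with made-up numbers (1000 exams, a 1% abnormal rate, an AI that safely clears 50% of exams; all figures hypothetical):

```python
# Hypothetical screening workload, before and after AI triage of normals.
total_exams = 1000
abnormal = 10                            # 1% truly abnormal
triaged_away = int(total_exams * 0.50)   # normals the AI removes from the worklist

read_by_radiologist = total_exams - triaged_away
abnormal_share_before = abnormal / total_exams
# Assumes the AI never triages away a truly abnormal exam.
abnormal_share_after = abnormal / read_by_radiologist

print(read_by_radiologist)    # 500
print(abnormal_share_before)  # 0.01
print(abnormal_share_after)   # 0.02 (twice the share, at half the volume)
```

Half the reading volume, but every remaining case is twice as likely to be abnormal, which is the shift in case mix the comment describes.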


In my experience, the best person to read these images is the medical imaging expert. The doctor who treats the underlying issue is qualified, but it's not their core competency. They'll check, of course, but I don't think they generally have a strong basis to override the imaging expert.

If it's something serious enough a patient getting bad news will probably want a second opinion no matter who gave them the first one.


I'm willing to bet everyone here has a relative or friend who at some point got a false negative from a doctor, just as everyone knows drivers who have caused accidents. The core problem is how to go about centralizing liability, or not.


But since we don't know where those false negatives are, we want radiologists.

I remember a funny question that my non-technical colleagues asked me during the presentation of some ML predictions. They asked me, “How wrong is this prediction?” And I replied that if I knew, I would have made the prediction correct. Errors are estimated on a test data set, either overall or broken down by groups.
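That "overall or broken down by groups" point can be illustrated with a toy sketch (all data made up): only error rates over a held-out test set are knowable, never the error of any single prediction.

```python
# Hypothetical test set: true labels, model predictions, and a grouping
# variable (e.g. patient cohort). Error is estimated over the set, not per case.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

def error_rate(truth, pred):
    """Fraction of test cases where the prediction disagrees with the label."""
    return sum(t != p for t, p in zip(truth, pred)) / len(truth)

overall = error_rate(y_true, y_pred)
by_group = {
    g: error_rate(
        [t for t, gg in zip(y_true, group) if gg == g],
        [p for p, gg in zip(y_pred, group) if gg == g],
    )
    for g in set(group)
}
print(overall)   # 0.25
print(by_group)  # {'A': 0.25, 'B': 0.25}
```

The aggregate numbers are estimable; which individual predictions are the wrong ones is exactly what you can't know, which was the point of the reply.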

Technological advances have so far supported medical professionals, not substituted for them: they have allowed medical professionals to do more, and better.


I'd be more concerned about the false negative. My report says nothing found? Sounds great; do I bother getting a second opinion?


You pay extra for a doctor's opinion. Probably not covered by insurance.


That's horrific. You pay insurance to have ChatGPT make the diagnosis. But you still need to pay out of pocket anyway. Because of that, I am 100% confident this will become reality. It is too good to pass up.


Early intervention is generally significantly cheaper, so insurers have an interest in doing sufficiently good diagnosis to avoid unnecessary late and costly interventions.


People will flock to "AI medical" insurance that costs $50/mo and lets you see whatever AI specialist you want whenever you want.


I think a problem here is the sycophantic nature. If I'm a hypochondriac with some new-onset symptoms, and I prompt an LLM about what I'm feeling and what I suspect, I worry it'll positively reinforce the diagnosis I'm seeking.


I mean, we already have deductibles and out-of-pocket maximums. If anything, this kind of policy could align with that because it's prophylactic. We can ensure we maximize the amount we retrieve from you before care kicks in this way. Yeah, it tracks.


It sounds fairly reasonable to me to have to pay to get a second opinion for a negative finding on a screening. (That's off-axis from whether an AI should be able to provide the initial negative finding.)

If we don't allow this, I think we're more likely to find that the initial screening will be denied as not medically indicated than we are to find insurance companies covering two screenings when the first is negative. And I think we're better off with the increased routine screenings for a lot of conditions.


Since when is self-care being a Karen?


It's not. I was trying to evoke a world where it's become so commonplace that you're a nuisance if you're one of those people who questions it.


You need to work on the comedic delivery in written form, because you just came off as leaning on a stereotype.


"Cancer? Me? I'd like to speak to your manager!"


In reality, it's always a good decision to seek a second, independent assessment after a diagnosis of severe illness.

People make mistakes all the time, and you don't want to be the one affected by their mistake.


Everyone has an ego. Everyone wants to exercise their power their way when it's their time to shine. I wish I could upvote this twice. If you are reading the above early in your career, please don't take the comment as cynical. It doesn't have to be. Rather, look for it as the sign that you're ready to find your next role. Companies will never clearly give you that. This is often the closest heuristic you'll find, and if you take advantage of it with the right timing, you can leave with grace. If someone asks you why you're leaving, keep it to yourself.


Delivery seems expensive now because it was only ever made cheap by underpaying workers, giving them no benefits, making them cover their own car costs, and forcing them to rely on tips to survive. The truth is, having someone drive your pad thai and curry across town costs real money, and I’d rather pick it up myself than keep pretending cheap delivery was ever anything but exploitation.


The problem isn't that delivery itself is exploitation; it's the delivery apps. The issue is that the claimed scaling factor that makes the apps work doesn't exist. Turns out drivers get more money and delivery costs less if your pizza is delivered by a pizzeria employee rather than a delivery-app contractor.


> Turns out drivers get more money and delivery costs less if your pizza is delivered by a pizzeria employee rather than a delivery-app contractor

This has always been true for pizza, which is why pizza has offered delivery for decades.


And they were able to, in most cases, do it in 30 minutes or less. Otherwise the 'Noid gets them.


> I’d rather pick it up myself than keep pretending cheap delivery was ever anything but exploitation

Then tip! The delivery driver can do more with that, plus OP's business, than with just your business and well wishes.


Tipping should never be expected; that money should be part of the base salary.


> Tipping should never be expected and be part of the base salary

I agree. Here, though, the choice is between tipping and rendering that person unemployed (or underemployed) because of projected morality. I'm arguing that it's better for the people one purports to help if you hand over a tip, rather than supporting cuts to their work or, worse, advocating that others not use their services.


No! It is the company's job to price their service to cover costs. I get to decide if I pay. Tipping does not make exploitation any less real. Of course I tip when necessary; that's beside the point.


> It is the company’s job to price their service to cover costs

They did. They made money. The delivery staff made money--OP is quoting the real, lived experience of actual gig workers. The government came in and decided that was unsavory, and so now those staff are making less (not counting the ones now unemployed).

> it is better to avoid them altogether imo

Not for the delivery driver!


There is always someone willing to work for a dollar. That doesn't mean we should abolish the minimum wage to exploit desperation.

Gig jobs are just bullshit countries invented to hide unemployment. They don't add anything to the economy. Nobody is buying a house or starting a family as an Uber delivery driver.


> doesn't mean we should abolish the minimum wage to exploit desperation

I agree. If all the city had done was raise the minimum wage (and make it applicable to these workers), that would have been fine. They didn't. They added a targeted tax.

> Nobody is buying a house or starting a family as a Uber delivery driver

Not in Seattle, but objectively untrue across the country. But also, I don't think it's fair to say we should render unemployed everyone who has a job that they can't start a family or buy a house on.


Was anyone buying a house or starting a family delivering pizza for Domino's as an employee?


Or, for that matter... as taxi drivers?


You could say that. Professional plumbers often love to use tools built to make the lives of DIY plumbers easier too though. The difference is they know when and when not to do so.


I'm going to set up a honeypot for this.


The framing makes it seem as if valuing stability is somehow inferior while elevating financial growth. This makes me think our financial and housing systems have become so distorted that they have infected our thinking. Adult humans are often super resilient, but kids aren't. Moreover, even adults have limits. I don't want to live in a system filled with chaos just because it makes some intangible number go up. We're all going to die one day. What will we leave behind, material or otherwise? Stability seems very underrated.


The stability comes from the wealth, not the house.


Another protip: the Midwest is not every state that isn't New York or California. The South exists too. There is nothing quite like telling someone in LA what state you're from and then being asked, "What was it like growing up in the Midwest?" when you didn't grow up there.


What do you mean, Texas isn't in the Midwest?


I used to have a Honeywell wi-fi thermostat. It looked like any other thermostat you've ever seen, except you could connect it to a home hub. It was nice because you could do exactly what you're describing, but you could also do it in the app.

What made it worth it was being able to turn off the air or heat automatically when you weren't home. Now, with all of the "AI training" garbage? Yeah, forget that. I used to work in an office with a Nest, and it was torture if you showed up too early or stayed a little too late.


I'm inclined to agree with this approach, because someone not using AI who fails would likely fail for the same reasons. If you can't logically distill a problem into parts, you can't obtain a solution.


I'm not at all skeptical of the logical viability of this, but look at how many company hierarchies exist today that are full stop not logical yet somehow stay afloat. How many people do you know that are technical staff members who report to non-technical directors who themselves have two additional supervisors responsible for strategy and communication who have no background, let alone (former) expertise, in the fields of the teams they're ultimately responsible for?

A lot of roles exist just to deliver good or bad news to teams, be cheerleaders, or have a "vision" that is little more than a vibe. These people could not direct a prompt to give them what they want because they have no idea what that is. They'll know it when they see it. They'll vaguely describe it to you and others and then shout "Yes, that's it!" when they see what you came up with or, even worse, whenever the needle starts to move. When they are replaced it will be with someone else from a similar background rather than from within. It's a really sad reality.

My whole career I've used tools that "will replace me," and every. single. time. all that has happened is that I've been forced to use the tool as yet another layer of abstraction so that someone else might use it once a year, or whenever they get a wild feeling. It's really just about peace of mind. This has been true of every CMS I've ever built. It has nothing to do with being able to "do it themselves." It's about a) being able to blame someone else and b) being able to take it and go when that stops working, without starting over.

Moreover, I have, on multiple occasions, watched a highly paid, highly effective individual be replaced with a low-skilled entry-level employee for no reason other than cost. I've also seen people hire someone just to increase headcount.

LLMs haven't magically made things people don't understand any less scary. But what about freelancers, brave souls, and independent types? Well, these people don't employ other people. They live on the bleeding edge and will use anything that makes them successful.

