I'm attending a course with some GOAT professors who aim to teach this kind of thing. Chances are, given it's a small course, we won't produce anything of real value, but the aim is to analyze 3 datasets from 3 different neurodegenerative diseases, find some common markers, and eventually build an ML model to better diagnose people with these diseases. It's not exactly easy, but since it's a real-world problem it's extremely motivating.
(We've also been allowed to use a university VM with 3 TB of RAM, which is nice.)
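For what it's worth, the pipeline described above (per-dataset feature selection, intersect to find common markers, then train a classifier on them) can be sketched roughly like this. Everything here is illustrative: the data is synthetic, and the choice of `SelectKBest`/`LogisticRegression` is just one plausible way to do it, not what the course actually uses.

```python
# Sketch: three synthetic case/control datasets standing in for the three
# disease cohorts; a few shared features carry real signal.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_dataset(n=200, n_features=50, informative=(0, 1, 2)):
    """Synthetic dataset where only a handful of features differ by label."""
    X = rng.normal(size=(n, n_features))
    y = rng.integers(0, 2, size=n)
    for f in informative:
        X[:, f] += 1.5 * y  # cases are shifted on the informative features
    return X, y

datasets = [make_dataset() for _ in range(3)]

def top_features(X, y, k=5):
    """Indices of the k most discriminative features by univariate F-test."""
    selector = SelectKBest(f_classif, k=k).fit(X, y)
    return set(np.flatnonzero(selector.get_support()))

# "Common markers" = features that rank highly in every dataset.
common = set.intersection(*(top_features(X, y) for X, y in datasets))
print("common markers:", sorted(common))

# Train a classifier on the pooled data, restricted to the common markers.
X_all = np.vstack([X for X, _ in datasets])[:, sorted(common)]
y_all = np.concatenate([y for _, y in datasets])
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y_all, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 2))
```

With real cohorts you'd obviously need batch-effect correction and per-dataset held-out validation before trusting any "common" marker, but the skeleton is the same.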
No, it's a horrible use: relying on something entirely unreliable to make medical diagnoses. All the AI safety people who pretend to worry about killer robots or whatever should actually be up in arms about this kind of use, but you can see where the actual priorities lie.
The best use of AI in medicine would be to automate away administrative bloat to let people get proper medical care.
The question doesn't make sense, IMO, because "it" (meaning a neural network or other ML computer-vision classifier) has no mechanism for being trustworthy. It's just looking for shortcuts with predictive power; it's not reasoning, it doesn't have a world model, it's just an equation that mostly works, and all the other things we know about ML. It's not just about validation-set performance, either: change the lighting or some camera characteristic, present an unusual mole shape, and you can suddenly get completely different performance. It can't be "trusted" the way a person can, even if the person is less accurate.
These limitations are often acceptable, but as long as it works the way it does, denying someone a person looking at them in favor of a statistical stereotype should be the last thing we do.
I could see it if this were in a third-world country and the alternative was nothing, but in the developed world the alternative is less profit or fewer administrators. We should strongly reject outsourcing the actual medical-care part of healthcare to AI as an efficiency measure.
I understood that you don't believe it can be made reliable. But my question was: what if it were?
Let me put it differently. Suppose I don't tell you it's ML. It's a machine whose inner workings you don't know, but I let you run all the tests you want, and it turns out the results are great. Would you still be against it?
Just reading the title, my first thought was: the advertisers "paid for the internet" for us. "Open and free" is such a dissonant idea, since the internet costs money to keep running.
Oh, so that money that I pay to my ISP every month? I’m just hallucinating that?
Give me a break.
FB, Twitter, and the rest could go down and I would still be able to use the Internet. It would be more boring for about two weeks or so… then I would get used to it. Maybe go outside more. :)
In the 1990s, what you paid your ISP every month covered it, and it included enough web space for everyone to have an average site of their own, which the ISP would serve to the world for you.
Google had no ads because their motto was "Don't Be Evil".
> Under EU law, data-sharing with countries deemed to have lower privacy standards, including the US, are prohibited.
As long as Android still has a Play Store, and non-AOSP versions of it require access to Google services (not to mention that it's all likely backdoored for the benefit of US intelligence agencies, which is the assumption this decision starts from), the Android Privacy Sandbox seems unlikely to make Android legal again in the EU.