
The original press release calls it "anti-intelligent" which makes some sense. This blog headline calls it "anti-AI" which makes it sound like it is meant to mess with machine learning training algorithms, but actually it's not really "anti-AI", just "not AI". (Whether ML-based image processing should really qualify as "artificial intelligence" in the first place, just because it uses machine learning algorithms, is an entirely different story, but I guess this is just our lives now.)


Getting DxO (https://www.dxo.com/), which uses ML for denoising high-ISO shots, was like getting a sun in my pocket for indoor sports photography. You will pry ML denoisers from my cold dead hands.


Well, I'm not a photographer or anything, and I certainly don't have anything in particular against ML algorithms for image processing, it's just another tool after all. Depending on how it's applied, it might be hard to tell it apart from any other image processing algorithm, and in other cases, it blurs the lines between generative AI and image processing.

On the Internet, Waifu2x has proven popular for quite some time. I don't know how it works architecturally, but they've trained an ML model on anime-style illustrations and photos specifically to upscale and denoise images (in particular, to reverse JPEG artifacts), using a corpus of images from before and after downscaling and adding JPEG artifacts. It is excellent when used just to remove JPEG artifacts, and quite impressive for 2x upscaling. It definitely works better on anime-style illustrations, which suffer from JPEG artifacts more than photographs do anyway.
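If I understand that approach right, the training data is just (degraded, clean) pairs made by deliberately wrecking clean images. A minimal sketch of what that recipe might look like (my guess at the general idea, using Pillow; the paths and parameters are made up, and this is not waifu2x's actual pipeline):

    # Build (degraded, clean) training pairs by downscaling clean images and
    # round-tripping them through a low-quality JPEG. Illustrative only.
    import io
    from pathlib import Path
    from PIL import Image

    def make_pair(path: Path, scale: int = 2, jpeg_quality: int = 40):
        clean = Image.open(path).convert("RGB")
        # Downscale to simulate a low-resolution source.
        small = clean.resize((clean.width // scale, clean.height // scale),
                             Image.BICUBIC)
        # Re-encode at low quality to add JPEG artifacts.
        buf = io.BytesIO()
        small.save(buf, format="JPEG", quality=jpeg_quality)
        buf.seek(0)
        degraded = Image.open(buf).convert("RGB")
        return degraded, clean  # model input, training target

    if __name__ == "__main__":
        Path("train_input").mkdir(exist_ok=True)
        Path("train_target").mkdir(exist_ok=True)
        for p in Path("clean_images").glob("*.png"):
            degraded, clean = make_pair(p)
            degraded.save(Path("train_input") / p.name)
            clean.save(Path("train_target") / p.name)

The model then learns the inverse mapping, so at inference time you can feed it an already-damaged image and get back a plausible clean version.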

I also like Google Camera's "Night Sight" feature. It's maybe not astounding anymore, but it definitely was a vast improvement for capturing photos at night using a little smartphone camera when it first came around.
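As far as I understand it (this is my own simplification, not Google's actual pipeline), these night modes fundamentally work by merging a burst of short exposures, which is why the noise drops so dramatically. The stacking idea in a nutshell:

    # Toy "night mode": average N noisy frames of a static scene.
    # Independent noise averages out roughly as 1/sqrt(N); real pipelines
    # also align frames, reject motion, and tone-map the result.
    import numpy as np

    def merge_burst(frames):
        """frames: list of HxWx3 uint8 images of the same (static) scene."""
        stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
        return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        scene = np.full((64, 64, 3), 40.0)  # a dim, flat test "scene"
        frames = [np.clip(scene + rng.normal(0, 20, scene.shape), 0, 255)
                  .astype(np.uint8) for _ in range(8)]
        err1 = np.std(frames[0].astype(np.float32) - scene)
        err8 = np.std(merge_burst(frames).astype(np.float32) - scene)
        print(f"noise vs. scene: single frame {err1:.1f}, 8-frame merge {err8:.1f}")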

That said, there are pitfalls to these more advanced algorithms. They can have different failure modes than people are used to. People routinely fail to realize how dangerous it can be when a machine is "lying" to you in a way whose risks you can't necessarily comprehend; even before ML there were plenty of good examples, like Xerox machines whose compression accidentally altered the numbers on scanned pages[1]. With ML algorithms that pull ever more signal out of ever less information, the potential for bad extrapolations and outright hallucination can only increase. There have been some funny examples of this with the iPhone and Google Camera features, but it really does have interesting implications. Can we always trust these photos to be legally admissible, for example, even when they're not altered intentionally? I don't know the answer. It's probably not a huge deal, but at some point this will surely become an issue, and I bet it will be very interesting (and hopefully not too tragic).

[1]: https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...
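For anyone who hasn't read [1]: the Xerox bug came from a symbol-substitution style of compression (JBIG2), where patches that look "similar enough" get replaced by a single shared representative, so a 6 can silently become an 8. A toy version of that idea (my own illustration of the general technique, not the actual JBIG2 algorithm):

    # Toy pattern-matching compressor: tile a binary page image and reuse an
    # earlier tile whenever a new tile differs by at most max_diff pixels.
    # With a loose threshold, near-identical glyphs (a 6 and an 8, say)
    # collapse into one symbol -- the class of failure behind the Xerox bug.
    import numpy as np

    def compress(page, tile=8, max_diff=4):
        """page: 2D 0/1 array. Returns (tile dictionary, tile index per block)."""
        h, w = page.shape
        dictionary, indices = [], []
        for y in range(0, h - h % tile, tile):
            for x in range(0, w - w % tile, tile):
                block = page[y:y + tile, x:x + tile]
                for i, ref in enumerate(dictionary):
                    if np.count_nonzero(ref != block) <= max_diff:
                        indices.append(i)  # "similar enough": reuse old tile
                        break
                else:
                    indices.append(len(dictionary))
                    dictionary.append(block.copy())
        return dictionary, indices

    def decompress(dictionary, indices, shape, tile=8):
        h, w = shape
        out = np.zeros((h, w), dtype=int)
        it = iter(indices)
        for y in range(0, h - h % tile, tile):
            for x in range(0, w - w % tile, tile):
                out[y:y + tile, x:x + tile] = dictionary[next(it)]
        return out

The round trip is lossy whenever max_diff > 0, and unlike ordinary blur or blockiness the damage looks perfectly crisp, which is exactly why nobody noticed for years.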


I remember an article in Science News in the 1980s, when image compression algorithms like JPEG were being developed, about the concern that they could not be used for medical images because of the risk that compression artifacts would interfere with diagnosis.

Now I see papers where people train super-resolution models on pictures of healthy tissue and tumors and don't seem bothered at all that the model may have learned how to hallucinate both.


The nice thing about post-production apps like this, though, is that you get to choose when and how to apply them. I couldn't live without ML photography tools for fixing lighting issues when I can't do it in camera.

Sadly, most cameraphones now just apply ML before handing the data to their APIs. Getting the raw sensor data from some phones is literally impossible for third-party apps now, as I understand it (looking at you, Samsung).



