
Very interesting! A tutorial to check if kimwolf is running on your network would be nice


Not exactly the answer, but if you have one of the affected devices mentioned, it should be listening on TCP port 5555. You can do a port scan for that.

   nmap -Pn 192.168.0.0/16 -p 5555
Replace the network range as appropriate (most home LANs are a /24, e.g. 192.168.1.0/24, which scans much faster than a /16).
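
An open port alone isn't conclusive, so if your nmap build has service detection you can ask it to confirm that whatever is listening actually answers as adb (the subnet below is just an example):

    nmap -Pn -sV -p 5555 192.168.1.0/24

Hosts that report something like "adb" or "Android Debug Bridge" in the version column are the ones worth pulling off the network and inspecting.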

Now that it's publicly known, I guess it's possible they will close the door post-infection to avoid detection. And the scan won't find any other devices it has spread to.

If you have a cheapo Android-based TV box or stick like the ones mentioned, throw it out or reflash it with Armbian after forensics.

I'm sure there are HN readers out there who have one of these. They were very popular a couple of years back.


Based on the article, try looking for Android devices with adb running on the network.
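
If you have the Android platform-tools installed, you can also just try to attach to a suspect box directly (the IP here is a placeholder for whatever device you want to test):

    # replace 192.168.1.50 with the address of the box you want to test
    adb connect 192.168.1.50:5555
    adb devices
    adb disconnect

If the box shows up in adb devices as "device" rather than "unauthorized", its debug port is wide open to anyone on your LAN, which is exactly what this botnet preys on.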


This article[0] includes a link to an online checker: https://synthient.com/check

Have not tested it myself, YMMV.

[0] https://synthient.com/blog/a-broken-system-fueling-botnets


It only checks a database of publicly scanned IPs, so it won't help you if the device is behind a NAT router.


Does anyone know if the port must be 5555 for this botnet?


It's the Android debugger port and it's used for infection, but the article doesn't exclude other methods, nor does it mention the ports used by the malware afterwards.


Well, the first thing to check is: do you own and operate any of these janky Android "TV" boxes sold by companies nobody has heard of? If yes, then there's probably your answer.


My understanding is it can't. The proof is "this photo was taken with this real camera and is unmodified". There's no way to know if the photo's subject is another image generated by AI, or a painting made by a human, etc.


^^This so much.

I remember when Snapchat was touting "send pictures that delete within timeframes set by you!" and all that would happen is you'd turn to your friend and have them take a picture of your phone.

In the above case, the outcome was messy. But with some effort, people could make reasonable-quality "certified" pictures of damn near anything by taking a picture of a picture. Then there is the more technical approach of physically cracking a system you hold in your hands so you can sign whatever you want anyway...

I think the aim should be less on camera hardware attestation and more on the user: "It is signed with their key! They take responsibility for it!"

But then we need:

1. fully active and scaled public/private key encryption for all users for whatever they want to do

2. a world where people are held responsible for their actions...

I'm not sure which is more unrealistic.
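
To make point 1 concrete, here's a rough sketch of what "signed with their key" could look like today using nothing fancier than openssl (filenames and key size are just placeholders):

    # one-time: generate a personal keypair
    openssl genrsa -out me.key 3072
    openssl rsa -in me.key -pubout -out me.pub

    # sign a photo; anyone holding me.pub can verify it
    openssl dgst -sha256 -sign me.key -out photo.sig photo.jpg
    openssl dgst -sha256 -verify me.pub -signature photo.sig photo.jpg

The crypto is the easy part; distributing keys to everyone and getting anyone to accept responsibility for what they sign is the unrealistic bit.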


I don’t disagree with including user attestation in addition to hardware attestation.

The notion of there being an “analog hole” for devices that attest that their content is real is correct on its face, but it is a very flawed criticism. Right now, anybody on earth can open up an LLM and generate an image. Anybody on earth can open up Photoshop and manipulate an image. And there’s no accountability for where that content came from. But not everybody on earth is capable of projecting an image and photographing it in a way that is indistinguishable from taking a photo of reality. Especially when you take into consideration that these cameras are capturing depth-of-field information, location information, and other metadata.

I think it’s a mistake to demand perfection. This is about trust in media and creating foundational technologies that allow for that trust to be restored. Imagine if every camera and every piece of editing software had the ability to sign its output with a description of any mutations. That is a chain of metadata where each link in the chain can be assigned a trust score. If, in addition to technology signatures, human signatures are included, that just builds additional trust. At some point, it would be inappropriate for news or social media not to use this information when presenting content.

As others have mentioned, C2PA is a reasonable step in this direction.
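
For illustration, a toy version of that signature chain can be built with plain openssl (this is not how C2PA actually encodes manifests, and all the keys and filenames here are made up): each tool signs its output together with a note describing the edit and the previous signature, so a verifier can walk the history link by link.

    # camera signs the original capture
    openssl dgst -sha256 -sign camera.key -out original.sig original.jpg

    # editor writes down what it changed, then signs the edited image,
    # the edit note, and the camera's signature together
    echo "crop + exposure adjustment" > edit1.txt
    cat edited.jpg edit1.txt original.sig | \
      openssl dgst -sha256 -sign editor.key -out edit1.sig

Real provenance formats embed this as a structured manifest rather than ad-hoc concatenation, but the trust model is the same: each signature vouches for one step, and you decide how much you trust each signer.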


3. Tech that can directly read memories from our brains.


Perhaps if it measured depth it could detect "flat surface" and flag that in the recorded data. Cameras already "know" what is near or far simply by focusing.


I wonder if a 360 degree image in addition to the 'main' photo could show that the photo was part of a real scene and not just a photo of an image? Not proof exactly but getting closer to it.


Doesn't this have an obvious edge case for every singer from now on though? If your voice is cloned before you become a singer of renown you have no protection.


Which is precisely why film producers are trying to get the power to do this to their actors.

They aren't going to use AI to have Tom Cruise in their film. He won't sign these rights away.

But they sure as hell want to have the next Tom Cruise sign those rights away as a condition for being hired to be Random Bystander #4 in a straight-to-streaming C-film.

Then, once he becomes successful and famous, they won't have to pay him a cent to keep using him, forever.

I can't wait for the future where even more of the wealth generated by the people who do the work is siphoned off to the owners.


Didn't SAG successfully negotiate better terms with regard to AI imitation after the recent strike?


I use yadm too, but I found the differences between versions a bit tricky. The upgrade scripts didn't quite work, although I did try to jump straight from v1 to v3.


What a poor bunch of overworked human beings, with almost no control over the product they work on. Frantically following the whims of managers, reduced to labour units in this late stage capitalist hellscape.


But well paid at least. From the stories I've heard, working at Meta seems pretty shitty except for the pay.


It probably depends what team you're on, but I would not describe it as "pretty shitty." Being oncall for a 24/7 service sucks, yeah, but for my team it is one week a quarter and I haven't had any outside-of-biz-hour alarms the last few shifts. Other than that -- my work is challenging and interesting, my colleagues are friendly and smart, and my manager is decent. Not a lot to complain about.


That's such a beautiful comment I would almost consider printing it and putting it on my wall


What about when the LLM inevitably hallucinates a plausible but incorrect answer?


To be fair, a friend of mine ran into a string of similar issues when attempting to get her gender changed on her license and passport. The entire system was rife with incorrect advice from workers and broken documentation, which caused several attempts to be rejected (wasting months of time).

The bigger problem with LLMs as they currently stand is that one can easily bully them into breaking out of their normal operating parameters.


Does this issue only occur if you have billing info on file?

I'm using the free tier and have no billing info set. According to this: https://github.com/netlify/ask-netlify/issues/6#issuecomment...

> if you have an event that puts you over the free-tier limits, Netlify will ask you to update your billing information and add a CC

Although worryingly:

> We just had this happen and our site didn't stop working.

Is there any way to ensure that if you hit the limit, sites just stop working and you don't get billed?


I'm also interested to know this. I have a couple of static sites running on the free tier for friends/family and now I'm planning on moving them all to a VPS as soon as I can.

It is beyond ridiculous that serverless providers don't offer a way to cap spending. The idea that it might cause your site to go offline is a complete non-argument. That's what I _want_ to happen. I want to be able to say: sure, I'm happy to sustain 10x traffic for a few hours, and maybe 3x sustained over days, but after that take it offline. I don't want infinitely scaling infra precisely because of the infinitely scaling costs.


No, and this is by design. If you go over the limits (which can also happen if a build machine times out; ask me how I know), you will be billed without any recourse. If you have no billing information and refuse to set it, at the very least they'll permanently ban you from their platform.

Which, if it remains the only consequence, seems like a blessing now.


As soon as it mentioned intelligent spiders I was thinking "Children of Time" by Adrian Tchaikovsky https://www.goodreads.com/book/show/25499718-children-of-tim...


You seem to ignore the fact that Hamas was elected in 2006 and has indefinitely postponed elections since then. Given the demographics, most Palestinians weren't even alive when Hamas was elected.


That doesn't change the fact that it was elected in the first place, showing that at the time Gaza was not interested in peace.

My point is that Palestinians have their share of blame.

