> This decision came after Illinois Secretary of State [...] discovered that Flock had allowed U.S. Customs and Border Protection to access Illinois cameras in a “pilot program” against state law, and after the RoundTable reported in June that out-of-state law enforcement agencies were able to search Flock’s data for assistance in immigration cases.
This illustrates the textbook argument for why mass surveillance is bad: these tools can quickly end up in the wrong hands.
The people pitching said surveillance are always the wrong hands if they're from the government. "We're from the government, and we're here to help" are very scary words; be careful if you take them up on the offer.
This statement from Reagan is why we're in this mess. There's nothing wrong with having the government come help you. The problem is regulatory capture, corruption, and a population that seems OK with it. A competent government with people who care can be a powerful force in people's lives, e.g. Social Security, national parks, public schools.
Programming makes it hard to avoid jokes like that. I'll never forget my mum's face after I told her that I was working on a "killAllChildren" function when I was at school.
I thought they had accidentally "responsibly disclosed" the vulnerability directly to a public mailing list, but the attached PDF is dated over three months ago.
So I assume it's a bit of inaccurate phrasing.
EDIT: nope, the email itself seeks disclosure coordination etc. So yeah, oops.
Sure, but the author publishes their email address on the main dnsmasq page:
> Contact.
>
> There is a dnsmasq mailing list at http://lists.thekelleys.org.uk/mailman/listinfo/dnsmasq-discuss which should be the first location for queries, bugreports, suggestions etc. The list is mirrored, with a search facility, at https://www.mail-archive.com/dnsmasq-discuss@lists.thekelleys.org.uk/. You can contact me at simon@thekelleys.org.uk.
> The findings expose a troubling asymmetry: at a 0.1% vulnerability rate, attackers achieve on-chain scanning profitability at a $6000 exploit value, while defenders require $60000, raising fundamental questions about whether AI agents inevitably favor exploitation over defense.
Prior to AI, and outside the context of crypto, it often wasn't "worth it" to fix security holes; companies would instead bite the bullet, claim victimhood, sue if possible, and hide behind compliance.
If automated exploitation changes that equation, and even a low probability of success becomes worth trying because pentesting is no longer bottlenecked by meatspace, it may incentivise writing secure code, in some cases.
Perversely enough, AIs may crank out orders of magnitude more insecure code at the same time.
I hope this means fuzzing as a service becomes absolutely necessary. I think automated exploitation is a good thing for improved security overall, cracked eggs and all.
If I'm understanding the paper correctly, they're assuming that defenders are also scanning deployed contracts, with the intention of ultimately collecting bug bounties. They get the $6,000/$60,000 numbers by assuming that the bug bounty in their model is 1/10th of the exploit value.
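For what it's worth, the break-even arithmetic falls out directly. Here's a minimal sketch: the 0.1% hit rate and the 1/10 bounty ratio are from the paper, while the $6 per-scan cost is my own placeholder, chosen only so the thresholds come out to their numbers:

```python
# Break-even scan economics, as I read the paper: a 0.1% vulnerability
# (hit) rate, and a defender bounty worth 1/10th of the exploit value.
HIT_RATE = 0.001        # fraction of scanned contracts that are exploitable
BOUNTY_RATIO = 0.1      # defender payout as a fraction of exploit value
COST_PER_SCAN = 6.0     # hypothetical USD cost per scan (my assumption)

def break_even_exploit_value(payout_ratio: float) -> float:
    """Exploit value at which expected payout per scan equals the scan cost."""
    # Expected payout per scan = HIT_RATE * payout_ratio * exploit_value.
    return COST_PER_SCAN / (HIT_RATE * payout_ratio)

print(break_even_exploit_value(1.0))           # attacker keeps it all: 6000.0
print(break_even_exploit_value(BOUNTY_RATIO))  # defender gets 1/10th: 60000.0
```

The 10x gap between the two thresholds is just the inverse of the bounty ratio; it holds regardless of what the actual per-scan cost turns out to be.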
This kind of misses the point though. In the real world engineers would use AI to audit/test the hell out of their contracts before they're even deployed. They could also probably deploy the contracts to testnet and try to actually exploit them running in the wild.
So, while this is all obviously a danger for existing contracts, it seems like it would still be a powerful tool for testing new contracts.
Did it? I didn’t see a claim that doing this work manually had a zero error rate.
Again, I would probably not do this. But let's not pretend that non-AI release processes prevent all issues. We're really talking about different kinds of errors, and the AI-driven ones tend to be obviously wrong. At least right now.
An interesting and somewhat inspiring bit of trivia from the video: by their own admission, the creator barely understands modern image compression techniques, but that hasn't stopped them from coming up with this impressive result.
Play silly games, win silly prizes.