Aristophanes was such a troll. I can only recommend reading some of his plays, like The Assemblywomen (where this word is from), The Wasps, and The Clouds. They're almost 2500 years old but they've aged incredibly well, both thanks to the many amazing translators who have worked on them and because the source material is solid satire that in many cases is still relevant today.
Plato argued that The Clouds (which is sharp satire of Socrates and his school) was in part what got Socrates convicted and killed. This is obviously debatable but Aristophanes certainly didn't self-censor or mince words.
Searchable snapshots in Elasticsearch can be backed by S3 and they perform very well. No need to store the data on hot nodes any longer than it takes for the index to do a rollover, and from then it's all S3.
What kind of storage do you have backing your Elasticsearch? And how have you configured sharding and phase rollover in your indices?
I work with a cluster that holds 500+ TB of logs (most are stored for a year, and some for 5 years because of regulations) in searchable snapshots backed by a locally hosted S3 solution. I can filter across most of the data in less than 10 seconds.
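To make the hot-rollover-then-S3 flow concrete, here's a rough sketch of an ILM policy along those lines. All names and thresholds here are made up for illustration (the repository name, shard sizes, and retention would depend on your own setup and regulations):

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "1d"
          }
        }
      },
      "frozen": {
        "min_age": "1d",
        "actions": {
          "searchable_snapshot": {
            "snapshot_repository": "s3-logs-repo"
          }
        }
      },
      "delete": {
        "min_age": "365d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

The idea is that indices only live on hot nodes until they roll over, then get mounted as searchable snapshots from the S3-backed repository for the rest of their retention period.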
Some especially gnarly searches may take around 60-90 seconds on the first run as the searchable snapshots are mounted and cached, but subsequent searches in the cached dataset are obviously as fast as any other search in hot data.
Obviously Elasticsearch isn't without its quirks and drawbacks, but I have yet to come across anything that performs better and is more flexible for logs — especially in terms of architectural freedom and bang-for-the-buck.
Do I read it right, that ARTEMIS required a not insignificant amount of hints in order to identify the same vulnerabilities that the human testers found? (P. 7 of the PDF.)
Given that you can't infer the error from simply looking at the signature string, I don't see how having the expected string rather than a simple "OK" or "mismatched signature" (as you get now) would make a difference.
You can save the expected string to a file, save your string to a file, and run diff on a hexdump of both. Even without hexdump, you should see the difference between "\n" and "\\n" in properly escaped output.
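A quick way to see the kind of difference being described, without even reaching for hexdump. The strings here are made-up placeholders, not from the actual API; the point is just that a real newline and a literal backslash-n look identical in unescaped output but differ at the byte level:

```python
# Two strings that can print identically in unescaped logs but differ:
# a real newline (0x0a) vs. a literal backslash + "n" (0x5c 0x6e).
a = "POST\n/api/v1/orders\n1700000000"
b = "POST\\n/api/v1/orders\\n1700000000"

# repr() shows the escaping explicitly, much like a hexdump would.
print(repr(a))  # 'POST\n/api/v1/orders\n1700000000'
print(repr(b))  # 'POST\\n/api/v1/orders\\n1700000000'

# A byte-level comparison pinpoints the first differing offset.
for i, (x, y) in enumerate(zip(a.encode(), b.encode())):
    if x != y:
        print(f"first difference at byte {i}: {x:#04x} vs {y:#04x}")
        break
```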
But the returned signed string will be an HMAC-SHA256 hash, won't it? Then there's not going to be any '\n' or '\\n's in there. Only thing you'll be able to tell is if it matches your hash or not, in which case 'OK' or 'not OK' will work just as well.
But neither does the actual server. HMAC only verifies that the message is from whoever it claims to be from and that it is intact. It won't know what you intended the body of the request to look like.
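A minimal sketch of why the MAC itself can't tell you where the inputs diverged (the key and message here are made up): a single byte of difference, like escaped vs. real newline, produces a completely unrelated digest, so all verification can ever report is match or no match.

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # placeholder key, not from the original thread


def sign(message: bytes) -> str:
    """Return the hex HMAC-SHA256 of message under SECRET."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()


# The body with a real newline vs. a literal backslash-n:
good = sign(b"field1\nfield2")
bad = sign(b"field1\\nfield2")

print(good == bad)  # False: one byte of difference changes the whole MAC


def verify(message: bytes, signature: str) -> bool:
    """Constant-time comparison; yields only a yes/no answer."""
    return hmac.compare_digest(sign(message), signature)


print(verify(b"field1\nfield2", good))   # True
print(verify(b"field1\\nfield2", good))  # False
```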
It may just be the example that's not correctly formatted, but the other (working) example does in fact escape the double quotes in the JSON. I guess, depending on how forgiving the used language is with quoting, that could also be the source of the error?
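For illustration (the payload here is invented, not from the docs in question): a JSON serializer escapes embedded double quotes automatically, while a hand-built string with unescaped quotes is simply invalid JSON, which is exactly the kind of error a forgiving language's quoting rules can mask.

```python
import json

# json.dumps escapes embedded double quotes; hand-built strings often don't.
payload = json.dumps({"note": 'say "hi"'})
print(payload)  # {"note": "say \"hi\""}

# An unescaped hand-built version fails to parse:
broken = '{"note": "say "hi""}'
try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print("invalid JSON:", e)
```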
I tried just yesterday with the latest Firefox on Fedora, and screen sharing didn't work out of the box. Only sharing via a virtual display worked, not sharing the current screen or a tab.
I have one that's apparently showing up across Linux, Mac, Chrome, Firefox... when you screen share with multiple monitors (or workspaces), the shared screen stops updating.
I've always seen AI as Brandolini's Law as a Service. I'm spending an unreasonable amount of time debunking false claims and crap research from colleagues who aren't experts in my field but suddenly feel the need to take all the good ideas and solutions that ChatGPT and friends gave them straight to management. Then I suddenly have 2-4 people demanding to know why X, Y and Z are bad ideas and won't make our team more efficient or our security better.
On the other hand, here's another post by Stenberg where he announced that he has landed 22 bugfixes for issues found by AI wielded by competent hands.
Sure, in competent hands. Problem is that most people don't seem to realize that in order to use AI for something, you have to be pretty good at that thing already.
> I'm spending an unreasonable amount of time debunking false claims and crap research from colleagues who aren't experts in my field
Same. It's become quite common now to have someone post "I asked ChatGPT and it said this" along with a completely nonsense solution. Like, not even something that's partially correct. Half of the time it's just a flat out lie.
Some of them will even try to implement their nonsense solution, and then I get a ticket to fix the problem they created.
I'm sure that person then goes on to tell their friends how ChatGPT gives them superpowers and has made them an expert overnight.