The issue may be that your filters act as ORs whereas someone might interpret them as ANDs. If I filter for Social+Kids and Culinary, the CBD/space cake ones come up.
Love the site and the idea, but struggling to understand the criteria needed for a +EV bet. Would one site need to have a positive moneyline bet on team A, then another site have a positive moneyline bet on team B... assuming team A & B are playing each other?
The basic idea is to have a system call that lets library writers get the bounds of a pointer. That way they can ensure they're not writing more data to a location than it can hold.
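A minimal sketch of how a library writer might use such a call, leaning on glibc's malloc_usable_size() as the closest existing analog (it only answers for heap pointers; the hypothetical syscall would presumably cover more, and bounded_write is just an illustrative name):

```c
/* Sketch only: the proposed syscall doesn't exist, so this uses glibc's
 * malloc_usable_size(), which is only valid for pointers returned by
 * malloc(). A real bounds syscall would presumably cover other mappings. */
#include <malloc.h>
#include <string.h>

int bounded_write(char *dst, const char *src, size_t n)
{
    size_t avail = malloc_usable_size(dst);  /* usable bytes at dst */
    if (n > avail)
        return -1;                           /* would overflow: refuse */
    memcpy(dst, src, n);
    return 0;
}
```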
Another idea I've implemented in userspace is an allocator that allocates whole pages (via mmap) and then sets protections on the pages before and after. The returned pointer is positioned so the end of the allocation lands right at the start of the next (protected) page. If a write goes beyond the end of the allocation, it bumps into the protected page and causes a fault. You can then handle that fault and detect the overflow.
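Something like the following, if it helps picture it; guarded_alloc is just an illustrative name, and freeing/error handling are left out:

```c
/* Rough sketch of the guard-page allocator described above.
 * Layout: [guard page][data pages][guard page], with the returned pointer
 * placed so the END of the allocation touches the trailing guard page. */
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

void *guarded_alloc(size_t size)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t data = (size + page - 1) & ~(page - 1);   /* round up to pages */

    uint8_t *base = mmap(NULL, data + 2 * page, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;

    mprotect(base, page, PROT_NONE);                 /* guard before */
    mprotect(base + page + data, page, PROT_NONE);   /* guard after  */

    /* One byte past the end now lands on PROT_NONE memory -> SIGSEGV,
     * which a handler (or the crash itself) turns into a caught overflow. */
    return base + page + (data - size);
}
```

One caveat: because the end is aligned to the page boundary, the returned pointer generally loses malloc-style alignment unless you round the requested size up first.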
An even stricter version is to also protect the page the allocation itself lives on. Then _every_ write faults, and the handler can check that it's not out of bounds.
All of these methods are slow as hell, but they detect memory errors that would otherwise go unnoticed. While slow, they're still faster than Valgrind (not badmouthing it, it's an amazing tool!). So the recommendation is to use them in testing and CI/CD pipelines to detect issues, then switch to a real allocator for production.
Only if the stride is small enough to not skip over the guard page, surely? Unless you're setting the entire address space to protected, for any given base pointer BP there's a resulting address BP[offset] that lands on an unprotected page.
It might be interesting to expose an instruction that restricts the offset of a memory operation to e.g. 12 bits (masking off higher bits, or using a small immediate) to provide a guarantee that a BP accessed through such an instruction cannot skip a guard page; but that would of course only apply to small arrays, and the compiler would have to carry that metadata through the compilation.
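A software approximation of that instruction, purely for illustration (the 4 KiB window and the function name are assumptions of the sketch):

```c
/* Illustrative only: clamp the offset to a 12-bit (4 KiB) window so an
 * access through this base pointer can never reach more than 4 KiB past
 * the base, and thus can't skip over an immediately adjacent guard page.
 * A dedicated instruction would do the masking for free; either way it
 * only makes sense for arrays that fit in the window. */
#include <stddef.h>
#include <stdint.h>

static inline uint8_t load_within_window(const uint8_t *bp, size_t offset)
{
    return bp[offset & 0xFFF];   /* high bits of the offset are dropped */
}
```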
I don't think this is true... it's more nuanced than that. Developers are much more skeptical of a "benefits" pitch, and care more about a "features" (and limitations, as mentioned above) pitch. However, if you're just pitching a developer, you're probably only pitching the user, not the buyer. The buyer VERY much cares about the benefits, otherwise they're not buying. The buyer won't be using a single feature, so that type of marketing is lost on them.
This is the challenge: the right message (features or benefits) to the right person (user or buyer) AND at the right time!
Centralizing logs can be a HUGE help... You just have to use a tool that doesn't cost what Splunk charges. At the risk of over-selling, log-store.com caps charges at $4k/month! That's still a lot of money, but way less than what Splunk and other SaaS providers charge!
I just think it's pretty foolish. All the resources you own are out at the edge of your infrastructure. You might as well just drop application logs right there and leave them there, because pushing your search predicate out to all the relevant machines exploits all that CPU and IO bandwidth you already paid for.
>If instead of detecting the smoke (e2e) you try to monitor all the data like pressure in gas pipes, all the pipes fittings, how many lighters are in the building and where, etc, you will wake up every night 10 times for nothing
Without this you'll always be awoken by smoke, and at that point it's too late... there's already a fire. However, if you can monitor other things (gas pipes, lighters, etc), you might be able to remediate a problem before it starts smoking or burning.
There's no one-size-fits-all. Start with some obvious failure (e2e tests/checks; aka smoke), and when you root-cause, add in additional checks to hopefully catch things before the smoke. However, this also requires you update (or remove) these additional checks as your software and infra change... that's what most people forget, and then alert fatigue sets in, and everything becomes a non-alert.
> this also requires you update (or remove) these additional checks as your software and infra change... that's what most people forget, and then alert fatigue sets in, and everything becomes a non-alert
This is what I forgot to mention: monitoring is a process, not a tool you put in place and forget about. And I agree that this is the main reason monitoring goes to shit. But if you think about it a little deeper, you arrive at the requirement that monitoring be done by somebody who understands the business and the infra/software stack and has been at that company at least a few years. In reality, monitoring is mostly an afterthought cost center and employees are rotated far too often. It's not a glamorous position either; I don't think I've once met somebody who wants to work there.
> when you root-cause, add in additional checks to hopefully catch things before the smoke.
No, what you should do is update the software to handle or avoid the problem gracefully. Keep the smoke detectors, but don’t keep adding tiny things to monitor and keep in sync with your ever-changing software.
You just need a tool that's adaptable and will let you parse-as-you-search. Without a schema, your logs can change whenever, and you can still easily derive value from them.
If you don't know that your service is down, then you don't know where and what to look for. For monitoring, you need to be looking for a specific parameter or string; if that changes, it's very difficult to generate an automated alert.
Fair enough... if you're monitoring "response_time", and a developer changes that field to "time_taken_to_send_the_bits" you'll probably have a tough time monitoring the service. However, if the dev communicates that the value has changed, with the right tool it isn't hard to have something that covers both fields.
100%, but in the real world it's a bit hard to coordinate.
Ideally you'd have a schema that's agreed on company-wide. Before an app is deployed, it has to pass the schema test. At least that would cover most of the basic checks.
But for most small places, logs are perfectly fine for monitoring, as you imply.
I'd argue it _is_ solved... store your logs in S3. At ~$0.02/GB-month you can store a _lot_ of logs, roughly a terabyte, for about $20 a month. The problem is that _most_ solutions (Honeycomb included) are SaaS-based, and so they have to charge a margin on top of whatever the storage provider is charging them.
You just need a tool (like log-store.com) that can search through logs when they're stored in S3!
At the small volume of data where you could "just put stuff in S3 and search it later", the cost of tracing with a SaaS vendor is also very small. Far smaller than the developer time needed to set up something else.
Depends on how much you value not being vendor locked in. I'll spend a week of dev time in order to save a month or two in migration pain later down the line when my vendor jacks up cost. See Datadog as an example.
You can have your cake and eat it too with OpenTelemetry (see my original post). I don't see how that has to do with the cost of tracing being a factor in inhibiting its adoption though? I agree that vendor neutral instrumentation is worth it.
> ...the cost of tracing with a SaaS vendor is also very small. Far smaller than the developer time needed to set up something else.
I'm mainly responding to this point regarding developer time outweighing the cost of just going with SaaS vendor that provides that capability. I don't think we disagree with each other though
>At the small volume of data where you could "just put stuff in S3 and search it later"
I was arguing exactly the opposite... leverage S3's near-infinite storage so you can store whatever you'd like. Searching through it _can_ be fast with the right tool.
I spent a lot of years traveling around the world fixing problems for major companies... logs aren't enough. Not only do you have the issue of auditors who are scared you might put things in logs that shouldn't be there, but there are security/access problems and the very basic problem of "did we log the thing we now need?" Not to say that logs can't be helpful, but they generally aren't enough on their own.