Thousands of large Facebook groups appear to have been taken down, with many admins reporting a wave of rapid content reports from a single account targeting the group's first post. The group's name often changes to "Group title pending" just before the group is removed.
I agree with you. Knowing the exact column names can speed up an attack and, in some cases, make it more feasible.
Why don’t they just request disclosure of what’s actually stored and allow renaming of the columns? It seems odd that knowing the exact column names would be necessary if the goal is simply to understand what data is being stored and its intended purpose.
LXD seems like an unusual choice when Kubernetes already has cAdvisor and strong monitoring integrations. Avoiding extra agents is nice, but does this really scale better than existing solutions like Prometheus and OpenTelemetry?
What’s the advantage here beyond keeping things lightweight? Feels like this could hit limitations as complexity grows.
I chose LXD for several reasons. There is much less operational overhead when it comes to managing an LXD cluster:
- It's more vertically integrated; for example, cross-node networking is built in, you get it out of the box.
- It supports stateful workloads out of the box with no fuss. Running DBs, taking snapshots, enabling deletion protection, etc. is very simple with LXD (see the sketch after this list).
- LXD supports running Docker inside containers, which means in the future we will be enabling Docker containers. Each LXD container can be treated as a 'pod' that runs multiple Docker containers inside, but it's still just a simple system container you can treat like a VM.
- Working with GPUs is very simple and straightforward. This is going to be key as we start to enable AI workloads.
- LXD doesn't require a master node, which means every instance I provision can run my workload. It also supports redundancy as I grow the cluster because it handles distribution through Raft, so in terms of overhead it's much lower than K8s.
- Overall, LXD feels like a batteries-included container hypervisor.
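To give a flavour of how little fuss the stateful side is, here's a rough sketch using the pylxd client. The instance name is made up and the exact calls are from memory, not from our codebase:

```python
from pylxd import Client

# Connects to the local LXD daemon over its unix socket by default.
client = Client()

# "postgres-01" is a hypothetical instance name; newer pylxd also exposes
# these under client.instances.
instance = client.containers.get("postgres-01")

# Take a snapshot of the running database instance.
instance.snapshots.create("pre-upgrade", stateful=False, wait=True)

# Turn on deletion protection via LXD's instance config.
instance.config["security.protection.delete"] = "true"
instance.save(wait=True)
```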
This solution doesn't replace things like Prometheus. In fact, LXD has native support for Prometheus; we could also extend the solution to push data to a Prometheus instance or expose a /metrics endpoint for Prometheus to scrape.
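For anyone curious, LXD's built-in metrics are just a Prometheus-format endpoint on its HTTPS API, so pulling them is roughly something like this (host, port and certificate paths are placeholders, and the client cert has to be trusted by LXD as a metrics certificate):

```python
import requests

# LXD serves Prometheus exposition text at GET /1.0/metrics on its API port.
# "lxd-host" and the certificate paths below are placeholders.
resp = requests.get(
    "https://lxd-host:8443/1.0/metrics",
    cert=("metrics.crt", "metrics.key"),  # client cert trusted as type "metrics"
    verify=False,                         # or point verify= at the server cert
)
resp.raise_for_status()

# Plain Prometheus text, e.g. lxd_cpu_seconds_total{...} lines.
for line in resp.text.splitlines():
    if line.startswith("lxd_cpu_seconds_total"):
        print(line)
```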
For our MVP we chose Elastic, but it will be easy to extend to support Prometheus as well. We're shipping data in the OpenTelemetry format: OpenTelemetry is a specification, and when we ship data we try to keep it as close to what OpenTelemetry does as possible. Elastic's observability stack supports this out of the box.
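This isn't our actual shipping code, but as a minimal Python sketch of what "shipping in OpenTelemetry format" means, here's a metric being pushed over OTLP/HTTP with the OpenTelemetry SDK; the endpoint, token, metric name and attributes are placeholders:

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter

# OTLP/HTTP exporter; endpoint and token are placeholders for whatever OTLP
# intake you point it at (Elastic can ingest OTLP data).
exporter = OTLPMetricExporter(
    endpoint="https://apm.example.com:8200/v1/metrics",
    headers={"Authorization": "Bearer <secret-token>"},
)
reader = PeriodicExportingMetricReader(exporter, export_interval_millis=10_000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("lxd-metrics-shipper")
cpu_seconds = meter.create_counter(
    "lxd.instance.cpu.seconds",   # made-up metric name for the example
    unit="s",
    description="CPU time consumed by an LXD instance",
)
cpu_seconds.add(1.5, {"instance": "postgres-01"})
```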
All this solution does is query the underlying infrastructure metrics and ship them to a destination. The only scaling it needs to handle is shipping the data and handling back-pressure in case the destination can't keep up with the load. Broadway does this out of the box.
This was not an easy decision. I believe both are great products and you wouldn't go wrong either way. I was on the fence for a long time before making the decision.
I think it's a combination of a lot of things. I've been a long-time Elasticsearch user; I think I've used it since version 0.17.
Elastic just seems to have a lot more built in and seems to be the leader on this front when it comes to innovation. They did start the whole project and are leading when it comes to new features being built in. Their business and survival depend on building the best search product in whatever circumstances they find themselves in.
OpenSearch is funded by Amazon, and it's not their sole focus.
Things just feel more polished and better integrated than in OpenSearch: the Kibana UI, the Observability stack, and their AI search features, some of which are unique to Elasticsearch, and I'm sure there are more things yet to be uncovered.
Taking a long-term view, Elastic's offering just seems like a better fit for our product requirements.
That said, whatever we've implemented in this blog post would also work with OpenSearch. In the future we will let customers bring their own time-series DB, and that would work with OpenSearch.
Same for me. I wonder if Claude is better at some languages than others, and o models are better at those weaker languages. There are some devs I know who insist Claude is garbage for coding and o3-* or o4-* are tier 1.
Thankfully, AWS provides a docker.io mirror for those who can't wait:
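If I remember right, the mirror in question is ECR Public's copy of the Docker official images under `public.ecr.aws/docker/library/`, so a `FROM` line can point at e.g. `public.ecr.aws/docker/library/node:20` instead of `docker.io/library/node:20` (treat the exact path as my recollection rather than gospel).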
In the error logs, the issue was mostly related to the authentication endpoint: https://auth.docker.io → "No server is available to handle this request".
After switching to the AWS mirror, everything built successfully without any issues.