Hacker News

I think I get it. It's underpinned by the assumption that the application doesn't need robust logic for connecting to a local proxy the way it would for connecting to a remote one. In that sense it's going back to the days when instances had a local HAProxy running. That assumption didn't really pan out and we all decided service LBs were better, but ok, sure. We can have both.

I still think describing the concept more plainly would clear up a lot of the confusion.



I think this is right, yes. We're not worried about the connectivity between the service and the local proxy because they're (always) cotenants of a container and using localhost to communicate.
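To make the shape of that concrete, here is a minimal sketch of the sidecar pattern: the application only ever talks to a proxy on localhost, and the proxy owns the robust networking logic (retries, timeouts, TLS) toward the actual remote service. Everything here is illustrative — the handler just answers locally instead of forwarding, and the path `/orders` is made up.

```python
# Sketch of the sidecar pattern: the app talks only to localhost; the
# co-tenant proxy would own retries, mTLS, metrics, etc. toward the remote.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class SidecarHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real sidecar would forward this request to the remote service.
        # Here we just answer locally to show the shape of the interaction.
        body = b"forwarded-by-sidecar"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# The "sidecar" listens on an ephemeral localhost port.
server = HTTPServer(("127.0.0.1", 0), SidecarHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The application code: no remote endpoints, no retry logic -- just localhost.
port = server.server_address[1]
body_text = urlopen(f"http://127.0.0.1:{port}/orders").read().decode()
print(body_text)
server.shutdown()
```

The point is that the app's client code can stay this naive precisely because the hop to the proxy is a loopback connection, not a network you have to defend against.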

A more general way to think about service meshes is that the network layer we code to right now is actually really primitive; its service model was fixed in the 1980s, and its programming interface hasn't much evolved from the early 1990s. We'd be happier if we could level up the whole network, so that it had QoS controls, a really expressive security model that didn't rely on magic-number ports and address ranges, and observation capabilities that communicated application-layer details and didn't just try to approximate them the way flow logs do. You can get all that stuff, internally at least, by putting all your services on the same service mesh.

Another thing to look at is Slack's Nebula, which was just released last week:

https://slack.engineering/introducing-nebula-the-open-source...

Nebula is a service mesh that runs at the IP layer (where Istio and Linkerd ride on top of HTTPS proxies, Nebula rides on top of a somewhat WireGuard-ish VPN). Slack has been using it internally for two years now. It solves the same problems Linkerd does, but with a radically different implementation. You can get your laptop connected to a Nebula mesh in ways that would be clunky to do with a Linkerd mesh.


I would say it's even a bit more pessimistic than that: it's the assumption that the application _can't be relied on_ to provide robust logic around connecting to a remote service. You can address that problem in a library or in an intermediary service of some sort, but in an organization with a heterogeneous collection of languages, versions, and stacks, the library solution becomes expensive.

I'm not sure what you mean by "instances had a local HAProxy running", but if you're thinking of a bunch of reverse proxies handling incoming requests, be aware that service meshes handle both inbound _and_ outbound traffic to and from your service. For example, you might use a reverse proxy in front of your instance to terminate TLS, but you can't implement something like two-way (mutual) TLS authentication between your services unless you put something on the client side as well.
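The client-side requirement is visible even in a bare TLS setup. A sketch using Python's `ssl` module (the certificate paths are hypothetical and commented out, since this only shows the configuration shape):

```python
import ssl

# Server side: requiring a client certificate is what makes TLS "two-way".
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients with no cert
# server_ctx.load_cert_chain("server.crt", "server.key")   # hypothetical paths
# server_ctx.load_verify_locations("service-ca.crt")

# Client side: it's not enough to verify the server; the client must also
# present its own certificate. This is why a reverse proxy in front of the
# server alone can't give you mutual TLS -- something (a library, or a
# sidecar proxy) has to act on the client's outbound path too.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# client_ctx.load_verify_locations("service-ca.crt")       # hypothetical paths
# client_ctx.load_cert_chain("client.crt", "client.key")

print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

A mesh sidecar moves both halves of this configuration out of the application, so every service gets mTLS without each language stack reimplementing it.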


I'm talking about using HAProxy as a local reverse proxy instead of a remote elastic load balancer. So I guess we're back to it.
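For anyone who didn't live through that era, the pattern looked roughly like this hypothetical `haproxy.cfg` running on every instance — the app connects to 127.0.0.1 and the local HAProxy does the client-side load balancing that an ELB would otherwise do (backend names and addresses here are made up):

```
# Sketch of the "local load balancer" pattern: the app dials 127.0.0.1:8080
# and HAProxy spreads the traffic across the remote backends.
listen my-service
    mode http
    bind 127.0.0.1:8080
    balance roundrobin
    server backend1 10.0.0.11:8080 check
    server backend2 10.0.0.12:8080 check
```

A mesh sidecar occupies the same position on the host; the difference is that the mesh's control plane manages this configuration for you and layers mTLS and telemetry on top.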



