a) Shared data source; each service writes its pid/state to a file in a shared data store. It could be a single directory in a single-server setup, or a dedicated NFS/SMB server for hundreds or thousands of nodes.
b) Pub/Sub service; Kafka et al., in which services simply subscribe and publish to a central channel to see everyone else.
c) Determinism; you use predictable naming/addressing and simply infer. This is tricky to scale, but not impossible.
d) Any number of standalone discovery services a la Zookeeper or Eureka. They all end up being effectively the same pub/sub model as (b), just prepackaged.
e) You don't discover shit; you have a single load-balanced endpoint that can scale instances out behind the balancer as needed, with zero knowledge required by the rest of the system.
Pick one to suit your needs. Service discovery is not that hard and has been way over-engineered.
As I was reading this, I thought to myself "How does this scale" and then I re-read the parent comment that said "If you don't have high scalability requirements, virtually anything will work."
The fact of the matter is that Kubernetes solves certain problems well but also presents other problems/challenges. For some organizations, the problems K8s solves are bigger than the problems/challenges it creates. It's all about trade-offs.
Some people do want to hop on the next big thing in order to keep their imposter syndrome in check. Others know a certain technology and stick with it.
This is the comment you see from people on EKS or GKE. Many companies have compelling reasons to keep a large part, or all, of their services in-house. Nobody who actually has to install and administer K8s is on here commenting about how easy it is to run, maintain, and upgrade on their bare metal hosts. Troubleshoot, I almost forgot troubleshoot! All of those moving pieces, and something is hosed at 3am. This will be fun.
It will be great if that changes someday, and there's certainly been progress, but for places where they'd need to run it themselves, K8s is a tough proposition.