We will open source highlander eventually, but it’s not quite there yet.
We use a VIP that the NUCs share, i.e., one of the three always holds the VIP, and if it dies another NUC grabs it. It's a poor man's load balancer in that sense, since we only have the NUC hardware onsite.
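A common way to implement a shared VIP like this is VRRP via keepalived. This is a minimal sketch under that assumption (the article doesn't say which mechanism is used, and the interface name and addresses here are hypothetical):

```conf
# /etc/keepalived/keepalived.conf on each NUC (hypothetical addresses)
vrrp_instance restaurant_vip {
    state BACKUP           # all nodes start as BACKUP; priority decides the election
    interface eth0
    virtual_router_id 51
    priority 100           # give each NUC a different priority, e.g. 100 / 90 / 80
    advert_int 1           # advertise every second; peers take over if these stop
    virtual_ipaddress {
        10.0.0.50/24       # the VIP the NUCs share
    }
}
```

When the node holding the VIP stops sending VRRP advertisements, the next-highest-priority NUC claims the address, so clients keep hitting the same IP.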
I happen to agree with the views they have stated in the past. Marriage (defined as being between one man and one woman) is one of the most foundational institutions in the world and is worth defending.
My marriage is just as valid as anyone else’s. And it’s incredibly offensive to suggest that my marriage is somehow an attack on marriage itself. Your view is bigotry in an old, familiar form, and can only be defended based on religion, which has no role in limiting civil rights within our secular society.
Hey this is Brian and I wrote the article. Great question.
1) MQTT -- we have a lot of cases where we want to share data between physical or "software" devices in-restaurant, without internet connections. MQTT is our primary channel for sharing messages between things, and for collecting data in general to be exfiltrated to the cloud.
2) "Brain" apps -- these are "smart" applications that collect data from MQTT topics and make decisions about what should be done for a given restaurant process. Today, the reality is we do not automatically cook anything. We do, however, have some restaurants with more intelligent screens that help them know what to cook at any given moment, based on forecasting models that use the data we collect from many different things. These generally run at the edge to preserve their function in cases of high latency or bad WAN connections.
3) In the past we had a POS Server in the restaurant. It was a single point of failure for things like drive thru order taking with iPads and mobile orders. The move to K8s lets us run a microservice that can intake mobile orders on more resilient infrastructure. To be clear, we have not made this shift just yet, but it's something we are working on that will ultimately be transparent to customers while making their ordering experience more reliable. We have many cases like this.
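To make the "brain" pattern in (2) concrete, here is a toy sketch -- the topic name, message shape, and moving-average "forecast" are all made up for illustration, not Chick-fil-A's actual logic. A process consumes MQTT-style order messages and suggests how much to cook:

```python
import json
from collections import deque

class CookAdvisor:
    """Toy 'brain' app: watches order events and suggests how many batches to cook.
    In production, on_message would be wired to an MQTT client callback."""

    def __init__(self, window=10, batch_size=4):
        self.recent_orders = deque(maxlen=window)  # rolling window of item counts
        self.batch_size = batch_size

    def on_message(self, topic, payload):
        # Hypothetical topic name; payload is a JSON order event.
        if topic == "restaurant/orders":
            order = json.loads(payload)
            self.recent_orders.append(order["sandwiches"])

    def suggestion(self):
        # Naive forecast: cook enough batches to cover recent average demand.
        if not self.recent_orders:
            return 0
        avg = sum(self.recent_orders) / len(self.recent_orders)
        return -(-round(avg) // self.batch_size)  # ceiling division

advisor = CookAdvisor()
for n in (3, 5, 4):
    advisor.on_message("restaurant/orders", json.dumps({"sandwiches": n}))
print(advisor.suggestion())  # average of 4 sandwiches -> 1 batch of 4
```

Because the loop over recent data runs locally, the screen keeps working even when the WAN link is down or slow, which is the point of running it at the edge.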
Overall, our workloads aren't "heavy," which is why we deployed an array of three moderately sized commodity compute devices. We wanted a smart, sensible ROI on the hardware. We do have enough business-important apps that need a low-latency, high-availability environment to push us out of cloud-only (internet-dependent) and to the Edge. In the future, maybe this can change and we can use cloud only. Edge seems to be an industry trend, though, with many workloads moving towards their consumers (a lot of gaming, but a lot of retail as well).
Great question and I hope this helped -- if not just let me know.
Good answer, seems we finally got to the meat of the product (no pun intended).
So the current workload doesn't do any of the fancy stuff -- it's just typical POS work with a 'smart' display for orders, driven by a process monitoring a few MQTT sources.
You've just put in enough beef to allow for future expansion.
I can honestly say that answer provides more information than the whole original article.
IoT device control, such as timers for food and cameras tracking food (to name a few). These also output streams of data that are interesting to us, so we want to exfiltrate them up to the cloud.
I’m not a fan of hype... k8s is one of the few hyped technologies that has delivered on its promises.
Credit card transactions, mobile orders, timer synchronization, order receiving (tablets etc.), IoT devices (cameras, cooking devices) and other things planned for the future.
- Credit-card transactions don't have low-latency requirements; it's 'nice' if they're quick, same as everything else. The bottleneck here depends entirely on your ISP, though. No 'edge-computing' here.
- Mobile orders. These will necessarily go through a central server. This is traditional client-server stuff. Again, no 'edge-compute'.
- Order receiving. Simple local data entry. Again, no 'edge-compute'.
- IoT devices. Hopefully these have local control systems without the server being in the loop. Control systems are not 'edge-compute' either.
'Edge-compute' is generation of knowledge at the edge rather than shipping raw data to a central server. This reduces required bandwidth.
What in your system takes a high rate of data and generates a low rate of data for transfer to a server for further use? I see no analysis of raw data into a more processed form; this is simply traditional data entry and CRUD activities.
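For contrast, this is what edge-compute in that sense looks like: a sketch (hypothetical, not from the article) that collapses a high-rate sensor stream into a low-rate summary before anything leaves the restaurant:

```python
import statistics

def summarize_readings(readings, chunk=60):
    """Collapse a high-rate stream (e.g. one fryer-temperature reading per second)
    into one summary record per chunk -- the low-rate data worth uploading."""
    summaries = []
    for i in range(0, len(readings), chunk):
        window = readings[i:i + chunk]
        summaries.append({
            "min": min(window),
            "max": max(window),
            "mean": round(statistics.mean(window), 2),
        })
    return summaries

# 120 per-second readings -> 2 summary records shipped to the cloud
raw = [350 + (i % 5) for i in range(120)]
print(len(summarize_readings(raw)))  # 2
```

The bandwidth saving is the point: 60 raw samples become one record, and the raw data never needs to cross the WAN link.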
Well, it’s a six-person team, so most of our trade-offs were for expedience, not the world’s greatest architecture. We cheated and synced data using an HA MongoDB setup amongst the cluster. RKE for K8s clustering was a lifesaver (nothing easier on bare metal at this point, IMO), although RKE can be brittle at times.
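For anyone curious what the RKE side looks like, the cluster is described by a single cluster.yml that `rke up` consumes. A minimal three-node sketch (the addresses, user, and key path are hypothetical, not their actual config):

```yaml
# cluster.yml -- rke up reads this to bootstrap the cluster over SSH
nodes:
  - address: 10.0.0.11        # NUC 1 (hypothetical address)
    user: rke
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa
  - address: 10.0.0.12        # NUC 2
    user: rke
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa
  - address: 10.0.0.13        # NUC 3
    user: rke
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa
```

Running all three roles on every node keeps the cluster up when any single NUC fails, at the cost of some overhead on each box.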