Hacker News | coffeesn0b's comments

Is this the apocalypse?


Woot woot! (Biased author here, sorry... can’t help it)


We will open source highlander eventually, but it’s not quite there yet.

We use a VIP that the NUCs share... i.e., one of the three will always hold the VIP, and if it dies another NUC grabs it. This is a poor man’s load balancer in that sense, because we only have the NUC hardware onsite:

https://github.com/kubernetes/contrib/tree/master/keepalived...

We are also looking at metallb
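
For reference, a VIP failover like the one described is typically driven by a small keepalived.conf; a minimal sketch (the interface name, router ID, priority, and address here are placeholders, not their real config):

```
vrrp_instance restaurant_vip {
    state BACKUP            # all three NUCs start as BACKUP; highest priority wins
    interface eth0          # NIC that should carry the VIP (assumed name)
    virtual_router_id 51    # must match on all three NUCs
    priority 100            # give each NUC a different priority
    advert_int 1            # heartbeat interval in seconds
    virtual_ipaddress {
        10.0.0.100/24       # the shared VIP (assumed address)
    }
}
```

When the NUC holding the VIP stops sending VRRP advertisements, the next-highest-priority node claims the address.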


You’re correct, training is centralized but the devices that use it to function are operated at the edge.


Of course, that’s how ML works: central training of a model, with use of that model at the edges.

So how often do you distribute a new model for controlling fries?


They’re really reasonable about it... you do what you need to do.

That being said, it’s really REALLY nice to mute pagerduty for a whole day every week.


[flagged]


I've always found CFA employees to be kind, courteous, and respectful.


That's irrelevant. Their work generates profits which are used to actively undermine my civil rights.


I happen to agree with the views they have stated in the past. Marriage (defined as being between one man and one woman) is one of the most foundational institutions in the world and is worth defending.


My marriage is just as valid as anyone else’s. And it’s incredibly offensive to suggest that my marriage is somehow an attack on marriage itself. Your view is bigotry in an old, familiar form, and can only be defended based on religion, which has no role in limiting civil rights within our secular society.


We have frequent internet outages and still need to run compute loads at the edge.


> compute loads

What are you computing? This is what everyone is wondering.


Hey this is Brian and I wrote the article. Great question.

1) MQTT -- we have a lot of cases where we want to share data between physical or "software" devices in-restaurant, without internet connections. MQTT is our primary channel for sharing messages between things, and for collecting data in general to be exfiltrated to the cloud.

2) "Brain" apps -- these are "smart" applications that collect data from MQTT topics and make decisions about what should be done for a given restaurant process. Today, the reality is we do not automatically cook anything. We do, however, have some restaurants that have more intelligent screens to help them know what to cook at any given moment based off of forecasting models that use the data we collect from many different things. These generally run at the edge to preserve their function in cases where there is high latency or bad WAN connections.

3) In the past we had a POS Server in the restaurant. It was a single point of failure for things like drive thru order taking with iPads and mobile orders. The move to K8s lets us run a microservice that can intake mobile orders on a more resilient infrastructure. To be clear, we have not made this shift just yet, but it's something we are working on that will ultimately be transparent to customers but that will make their ordering experience more reliable. We have many cases like this.

Overall, our workloads aren't "heavy" which is why we did an array of three moderately sized commodity compute devices. We wanted a smart, sensible ROI on the hardware. We do have enough business-important apps that need to run in a low-latency high-availability environment to push us out of just cloud (internet dependent) and to the Edge. In the future, maybe this can change and we can use cloud only. It seems Edge is an industry trend though, with many workloads moving towards their consumers (a lot of gaming, but a lot of retail as well).
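
As an aside for readers unfamiliar with MQTT's pub/sub routing: devices publish to hierarchical topics, and apps like the "brain" services above subscribe with wildcards ('+' matches one level, '#' matches everything below). A minimal Python sketch of that matching rule (the topic names are invented for illustration, not Chick-fil-A's):

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Return True if an MQTT topic matches a subscription filter.
    '+' matches exactly one level; '#' matches all remaining levels."""
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True                      # '#' swallows the rest of the topic
        if i >= len(t_parts):
            return False                     # filter is deeper than the topic
        if f != "+" and f != t_parts[i]:
            return False                     # literal level mismatch
    return len(f_parts) == len(t_parts)

# e.g. a brain app subscribed to every fryer's temperature feed:
print(topic_matches("kitchen/fryer/+/temp", "kitchen/fryer/3/temp"))  # True
print(topic_matches("kitchen/#", "kitchen/fryer/3/temp"))             # True
print(topic_matches("kitchen/fryer/+/temp", "kitchen/oven/3/temp"))   # False
```

In practice a client library (e.g. paho-mqtt) does this matching for you; the point is that one broker topic tree lets many in-restaurant devices fan data out to many consumers without any internet dependency.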

Great question and I hope this helped -- if not just let me know.


Good answer, seems we finally got to the meat of the product (no pun intended).

So the current workload doesn't do any of the fancy stuff but is just typical POS stuff with a 'smart' display for orders driven by a process monitoring a few MQTT sources.

You've just put in enough beef to allow for future expansion.

I can honestly say that answer provides more information than the whole original article.

Thank you.


There is no real edge-compute loads here, just traditional CRUD data entry type systems.

Riding the hype-wave methinks....


IoT device control, such as timers for food, and cameras tracking food (to name a few). These also output streams of data that is interesting to us, such that we want to exfiltrate them up to the cloud.

I’m not a fan of hype... k8s is one of the few hyped technologies that has delivered on its promises.


It’s more about network outages than latency specifically... although in several remote locations latency is permanently high due to slow providers.


Credit card transactions, mobile orders, timer synchronization, order receiving (tablets etc), iot devices (cameras, cooking devices) and other things planned for the future.


Ok, let's go thru that list:

- Credit-card transactions don't have low-latency requirements; it's 'nice' if they're quick, same as everything else. The bottleneck here is entirely dependent on your ISP tho. No 'edge-computing' here.

- Mobile orders. This will necessarily go thru to a central server. This is traditional client-server stuff. Again, no 'edge-compute'.

- Order receiving. Simple local data entry. Again, no 'edge-compute'.

- IoT devices. Hopefully these have local control systems without the server being in the loop. Control systems are not 'edge-compute' either.

'Edge-compute' is generation of knowledge at the edge rather than shipping raw data to a central server. This reduces required bandwidth.

What in your system takes a high rate of data and generates a low rate of data for transfer to a server for further use? I see no analysis of raw data into a more processed form; this is simply traditional data entry and CRUD activities.
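
To make the distinction concrete, the kind of reduction being asked about is e.g. collapsing a high-rate sensor stream into periodic summary records before upload. A hedged Python sketch (the fryer-temperature framing and window contents are invented for illustration):

```python
from statistics import mean

def summarize_window(readings: list[float]) -> dict:
    """Collapse a window of raw temperature samples into one summary
    record, the sort of low-rate payload worth shipping to the cloud."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

# 60 raw samples in, one small record out:
window = [325.0 + (i % 5) * 0.5 for i in range(60)]
summary = summarize_window(window)
print(summary)  # {'count': 60, 'min': 325.0, 'max': 327.0, 'mean': 326.0}
```

Whether the restaurant workloads actually do this kind of edge-side reduction is exactly the question the parent comment raises.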


But again, latency of what?

Requirement for low-latency implies a use of data that is time-sensitive.

What time-sensitive data is there in a single chicken restaurant??


If you decide to play around with it, we use RKE to build and manage the k8s environments locally.


Well, it’s a six-man team, so most of our trade-offs were for expedience, not the world’s greatest architecture. We cheated and sync’d data using a HA MongoDB setup amongst the cluster. RKE for K8s clustering was a life saver (nothing easier at this point on bare metal IMO), although RKE can be brittle at times.
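
For anyone curious, RKE builds a bare-metal cluster from a cluster.yml listing the nodes, which `rke up` then consumes. A minimal three-NUC sketch (the addresses, user, and key path are placeholders, not their real config):

```yaml
nodes:
  - address: 10.0.0.11          # NUC 1 (placeholder address)
    user: rke
    role: [controlplane, etcd, worker]
  - address: 10.0.0.12          # NUC 2
    user: rke
    role: [controlplane, etcd, worker]
  - address: 10.0.0.13          # NUC 3
    user: rke
    role: [controlplane, etcd, worker]
ssh_key_path: ~/.ssh/id_rsa     # placeholder key path
```

Running all three roles on every node gives a small cluster that survives the loss of any single NUC.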


Ultimately “the edge” will just do it for him :-P


https://www.youtube.com/watch?v=pboejsWb484

These got installed, but adoption was low due to cost; they were tied into the POS.

