- All work is meaningless, so I started seeking the ones that pay the most (banks, hedges, FAANGs).
- By making a lot of money early, I plan to soft-retire in five years (at 35) - I will take a loooong vacation, then work as a freelancer / consultant as I please.
- Without the fear of starving or being old without cash, I will pursue stuff I like.
PM me in five years and I will tell you whether I succeeded.
Thanks. We're going to start small with just nomad, then vault, and as our needs grow we will probably adopt consul (we already use terraform so hopefully not a huge stretch) and maybe boundary.
This is the thing I like about the HashiCorp tools: you don't have to eat the whole cake in a single sitting.
There are some good Ansible playbooks on GitHub for Nomad, Consul and Vault. I personally don't use Vault because it's overkill for the product I'm working on at the moment.
To avoid the pain of managing a CA and passing out certificates for TLS between services, I use a wireguard mesh and bind nomad, consul and vault to these wg interfaces. This includes all the chatter of these components, as well as the services I deploy with nomad. It's configured such that any job can join the "private" wireguard network or "public" internet gateway.
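For anyone curious what that binding looks like, here's an illustrative fragment (the `wg0` interface name is an assumption about my setup): both Consul and Nomad accept go-sockaddr templates in `bind_addr`, so each agent can pick up its WireGuard interface IP automatically instead of hardcoding a mesh address per node.

```hcl
# consul.hcl — bind gossip/RPC traffic to the WireGuard mesh,
# not to a publicly reachable address
bind_addr = "{{ GetInterfaceIP \"wg0\" }}"

# nomad.hcl — same idea for the Nomad agent
bind_addr = "{{ GetInterfaceIP \"wg0\" }}"
```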
It takes a few days to set up, but it's very easy to manage.
>You will need to scratch your head a little bit to setup consul + nomad + vault + a load balancer correctly.
I've been wondering, would it make sense to try to package all that into a single, hopefully simple and easily configurable, Linux image? And if it might be, why hasn't anyone done that yet?
In my first job as a sysadmin I tried my best to support business and two things happened:
1 - My paycheck did not increase when business was successful.
2 - When something broke as a result of assuming risks to deliver stuff, I was blamed, not business.
So I realized the most rational thing I could do was to become a bureaucrat and ended up leaving.
Now I work in a place where business and IT are in the same team and have the same financial incentives. I tend to make reasonable concessions, and business people actually care about security and keeping tech debt sane. Pretty cool.
Corporate IT in the companies I am talking about is significantly more locked down; Python is not installable by going to https://www.python.org/ and downloading the binary there. Downloading executables is locked down.
Not really sure how this is a rebuttal. Maybe what you say is true on a default Windows installation, but not on an IT-dept-managed system.
My company's internal apps use a mix of VPNs and IP fenced load balancers. We are migrating to app proxy.
No inbound connections + access based on Azure AD identity with conditional access (restrict apps to Intune enabled corporate devices) and MFA is an absolute killer.
My only complaint is that the connectors are not very DevOps-friendly. Cloudflare Tunnel is much better in this area.
From the architecture, it's not really clear to me why Lambdas have the 15 min limitation. It seems to me AWS could use the same infrastructure to make a product that competes with Google Cloud Run. Maybe it's a business thing?
I can't think of any reason outside of product positioning.
A lot of the novelty of Lambda is its identity as a function: small units of execution run on-demand. A Lambda that can run perpetually is made redundant by EC2, and the opinionated time limit informs a lot of design.
It may be product positioning, but Lambda really stems from AWS's desire to do something about the dismal utilisation ratio of their most expensive bill item: servers [0].
I speculate that 1-minute or 15-minute workloads are optimal for scheduling and packing uncorrelated workloads. Any longer, and returns may diminish?
> A Lambda that can run perpetually is made redundant by EC2
That is only conceptually true outside of "EC2 Classic", because (to the best of my knowledge) every other EC2 instance launches into a VPC, even if it's the default one for the account per region, and even then into the default security group (and one must specify the IDs). That may sound like "yeah, yeah", but it is a level of moving parts that Lambda doesn't require a consumer to dive into unless they want to control its networking settings.
I would think removing the time limit on Lambda would be like printing money, since I bet Lambda's per-second price is higher than EC2's.
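A back-of-envelope check of that bet, using illustrative on-demand prices (both figures are assumptions; real prices vary by region and change over time): Lambda compute at roughly $0.0000166667 per GB-second, and an m5.large (8 GiB, about $0.096/hour) as the EC2 baseline.

```python
# Compare Lambda's per-GB-second price to an EC2 instance's, normalised
# by memory. Both prices are assumed, for illustration only.
LAMBDA_PER_GB_SECOND = 0.0000166667   # assumed Lambda compute price
EC2_HOURLY = 0.096                    # assumed m5.large on-demand price
EC2_GIB = 8                           # m5.large memory

ec2_per_gb_second = EC2_HOURLY / 3600 / EC2_GIB
ratio = LAMBDA_PER_GB_SECOND / ec2_per_gb_second
print(f"Lambda costs ~{ratio:.0f}x EC2 per GB-second")
```

Under those assumed prices, an always-on Lambda would cost several times the equivalent EC2 capacity, which is consistent with the "printing money" intuition.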
Lambda does provide a level of convenience via abstraction that EC2 doesn't: just provide inline code, an S3 hosted zip file or, recently, an ECR image and it's off and running.
I doubt this is a differentiator for most medium to large customers, though. Making a wrapper for invoking uploaded code is trivial, and doing it on EC2 avoids the baggage of Lambda (cold starts, higher cost, more challenging logging and debugging, lack of operational visibility, etc.)
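To make "trivial" concrete, here is a minimal sketch of such a wrapper: extract a zip bundle and call a handler function inside it. The bundle layout (a `handler.py` exposing `handler(event)`) is my assumption, and fetching the zip from S3 (e.g. via boto3) is left out to keep it self-contained.

```python
# Minimal "invoke uploaded code" wrapper: unpack a code bundle and
# call its handler with an event, Lambda-style.
import importlib.util
import tempfile
import zipfile


def invoke_from_zip(zip_path: str, event: dict,
                    module_name: str = "handler",
                    function_name: str = "handler"):
    """Extract a zipped code bundle and call module.function(event)."""
    workdir = tempfile.mkdtemp(prefix="bundle-")
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(workdir)
    spec = importlib.util.spec_from_file_location(
        module_name, f"{workdir}/{module_name}.py")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, function_name)(event)
```

Obviously a production version needs sandboxing, timeouts and concurrency control, which is exactly the operational work Lambda sells back to you.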
Fargate isn't a competitor to Cloud Run (I wish it was) because it doesn't scale to zero in between requests and scale back up again when new traffic arrives.
It does scale to zero CPU when your application isn’t serving requests. See the pricing model at https://aws.amazon.com/apprunner/pricing/ for more details. It does not scale to zero memory, however, because customers have told us that cold-start latency has been their biggest pain point with Lambda functions. App Runner containers can respond to requests in milliseconds as a result.
Most of the stuff my company is running is made up of data pipelines and machine learning pipelines. So we have a lot of infrequent jobs that don't really care about latency.
When I say "scale to zero" I mean like Cloud Run or AWS Lambda: I define it as the service automatically scaling to zero (and hence costing nothing to run) in between requests, but automatically starting up again when a new request comes in - so the request still gets served, it just suffers from a few seconds of cold-start time.
I'm pretty sure Fargate doesn't offer this. It sounds like you're talking about the ability to manually (or automatically through scripting) turn off your Fargate containers, then manually turn them back on again - but not in a way that an incoming request still gets served even though the container wasn't running when the request first arrived.
I haven't yet - my projects (all based around https://datasette.io/) need full Python support, and it looks like Cloudflare Workers still only works with JavaScript or stuff-that-compiles-to-JavaScript. I don't think I can get Datasette working via the Python-to-JavaScript route; it has too many C dependencies (like SQLite).