This sounds interesting. Would you be able to share more information about this style of running? I'm having a hard time imagining how this plays out in real life.
Running barefoot forces you to improve your technique in line with this description. [0] There's also a sizeable market of "barefoot shoes" that sits between being barefoot and regular running shoes, with manufacturers trying to convince us that buying such shoes is the solution. The gait issue becomes more obvious (and painful) in barefoot shoes, but you can adjust your technique in mass-market running shoes as well.
I first found out about this back when Chris McDougall's "Born to Run" book came out. For anyone who doesn't know, he follows an Indigenous tribe in Mexico known for their running prowess, who run in simple sandals instead of the heavily padded sneakers most of us wear.
I switched briefly around that time to running in vibram five fingers, which trained me how to change my stride and stop heel striking. I no longer wear VFFs but do tend to favor lightweight, minimal heel-drop sneakers, and I still don't heel strike.
I'm not sure you can run with an appropriate gait (i.e. stop heel striking) in modern mass-market running shoes. The heels on many running shoes are 2+ inches thick, making it all but impossible to avoid heel striking without wasting a lot of motion picking your knees up.
Also, the chances of twisting your ankle are far higher when your heels are elevated that much off the road.
The best marathon runners can do it in "conventional" running shoes [0], but I agree it's easier to find the better technique barefoot or in barefoot shoes.
I made this change as well. Specifically, I switched from heel strike to forefoot strike, AKA “landing on the ball of my foot.” I changed shoes to zero-drop (Altras) which makes this easier to do.
This sort of automatically limits how far in front of your hips you can land your foot. But then the next step is to change posture and “lean forward” so that it feels like you’re just barely catching yourself with each foot before falling on your face.
The goal is to have your foot land directly under you, then use your quads and glutes to push your foot backward, to create or maintain your forward momentum.
An incredible undertaking! How much testing have you done with harvesting a manual configuration into Ansible, then creating a new machine and applying the result to see whether it's a functional representation of the old machine?
The reason I'm asking is that I'm interested in how much confidence could be placed in this tool for older, more obscure machines that have been running for years.
I run QubesOS as my workstation, so it's been really beneficial here, because Qubes is all VMs and templates. I've been 'harvesting' one Qube VM and then building another and running the manifest on the second machine. It's been working very well to align it with the first machine. Most of my testing has been visually watching the Ansible play install/configure things that I expect to see occur.
Where it falls down:
1) Systems so old that the packages it detected as installed are no longer 'installable' on a new system (e.g. the apt repositories no longer carry those packages; they did at the time of the original install)
2) Packages that were installed not via an apt repo but via, say, dpkg -i (of a .deb file). Slack Desktop is a good example. Obviously the .deb is not in the harvest, so it fails there.
So, there'll always be corner cases, but assuming everything installed on the old/obscure machine is still installable via the usual methods, it should be okay. (If you're running a system so old that its packages are no longer available upstream, it's probably time to let it go! :) )
You'll want to use --exclude-path to ignore some server-specific stuff, perhaps, such as static network configuration etc. And of course, you can also comment out whatever roles are superfluous, in the playbook, before running it.
Always use --check with Ansible first just in case.
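For example, a first dry run might look something like this (a minimal sketch; the playbook filename is a placeholder for whatever the harvest produced):

    # preview what would change, without changing anything
    ansible-playbook --check --diff playbook.yml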
Dagger was something I looked into two or so years ago, before they got consumed by the LLM and AI agent hype. While the promise of being able to run the exact CI workflows locally seemed excellent, it seemed that there was basically no way to be a Dagger user without buying into their Dagger Cloud product.
I ended up opting for CUE and GitHub Actions, and I'm glad I did as it made everything much, much simpler.
Can you explain/link to why you can't really use this without their cloud product? I'm not seeing anything at a glance, and this looks useful for a project of mine, but I don't want to be trapped by limitations that I only find out about after putting in weeks of work
Overall I like Dagger conceptually, but I wish they'd start focusing more on API stability and documentation (tbf it's not v1.0). v0.19 broke our Dockerfile builds and I don't feel like figuring out the new syntax atm. Having to commit dev time to the upgrade treadmill to keep CI/CD working was not the dream.
Re: the cloud specifically, see these GitHub issues:
Basically if you want consistently fast cached builds it's a PITA and/or not possible without the cloud product, depending on how you set things up. We do run it self-hosted though, YMMV.
One thing I liked about switching from a Docker-based solution like Dagger to Nix is that it relaxed the infrastructure requirements for getting good caching properties.
We used Dagger, and later Nix, mostly to implement various kinds of security scans on our codebases using a mix of open-source tools and clients for proprietary ones that my employer purchases. We've been using Nix for years now, and still haven't set up any of our own binary cache. But we still have mostly-cached builds thanks to the public NixOS binary cache, and we hit that relatively sparingly because we run those jobs on bare metal in self-hosted CI runners. Each scan job typically finishes in less than 15 seconds once the cache is warm, and takes up to 3 minutes when the local cache is cold (e.g. when we build a custom dependency).
Some time in the next quarter or two I'll finish our containerization effort for this so that all the jobs on a runner will share a /nix/store and Nix daemon socket bind-mounted from the host, so we can have relatively safe "multi-tenant" runners where all jobs run under different users in rootless Podman containers while still sharing a global cache for all Nix-provided dependencies. Then we get a bit more isolation and free cleanup for all our jobs but we can still keep our pipelines running fast.
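Concretely, the sharing described above might look something like this (a rough sketch; the image name is a placeholder and the exact flags will vary with your runner setup):

    # give each job container the host's store (read-only) and daemon socket
    podman run --rm \
      -e NIX_REMOTE=daemon \
      -v /nix/store:/nix/store:ro \
      -v /nix/var/nix/daemon-socket:/nix/var/nix/daemon-socket \
      ci-runner-image

Inside the container, the Nix client talks to the host daemon over the socket, so builds land in (and are served from) the shared global cache.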
We only have a few thousand codebases, so a few big CI boxes should be fine, but if we ever want to autoscale down, it should be possible to convert such EC2 boxes into Kubernetes nodes, which would be a fun learning project for me. Maybe we could get wider sharing that way and stand up fewer runner VMs.
Somewhere on my backlog is experimenting with Cachix, so we should get per-derivation caching as well, which is finer-grained than Docker's layers.
Hi, I'm the founder of Dagger. It's not true that you can't use Dagger without our cloud offering. At the moment our only commercial product is observability for your Dagger pipelines. It's based on standard otel telemetry emitted by our open source engine. It's completely optional.
If you have questions about Dagger, I encourage you to join our Discord server, we will be happy to answer them!
> If you’ve been active in the Dagger community, this news will come as no surprise. Since we released multi-language support, we have seen a steep decline in usage of our original CUE configuration syntax, and have made it clear that feature parity with newer SDKs would not be a priority.
That is, of course, a self-fulfilling prophecy (or, perhaps, a self-inflicted wound). As soon as Dagger's "multi-language support" came out (actually a bit before), the CUE SDK was rendered abandonware. Development only happened on the new backend, and CUE support was never ported over to it.
Dagger founder here. We moved away from CUE because the number one complaint from our early users was having to learn CUE. The number two complaint was bugs in the language that we diligently escalated upstream, but would never get fixed, including crippling memory leaks.
We shipped multi-language support because we had no choice. It was a major engineering effort that we hadn't originally planned for, but it was painfully obvious that remaining a CUE-only platform was suicide.
I think multi-language support is a great feature, and I understand why you had to go for it. While I'm sure some people switched away from CUE once they had the chance because they weren't interested in working with a novel and perhaps quirky DSL, I'm also sure some stopped using the CUE SDK simply because it was clear to them that it was being abandoned. I know, because I'm one of them: I stopped using it after multi-language support came out, and not because I preferred one of the other languages. That's all I'm saying.
I understand. We really did try to port the CUE SDK over to the new APIs, but there were impedance mismatches that made it difficult to do so without major breaking changes - basically we would have needed to design a new SDK from scratch. We asked for opinions on our Discord, and it felt like there weren't enough people interested to justify that kind of effort.
For a while there was activity on the #cue channel about a community SDK (that's how we got PHP, Java, Rust, Elixir and dotnet), but it didn't materialize.
It looks like you were in the minority that would have preferred to continue using the original CUE SDK - I'm sorry that we didn't find a way to continue supporting it.
> It looks like you were in the minority that would have preferred to continue using the original CUE SDK - I'm sorry that we didn't find a way to continue supporting it.
I admit that when I went to the CUE documentation (because I was learning Dagger!) and read about the idea of thinking of validation as locating both schema and configuration on an infinite type/value lattice and sort of walking down from schema to concrete configuration, I thought "holy shit, it makes perfect sense". I'd never really thought about unification in configuration languages before, and it's a really cool idea, one of those things that's simple enough to be intuitive but also really powerful. First-class deep merge support is something that I really miss in hacky configuration languages like HCL, for example. So right away I felt "these CUE guys are onto something! let's see how it pans out".
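To illustrate for anyone unfamiliar (a minimal sketch of plain CUE, nothing Dagger-specific): schema and data are both just values on the lattice, and unification walks down toward the most concrete value satisfying both.

    // Unification is a deep merge: both declarations describe
    // the same value, and export computes their meet.
    svc: {image: "nginx"}
    svc: {ports: [80, 443]}
    // export: svc: {image: "nginx", ports: [80, 443]}

    // A schema is just a more abstract value on the same lattice;
    // concrete data unifies with it and is checked against it.
    replicas: int & >=1 & <=10
    replicas: 3
    // replicas: 20 would conflict and fail the export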
It felt painful for me to abandon that investment, as I'm sure it did for many on your team, too.
While I'm fairly happy with what I've managed to do since moving on from using Dagger, the CI space is still a mess overall, and I think it needs tools like Dagger and more. So while it will likely be some time before I reevaluate Dagger for use at my current job, I do still wish you and your team present and continued success!
What I don't get is why someone would code in the terrible GitHub Actions DSL, which runs on GitHub Actions and nowhere else, when there are so many other options that run perfectly fine if you just invoke them from GitHub Actions.
When I got started it was much more difficult, as you had to do a lot of manual work to get things started, and you really had to believe in the promises that CUE offered (which I did...), but nowadays they've taken so many steps in the right direction that getting something going is far quicker!
Maybe it’s just me, but these sample workflows don’t look less complicated, just another kind of complex? If you’re already heavily using CUE in your project this lateral complexity shift might make sense, but I don’t see why I would start using it…
Like my guy 'diarrhea' already echoed: using CUE absolutely does not make sense at a small scale; just write your YAML and get on with your day. We were using it to generate dozens upon dozens of GitHub Actions workflows from what was essentially a single source of truth, and because CUE can export to JSON too, that single source of truth could easily be leveraged to provide other input files used elsewhere.
To some extent yes. If all you have is 2 GitHub Actions YAML files you are not going to reap massive benefits.
I'm a big fan of CUE myself. The benefits compound as you need to output more and more artifacts (= YAML config). Think of several k8s manifests, several GitHub Actions files, e.g. for building across several combinations of OSes, settings, etc.
CUE strikes a really nice balance between being primarily data description rather than a Turing-complete language (e.g. cdk8s can get arbitrarily complex and abstract), reducing boilerplate (having you spell out the common bits once only, and each non-common bit once only), and being type-safe (validation at build/export time, with native import of Go types, JSON Schema and more).
They recently added an LSP which helps close the gap to other ecosystems. For example, cdk8s being TS means it naturally has fantastic IDE support, which CUE has been lacking in. CUE error messages can also be very verbose and unhelpful.
At work, we generate a couple thousand lines of k8s YAML from ~0.1x of that in CUE. The CUE is commented liberally, with validation imported from native k8s types and sprinkled in where needed otherwise (e.g. we know that for our application the FOO setting needs to be between 5 and 10). The generated YAML is clean, without any indentation and quoting worries. We also generate YAML-in-YAML, i.e. our application takes YAML config, which itself sits in an outer k8s YAML ConfigMap. YAML-in-YAML is normally an enormous pain and easy to get wrong. In CUE it's just `yaml.Marshal`.
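A rough sketch of what that looks like (names like FOO and app-config are made up for illustration):

    import "encoding/yaml"

    // validation lives right next to the data
    appConfig: {
        FOO: int & >=5 & <=10
    }
    appConfig: FOO: 7

    // the inner YAML document is just a marshalled string field
    // of the outer ConfigMap
    configMap: {
        apiVersion: "v1"
        kind:       "ConfigMap"
        metadata: name: "app-config"
        data: "config.yaml": yaml.Marshal(appConfig)
    }

On export, the inner document comes out correctly quoted and indented, with no manual escaping.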
You get a lot of benefit for a comparatively simple mental model: all your CUE files form just one large document, and for export to YAML it's merged. Any conflicting values and any missing values fail the export. That's it. The mental model of e.g. cdk8s is massively more complex and has unbounded potential for abstraction footguns (being TypeScript). Not to mention CUE is Go and shipped as a single binary; the CUE v0.15.0 you use today will still compile and work 10 years from now.
You can start very simple, with CUE looking not unlike JSON, and add CUE-specific bits from there. You can always rip out the CUE and just keep the generated YAML, or replace CUE with e.g. cdk8s. It's not a one-way door.
The cherry on top is CUE scripts/tasks. In our case we use a CUE script to split the one large document (tens of thousands of lines) into separate files, according to some criteria. This is all defined in CUE as well, meaning I can write ~40 lines of CUE (this has a bit of a learning curve) instead of ~200 lines of cursed, buggy bash.
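For a flavor of what such a script looks like (a hypothetical sketch; the real splitting criteria would replace this simple one-file-per-object loop, and `objects` stands in for the manifests defined elsewhere in the package):

    // split_tool.cue -- run with "cue cmd split"
    package k8s

    import (
        "encoding/yaml"
        "tool/file"
    )

    command: split: {
        for name, obj in objects {
            "write-\(name)": file.Create & {
                filename: "manifests/\(name).yaml"
                contents: yaml.Marshal(obj)
            }
        }
    }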
I've been wanting to create a similar holiday optimizer tool myself, but what you've done is marvelous! Do you take requests for new countries? I see that you're using https://date.nager.at/ as the source, and the country I have in mind is listed there, so perhaps it's easily doable?
There was actually a really terrible brown-out by Poetry (a Python dependency management and packaging tool) where they introduced sporadic failures to people's CI/CD systems: https://github.com/python-poetry/poetry/pull/6297
Thanks for mentioning these! Do you know which official channels they make these announcements in? In the post they just use the word "usual" with no clarification.
Q: What will happen to the existing OCI Helm charts?
A: The already packaged Helm charts will remain available at docker.io/bitnamicharts as OCI artifacts, but they will no longer receive updates. Deploying these charts will not work out-of-the-box unless you override the bundled images with valid ones.
*except for the BSI images included in the free community-tier subset.
That’s the first part, but will the charts work if I override the image name with a non-Bitnami one (e.g. docker.io/library/redis for redis)? Or do they bake in special stuff in their images that their charts rely on?
You need the images that go with the charts. They have their own config system, which usually involves elaborate shell scripts in the images that receive parameters from the chart.
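For what it's worth, the override mechanics themselves are simple (chart reference and value keys here are illustrative; check the chart's values.yaml for the exact names):

    helm install my-redis oci://docker.io/bitnamicharts/redis \
      --set image.repository=library/redis \
      --set image.tag=7.4

But per the above, the templates and the Bitnami entrypoint scripts are built for each other, so pointing the chart at a vanilla upstream image will usually break startup.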
> can you help me with my CV please. It's awful and I think the whole thing needs reappraising. It's also too long and ideally needs tailoring to a specific job i've found.
Then it asked me for the job role. I gave it a URL to Indeed, to which it came back with entirely different job details (a barista role rather than a technical one, but weirdly in the right city). After correcting this by pasting in the job description and my CV, we chatted about it and it produced a significantly better CV than I'd managed, with or without friends' help, in the two years previously.
Honestly, the whole thing is both amazing and entirely depressing. I can _talk_ walls of semi-formed thoughts at it (here's 7 overlapping/contradictory/half-formed thoughts, and here's my question in the context of the above) and 9 times out of 10 it understands what I'm actually trying to ask better than, sadly, nearly any human I've interacted with in the last 40 years. The 1 time in 10 it fails is nearly always because the demo gods got involved.