I've recently started prototyping our move to k8s - and my recommendation is to stay away from minikube, k3s, and kind. Kind looks the best on paper, but Canonical has done a great job with https://microk8s.io/
I'd love to hear why anyone prefers any other solution for local development/experimentation.
microk8s is really cool! We wanted kind for development of kubernetes itself and I don't think microk8s was around at the time.
One difference, besides being able to build & run arbitrary Kubernetes versions, is being able to run on macOS, Windows, and Linux instead of only where snap is supported.
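For anyone who hasn't used kind, pinning or building a Kubernetes version looks roughly like this (the node image tag is only an example - pick whichever release you need):

    # run a specific released Kubernetes version by pinning the node image
    kind create cluster --image kindest/node:v1.25.3

    # or build a node image from a local Kubernetes source checkout,
    # then create a cluster from it (it's tagged kindest/node:latest by default)
    kind build node-image
    kind create cluster --image kindest/node:latest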
We're paying more attention to local development of applications now, expect some major improvements soon :-)
That's great news. In my experience kind was a bit resource heavy - but more importantly didn't seem to have clear documentation that was geared towards local testing (for users/consumers of k8s).
I've spent 2 years now developing a pretty decent sized stack on k8s. I've used microk8s, minikube, and Docker Desktop's built-in Kubernetes distro for a while; I feel like Docker Desktop worked the best for me.
However, I've wasted so much time over the past 2 years trying to figure out why something wasn't working, only to find out it was because of differences between the k8s distro I was using and our production system. Ultimately I found the best solution was deploying exactly what we run in production on some spare bare metal I had lying around (after adding a hundred gigs of RAM).
Luckily we have a production setup that is designed to run on-prem, so this was an option for me. Regardless, I think having as close to production as possible will make your life easier.
That being said, I still might try this project out.
I've been trying to find a dev setup that's feature complete and as similar as possible to production, while still running locally.
As a sibling comment mentions, there are a number of differences between distributions/implementations - and especially when new to k8s it's way too easy to waste time trying to figure out why something doesn't work.
I've had the same experience, absolutely love Microk8s, though I am hopeful about k3s and k3d (k3s in Docker). As of right now they "mostly" work, but unfortunately "mostly" still leaves enough gaps to break things.
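If you want to poke at k3d, something along these lines should work on recent releases (older versions used `k3d create` instead of `k3d cluster create`):

    # create a k3s-in-Docker cluster with one server and two agents
    k3d cluster create dev --agents 2

    # k3d merges a context named k3d-<cluster name> into your kubeconfig
    kubectl config use-context k3d-dev
    kubectl get nodes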
Also, you can't beat the one-line snap install for Microk8s.
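Roughly the whole setup, for reference (add-on names have shifted a bit between microk8s releases, so treat these as examples):

    # one-line install via snap, then wait for the node to come up
    sudo snap install microk8s --classic
    sudo microk8s status --wait-ready

    # enable whichever add-ons you need
    sudo microk8s enable dns storage ingress

    # the bundled kubectl works out of the box
    microk8s kubectl get nodes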
I just tried to set up on k3s the same stack of ~7 services I had running locally on Microk8s, and a few things went wrong in the process - I couldn't get it running.
My kubectl-fu is not strong enough to fix it, so for me that was a dealbreaker.
Though I am super passionate about k3s and support the hell out of everything Rancher Labs does, so by no means did it leave a bad taste in my mouth.
One caveat on my previous comment: if you use any one of these in production (I guess k3s is the most likely one there), then I think using it for dev should be fine. The biggest issue is differences between versions - we're deploying to managed k8s in Azure, and need a dev environment that behaves the same way.
I use a Minikube cluster w/ KVM as the driver for my self-hosted Gitlab CI/CD and it's worked flawlessly. Wonder what issues you encountered to recommend against it.
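For reference, a minimal start looks something like this (the Kubernetes version is only an example, pinned to whatever the target cluster runs; newer minikube calls the flag --driver, older releases used --vm-driver):

    # start minikube on KVM and pin the Kubernetes version
    minikube start --driver=kvm2 --kubernetes-version=v1.25.3

    # sanity check what the API server actually reports
    kubectl version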
I'll just say that I found end user documentation for microk8s to be nice and friendly. And that k3s (the little I looked) felt maybe a little too much like administering and running a production k8s cluster. We're not planning to do that; what I needed was something that worked easily for prototyping and experimenting - and could be run mostly via kubectl (and/or helm) just like a managed cluster.
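Concretely, microk8s can hand you its kubeconfig so that plain kubectl and helm talk to it like any managed cluster; a rough sketch (the chart path is a placeholder):

    # export microk8s' kubeconfig and point your tools at it
    microk8s config > ~/.kube/microk8s.yaml
    export KUBECONFIG=~/.kube/microk8s.yaml

    # from here on it behaves like any other cluster
    kubectl get pods -A
    helm install demo ./my-chart   # placeholder chart path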
K3s is a lot easier to get working on RHEL and Fedora. Canonical tends to build things in ways that make them barely work on Ubuntu and completely fail everywhere else. Same with LXC. I was a bit upset RHEL dropped it in 8, but then I tried to get it running, saw the horror show, and decided to look at rootless Podman instead.
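For comparison, getting k3s up on a RHEL/Fedora box is roughly this (the script is Rancher's official installer; download and read it first if piping curl into sh bothers you):

    # official installer: a single binary plus a systemd unit
    curl -sfL https://get.k3s.io | sh -

    # k3s bundles its own kubectl
    sudo k3s kubectl get nodes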