Slightly off topic: I'm setting up a home server on a Mini PC that has Windows 11 Pro pre-installed. I want to attach it to my TV and play retro games as well as run home automation tasks (periodic scripts, UniFi controller, Pi-hole, etc.).

Is anyone using Proxmox in their homelab? Would you recommend blowing away Windows, installing Proxmox, and then installing Windows with PCIe passthrough?



I actually use Proxmox on my main PC: Ryzen 5950X, 64GB RAM, RTX 4070, AMD 6500XT. The two GPUs are each passed to a Windows and a Debian VM respectively, and each VM also gets a USB card passed through for convenience. I run half a dozen other VMs off of it, hosting various portions of the standard homelab media/automation stacks.

Anecdotally, it's a very effective setup when combined with a solid KVM. I like keeping my main Debian desktop and the hypervisor separate because it keeps me from borking my whole lab with an accidental rm -rf.

It is possible to pass all of a system's GPUs to VMs, using exclusively the web interface/shell for administration, but it can cause some headaches when there are issues unrelated to the host itself. For example, if I lose access to the hypervisor over the network, getting the system back online can be a bit of a PITA, as you can no longer just plug it into a screen to update any static network configuration. My current solution to this is enabling DHCP on Proxmox and managing the IP with static mappings at the router level.
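
For reference, the bridge stanza in /etc/network/interfaces ends up looking roughly like this (the NIC name is just an example, yours will differ):

  auto vmbr0
  iface vmbr0 inet dhcp
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0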

There are a few other caveats to passing all of the GPUs that I could detail further, but for a low-impact setup (like running emulators on a TV) it should work fairly well. I have also found that Proxmox plays well with mini PCs. Besides the desktop, I run it on an Intel NUC as well as a Topton mini PC with a bunch of high-speed NICs as a router. I cluster them without enabling the high-availability features in order to unify the control plane for the three systems into one interface. It all comes together into a pretty slick system.
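
Joining them into a cluster is just a couple of pvecm commands, roughly like this (the cluster name and IP are placeholders):

  pvecm create homelab          # on the first node
  pvecm add 192.168.1.10        # on each additional node, pointing at the first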


I did this for a while, running multiple VMs, some of which had PCIe passthrough for GPUs on both Windows and Linux. Luckily my motherboard separated out IOMMU groupings enough to make this work for me. While you _could_ do this, you may run into issues if your IOMMU groups aren't separated enough. The biggest issue I had was drivers causing problems on the Windows side. I eventually blew the entire instance away and now just run Windows.

I'd recommend a separate device if you need any access to a GPU. But I do recommend Proxmox for a homelab. I still have it running on a separate 2012 Mac Mini.


Most server motherboards these days support IOMMU ACS, which lets you divide IOMMU groups logically.


My use case is slightly different, but I use Proxmox for my home server and would recommend it, especially if you're familiar with Linux systems or want to learn about them, which is what I've done over the years I've been using this setup.

My server was originally a single Debian installation set up to host local services for things like Git. That grew into hosting a site, a VPN, then some multiplayer game servers. When I reached the point where too many things were installed on a single machine, I looked at VM options. I've used VMware/vSphere professionally, but settled on Proxmox for these main reasons: easy to set up and update, easy to build/copy VMs, a simple way to split physical resources, monitoring of each VM, and simple backups and restores. All without any weird licensing.

That server houses four VMs right now. That might be a bit much for your mini PC, but you could do a couple. The multiplayer servers are the main hog, so I isolate resources for those. The Windows machine is only for development, which isn't your exact use case, but I can say I've never had issues when I need it. The only thing I can't speak to is graphics passthrough.


I have run Proxmox for several years, rely on it for many bits of house and network infrastructure, and recommend it overall. My desktop also runs Proxmox with PCIe passthrough for my "actual" desktop (but this is a different Proxmox server from the primary VM and container host for infrastructure).

That said, I wouldn't mix the two use cases, either initially or over the long term. House/network infrastructure should be on a more stable host than the retro-game console connected to your TV (IMO).

In your case, I'd recommend buying another PC (even an ancient Haswell would be fine to start) and getting experience with vanilla Proxmox there before jumping straight into running infra and MAME/retro gaming with PCIe passthrough on the same singleton box.


I'm in the middle of this. I got a Beelink mini PC. It came with Windows, licensed oddly. I'm configuring it as a home server. The current plan is to migrate my Unraid install over from the vintage server it's currently on. Most services run in Docker. We'll see how performance is.

Proxmox is on my list to try out. So far I'm very happy with Unraid. It makes it easy to set up network shares, find and deploy containerized services, and it handles VMs if you need them. I try to avoid VMs and focus on containers because it's more flexible resource-wise.


If the PC is beefy enough (Win 11 Pro runs smoothly), just go with the included Hyper-V. IMHO you don't get any benefits from installing Proxmox on bare metal in this scenario. YMMV of course.


In that case, simply running Linux + KVM or VirtualBox would be better.


If you want to use Hyper-V, you can use GPU-P (GPU Partitioning), where Hyper-V passes the GPU through to the VM and shares it. It's not some emulated adapter; it's genuinely the real GPU running natively, and you can share it across multiple VMs and the host. Linux has NOTHING that can compete with the feature.
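
If you want to try it, the core of the setup is just a couple of PowerShell cmdlets on the host (the VM name is a placeholder, and going from memory you also have to copy the host's GPU driver folder into the guest's HostDriverStore yourself):

  # Hand the VM a partition of the host GPU
  Add-VMGpuPartitionAdapter -VMName "GamingVM"
  # Give the guest enough MMIO space to map the GPU
  Set-VM -VMName "GamingVM" -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 1GB -HighMemoryMappedIoSpace 32GB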


Not a Windows user, so I could be completely off base, but isn't GPU-P just VFIO under the hood? I don't know about Proxmox, but this is completely supported by OpenStack and KVM.


No, it uses the hypervisor to implement a scheduler for the GPU between the different VMs and gives them access to it. VFIO on Linux requires passing the entire GPU to a single VM, and the host loses access to it.


Sorry again, is this like nVidia MIG? That seems to be the more current technology. Not sure if the host can retain a slice though.


Kind of. I'm not too familiar with the enterprise GPU sharing, but I believe that requires hardware support on the GPU to split up contexts for the OS, whereas the GPU-P solution works on any GPU and is vendor agnostic, since it's done at the OS/processor level. It's really cool, and I have no idea how they can ship this since it tramples on the enterprise tech pretty substantially. But I believe it's a byproduct of Microsoft wanting to compete at the datacenter level; it gives them a killer feature for Azure, and as a byproduct it landed in consumer Hyper-V as a semi-undocumented feature.


Thanks for the knowledge share!

MIG does need hardware support, and from what I understand, really expensive licensing. I don't know if AMD has a parallel technology.

That is pretty cool that it is vendor agnostic. I've found a few docs from a few years ago talking about stuff like it on Linux, but development of it seems to have stopped or just not progressed at all.

But as you say, they would use it in Azure, and I imagine there they are using enterprise GPUs. At Azure-scale datacenters it definitely sounds like an advantage!


It's probably overkill for what he wants though. A headless Linux server frees up the GPU for a gaming VM, and he can run containers natively for everything else.


If it's just the few simple things you list, I might stick with Hyper-V. If you care about more sophisticated VLAN'ed networking setups, I would probably go Proxmox. But hardware passthrough is a can of worms, so understand there will be a tradeoff.


From what I know, you cannot pass the primary video card to a guest, so you'd need a second video card to pass through to the guest.

But I'd be happy to be proven wrong.


You can, but it disconnects it from the host, so you'll be headless. Which may be fine for a lot of people if you are able to ssh in and manage it that way.
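
For what it's worth, in Proxmox the passthrough itself is just a line in the VM config (/etc/pve/qemu-server/<vmid>.conf), roughly like this, where the PCI address is whatever your GPU sits at (pcie=1 assumes a q35 machine type):

  hostpci0: 01:00,pcie=1,x-vga=1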


This is exactly how I have mine set up, aye. Proxmox on the hardware, main PC as a VM. Once I got to the point where Proxmox had its web interface (and SSH) up and running, I had no real reason to have a monitor plugged into the host OS anyway. Passed the GPU and all USB ports through to the PC guest from there.

At the time I had no idea how popular it was to run this setup, I thought I was being all weird and experimental. Was surprised how smoothly everything ran (and still runs, a year and change later!)

Bonus: I was able to just move the PC to another disk when the SSD it was on was getting a bit full. I moved the PC's storage onto a spinny HDD to make room to shuffle some other stuff around, then moved it to another SSD. Didn't even need to reboot the PC VM, haha.

Proxmox backup server running on my NAS handles deduplicated backups for it and other VMs too which is great.


You can pass the primary video card to an LXC guest; I used [1] successfully to do that.

[1] https://jellyfin.org/docs/general/administration/hardware-ac...
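
For reference, the part of the container config (/etc/pve/lxc/<id>.conf) that actually hands /dev/dri to the container looks roughly like this (226 is the DRI device major; the cgroup2 key assumes a recent Proxmox):

  lxc.cgroup2.devices.allow: c 226:* rwm
  lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir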


Can someone explain why it is useful to do virtualization at all when you just want to run a small number of things like this?

I have an Ubuntu server install running on an old laptop to do very basic background jobs, backups, automation, run some containers, etc. Am I missing something by not using a hypervisor? What are the benefits?


Some software really prefers to control the whole host, usually highly integrated stuff. Some examples:

- UniFi Controller installs like a half dozen dependencies to run (Mongo, Redis, etc. last time I used it); much easier to isolate all that in a VM

- Home Assistant's preferred and most blessed install method is Home Assistant OS, which is an entire distribution. I've run HA in Docker myself before, but the experience is like 10x better if you just let it control the OS itself

- I have Plex, Sonarr, Radarr, etc. running for media. There is software called Saltbox which integrates all of these things for you so that you don't need to configure 10 different things. It makes it a breeze, but it requires a specific version of Ubuntu or you're in unsupported territory (kind of defeating the purpose)

Lots of stuff you can be totally fine just running in Docker or installing directly onto the host. But having the bare-metal system running Proxmox from the start gives you a ton of flexibility to handle other scenarios.

Worst case, you just set up a single VM and run your stuff on it if you have no need for other types of installs. Nothing lost, but you gain flexibility in the future as well as easy backups via snapshotting, etc.


Easy backups via snapshotting is quick to say, but it has an outsized benefit IME. My go-to approach for keeping many of my machines up to date is now a scheduled apt-get update; apt-get upgrade, relying on scheduled backups in the unlikely event that goes awry. I don't have to worry about package interdependencies across machines.
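
Concretely, it's nothing fancier than a root crontab entry along these lines (the schedule is arbitrary):

  # weekly package upgrade, early Monday morning
  0 4 * * 1 apt-get update && apt-get -y upgrade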

For major upgrades, I may go a step further and do a manual snapshot before upgrading, then decide whether to commit (usually) or roll back (easy, when needed).
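
On Proxmox that manual snapshot is a one-liner on the host before kicking off the upgrade (the VM id is just an example):

  qm snapshot 101 pre-upgrade
  # ...upgrade, then either keep the snapshot or roll back:
  qm rollback 101 pre-upgrade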

The (emotional) security provided by this is nice, as is the time savings (after the initial time expense to learn and set up the base Proxmox infrastructure).


Home Assistant also does some voodoo with Bluetooth, WiFi, IPv6, and mDNS for IoT devices. For this reason it seems best suited to a host machine instead of a Docker container.


For something like Home Assistant, it would be good if it had an agent to handle all the low-level networking stuff that could run directly on the host, and then everything else that doesn't directly require that could run inside an unprivileged Docker container.


Install Linux. Don't use Windows if you're not forced to.


You really should have different physical hardware for such different use-cases.


Why? I find having a single capable hypervisor hosting VMs far more reliable than a bunch of smaller machines.


Maybe running a firewall on dedicated hardware so the internet doesn't drop if you reboot the hypervisor... but even then I just live with that and run pfSense in Proxmox.


Excellent point! I do run my main Pi-hole in a VM, but I have a second Pi-hole running on an RPi just in case I'm rebooting the hypervisor. DNS is the only real dependency that can cause problems for me when the hypervisor reboots for updates. I'm just running Fedora on a somewhat recent Dell rack server.

A firewall is a great example of what not to run in a VM, at least for me! I consider gateways appliances though, and I haven't run my own router on Linux since I first got cable internet in 2002. I remember how awesome it was compared to the weak routers available at the time!

Now I just buy a UDM Pro and forget about it!



