Knowing you could turn on Recall to spy in this way implies an individual with the technical know-how to grab a freeware keylogger anyway.
Similarly with AirTags: you have been able to buy cheaper cellular-based GPS trackers for years before AirTags existed.
Unlike AirTags, those GPS trackers do not alert the individual that a beacon is following them, and as such most likely go unnoticed and underreported.
There is a massive difference between switching on your new laptop and getting a big flashy "look how cool Recall is, do you want to switch it on? No? Are you sure?" prompt, versus having to go find recall.ai or openrecall yourself.
> Knowing you could turn on Recall to spy in this way implies an individual with the technical know-how to grab a freeware keylogger anyway.
Strange that you were able to discover this. Has anyone asked you for your research? Does knowing how to grab a freeware keylogger imply that you know how to code up a keylogger for yourself, or did your study not go that far?
You should check out the Xeon D offerings from Supermicro. They make some Mini-ITX motherboards with embedded Xeon D chips that sport dual 10Gb Ethernet. I used one in a FreeNAS system for years.
Looks like the X10SDV-TLN4F[1] has everything you mentioned besides a second M.2 slot. Although it has 8 cores, they are a few generations old and low power (45W TDP). For raw compute it's not exactly fast, but for something like a high-performance file server in 1U connected to a disk shelf, they are really nice.
I actually do run a production edge cluster with these exact boards; they are just too old. The price/performance is so far off the mark WRT modern hardware that they are simply not worth using anymore. I have started hitting CPU instruction-set issues with current software due to the ancient microarchitectures in these low-end parts.
On one hand they are able to still sell them because nobody is competing, but what is the reason for the lack of competition?
I have a theory about Broadcom buying and blocking up the low end PCIe switch market to purposely hold back this segment, but I'm not sure how significant that is.
I use one of these. For a system that's honestly mostly idle, it's plenty for Proxmox with Plex and Pi-hole and whatever else, and it's good on power, heat, and noise.
I made the mistake of getting Seagate 10TB rock tumblers, but they’re the only source of real noise. WD would probably have been quieter.
I have this, my switches, and a ham radio base station all racked up on top of the cabinets in my laundry/utility room: out of the way but not hidden away, so I can still notice if the fans are too loud, the wrong lights are blinking (or not blinking), there are weird smells, etc.
"Apache Guacamole is a free and open-source, cross-platform, clientless remote desktop gateway maintained by the Apache Software Foundation. It allows users to control remote computers or virtual machines via a web browser, and allows administrators to dictate how and whether users can connect using an extensible authentication and authorization system."
> (2) If you're thinking about buying an AVP and not thinking about buying an MQ3 at 1/7 the price, you're not thinking; or at least you're not a technology enthusiast, you're an Apple enthusiast.
I bought the Quest 1 when it was an Oculus product and stopped using it the moment they started enforcing Meta everything within the device. I could not care less if it is a 1:1 hardware equivalent as long as it has anything to do with the Meta empire. Last I checked, my Quest 1 refuses to function on my home network because of the filtering I enforce on my router...
The "technology enthusiast" crowd is highly heterogeneous
Isn't basic conditioning the point the artist is attempting to make? I interpreted the project as pointing out that social media uses fundamentally the same reward mechanisms we use in training animals.
Yes, and I'm saying I find his point weak. It is an old and well-known experiment, and the social-media-is-a-Skinner-box talking point is as old as social media. I find the selfie angle tacked on, without adding to or illuminating the subject further. The work doesn't work for me. But I appreciate the cute rat photos.
It's like saying "people have been painting realistic portraits of women for centuries, Leonardo, and this one doesn't add anything or illuminate the subject of portrait-taking further. Still, I appreciate her ambiguous smile."
I'll bite. I am willing to bet the majority of people downvoting you are more than capable of hosting an email server but, like me, see it as not worth the potential risks involved.
Before even considering why someone might choose not to, I would like to point out that self-hosting email is not even that hard nowadays. I spent a couple of hours a few years back manually setting up a stack on a dummy domain, just to see if it's as hard as developer circles make it out to be. It was not. Furthermore, a quick search today nets half a dozen Docker containers you can spin up that claim to be one-stop solutions for email. If even a fraction of them deliver on that claim, you could self-host email with one command and an env file. You could even use the Dockerfiles as a template to run the software on bare metal; it's all there.
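To make the "one command and an env file" claim concrete, docker-mailserver is one of those one-stop images. A minimal Compose sketch, where the hostname, ports, and volume paths are illustrative and the env file comes from the project's own template:

```yaml
# docker-compose.yml - illustrative sketch, not a production config
services:
  mail:
    image: ghcr.io/docker-mailserver/docker-mailserver:latest
    hostname: mail.example.com          # your MX hostname
    env_file: mailserver.env            # from the project's example template
    ports:
      - "25:25"     # SMTP
      - "587:587"   # submission
      - "993:993"   # IMAPS
    volumes:
      - ./mail-data:/var/mail
      - ./mail-state:/var/mail-state
```

`docker compose up -d` brings the stack up; DNS (MX, SPF, DKIM, reverse DNS) is still on you, which is where most of the real difficulty lives.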
Even with this newfound knowledge, and as someone who tries to self-host equivalents of any service I find myself using regularly, I would never attempt to host my own main email. My bank accounts are linked to my email, as are my investment accounts, my insurance, and my loans: things I am not willing to risk losing access to as the result of some overly prideful sentiment.
Just because someone has the ability and knowledge to host their own email does not mean they should or would even consider it.
Those are all good reasons not to self-host, but none are good reasons to downvote someone advocating self-hosting and providing some useful info about it. I don't get the downvote-brigading here either, especially on a hacker website.
What is wrong with the tone? Is it wrong to point out that those who advocate taking care of one's own digital needs (a.k.a. self-hosting) tend to get shouted down or voted down here on Hacker News, a site which, going by its name alone, you would expect to be a refuge for those who like to tinker and who value self-sufficiency?
Although it's supported, it's not well documented enough to be a good way to learn about cloud-init, in my experience. I tried configuring a K3s cluster across three Proxmox nodes via cloud-init to get some exposure to it, and eventually gave up and configured them manually.
I had the same issue at the beginning, and unfortunately gave up on it for over a year; then I went back, got it working, and now I cannot imagine provisioning a VM by hand anymore.
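For anyone attempting the same thing: Proxmox can attach custom user-data to a VM via `qm set <vmid> --cicustom`, and a minimal cloud-config for a K3s server node looks roughly like this (hostname, user, and key are placeholders; the install command is K3s's documented one-liner):

```yaml
#cloud-config
# Illustrative user-data for a K3s server node - all values are placeholders
hostname: k3s-node-1
users:
  - name: admin
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... admin@example   # placeholder key
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
package_update: true
runcmd:
  # Official K3s install script; agent nodes would add K3S_URL/K3S_TOKEN
  - curl -sfL https://get.k3s.io | sh -
```

Agent nodes get the same file with the join variables added; the Proxmox side is then just cloning a template and pointing `--cicustom` at the per-node snippet.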
I actually use Proxmox on my main PC: Ryzen 5950X, 64GB RAM, RTX 4070, AMD 6500 XT. The two GPUs are each passed to a Windows and a Debian VM respectively, and each also gets a USB card passed through for convenience. I run half a dozen other VMs off of it, hosting various portions of the standard homelab media/automation stacks.
Anecdotally, it's a very effective setup when combined with a solid KVM switch. I like keeping my main Debian desktop and the hypervisor separate because it keeps me from borking my whole lab with an accidental rm -rf.
It is possible to pass all of a system's GPUs to VMs, using exclusively the web interface/shell for administration, but it can cause some headaches when there are issues unrelated to the system itself. For example, if I lose access to the hypervisor over the network, getting the system back online can be a bit of a PITA, as you can no longer just plug it into a screen to update a static network configuration. My current solution is enabling DHCP on Proxmox and managing the IP with static mappings at the router level.
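The DHCP fallback described above amounts to a one-line change to the bridge stanza in Proxmox's `/etc/network/interfaces` (the bridge port name is machine-specific):

```
auto vmbr0
iface vmbr0 inet dhcp        # was: inet static, with address/gateway lines
    bridge-ports enp5s0      # physical NIC - name varies per machine
    bridge-stp off
    bridge-fd 0
```

With this, the router's static DHCP mapping pins the address, so there is nothing to fix on the headless box itself if the network config needs to change.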
There are a few other caveats to passing through all of the GPUs that I could detail further, but for a low-impact setup (like running emulators on a TV) it should work fairly well. I have also found that Proxmox plays well with mini PCs. Besides the desktop, I run it on an Intel NUC, as well as on a Topton mini PC with a bunch of high-speed NICs as a router. I cluster them without enabling the high-availability features in order to unify the control plane for the three systems into one interface. It all comes together into a pretty slick setup.
Is this the reason Windows Task Manager seems to show Vmmem (WSL2) gobbling up well more RAM than WSL itself seems to indicate is in use?
I have more than enough RAM on my office workstation to just accept this, but on my personal gaming computer that moonlights as a dev machine, I run into issues and have to kill WSL from time to time.
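One mitigation short of killing WSL outright is capping its memory in `%UserProfile%\.wslconfig` (the sizes here are arbitrary examples, not recommendations):

```
[wsl2]
memory=8GB    # cap how much RAM Vmmem can claim
swap=2GB
```

After editing, `wsl --shutdown` from Windows applies the new limits on the next start.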
That's just one part of the issue - even after forcefully dropping Linux's caches, WSL has been unable to reclaim the memory back reliably. There has been a recent update that claims to finally fix this.
I think there is some conflict between the disk cache running inside WSL and the memory management outside. I tried turning up memory pressure in WSL, but it didn't help. This does work, though I have to run it manually from time to time:
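(The command after the colon was lost; presumably it is something like the standard way to force Linux to drop its caches, run as root inside WSL:)

```shell
# Flush dirty pages to disk first, then drop the page cache, dentries,
# and inodes. Needs root; the value 1-3 selects what gets dropped (3 = all).
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
```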
No, that's because WSL (until v2 / very recently) didn't properly release memory back to Windows. This would actually cause Docker to effectively leak memory really quickly.