> vulnerable to remote code execution from systems on the same network segment
Isn't almost every laptop these days autoconnecting to known network names like "Starbucks" etc, because the user used it once in the past?
That would mean that every FreeBSD laptop in proximity of an attacker is vulnerable, right? Since the attacker could just create a hotspot with the SSID "Starbucks" on their laptop and the victim's laptop will connect to it automatically.
As far as I know, access points only identify via their SSID. Which is a string like "Starbucks". So there is no way to tell if it is the real Starbucks WiFi or a hotspot some dude started on their laptop.
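To illustrate how low the bar is: a spoofed open access point is a few lines of hostapd config (a hypothetical sketch; the interface name and driver depend on your hardware):

  # hostapd.conf - broadcast an open network named "Starbucks"
  interface=wlan0   # your wireless interface
  driver=nl80211
  ssid=Starbucks
  hw_mode=g
  channel=6

Run hostapd against that file and nearby devices that remember an open "Starbucks" network may associate with it automatically.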
There is nothing wrong with using public networks. It's not 2010 anymore. Your operating system is expected to be fully secure[1] even when malicious actors are present in your local network.
[1] except availability, we still can't get it right in setups used by regular people.
And when you connect to a non-public WiFi for the first time - how do you make sure it is the WiFi you think it is and not some dude who spun up a hotspot on their laptop?
Why does it matter? I mean, I guess it did in this case, but that was considered a top-priority bug and quickly fixed.
I guess my point is that the way the internet works, your traffic goes through a number of unknown and possibly hostile actors on its way to the final destination. Having a hostile actor presenting a spoofed WiFi access point should not affect your security stance in any way. Either the connection works and you have the access you wanted, or it does not. If you used secure protocols they are just as secure, and if you used insecure protocols they are just as insecure.
Now, having said that, I will contradict myself: we are used to having our first hop be a high-security, trusted domain, and we tend to be a little sloppy there even when it is not. But still, in general it does not matter. A secure connection is still a secure connection.
Hmm. Are you sure that your stack wouldn't accept these discovery packets until after you've successfully authenticated (which is what those chains are for) ?
Take eduroam, which is presumably the world's largest federated WiFi network. A random 20-year-old studying Geology at uni in Sydney, Australia will have eduroam configured on their devices, because duh, that's how WiFi works. But it also works in Cambridge, England, or Paris, France, or New York, USA, or basically anywhere their peers would be, because common sense - why not have a single network?
But this means their device actively tries to connect to anything named "eduroam". Yes, it is expecting to eventually connect to Sydney to authenticate, but meanwhile, how sure are you that it ignores everything it gets from the network, even these low-level discovery packets?
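For context, a typical client-side eduroam profile (a wpa_supplicant sketch; the identity, password, and inner method are placeholders) selects the network by SSID alone, before any authentication happens:

  network={
      ssid="eduroam"
      key_mgmt=WPA-EAP
      eap=PEAP
      identity="student@uni.example"   # placeholder
      password="..."                   # placeholder
      phase2="auth=MSCHAPV2"
  }

Unless a ca_cert is pinned, nothing here stops the supplicant from starting the EAP exchange with any AP that broadcasts that SSID.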
As someone using Linux to build web applications, I wonder what about the Apple ecosystem could make it worth having such a Damocles' sword hanging over me my whole life.
Am I missing something? My current perspective is that not only am I free of all the hassle that comes with building for a closed ecosystem, such as managing a developer account and using proprietary tools; I also avoid its much harder distribution. I can put up a website with no wait time and everybody on planet earth can use it right away. So much nicer than having to go through all the hoops and limitations of an app store.
Honest question: Am I missing something? What would I get in return if I invested all the work to build for iOS or Mac?
Plenty of things do work better as a native application. Packaging is a pain across the board nowadays. Apple is pretty good: you pay a yearly fee if you want your executable signed and notarized, but they make it very hard (for the lay person) to run anything without that. Windows can run apps without them being signed, but it gives you hell, and the signing process is awful and expensive. Linux can be a packaging nightmare.
And that website is hosted somewhere; you're using several layers of network providers; the registrar has control over your domain; the copper in the ground most likely has an easement controlling access to it, so your internet provider can literally just cut off your access whenever they want; and if you publish your apps to a registry, the registry controls your apps as well.
There are so many companies that control access to every part of your life. Your argument is meaningless because it applies to _everything_.
A trustless society is not one that anyone should want to be a part of. Regulations exist for a reason.
Not wanting centralization under one company does not equal advocating for "trustless society".
All the things you mentioned (registrars, ISPs, registries, etc) have multiple alternative providers you can choose from. Get cut off from GCP, move to AWS. Get banned in Germany, VPS in Sweden. Domain registration revoked, get another domain.
Lose your Apple ID, and you're locked out of the entire Apple ecosystem, permanently, period.
Even if a US federal court ordered that you could never again legally access the internet, that would only be valid within the US, and you could legally and freely access it by going to any other country.
So in fact, rather than everything being equivalent to Apple's singular control, almost nothing is equivalent (really, only another company with a similarly closed ecosystem).
If aws decided to block your access to their ecosystem you would lose so so so much more than Apple blocking your access to theirs. If the US decided what you said, t1 networks would restrict your access across much of the planet.
Your logic makes no sense since you can easily switch to Google or whatever other smartphone providers there are (China has a bunch).
But of course those providers can also cut you off, so what I said still applies.
First off, AWS cutting off your AWS account does not block you from visiting other websites that use AWS; it just means you can't use AWS itself as a customer. In Apple's ecosystem, OTOH, OP's issue with iCloud disabled their account globally across all Apple services, not just within iCloud itself (and, to further illustrate the difference, losing access to your AWS console account doesn't cut off your Amazon.com shopping account).
> Your logic makes no sense since you can easily switch to Google or whatever other smartphone providers there are (China has a bunch).
The person above was asking why they *as a developer* would want to risk their time and effort developing for iOS. Any work developing for iOS in e.g. Swift or Objective-C is not portable to other platforms like Android. If they lose their Apple account, any time they spent developing against iOS-specific frameworks is totally wasted; that is their point.
> If the US decided what you said, t1 networks would restrict your access across much of the planet.
No offense, but you have no clue what you're talking about. There are in fact court orders where internet access is restricted as part of criminal sentencing. Here's a quick example guide [1]. No part of that involves network providers cutting you off.
How on earth do you imagine a "t1 network" provider would determine that a person using their network from the UK is actually a person from the US with a court order against using their network? And to be clear, the court orders don't compel ISPs to restrict access, or attempt to enforce blocks like you are suggesting.
If you're fully in the Apple ecosystem, like my GF, you get:
- Shared clipboard across devices
- Shared documents
- Shared browser
- Shared passwords
- Free, quality office suite
- Interoperable devices (use iPhone as camera on Mac, for example)
- Payments across different devices (use your watch to pay, for example, shared with your iPhone)
All of this with just one account without any third-party service.
And a billion more things, probably; I'm not a full Apple head.
In the rare case (maybe once per month or so) where that happens, I start a script on my laptop that serves a webapp which both the phone and the laptop can open in their browser to send text to each other.
The overhead of starting it and typing "laptop.tekmol" into the browser on both machines is only a few seconds.
That seems much saner to me than constantly having some interaction between the two devices going on.
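A minimal sketch of that kind of script (hypothetical, not the poster's actual code), using Node's built-in http module:

  // share.ts - tiny shared-text page for two devices on the same LAN
  import http from 'node:http';

  let text = '';

  http.createServer((req, res) => {
    if (req.method === 'POST') {
      let body = '';
      req.on('data', (chunk: Buffer) => (body += chunk));
      req.on('end', () => {
        text = new URLSearchParams(body).get('t') ?? '';  // decode the form field
        res.writeHead(303, { location: '/' });            // redirect back to the form
        res.end();
      });
    } else {
      res.writeHead(200, { 'content-type': 'text/html' });
      // no escaping here; acceptable for a toy on a trusted LAN
      res.end(`<form method="post"><textarea name="t">${text}</textarea><button>send</button></form>`);
    }
  }).listen(8080);  // then open http://<laptop-hostname>:8080 on both devices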
The standard argument here is that the maintainers of the core technology are likely to do a better job of hosting it because they have deeper understanding of how it all works.
Hosting is a commodity. Runtimes are too. In this case, the strategy is to make a better runtime, attract developers, and eventually give them a super easy way to run their project in the cloud. E.g. bun deploy, which is a reserved no-op command. I really like Bun's DX.
Well, if they suddenly changed the license, we'd get a new Redis --> Valkey situation.
Or even more recently, look at minio no longer maintaining their core open source project!
I mean if you're getting X number of users per day and you don't need to pay for bandwidth or anything, there's gotta be SOME way to monetize down the line.
Whether your userbase or the current CEO likes it or not.
No, but faced with either a loss or a modest return, they'll take the modest return (unless it's more beneficial not to, come tax season). Unicorns are called unicorns for a reason.
I commend you for your imagination. Can I ask how you crafted the object to match the dimensions? I’m brand new to 3D printing and currently climbing the learning curve of printing itself but will want to start learning about doing my own modeling soon.
The only criticism I'd make is that patching drywall is dead simple and cheap, so your solution seems possibly a bit overengineered (and, while I'm at it, Andreessen's observation is both facile and meaningless, and is probably more a reflection of the bids Marc Andreessen's house manager gets than anything insightful about labor costs in America).
The hole was of a simple enough shape that I could just design it manually. I used SCAD (OpenSCAD), which is kind of a programming language, supported by tools that can convert it to STL.
I've personally used the Lidar sensors on my iPhone with an app like Polycam to some degree of success. I was doing a scan of a massive oak tree in my backyard to plan out the treehouse tab location and associated treehouse for my kids but fell flat after the model was created (Sketchup is truly enshittified). I'd imagine a similar process for creating the "fill" for the void in the wall.
Thanks, appreciate the recommendation! Polycam looks pretty pricy - is the subscription necessary for casual scanning, or does the free version work? I assume I would need at least the basic subscription.
I’m happy to pay for software, but I really don’t care for subscriptions. (Why no, holding back the tide is not going well at all, why do you ask?)
Well, if you were good at drywall already, drywall patches would be faster and better. But if you are good at printing and scanning, and you enjoy that process, then it's fine.
The challenge with the example is that "success" here is a matter of personal preference. With plenty of other examples, the success criteria are external.
The biggest actual problem would be in a fire: the PLA will burn and let the fire into the wall cavity, whereas drywall would maintain a barrier for much longer. That is why we have drywall in the first place; it is a decent fire barrier.
I would simply do the normal thing of covering the hole with a drywall patch screen, covering that over with drywall joint compound, letting it dry, sanding, and painting. This is an under-$50 trip to Lowe's, and certainly cheaper than a flatscreen TV.
Cosmetically it's probably fine. The downsides all have to do with predictability and the ability to reason about what the wall is made of in the future.
A person who goes to e.g. hang a picture frame or shelf there will encounter a different material with different load-bearing properties than expected. Pushing into the center of that area with e.g. a drill bit will not have the same physical response or give, and depending on how it was braced/integrated with the surrounding wall, the patch itself may be pushed or pulled out of place. Similar for anyone who leans on that area, if it's at such a height.
The solution I was taught as a child is to saw the hole square, put a section of 2×4 behind it spanning the hole, held in place with a drywall screw through the drywall on each side of the hole, cut a square chunk of drywall small enough to fit in the hole, put a drywall screw through the middle of it into the 2×4, and tape, mud, sand, and paint.
I suspect that this procedure is faster and easier than taking a 3-D scan of the hole, 3-D printing a PLA patch, and gluing it in, but it does require most of an hour and the appropriate materials on hand.
It's a solution. There are better solutions, and far worse solutions (anyone who has worked to get a deposit back on a college rental has probably developed a few of their own), and most of them are all still fine because drywall isn't (shouldn't be) structural.
Crucially, even if you are completely unwilling to take a stab at a fix yourself, hiring a local handyman to patch a hole via some good enough technique should still be far cheaper in most places than buying a nice new TV.
But nothing is gonna ever beat buying a 2nd-hand framed picture or plaque or movie poster or grabbing a flyer from the junkmail on your porch and tacking it over the hole... And if you're determined to fix holes with a TV, you can probably find one used for about as cheap / free as any of the other choices. Which is what makes this such a stupid example - the cost of TVs, like framed images or furniture, spans from $0 to "as much as you're willing to pay". Hiring someone can also be arbitrarily expensive, but can by definition never be 0. So the comparison is rhetorical trickery and demonstrates nothing.
...other than, apparently, Andreessen's dissatisfaction with paying tradespeople.
It works, so :shrug: I did the same to replace a part of a door frame I had to remove to make space for a washing machine 4 mm too wide. Nobody sells 400 mm of door frame, so I just copied the frame shape and printed it in 3 parts, and that was it. The filament color matched the frame, so I didn't have to paint.
Can't we build a social network with a simple protocol:
1: Each user has a private key that they use to sign their messages.
2: Each user keeps a list of instances who announced that one of their members follows them. When the user posts something, they broadcast the post to those instances.
Shouldn't this be enough?
It could all be url based. One user, one url.
When Sue wants to read Joe's latest posts, she sends this request:
someserver.com/joedoe?action=latest_posts
When Sue wants to follow Joe, she sends this request:
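(presumably something along the lines of someserver.com/joedoe?action=follow&follower=sue.example.com, mirroring the pattern above)

A sketch of steps 1 and 2 in TypeScript, assuming hypothetical instance URLs and an /inbox endpoint, with Node's built-in ed25519 support:

  import { generateKeyPairSync, sign } from 'node:crypto';

  // step 1: each user has a key pair; followers verify posts with publicKey
  const { publicKey, privateKey } = generateKeyPairSync('ed25519');

  // step 2: instances that announced that one of their members follows this user
  const followerInstances = ['https://instance-a.example', 'https://instance-b.example'];

  async function broadcast(text: string) {
    const signature = sign(null, Buffer.from(text), privateKey).toString('base64');
    await Promise.all(followerInstances.map((instance) =>
      fetch(`${instance}/inbox`, {  // hypothetical endpoint name
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ text, signature }),
      }),
    ));
  }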
> Can't we build a social network with a simple protocol:
> 1: Each user has a private key that they use to sign their messages.
> 2: Each user keeps a list of instances who announced that one of their members follows them. When the user posts something, they broadcast the post to those instances.
> Shouldn't this be enough?
> It could all be url based. One user, one url.
I might be misremembering how it works, but this sounds conceptually similar to how Ghost (the blog platform) works after their recent 6.0 update. They now support federation, posting on Bluesky and Mastodon, etc.
For a small network and low "follower" counts, yes.
But the moment you start scaling to potentially millions of posters each with a disjoint set of millions of followers the M:M connections for broadcast become problematic. The result of a chatty enough group would look identical to a DDOS to many/most of the nodes.
Taylor Swift is the problem. In terms of system design and architecture, it's an interview question for distributed systems engineers: you've got a superstar user with 89 million followers; how do you scale every aspect of your system to handle when she posts?

Naturally you'll object and say that Taylor Swift isn't going to move to TekMolTwitter. But pwg said it won't work after a certain size, you asked why not, and the short answer is that it doesn't scale past N users; you can just cheat and say N is higher than you want to care about. We could do a bit of back-of-the-envelope math to see that notifying 15 million users will saturate the gigabit link on your $5 VPS if each notification packet is 64 bytes, and then design all sorts of queues and caches and Redis and so on.

It's a fun interview question (and a practical problem for Twitter/X), but at the end of the day, if you believe in it, just go build it and get all of your friends and family to join TekMolTwitter (or Mastodon). It's entirely within your capabilities in 2025 to go out and make something like that. So if this is something you believe in, you can just go do it. No one's stopping you.
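Spelling out that envelope math (assuming one 64-byte notification per follower and nothing else on the wire): 15,000,000 followers × 64 B × 8 bit/B ≈ 7.7 × 10^9 bits, i.e. roughly 7.7 seconds of a fully saturated 1 Gbit/s link for every single post, before any protocol overhead or retries.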
That falls apart as soon as one single node is a bad actor and starts sending out DDOS floods.
To add one simple, fundamental objection that scuppers your whole plan: who allocates usernames? What happens if two instances have two separate joesmiths?
Because the technical aspect of building the software is the most fun and nerd-sniping part, but perhaps the least important part in the process of building an audience and encouraging people to adopt it.
I'm not sure requestcatcher is a good one, it's just the first one that came up when I googled. But I guess there are many such services, or one could also use some link shortener service with public logs.
You can easily generate a number of random images with ImageMagick and serve these as part of the babbled text. And you could even add text onto these images so image analyzers with OCR will have "fun" too.
Example code:
  # one solid-color canvas per (color, word) pair, with the word centered on it
  for c in aqua blue green yellow; do
    for w in hello world huba hop; do
      magick -size 1024x768 "xc:$c" -gravity center -annotate 0 "$w" "/tmp/$w-$c.jpeg"
    done
  done
Do this in a loop for all colors known to the web and for a number of words from a text corpus, and voila, ... ;-)
No OS vendor wants you to do that, unless you're using a desktop, and then Google wants you to use Chrome. They all want a 30% cut of revenue and/or platform lock-in. They'll rely on dark patterns and nerfing features to push you to their app stores.
Similarly, software vendors want you to use apps for the same reason you don't want to use them. They'll rely on dark patterns to herd you to their native apps.
These two desires influence whether it's viable to use the web instead of apps. I think we need legislation in this area: apps should be secondary to the web services they rely on, and companies should not be allowed to purposely make their websites worse in order to push you to their apps.
It looks to me like the browser version requires the targeted website to be iframed into the malicious site for this to work, which is mitigated significantly by the fact that many sites today (and certainly the most security-sensitive ones) restrict where they can be iframed via security headers. Allowing your site to be loaded in an iframe elsewhere is already a security risk, and even the most basic scans will tell you you're vulnerable to clickjacking if you do not set those headers.
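For reference, the headers in question look like this (a minimal example; real policies usually allow specific origins instead of blocking everything):

  Content-Security-Policy: frame-ancestors 'none'
  X-Frame-Options: DENY

frame-ancestors supersedes X-Frame-Options in modern browsers, but sending both is the usual belt and suspenders.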
I'm all for a lightweight approach to software development. But I don't understand the concept here.
Looking at the first example:
First I had to switch it from TS to JS, as I don't consider something that needs compilation before it runs to be lightweight.
Then, the first line is:
import {html, css, LitElement} from 'lit';
What is this? This is not a valid import. At least not in the browser. Is the example something that you have to compile on the server to make it run in the browser?
And when I use the "download" button on the playground version of the first example, I get a "package.json" which defines dependencies. That is also certainly not something a browser can handle.
So do I assume correctly that I need to set up a webserver, a dependency manager, and a server-side runtime to use these "lightweight" components?
Or am I missing something? What would be the minimal amount of steps to save the example and actually have it run in the browser?
I guess for most people the standard is to install things from NPM which explains the format of the documentation. If you want to do something completely raw, you can replace 'lit' with something like this:
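For example, importing directly from a CDN that serves ES modules (the exact URL is just one option; jsDelivr's +esm build is assumed here):

  import {html, css, LitElement} from 'https://cdn.jsdelivr.net/npm/lit@3/+esm';

With that, the component file can be loaded straight from a script type="module" tag, with no build step or package.json.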
I estimate the vast majority of "web projects" begin with npm installing something of some sort, yes. React is dominating the web development space (judging from the average "popular web stack 2025" search result), and it and a significant portion of the competing platforms start with installing some dependencies with npm (or yarn, or what have you). Especially projects that compete in the same space as Lit.
That isn't a criticism of projects that don't use npm, and it doesn't make them less valid, but it makes sense for the documentation to match the average developer's experience.