In 1991, I wanted to write about the WWW for NeXTWORLD magazine. Tim Berners-Lee replied to my email, saying he was reluctant to widely publicize that code for the web was available for free, as management at CERN had not yet agreed to release it publicly. The web grew slowly the first two years. At the time, I really didn't understand his hesitance, but in hindsight it was for the best. Two years ago, I wrote about the early days of the web on its 30th anniversary [0].
To me, Tim Berners-Lee is brilliant not just for his vision of a WorldWideWeb and its technical underpinnings, but also for an astute political streak in shepherding it to widespread adoption. I appreciate Jay Hoffmann (the article's author) for marking the significance of the anniversary.
Which, notably, is not made accessible via URL—at least not a "first-class" URL by the author/publisher; if you want to refer someone to the contents of a specific passage—or the entire book, even—then your best bet is with something like OpenLibrary, which has made it accessible in a fairly messy and roundabout way. Not a great endorsement for the vision of the Web as it was intended (vs how it is actually used).
The book itself isn't on the web (although it was authored in a web page editor, NaviPress) --- I don't think TBL ever said that _all_ information/text would be on the Web, so I'm not understanding your point?
Arguably, the fact that it's not is another nod to/acknowledgement of what Ted Nelson wanted to do with Xanadu but failed to pull off.
That's not the book. That's just a (scant) page about the book (with a few broken outbound links).
> The book itself isn't on the web
So you do understand what I mean.
> I don't think TBL has ever said that _all_ information/text would be on the Web
So you don't understand what TBL meant.
The Web includes everything. Whether it's "on" the Web in the sense of being readily served up by a functioning piece of software that responds to requests with the full contents or not is another matter (but it is supposed to have a URL even if the answer is "not"—as in "not currently available").
I'd thought that that aspect of the web went out the door when URL was changed from meaning:
Universal Resource Locator
to
Uniform Resource Locator
Do you have a quote where TBL indicated that _all_ information/text would be on the Web?
I don't see why a page indicating the existence of a physical book isn't adequate, nor why a text has to be available online when there's no workable mechanism for compensating the copyright holder --- the inability to work up such a mechanism was why Ted Nelson's Xanadu never got anywhere.
The problem with URL as Universal Resource Locator is that where a file is stored does not map to how it should be searched for (hence the original web directory pages, and the reason search engines are the primary interface for finding anything on the Web) --- moreover, few people are formally trained as librarians, and most aren't inclined to name, organize, and store things in formal hierarchies so as to facilitate that.
I don't understand what you're saying. Patreon, which exists and permits "stuff placed on the Web" to be accessed upon payment, either works or it doesn't. HTTP additionally has a whole range of standardized 4xx status codes related to authorization—notably 402 Payment Required, but also 401 Unauthorized and 403 Forbidden. Similarly, subscriber-only podcasts exist.
Either people are getting paid for stuff (which would necessarily imply that it's possible), or they aren't.
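For the curious, here's a minimal sketch of what the protocol-level piece looks like, using Python's standard http.server. The endpoint and token check are invented for illustration; a real paywall obviously involves an actual payment backend:

    # Minimal sketch: gate content behind HTTP 402 Payment Required.
    # PAID_TOKENS stands in for a real payment backend; the header name
    # X-Payment-Token is hypothetical.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAID_TOKENS = {"example-receipt-123"}

    class PaywallHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.headers.get("X-Payment-Token") in PAID_TOKENS:
                body, status = b"Here is the full document.\n", 200
            else:
                body, status = b"Payment required to view this resource.\n", 402
            self.send_response(status)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), PaywallHandler).serve_forever()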
My point is that there's no sanctioned mechanism for placing copyrighted material on the Web that automatically pays the owner of the copyright --- without that, there should be no de facto expectation that a copyrighted document exist on the Web, which is what I understood you to be specifying.
For me as a kid, there was a huge gap between something that was free and something that was not. The difference between asking my parents for something and just getting (downloading) it.
I now have my own disposable income, but I still think long and hard whenever I have to spend money; the actual price is secondary to the fact that the thing is not free.
Yeah; I have the same reaction to products like Notion or Figma where someone is footing the bill to keep servers on. And if those servers ever turn off, I lose access to my stuff.
Either I'm paying for a product, and as soon as I stop paying, I've lost access to my stuff (Figma, Notion). Or the product is free, and someone is somehow intending to make money from me being their user. They'll either start charging down the line, or they're selling my data or my attention to advertisers (tiktok, twitter, etc).
I don't think I like using SaaS products because they're never really free. At least not in the same way the web is free or vim is free.
It’s ok to pay money for things that are useful to you! Often most free alternatives have drawbacks or downsides of their own. Sure, there’s a chance they will eventually get shut down, but that almost never happens without warning. And it’s always a good idea to make backups of your important data for any kind of tool or service.
I think a lot of people in this community get caught up with “owning their own data” just in case, but in reality there are almost always ways to migrate your data when you need to.
> It’s ok to pay money for things that are useful to you!
I do pay money for useful software! For example, I have no problem paying for IntelliJ. With IntelliJ, my data isn't trapped behind my subscription. If I stop paying for IntelliJ, the software I've written is still on my computer. It still works, and I can still edit it just fine using VS Code or something.
Likewise, I don't have a problem paying for Procreate on my iPad. Now that I've paid for Procreate, I own it forever. And I can back up all my Procreate documents onto an external hard disk (though the process is awkward) and they're mine forever. That's important.
I don't even mind paying for Fastmail. All my data is stored on their computers, but because it's email, I can at least get all my data off Fastmail's servers if I ever want to. And then I can migrate my custom domain somewhere else, and I won't have lost any of my data.
But programs like Notion are no good to me. If I do creative work in Notion, as far as I know my data is trapped. Does Notion even have a data export tool? Can anything even read Notion's data format? Not that I know of. Google Docs has the same problem (though at least that's free for personal use). And Figma.
(I just checked - you can export Notion data to HTML & PDFs. But you can't get the raw data. Presumably, Notion databases and things are all lost. I'd have to copy+paste into another tool, losing all my formatting, and start again.)
I don't like my creative work being trapped like that. I don't want to lose access to my own work because the only tool which edits that data gets shut down or acqui-hired. I don't want to pay a monthly subscription fee to continue to have access to my own work.
I bought Monodraw years ago (an ASCII art drawing program). The developer has stopped working on it, but the software still works great. If it were a web app, I'm sure the website would have stopped working or gone down a long time ago. Just like Google Wave did. Or LiveJournal. Or hundreds of other sites over the years.
> in reality there are almost always ways to migrate your data when you need to.
So long as they put up an API that lets you get your data out. And the data you can get out has the full data model (lots of sites don't provide this!). And you personally remember to back it all up before the site goes down. (Oops - I've failed at this before). Even then, you can't really interact with your data any more except through 3rd party tools.
> Likewise, I don't have a problem paying for Procreate on my ipad. Now that I've paid for procreate, I own it forever.
Bad example. You only own it as long as you're in possession of a working iPad which runs the thing. Might be a long time, but I've got lots of apps from the 32-bit Mac and iOS era which don't run anymore on new hardware.
All software eventually gets discontinued or you can't run it on current systems without heroic measures. More common and more standardized formats are better but good luck extracting the contents of a 1980s vintage DOS word processing program or any of the zillions of presentation or image processing programs of the era without a huge amount of work--and maybe even then.
Hmm? You can get the raw data, and data for tables too. I've done a full dump before for offline analysis. The API for getting it in something slightly closer to the native format than just CSV and Markdown works fine too.
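If it helps anyone, here's a rough sketch of that kind of dump against Notion's public REST API. The integration token and page ID are placeholders, and a real export would also have to paginate and recurse into child blocks:

    # Rough sketch: pull a Notion page's blocks as raw JSON via the public API.
    # NOTION_TOKEN and PAGE_ID are placeholders; a full dump must paginate
    # (start_cursor) and recurse into blocks that have children.
    import json
    import requests

    NOTION_TOKEN = "secret_xxx"   # hypothetical integration token
    PAGE_ID = "your-page-id"      # hypothetical page ID

    resp = requests.get(
        f"https://api.notion.com/v1/blocks/{PAGE_ID}/children",
        headers={
            "Authorization": f"Bearer {NOTION_TOKEN}",
            "Notion-Version": "2022-06-28",
        },
    )
    resp.raise_for_status()
    print(json.dumps(resp.json(), indent=2))  # raw block data, not just HTML/PDF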
I’m glad this ecosystem is slowly growing around Notion & their data. But any tool less popular than Notion (or, less popular with developers) will not get this sort of attention. I work with a designer who keeps paying for (InDesign?) simply so she won’t lose access to some old design work she’s done.
Even with Notion, it’s obviously a degraded experience to import the data into Obsidian using some motley set of extensions. Especially since Obsidian isn’t designed to have feature parity with Notion. My experience as a user is straight up better with shrink-wrapped software that I can buy once and run indefinitely, even long after the vendor has moved on. I don’t want to play the guessing game, wondering which of the products I use will die before someone makes good 3rd party tooling. After being burned a couple times by this stuff, I really prefer not needing to worry about my work sinking below the waves. Local software gives me that power.
As you can see, the point was not "paying for things you like", if you could buy them and they're yours. The problem is paying an ongoing subscription where they can pull the plug at any time.
And owning your data and your tools isn't a theoretical or fringe concern. It is a central aspect of many people's lives, with wide practical consequences.
"And owning your data and your tools isn't a theoretical or fringe concern. It is a central aspect of many people's lives, with wide practical consequences."
Ironically, most development eschews this practice. The default approach appears to be: "Import it... if you can."
Using libraries that are open source or perpetually licensed doesn't come with that sort of platform risk. The analog there would be relying on third-party APIs.
Sure; but npm packages are much more likely to still be around in a few years than some random startup’s web app. Even the original developers can’t take their own code off npm in most cases any more. And if you really needed to replace a package you depend on, most useful packages on npm have a bunch of good-enough alternatives you can switch to with a little bit of effort.
Come to think of it, I have been bitten by something like this before. I used the (mostly) single-developer project clojure-android to build an app and had no good options when bit-rot set in.
Sure but even then, forking & fixing that one library will be a walk in the park compared to remaking Figma or Google Docs if those sites ever go down. They don't seem like remotely comparable risks.
In stark contrast to those 2 examples of Notion and Figma, consider Obsidian. With its massive OSS plugin ecosystem, it approximates their combined capabilities, but has a foundation of offline-first, local markdown files.
TBH I'm not sure; I used Notion for a few months a couple years ago but am unsure about "multi-view dbs" in particular. But I will say, the breadth and power and extensibility of Obsidian with its massive OSS plugin ecosystem is hard to overstate. If you like a Notion feature, I'd bet heavily it's doable in Obsidian.
> Also I bucket "free" into "data harvesting", "actually gratis", or "libre".
Yes, and beware of the people who want you to ignore the second two categories. "If you aren't the customer, you're the product" is an insidious way to make you think there's no escape from being monetized, while simultaneously lying to you by saying you won't be a product if you pay using money. It's pretty universal that paying customers are a product sold to bigger customers, and I'm very wary of the motives of anyone who wants me to forget that.
Not the parent poster, but I'm assuming "actually gratis" means "free as in beer" (doesn't cost money, presumably with no data harvesting) while "libre" means "free as in speech" (open source license of some kind).
Which is why, as a kid, I was fine with piracy. I couldn't have bought Photoshop in a million years, and back then there were no free educational licenses or whatnot.
These days, I can afford it. Hell, I even bought WinRAR as a thanks for many years of using it.
Same, we had so much more access to e.g. games (on different platforms); we didn't have the disposable income for consoles but we did save up for a PC and that basically launched my career.
On the other hand, that was an early indication of the entitlement that we have; we rail against the streaming services and resort to piracy not because it's unfair that they ask for money, but because we believe we're entitled to consume content. We're afraid of missing out, too.
And this is the real difference between him and Ted Nelson.
Nelson fantasizes about being the media mogul of a publishing empire under his technology. It's ultimately a proprietary format play.
This is the significant difference. I tried to explain it to Ted, personally, in Sausalito about 10 years ago over a lunch, as politely as I could.
You need to blow it open, as much as possible and make it as easy as possible without necessitating others to opt in. Information networks can't be built by an authoritarian conquer and covet strategy.
It's been tried countless times and history has zero successful test cases to study with that strategy. From DIVX to Flexplay to DVD-D to any format war ever, crippleware rent-seeking is a death knell. It will firebomb the most successful thing into oblivion overnight.
I failed to convince him this was the key difference. Ah well.
They're successful mostly because (many) end users ignore patents; at least the ones that don't work at large companies with dedicated legal departments (and targets on their back because they have actual money).
Further, that's why Vorbis, VP9, AV1, etc. show less adoption even though they're free: for end users, the other formats have been baked into hardware that only works with the closed ecosystems, and the average guy can ignore the patents and continue to work with that non-free ecosystem.
I don't know if this counts though. It's not crippleware or control, it's a pretty modest nominal tax on a physical good (between 2 and 3%) that otherwise already costs money.
You could say the market pivoted away from things covered by the law into the area of the exceptions but that's a little too neoclassical for actual human behavior - I'd need to see significant direct evidence to support the idea that a 2% cost increase was both passed on to the consumer and the consumer made significant purchasing decisions from that price signal delta.
I'm going to guess that in practice, the 2% difference was absorbed in, for example, cheaper packaging, scaled manufacturing, permitting larger margins for QoS failures or by placing on cheaper areas of the distributor's shelf space as opposed to being directly transmitted to the actual consumer price of say $9.80 versus $9.99.
Even if it was directly transmitted, the majority of consumers aren't that discretionary with such small deltas. But this is another topic entirely.
Regardless, that's the only thing I can find that disputes my initial claim.
I was thinking about this as well, but in a broader sense: Tim was able to execute on his vision so that he got the project going and useful very fast, postponing parts of his original vision (like editing, link and page discoverability ...) for later if necessary.
and it's the same reason why i believe micropayments in general will not be the disruption that many expect. sure some people may actually make an income that way, but many others won't. (just like some people make money through ads and many don't.)
I know Ted's model here is a giant royalty ponzi scheme where you distribute things in rolling waves of fractional attributions but let's do something he hates and ignore that.
Instead it's people getting paid for content in the digital era.
You have Substack, Bandcamp, Patreon and OnlyFans - a similarly spirited but more simplistic version of Ted's system. His steps are just too big.
Glad to have him around though, he's a true gem.
I have a draft book "when brilliance goes awry" about half a dozen or so of these people I knew personally. I abandoned it after about 2 years or so because I couldn't figure out how to, say, talk about John Draper in a way that I'd be comfortable with him reading, while also being as brutally honest as I want to be (I've been off-and-on acquaintances with him for about 20 years).
There's also so many brilliant engineers from the homebrew days that never made it out of the shadows like Steve Inness, who sadly took his own life. It's a palpable dynamic that needs documentation from somebody less autistic than myself
I mean hell, I was sitting a few feet away from John Warnock just last month at the University of Utah, who talked about Steve Jobs #1 and #2 - a dramatic act of self-reflection, as an example of someone who came back from the wilderness. The talk is somewhere on youtube; it was run by IEEE for the 50th anniversary. I can't find his talk right now.
but these are subscriptions, not pay-a-few-cents-per-view where the payment is so small that you should not care. it's like paying per SMS. i think, except maybe for high earners, people do care and prefer services that don't charge in an uncontrollable way.
pay-per-view for movies works because it contrasts with buying the movie on a dvd or going to the cinema. but the same is not true for reading webpages like news articles (which is the most obvious use for micropayments)
the idea behind micropayments is that everyone out there gets paid for every view they get to their site.
but i really can't see myself paying e.g. 1ct per webpage view. even if it only adds up to $1 per day, that's $30 per month, just for browsing the web.
my fear is that most people will vastly overestimate the value of their content if they are allowed to decide how much visitors should be paying. we would have to lock the price to 0.1ct in order to make web browsing affordable, but at that price you need 1000 views per day to get any return at all for your efforts. you'll need 10-20 times more if you want this to pay your salary.
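back-of-envelope, with the numbers above (a quick sketch; every input is an assumption):

    # back-of-envelope for the numbers above; every input is an assumption
    pages_per_day = 100                 # a casual day of browsing
    price_per_view = 0.01               # 1ct per page
    print(pages_per_day * price_per_view * 30)   # 30.0 -> $30/month just to browse

    locked_price = 0.001                # price locked to 0.1ct
    target_income_per_day = 1.00        # "any return at all": $1/day
    print(target_income_per_day / locked_price)  # 1000.0 views/day; a salary needs 10-20x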
for most people it's not even worth the trouble.
payments only make sense for specialty content where i am willing to actually pay a premium to get it. which is what we have now. without a general micropayment system.
(btw: your book project does sound very interesting and i hope you find a way to complete it, but i can see the challenge of talking about the less positive sides of people without being hurtful)
Ever wonder why we don't see as much DocBook around as we might? I mean, aside from its XML Hellscape? Whelp. Because a lot of the widgets in this "open standard" are tied to closed-license tools.
Take //revisionbar -> //fo:change-bar-begin. Go ahead and try to run that through Saxon. Yeah, that's right, the Official Opentopia DocBook FO gives you an element that can only get processed into PDF via vendor FO processors. And those vendors are NOT cheap.
Crap like this fed the growth of web-based PDF engines like WeasyPrint, Vivliostyle, Paged.js, Prawn, and, on the paid side, Prince. Incidentally, making change bars with Asciidoctor and Paged.js is a frickin' snap, and you do it from the git CLI instead of hand-coding every goddamn change bar.
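To be fair, the diff-detection half is doable with free tooling today. Here's a rough lxml sketch that at least locates DocBook revisionflag markup (the input filename is hypothetical); it's rendering the actual change bars to PDF where the vendor lock-in bites:

    # Rough sketch: find DocBook 5 elements flagged as revised using lxml,
    # no vendor FO processor required. "book.xml" is a hypothetical input.
    from lxml import etree

    tree = etree.parse("book.xml")
    for el in tree.iter():
        flag = el.get("revisionflag")   # added / changed / deleted / off
        if flag and flag != "off":
            print(el.tag, flag, "line", el.sourceline)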
(But wait! You can hire yet another closed source tool to do a docbook diff that inserts the markup that can only be interpreted by another closed source tool! SIGN ME UP)
We're not even going to touch the many, many, many XML specifications that ride on proprietary blobs in their own PIs - without which the XML is completely unparseable. Or dual-mode validation with DTDs riding alongside schemas in delightfully undefined ways. Or . . or . . blabbity blabbity blah. Short version: whatever extra functionality was gained from XML publishing - and I'm not convinced there were any gains at all, even theoretical ones - was largely just not worth these sorts of traps. So today the whole ecosystem is one that's largely based on government requirements and the starry-eyed consultants who love them. The rest of techcomm picked up lightweight markup or joined the Church of MadCap.
> In February of 1993, the University of Minnesota made an announcement. In specific commercial usage of the protocol, they would be charging licensing fees. Not large fees, and not in all cases. But, in some small way, they would be restricting access.
I remember gopher and thinking it was pretty useful and easy to use.
This decision by them seems to have been a monumental mistake.
I'm curious if anyone here knows who at Minnesota made this decision and why.
That, to me, is another interesting aspect to this story, but it is understandable people are more reluctant to talk about a failure than a success.
> In a time where we are having budgets slashed, it is impossible to justify continued (increasing) resources being allocated to Gopher development unless some good things result for the University of Minnesota. This is a fact of life.
Wow. What happened to publicly funded universities doing work to benefit all of mankind? They already had Gopher named after their team mascot, so PR value to the University would always be there.
I've seen government-funded universities trying to restrict and license their works a lot lately, and somehow assumed it was a new phenomenon. Perhaps I had an overly rosy view of academia in the past.
The funny thing is that Tim Berners-Lee used the same kind of problem to advocate for releasing the web into the public domain. He realized that CERN wouldn't give him more resources to create and maintain browsers for each platform used at CERN (CERN was fully focused on the planning and approval of the LHC), so he decided to release the libwww library into the public domain, which allowed others outside CERN to build web browsers themselves. That's how the early browsers like ViolaWWW, MidasWWW, Lynx and Mosaic were created.
State governments continually cut budget support for tertiary education in the name of fiscal responsibility, so unfortunately many no longer have the means to do so.
As we saw with newsrooms, when faced with existential crises, even the most traditional firewalls will get scrapped in the race to find nickels in couch cushions.
I am saying that research budgets and educations budgets shouldn't be intermingled. (And more generally, that research and education institutions shouldn't be intermingled by default either.)
If they were separate institutions, then cutting funding for one would not impact the other.
It's interesting to read the first link, but with DNS in mind instead of gopher. All the resource arguments still apply, and indeed DNS is not free, but equally also not expensive.
Now granted, DNS has a scarcity component, and if it was free it would basically be consumed by bots and be useless.
So back to Gopher. Which was going to license the server software (? - it's unclear, but appears to be targeting the server), and is that not the business model Netscape adopted?
Which was then subsumed by Apache and IIS?
So I guess I'm somewhat confused as to the actual benefit of the CERN "release" - (allowing others to create servers and clients?) - although clearly it may have been instrumental in gaining mindshare.
MOUNTAIN VIEW, Calif. (December 15, 1994) -- Netscape Communications Corporation today announced the availability of the 1.0 versions of its Netscape Navigator and Netsite server line, including the Netsite Commerce Server with integrated security. The availability of these open software products, announced in September, enables Netscape Communications to offer the first complete, secure client/server software system for conducting commerce and exchanging information via the Internet and private TCP/IP networks.
According to The Web Server Book, at the second International WWW Conference, held in Chicago in October 1994, a straw poll of attendees showed that “90 percent of present Webmasters used the NCSA server”
Netscape's original business model was selling Navigator.
That didn't work for various reasons, like IE being given away for free. So then they switched to server side products. But there wasn't enough depth there, and there were also free web servers, so they couldn't create enough value to both pay for server development and the browser, so they went under.
Then MS came along. The browser was undermining their paid-for product, not generating revenue. So IE "went under" (was destaffed).
Then AOL came along. For unclear reasons they bought Netscape and paid for Mozilla's development for a long time, probably losing all their money. They eventually went under.
Then Apple came along. They needed a browser of their own because they didn't want to depend on IE any more. But beyond being able to browse the mobile web they didn't need much more than that. Apple has not gone under, but the web is clearly just a feature of their OS and to the extent it gets developed, it's to keep up with Chrome.
Then Google came along. They saw what a disaster zone the browser market was, plagued by non-existent or directly contradictory business models, and they feared Microsoft crushing them above all else, just like they'd done for Netscape. So they started paying browser makers to set them as the default, and wrote the Toolbar, and eventually graduated to making their own browser from scratch. AdWords is the closest thing the web ever had to an actual business model.
It's reasonable that the people funding Gopher didn't see any commercial future in it, especially as Gopher had search integrated into the protocol. Even Google tried to sell themselves to Yahoo for $1M early on, and were rebuffed, because clearly web stuff wasn't worth that much.
The brilliant piece of the web, the bit that was revolutionary, the bit that most web alternatives not only fail to implement but appear to be fundamentally unable to implement (because, let's face it, it's a terrible idea), is the idea that one web document can load any other web document.
HyperCard is the common bogeyman for a better web that failed, but in my mind the closest modern web alternative that failed to do this is the app store. Now, I know why the app model does not let you load arbitrary resources and you know why the app model does not let you load arbitrary resources, but in the right sort of light, if you squint the right way, you can see what might have been: how perhaps the exec() syscall might have taken an argument, a URL, of what to exec.
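To make the thought experiment concrete, here's a toy sketch of that hypothetical exec(url) in Python rather than as a syscall (and, to be clear, running unsandboxed remote code is exactly the terrible idea conceded above):

    # Toy sketch of the hypothetical exec(url): fetch a document, then run it.
    # No sandbox, no permissions model -- which is precisely why the app
    # model forbids loading arbitrary executable resources.
    import urllib.request

    def exec_url(url: str) -> None:
        with urllib.request.urlopen(url) as resp:
            source = resp.read().decode("utf-8")
        exec(source)  # the web's "load anything" superpower, minus all safety

    # exec_url("https://example.com/program.py")  # hypothetical usage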
This article could use some precision about exactly what rights they were giving up. Rights to the "World Wide Web" not so much. Specific copyrights and patent rights are something else. There could have been patents that were essential and might have put the World Wide Web as we know it in a deep freeze from the beginning.
It is there in the original CERN approval letter at the bottom of the article. CERN was relinquishing rights to, and putting into the public domain, the following three software artefacts and binaries:
- W3 basic ("line-mode") client
- W3 basic server
- W3 library of common code
In How the Web Was Born[1] I read that Tim also considered the GPL at some point, as he liked the FSF's approach, but as the OP notes in the post, he decided to go with public domain so that it would not scare off some companies from using the libwww code.
The whole early period of web development is quite interesting. The licensing decision was not the only crucial one that made the web succeed; there was also the release of libwww, the focus on a super simple browser which works on any platform (the line-mode browser), and the introduction of gateways ... Recently I wrote a blog post about it, focusing on the design of the first browsers and the forces which were driving it.
Btw, early users of the web mostly didn't know about Tim's original vision and the first web GUI browser/editor prototype. See for example this HN comment from someone who was using the original line-mode browser, under a post about the first browser/editor:
The impact of AI models will be as big as the web; now is the critical defining moment of LLMs. Are we going to allow a few big companies to control our access to AI? This can happen just like it did with the Apple App Store. We must find a way to make AI free for all.
The difference to me is that the utility of cryptocurrency was always mostly hypothetical. LLMs are useful now, and can only get more so. I do agree that we're currently in a hype cycle though; I just think it's akin to the 90s dotcom bubble. That was over-hyped too, but it didn't mean the internet wasn't going to change the world.
> I do agree that we're currently in a hype cycle though; I just think it's akin to the 90s dotcom bubble.
Yes. I remember attending a professional conference around 1996 or 1997. One of the presenters spoke about the Internet and how we could supposedly use it in our work. Some people in the audience were doubtful (partly because the live demonstration didn’t go well due to problems getting a dial-up connection) and said it just seemed like a bunch of hype.
Someone in the audience responded, as I recall, “Yes, there is a lot of hype about the Internet now. But it’s clearly becoming useful, and it won’t be hype forever.”
If you made a steady monthly investment into an index of NASDAQ stocks throughout the dotcom boom time and then just held them, you would have done reasonably well today.
So arguably there was no dotcom 'bubble' in the sense that the dotcom stocks weren't overvalued as a whole. (Not all individual stocks did well. But the future is always hard to predict. And stock prices are about expected values of future prices; actually realised futures can differ.)
The dotcom bust was real though: dotcom stocks were way too cheap after the so called 'bubble' burst, in the sense that buying an index of NASDAQ stocks then and holding them until today would have given you crazy outsized returns.
> "Thirty years ago, Tim Berners-Lee and CERN gave the world a gift..."
The only reason the decision seems important now is if you think the internet could never have been developed without the all-powerful minds of "Tim Berners-Lee and CERN"...
As computers got faster, nations and companies wanted to communicate between computers. It seems inevitable that the only way to do that was a decentralised, open, and free network...
If it wasn't CERN, someone else would have built the internet...
Look at the current internet landscape- most users interact, find, and share content through the walled gardens of social networks. I don't think "decentralized open and free" networks are at all inevitable. To me those characteristics are shrinking, not growing.
Let's be glad it was someone with the foresight to imagine it as an open protocol from the beginning, though. Interoperable and free to use, with things built around it.
The fact that they didn't try to monetize the underlying tech is almost a miracle.
AI should follow the same principle. It has the potential to become as widespread as the web... And yet here we are paying a fee per N tokens to use GPT.
The img tag was important, but it doesn't mean that the web was text-only before the tag was introduced: people were linking to pictures the same way one would link to any other resource, and when you opened the link, the picture was shown in a new window.
> All documents have a specific owner, are royalty-bearing, and work through a micropayment system. Anyone can quote, transclude, or modify any amount of anything, with the payments sorting themselves out accordingly.
If you're of the right philosophical bent, that sounds familiar:
> Galambosianism is an early precursor to libertarian philosophy promoted by an aerospace engineer named Andrew J. Galambos (1924-1997) during the 1960s. He gave a series of for-pay classes starting with "V-50" ("The Theory of Volition"). Unlike other precursors to libertarianism (such as the ideas of Ayn Rand, Robert LeFevre, Albert Jay Nock, and Ludwig von Mises), Galambos' ideas have largely been thrown in the dustbin of history by his fellow libertarians.
> Galambos called himself a liberal, but in reality was philosophically somewhat closer to anarcho-capitalism. One of the core ideas of his philosophy, and the main sticking point preventing broader acceptance of it, was his belief in absolute intellectual property rights, meaning the inventor or originator of an idea should have absolute, lifelong heritable control over that idea and all the profits derived from it.
[snip]
> Other libertarians quickly found Galambosians to be obstinate cranks. Reportedly, Andrew Galambos and Ayn Rand once met and within five minutes each had declared the other insane. Also reportedly, Galambos would keep a jar or coffee can next to him when speaking in public, into which he would drop a nickel or dime any time he mentioned the name of another person, or mentioned an idea or phrase attributed to another person, to symbolize he was paying "royalties" to them for his use of their intellectual property. He went so far as to drop a nickel in "royalties" to the long-dead Thomas Paine every time he used the word "liberty", on the mistaken belief that the word was invented by Paine. Also reportedly, he was born Joseph Andrew Galambos, Jr. but legally changed his name to Andrew Joseph Galambos so he wouldn't infringe on his father's intellectual property rights.
[snip]
> Needless to say, a wiki article discussing Galambosianism should not be allowed in a free society, but it is okay only if you drop a nickel in the jar after reading this article.
I wonder if Ted Nelson ever pays royalties to Galambos. Probably not, the slacker.
But today, this is irrelevant, as the "web" was stolen by the BlackRock- and Vanguard-owned big tech companies.
It was sneaky and malicious: a "web engine" became grotesquely and absurdly massive and complex, and that includes its SDK (yeah, a C++ compiler is even worse), and only very few can actually "work" on one. Only Google and Apple provide "for free" web engines: Blink from Google, the Google-financed Gecko (Firefox), and WebKit from Apple. The only way to build those is with gcc (~MIT) or clang/llvm (~Apple).
And you call that digital freedom? Those are digital chains.
The only way out is hardcore regulation mandating noscript/basic HTML browsers and protocols that are simple but able to do a good enough job, and stable over time (for instance IRC, if you need a text chat).
"Oh no, all this complete free and open source software exists but it's complex".
Gemini exists, gopher exists, and it is completely possible to make websites with no JavaScript.
You're demonizing Firefox and MIT and calling their output digital chains. It's fair to say that is a fringe opinion.
Let's get angry that TCP is complicated, too. How many open source TCP stacks are there? And Ethernet! Only one cable standard. Who is behind it? Xerox!
> [...] blackrock and vanguard big tech companies.
What do you mean by that? Are you talking about index funds or something?
[0] https://danielkehoe.com/posts/early-days-of-the-web-1991/