
Some 20 years ago I started a job at Google in Mountain View, and they were paying for a rental car, so Enterprise sent a driver to pick me up to do the paperwork. On the way I was chatting with him, telling him how amazing life at Google was, all the restaurants and the stocked kitchens and massage rooms on every floor of every building etc etc. He said "Do you know what this campus used to be before Google?" I said "Yeah, they told us at the orientation, it was SGI." The driver said, "Yes, and ten years ago it was exactly like that at SGI, too. I was an engineer there."

Yes, companies were lazy and greedy even way back when. But there are a number of facts that come into play when it comes to UI being much shittier today:

1. Personal computers before the 21st century were really kind of shit. Let alone mobile devices.

2. Software was largely a product that people paid for. It even came in boxes.

3. Software vendors were usually in a highly competitive environment. They had to deliver value for money if they didn't want to get eaten alive.

This meant that the software had to both work on the limited resources of 1990s shitty computers—limited storage, limited speed, limited display colors and resolution, etc.—and be useful to the end user. So companies were kept a lot more honest in terms of UI design. Circumstances forced them to deliver functional, efficient UIs. These days, our computers are fairly powerful and companies are in the business of selling services (or eyeballs to advertisers) rather than software. The user-facing software itself is a loss leader, and if making it a shitty Electron app, or desktop-mobile "convergence", helps save development costs, companies will do it.


I am trying to design a language based on Ian Piumarta's metaprogrammable object model [1] (i.e. a Smalltalk-like language where only two methods are defined "a priori" in the entire system; everything else is late bound and can be changed at runtime — believe it or not, more flexible than any Lisp), and I was trying to figure out how to implement basic stuff like number addition in the language, when nothing has been defined yet.
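
To make that concrete, here is a minimal Python sketch of the flavor of object model I mean, as I understand Piumarta's paper: vtables are themselves ordinary objects, only addMethod and lookup are built in, and every other behavior (including '+' on numbers) is installed and dispatched at runtime. The names are mine for illustration, not Piumarta's actual API.

    # Minimal sketch (not Piumarta's API): only add_method and lookup are
    # primitive; everything else is a message send resolved at runtime.
    class VTable:
        def __init__(self, parent=None):
            self.methods = {}
            self.parent = parent

        def add_method(self, name, fn):          # primitive #1
            self.methods[name] = fn

        def lookup(self, name):                  # primitive #2
            if name in self.methods:
                return self.methods[name]
            if self.parent is not None:
                return self.parent.lookup(name)
            raise AttributeError(name)

    class Obj:
        def __init__(self, vtable, data=None):
            self.vtable = vtable
            self.data = data

    def send(receiver, message, *args):
        # Every other operation goes through lookup, so behavior can be
        # replaced while the system is running.
        return receiver.vtable.lookup(message)(receiver, *args)

    # Bootstrapping '+' on numbers using only the two primitives:
    number_vt = VTable()
    number_vt.add_method("+", lambda self, other: Obj(number_vt, self.data + other.data))

    five, six = Obj(number_vt, 5), Obj(number_vt, 6)
    assert send(five, "+", six).data == 11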

Piumarta shows that with his model any function/closure/method can, in principle, be written quite easily in any language, so at first I was thinking of having tagged asm blocks:

    Number + other = {.asm. 
        mov rbx, [self]     ; load the receiver's value
        mov rcx, [other]    ; load the argument's value
        add rbx, rcx        ; rbx := self + other
        mov [self], rbx     ; store the sum back into the receiver
    }
    assert: { 5+6 } equals: 11
But then I would have to write an assembler, in the bootstrapping language (currently C which I've grown to dislike). So I thought about Forth:

     Number + other = {.forth. 
        ( self other -- self )
        \ fetch other, fetch self, add, then store the sum back into self
        @ over @ + over !
    }
Which seems so elegant and portable: the bootstrapping language can be Forth (the Forth interpreter itself written in asm), and I don't have to write an x86_64 assembler in C. Win win win.

And during my research I came across this, running Pascal in Forth. Time is a circle and computer science keeps reinventing itself :-)

I like the author's opinion that a great platform is one that lets you choose the perfect language for the problem at hand. My (ideal) language is fully late bound, based on message passing, which is extremely flexible, but not ideal for number crunching. Is this much of a problem, if a user can drop down to FORTRAN in the hot loops? I feel that is a much better idea than creating a multi-paradigm language that does everything but the kitchen sink (cough, C++ and Rust), and is either mediocre like the former, or very complex like the latter.

IMO, we need simpler languages, based on the next level of abstraction — communication and not computation.

1: https://www.piumarta.com/software/id-objmodel/

See also COLA: https://www.piumarta.com/software/cola/ based on this model, but still too advanced for my brain to digest.

---

If anyone reading is familiar with Piumarta's COLA/object model and/or very late bound minimal systems, please drop me a line, I would be happy to pick your brain.


Early in my career, I was at GE in scientific/engineering software. The unit shrank, and I got laid off.

While still on my accrued vacation, I sent out some resumes, and in two weeks went on seven interviews and got five offers. I took the best offer, again in scientific/engineering software, with a nice raise and another nice raise the next year. Soon I was making in annual salary six times what a new, high-end Camaro cost.

My wife was in her Ph.D. program in essentially mathematical sociology, and my career was getting much better.

I went for a Ph.D., and got it in some applied math, stochastic optimal control with some algorithms, software, etc.

My wife also got her Ph.D.

That was the good news.

After my Ph.D., my career was totally shot. I was nearly unemployable at anything except academics, which I did not want -- I wanted to make money, support my wife and myself, buy a house, have and support kids, etc. The academics paid less than I was making before my Ph.D.

Also my marriage was ruined: The stress of her Ph.D. work threw my wife (Valedictorian, PBK, Summa Cum Laude, Woodrow Wilson) into a clinical depression. She never recovered, and her body was found floating in a lake near her family farm where she was visiting trying to recover.

Finally I ended up at the IBM Watson lab in artificial intelligence. I invented some algorithms, wrote software, worked with high end customers, published some papers, and then IBM lost $16 billion in three years, went from 405,000 employees down to 209,000, and the Watson lab went from 4500 down to 1500 with 500 of those temporary, and I was out of work. The guy who walked me out the door was immediately demoted out of management. The guy two levels up, with a corner office, 55 people, a budget, a secretary, was reorganized to have one more level of management between himself and the CEO, given a six month performance plan, and then demoted out of management.

Actually at IBM, due to the costs of commuting, housing, etc., I lost money. IBM never paid me enough to live and commute, certainly not nearly enough to buy a house and support a family. I saved money even in graduate school; at IBM I lost.

Out of IBM, I was absolutely, totally, permanently unemployable for, as far as I could tell, anything at all, anything, at least anything that would pay enough to live and commute to work. I sent 1000 resume copies and got back silence or nothing. Period. I'm a native-born US citizen and have held security clearances as high as Secret. I've never been arrested or charged with a crime except for minor traffic violations. I've never been in court. Never used illegal drugs or made illegal use of legal drugs. Never been intoxicated. Am in good health. I have proven high aptitude, interest, and accomplishments in STEM fields. I've written a lot of significant software. Yet, I was treated like I had a felony conviction.

But I could still do applied math and write software. I was good at several programming languages, TCP/IP, lots of applications, etc. And I'd had enough experience in business to see how it worked.

So, I thought of a problem that maybe nearly every Internet user would like to have solved, derived some math for the first good solution, drew out an architecture, fast, reliable, and scalable, for the software and server farm to present the solution to users via a Web site, and started writing the software, on Windows in .NET, ASP.NET, ADO.NET, etc. with Visual Basic .NET.

As of last Friday, I got all the planned software running.

So, I have a few, small revisions, and then will load some initial data, give a critical review, do alpha and beta tests, load some more initial data, plug together a server, get Windows Server and SQL Server from the Microsoft BizSpark program, go live, get publicity, run ads, and hopefully get users and revenue.

Advice: ASAP do not be an employee. Instead, start, own, and run a business.

Technology can be a big advantage, but for technical topics, except for just a hobby, say, mathematical physics, learn what you need when you need it for your business and otherwise strongly minimize the effort you expend on learning, say, C, C++, Java, JavaScript, jQuery, .NET, Web frameworks, C#, Python, Ruby, etc.

For the business, get some barriers to entry.

One of the very best approaches to business is just something with a strong geographical barrier to entry. So, if you work hard and smart, you actually can do okay, say, support a family, get kids through college, and have a good retirement, by doing well running a few fast food restaurants, a few gas and convenience stores, having lots of crews mowing grass, running a big truck, little truck business, say, in auto parts, plumbing supplies, electrical supplies, other industrial supplies, running a good, local building materials company, being a manufacturer's representative, running several pizza shops, a popular Italian red-sauce family restaurant, an auto repair shop, etc.

With such a business, when the kids are old enough, you can also get the wife and kids involved, which can solve lots of serious problems and be just terrific for the family. The business education and career start the kids get helping in the family business easily can be much better than any Harvard or Stanford MBA.

A simple and not wrong way to look at STEM field Ph.D. degrees is as a US Federal Government supported labor supply for highly speculative, leading edge parts of US national security and not so much for anything else. Indeed, when my career was doing well, mostly I was around DC working on US DoD projects.

Mostly the US commercial world just deeply, profoundly, bitterly hates and despises the high end STEM work done mostly for US national security.

Only for a short time just before I went for a Ph.D. did I have money enough to buy a house, but then I didn't. Since then I've never had money enough to buy a house and, thus, haven't. And, after our Ph.D. degrees, my wife and I were never in a position to have kids, so I never had any.

I owe my brother's widow a major chunk of change.

If my startup works, then I'll be able to pay back my brother's widow plus a lot, buy a house, etc.

Mostly I recommend, stay the heck out of technology.

With some high irony, there's a movie Stand and Deliver about teaching calculus to high school students in a poor, Hispanic area of Los Angeles. The idea is that calculus can be their ticket out of poverty.

Some of the evidence the teacher gives for the value of calculus is to tour some Los Angeles area aero-space firms -- right, US DoD again.

At one point, a girl in the calculus class is torn between working hard on calculus or working in the successful, family Mexican restaurant of her parents.

So, the calculus teacher goes to the restaurant, talks to the owners and parents of the girl, and claims that calculus will help their daughter.

I know calculus, advanced calculus, mathematical analysis well beyond, and many important applications. I've studied calculus, taught it, applied it, and published original research in it. I know calculus.

On the claim of the calculus teacher, that is, calculus compared with the restaurant, I call BS. Total upchuckable, delusional, wacko BS.

For that girl, on average, working in the successful, family Mexican restaurant run by her parents and, thus, learning the family business, was by far, a wide margin, much, much better for her education, career, and life than anything reasonable from calculus.

The girl's parents were fully correct. The calculus teacher was nuts.

In particular, the US Federal Government doesn't grant H1B visas for workers in successful, family Mexican restaurants in Los Angeles but does strongly fund US college education in STEM fields.

In a STEM field, you are essentially working in a market run by the US DoD and funded by the US Congress mostly only for US national security. So, it's very much not a free market. Indeed, for a while the US NSF had economists analyzing how many immigrants the US should fund in STEM field graduate programs to keep down the labor cost to the US DoD. It's a managed market, a rigged game.

For a career, you are better off running a family restaurant or any of a long list of other ordinary business opportunities.

But, if you are in technology, then maybe you can do a project, sell it off for a nice bundle, enough for you and your family to be fixed for life -- if so, then go for it.


There is a huge problem of cognitive dissonance among liberals in the USA, and the decompensation is finally reaching a critical stage.

Most liberals seem to think the Democratic Party is theirs, the natural opponent of the conservative Republicans, and the chief obstacle to the victory of their obviously correct policies is the collection of ninnies, twits, cowards and sellouts in their own ranks.

Liberals have to realize there is one conservative party, the Democrats, and one corporate party, the Republicans, an ersatz synthetic entity Frankenstein'd from the corpse of the GOP in the late 1960s.

We used to have two largely conservative parties with significant liberal wings, because the parties were mostly regional rather than driven by ideology.

Then Democratic President Johnson went way off the reservation and got historic civil rights legislation passed with the help of a bipartisan liberal coalition in Congress.

By 1968 a lot of racist southern Democrats thought POC needed to be put back in their place, and Republican nominee Nixon more or less openly invited them to join with him. And thus one of the most unpopular politicians in American history got elected president.

This left POC and civil rights supporters with nowhere to go but the Democratic party.

The key to understanding the present moment is that the Democratic leadership has always resented having to fake empathy for their liberal wing. They'd rather go back to open conservativism, but they know they have no chance of winning elections without liberal votes and money.

Liberals also have no chance to win elections on their own because the USA is essentially a conservative country. There is no natural liberal majority or coalition here. Every election of a Democratic president in the last 50 years has been due to exogenous circumstances.

If liberals want to make progress in the USA, they have to start setting realistic goals. Stop kowtowing to the conservative Democratic leadership. Start developing alternative funding streams like AOC has done. Instead of aiming for congressional majorities and the White House, focus on being spoilers in Congress, like the Greens do sometimes in Europe.

The unconscious cognitive dissonance among liberals makes them look weak and hypocritical, and has earned them the contempt even of potential allies. They need to face reality and build a new coalition from scratch by setting achievable goals and delivering on them.


I witnessed all of this since I started using Macs around '84/'85 and programming them around '89. I'm still in mourning about:

* Since Classic MacOS (OS 9 and below) didn't have a command line, it had GUIs for tweaking system settings. Better yet, it had a budget for preventing user interface issues in the first place. The user experience on Classic MacOS was simply better than anything we have today, on any platform (including iOS - and yes I realize this is subjective). The flip side is that the platform evolved faster until the late 2000s because developers could tinker more freely. Since the vast majority of users are not programmers, I don't think this was a win. To me, something priceless was lost, that may never be regained even with the incubator of the web pushing the envelope.

* I often wish that Apple had made a Linux compatibility layer. That entire ecosystem of software is simply not on the Mac fanbase's radar. This isn't such a huge issue now with containerization, but it set everything back perhaps 10-20 years. Apple did little to improve NeXT (to make it something more like BeOS, or the Amiga). We really needed an advanced, modern platform like Copland or A/UX like the article said. But in the end, Steve Jobs knew that didn't really matter to like 99% of users, and he was probably right. Still, I'm in that lucky 1% that sees the crushing burden of console tool incompatibilities and an utter lack of progress in UNIX since the mid 90s.

* Much of the macOS GUI runs in a custom Apple layer above FreeBSD (rather than using something like X11). I'm not really convinced that the windowing system is that optimized, because it used to use a representation similar to PDF. So for example, I saw weird artifacts and screen redraws back when I was doing Carbon/Cocoa game programming, especially around the time OpenGL was taking off. Quartz is powerful but I wouldn't say it's performant. A 350 MHz blue & white iMac running OS X had roughly the same Finder speed as an 8 MHz Mac Plus running System 7 or a 33 MHz 386DX running Windows 95. Does anyone know if the windowing system is open source?

I could go on, in deeper detail, but it's futile. I think that's what I truly miss most about Classic MacOS. If you ever watch a show like Halt and Catch Fire, there was a feeling back then that regular folks could write a desktop publishing application or a system extension (heck whole games like Lunatic Fringe ran in a screensaver) and you could get Apple's attention and they might even buy you out. But today it's all locked down, we're all just users and consumers.

I still love the Mac I guess, and always come back to it after using the various runners-up. But I can't get over this feeling that it stopped evolving sometime just after OS X came out, almost 20 years ago. There is this gaping void where a far-reaching, visionary GUI running on top of a truly modern architecture should be. All we have now is a sea of loose approximations of what that could be. I wish I knew how to articulate this better. Sorry about that.


FunSearch is more along the lines of how I wanted AI to evolve over the last 20 years or so, after reading Genetic Programming III by John Koza:

https://www.amazon.com/Genetic-Programming-III-Darwinian-Inv...

I wanted to use genetic algorithms (GAs) to come up with random programs run against unit tests that specify expected behavior. It sounds like they are doing something similar, finding potential solutions with neural nets (NNs)/LLMs and grading them against an "evaluator" (wish they added more details about how it works).
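
For flavor, here is a toy sketch of that basic loop: generate candidate programs, grade them against the unit tests, keep the best, mutate, repeat. It's only my own illustration of the idea, not FunSearch's evaluator or Koza's GP system.

    # Toy sketch of "evolve programs against unit tests" -- illustrative only.
    import random

    # Unit tests specify expected behavior: here the target is f(a, b) = a + 2*b.
    UNIT_TESTS = [((2, 3), 8), ((0, 5), 10), ((4, 1), 6), ((7, 0), 7)]

    def random_expr(depth=0):
        """Generate a random candidate 'program' as an arithmetic expression."""
        if depth > 2 or random.random() < 0.4:
            return random.choice(["a", "b", "1", "2"])
        op = random.choice(["+", "-", "*"])
        return f"({random_expr(depth + 1)} {op} {random_expr(depth + 1)})"

    def fitness(expr):
        """Grade a candidate by how many unit tests it passes."""
        passed = 0
        for (a, b), expected in UNIT_TESTS:
            try:
                if eval(expr, {"__builtins__": {}}, {"a": a, "b": b}) == expected:
                    passed += 1
            except Exception:
                pass
        return passed

    def mutate(expr):
        """Crude mutation: sometimes replace the candidate wholesale."""
        return random_expr() if random.random() < 0.5 else expr

    population = [random_expr() for _ in range(200)]
    best = None
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        best = population[0]
        if fitness(best) == len(UNIT_TESTS):
            break
        parents = population[:50]
        population = parents + [mutate(random.choice(parents)) for _ in range(150)]

    print(generation, best, fitness(best))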

What the article didn't mention is that above a certain level of complexity, this method begins to pull away from human supervisors to create and verify programs faster than we can review them. When they were playing with Lisp GAs back in the 1990s on Beowulf clusters, they found that the technique works extremely well, but it's difficult to tune GA parameters to evolve the best solutions reliably in the fastest time. So volume III was about re-running those experiments multiple times on clusters about 1000 times faster in the 2000s, to find correlations between parameters and outcomes. Something similar was also needed to understand how tuning NN parameters affects outcomes, but I haven't seen a good paper on whether that relationship is understood any better today.

Also GPU/SIMD hardware isn't good for GAs, since video cards are designed to run one wide algorithm instead of thousands or millions of narrow ones with subtle differences like on a cluster of CPUs. So I feel that progress on that front has been hindered for about 25 years, since I first started looking at programming FPGAs to run thousands of MIPS cores (probably ARM or RISC-V today). In other words, the perpetual AI winter we've been in for 50 years is more about poor hardware decisions and socioeconomic factors than technical challenges with the algorithms.

So I'm certain now that some combination of these old approaches will deliver AGI within 10 years. I'm just frustrated with myself that I never got to participate, since I spent all of those years writing CRUD apps or otherwise hustling in the struggle to make rent, with nothing to show for it except a roof over my head. And I'm disappointed in the wealthy for hoarding their money and not seeing the potential of the countless millions of other people as smart as they are who are trapped in wage slavery. IMHO this is the great problem of our time (explained by the pumping gas scene in Fight Club), although since AGI is the last problem in computer science, we might even see wealth inequality defeated sometime in the 2030s. Either that or we become Borg!


The Software Industry needs to take notes from this. They are the ones who have beaten the fun out of Programming and turned it into a drudgery/chore and have caused many Programmers who loved their craft to quit the field entirely. It is possible to meet customer/business needs and still create/maintain a work environment where the programming activity itself is fun for the developers. The way to do it is to abandon Taylorism Management and approach Software Development as a "Human-first" activity akin to that of a poet/painter/artist. Promote Autonomy, challenges compatible with competence, encourage novelty/learning/training and focus less on schedules, micromanagement and detailed processes.

Here's something that should be shocking:

At universities all around the world you'll sit a course called "Research Methods". It's a bit of statistics, philosophy of science, hypothesis formulation, significance testing, understanding epistemology, quality, quantity, bias.....

I've a fairly good overview of it, because I've taught it at least 10 semesters.

One of the things baked into every research methods course is search. Sometimes you learn the interfaces to specific tools for searching papers. But most of it is what you describe; boolean operators, regular expressions, sorting and filtering...

The students are still told to use Google and that this works with Google. But this information hasn't been useful for almost 5 years now.

For 5 years we've been training bachelors, masters and PhD students to go out there and use tools and techniques that are almost completely irrelevant. The primary official tool at the foundation of all academic research is broken.

Almost no research methods professors I know at any university have thought to train students to deal with advertising, spam, AI clutter, disinformation - to deal with the reality of Google as it actually is.

For 5 years we've been misleading students. Because we got so hung-up on a monopoly, we've allowed a single corporation to fuck the whole of Western academic research - because year after year I definitely see poorer results.

One day I hope the world is able to look back at the colossal cost of BigTech to culture.


Indeed! Software development is both knowledge, and creative work, but the industry treats us like replaceable cogs in an assembly line, which is NOT knowledge or creative work. We even utilize work tracking systems made FOR assembly line work, like kanban.

We KNOW that using your brain like that leads to mental exhaustion, in exactly the same way we know a tradie kneeling while they lay carpet all workweek will have fucked up knees in twenty years. We also have clear scientific studies showing that you physically cannot do knowledge work effectively for 40 hours a week, and actually start LOSING productive knowledge work performance after about 36 hours a week.

The way that I describe it to people is imagine you really like Sudoku puzzles, but now you have been contracted to do the hardest possible Sudoku puzzles you are capable of for 40 hours a week. Sure Monday might be a blast, but by Thursday your brain HURTS, and is physically tired. The human brain chews through A LOT of energy, and puts out A LOT of metabolites while doing hard thinking stuff, like knowledge work. Then you take your weekend where you desperately try and catch up on the household stuff you've been putting off because you get off work and FEEL dumb, because your brain desperately wants to turn off, so you scramble to get some of it done, and then it's Monday again.

Meanwhile, the whole time that you're struggling with using something that isn't meant to be used 8 straight hours a day (unlike our heart or leg muscles which ARE optimized around constant usage), you're basically being gaslit by the whole system. "This task is a medium" except it wasn't groomed properly and of course there's twice as much work, but you aren't allowed to modify the ticket or change how you are working on things because a guy who spends all day putting numbers into a premade Excel sheet tells you that your "predictability" is going down, or that your "throughput" is inconsistent, as if there even SHOULD be consistency in a software project that does very different things and systems in different stages of the project, and after you spent 16 years learning ironclad math rules and 4 years learning Computer Science at your school of choice, you go into the field, and find that ANY bug is possible in modern computing, and every bug WILL be insane and flow through thirteen different abstraction layers and whisper demonic thoughts into your ear and now you get fucking PTSD whenever your mom asks you "how could this bug happen in my iPhone" and you're like "hey man IDK probably the bluetooth stack corrupted the vibration motor controller and now your phone will play only country music whenever you get a text from your brother in law", or your main app crashes because there's a damn bug in uWSGI where if you use any other C-based code, it will inevitably fucking SEGFAULT because uWSGI goes out of its way to dealloc during shutdown and manages to dealloc things that don't belong to it and the group who builds uWSGI has ignored the bug for a decade, and nobody fucking gets it because in other knowledge-based work fields, nobody scoffs when you say "no you cannot use chemistry to turn lead into gold", and you don't find that occasionally simply adding NaOH to water actually results in an acidic solution somehow because a completely disconnected reaction happened in the other room where someone used the wrong brand of sulphuric acid in a reaction and now the entire lab is cursed...

Fuck I still have like 35 more years of this until I can retire and I'm borderline useless at anything else


Since you were at Apple during what is arguably its most interesting period software-wise, what is your take on the present state of computing? How do you feel about the way Unix has overtaken everything and hasn't budged? What do you make of the period at Apple that gave rise to things like Hypercard, SK8, Dylan, your own NewtonScript, OpenDoc, and a host of other promising technologies that were axed?

Lisps are "wizard" languages: the runtime semantics are kept close to the syntax, which also means that you can rapidly extend the syntax to solve a problem. This quality is true of Forth as well, and shares some energy with APL and its "one symbol for one behavior" flavor. With respect to their metaprogramming, the syntactical approach hands you a great foot-gun in that you can design syntax that is very confusing and specific to your project, which nobody else will be able to ramp up on.

But Algols, including Python, are "bureaucrat" languages: rather than condensing the syntax to be the exact specification of the program, they define a right way to form the expression, and then the little man inside the compiler rubber stamps it when you press the run button. In other words, they favor defining a semantics and then adding a syntax around that, which means that they need more syntax, it's harder to explain the behavior of a syntactical construction, and it's harder to extend to have new syntax. But by being consistent in a certain form-filling way, they enable a team to collaborate and hand off relatively more code.

IMHO, a perfectly reasonable approach I'm exploring now for personal work is to have a Lisp (or something that comes close enough in size, dynamic behavior, and convenience, like Lua) targeting a Forth. The Forth is there to be a stack machine with a REPL. You can extend the Forth upwards a little bit because it can metaprogram, or downwards to access the machine. It is better for development than a generic bytecode VM because it ships in a bootstrappable form, with everything you need for debugging and extension - it is there to be the layer that talks to the machine, as directly as possible, so nothing is hidden behind a specialized protocol. And you can use the Lisp to be the compiler, to add the rubber-stamping semantics, work through resource tracking issues, do garbage collection and complicated string parsing, and generate "dumb" Forth code where it's called for. That creates a nice mixture of legibility and configurability, where you can address the software in a nuts-and-bolts way or with "I want an algorithm generating this functionality".
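
As a concrete (if toy) illustration of that split: the high-level layer flattens nested expressions into "dumb" postfix words, and a trivial stack machine plays the role of the Forth. Python stands in for the Lisp here, and all the names are made up; it's just the shape of the idea, not a real implementation.

    # Toy illustration of "high-level layer compiles down to dumb stack code".
    # Python stands in for the Lisp; the "Forth" is a flat word list run by a
    # minimal stack machine.

    def compile_expr(expr):
        """Flatten a nested prefix expression into postfix (stack-machine) words."""
        if isinstance(expr, (int, float)):
            return [expr]
        op, *args = expr                      # e.g. ("+", ("*", 2, 3), 4)
        words = []
        for arg in args:
            words += compile_expr(arg)
        return words + [op]

    def run(words):
        """The 'inner interpreter': numbers push themselves, words pop and apply."""
        stack = []
        prims = {"+": lambda a, b: a + b,     # binary primitives only, for brevity
                 "-": lambda a, b: a - b,
                 "*": lambda a, b: a * b}
        for w in words:
            if isinstance(w, (int, float)):
                stack.append(w)
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(prims[w](a, b))
        return stack[-1]

    code = compile_expr(("+", ("*", 2, 3), 4))   # -> [2, 3, '*', 4, '+']
    assert run(code) == 10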


That's an excellent question. Does HN count?

I've thought a bit about intentional communities --- communes, utopian towns, and the like. The thought occurred some years back that amongst the most successful intentional communities are college towns. These are, hands down, some of the best places to live, and certainly on a per-population basis, in the US and Canada, based on a wide range of measures (though housing costs tend to be higher than surrounding areas).

There are a slew of smaller, non-dominant, and often quite small towns to be found around the world, though the US might be a good exemplar, whose central focus is often a university or college. Some public, some private (though virtually all benefit by public financing of research or student aid / loans).

These virtually always contrast sharply with surrounding towns, even for relatively small schools.

As to what makes these tick ... I don't have any solid evidence, but I've a few theories:

- Many of these schools were either formed or saw a sharp growth following the Sputnik scare and emphasis on higher education in the 1950s. See particularly California's Master Plan for Higher Education: <https://en.wikipedia.org/wiki/California_Master_Plan_for_Hig...>, or the UK's "Green Book": <https://www.educationengland.org.uk/documents/greenbook1941/...>

- There are a number of associated populations for the institutions, with widely varying residency periods. Students pass through in 2-8 years typically (net of transfers, drop outs, extended undergraduate programmes, a/k/a "five year" and "six year" plans, and graduate / professional programmes). Faculty tend to remain much longer, often much of their professional career (40+ years). Alumni may settle in the region (though most do not). And there is the "town" (vs. "gown") component, which may be sympathetic, adversarial, or a mix of both --- residents of the community who are not directly affiliated with the university. (Instances of town-gown conflict, including actual armed battle and shooting wars, date back to mediaeval times, e.g. the St. Scholastica Day riot of 10 Feb 1355.)

- The school itself has a central organising principle and mission, which many other intentional communities lack.

- The school has associations with other institutions, organisations, and agencies, some of higher learning, many not, and tends to form strong relationships with government, business, cultural, and religious sectors.

- Since the 19th century, official government recognition of the significance of both higher education and research has resulted in an increasing degree of official sanction and financial support, initially the German Humboldtian model, technical schools (e.g., M.I.T., founded in 1861 in large part to support the U.S. Navy's newfound interest in steam propulsion), land grant universities (organised in the US under national acts of 1862 & 1890), and the modern research university (largely spawned by the Manhattan Project and Vannevar Bush's Science, The Endless Frontier (1945) and formation of the US National Science Foundation, and widely emulated in other countries). In the UK there is a distinction made between the Ancient Universities (Oxford, Cambridge, St. Andrews, Glasgow, Aberdeen, Edinburgh, Dublin), the Red Brick Universities, chartered in the 19th century, the Plate Glass Universities, chartered between 1963 & 1992, and ... whatever comes after. See: <https://www.ukuni.net/articles/types-uk-universities>.

Note that universities themselves don't necessarily make money directly (through tuition), though some are extraordinarily wealthy (e.g., Harvard (~$50 billion), Yale (~$40 billion), Stanford (~$38 billion), Princeton (~$35 billion), see <https://en.wikipedia.org/wiki/List_of_colleges_and_universit...>). Those funds tend to come from grants (both government and privately-funded research), alumni donations, and increasingly technology licensing. In the case of Stanford, real estate is a massive contributor. Schools often also benefit from tax breaks and other legislative relief and exemptions.

So, you say, that's really interesting, dred, but how do you translate that to online communities, especially those for which locality and location are not central elements, as they are for brick-and-mortar institutions?

I don't know, though ... I've been pondering just that for a decade or so.

The insight does suggest a few solution-shaped objects and/or characteristics, however:

- A key failing of venerable fora is that the membership often becomes exceedingly stale. Not only do new members fail to arrive, but the more interesting and dynamic members of the old guard often leave as both the noise floor rises and the clue ceiling drops. Reward for participation simply decreases. Universities subvert this by pumping fresh students through. I suspect HN's YC affiliation and fresh founder classes in part aids HN in this regard.

- A forum is almost certainly not a freestanding enterprise but an adjunct to another institution or set of institutions. Again, HN serves, but does not profit, YC.

- Universities are mission rather than profit driven, and both teaching and research are a key element of that mission. This ... plays poorly with the notion of a VC-funded online community start-up. Ezra Klein in a podcast on media earlier this year noted that a key challenge in organising new ventures is that the profit motive and VC / investor interests tend to conflict strongly with journalism's prerogatives.[1]

- Several of the most successful previous online communities formed either directly through or closely affiliated with educational institutions. The Internet itself, email, and Usenet directly, Facebook originated on the campus (and with the student body) of the most selective-admission university in the world, and I'd argue that Slashdot's early tech-centric membership was at least strongly academic-adjacent.

- Universities are focused not only on the present moment, that is, streams, but on accumulated wisdom and knowledge. Here, HN is less a model than, say, Wikipedia and the Wikimedia foundation, in which something of a community forms through the editor community which creates (and fights over) the informational resources being created. Wikipedia doesn't quite have a social network, though various discussion pages and sections approach this.

- On the "small" bit, there's both a selective-admissions and graduation element that academia shows. That is, you don't just let anybody in, and, after they've "completed the course", they're graduated and moved on, with the exception, again, of faculty and staff. Just how that translates to an online community I'm not entirely sure.

- Another element of the "small" bit is that universities are organised: into colleges (that is, interest areas), departments (specific faculty), courses (specific topics of study or interest) and sections, that is, specific groups or meetings of students for lecture and/or discussion. Individual class size is a key dynamic, and much of the experience of the past 75 years or so shows the challenges of scaling lectures and the profoundly different characteristics of a small seminar (say, 5--15 students), a modest upper-division class (25--30), and moderate-to-large lectures (50 -- 1,000 or more students). Strong interactivity is sharply curtailed above about 15 students, and the options for interactivity above about 50--100 are near nil. Choosing how groups are organised, who's permitted in, and what size limits exist, as well as communications between various divisions (sections, courses, departments, colleges, universities) all come into play, as I see it.

And then there's politics. One of the notorious elements of universities is how various divisions rival amongst one another, gatekeep, define what is in (or out) of a specific discipline's remit, resist challenging new concepts, and form cliques and fads ... just like any human domain, only more so. I have a nagging suspicion that online communities might in fact have similar tendencies, and that these would also have to be subverted somewhat to avoid pathological development.

There are a whole slew of other factors --- technical capabilities, UI/UX, online abuse, legal issues, privacy and identity, spam, propaganda, surveillance, censorship, etc. So many dumb ways to die.

________________________________

Notes:

1. "How the $500 Billion Attention Industry Really Works" (14 Feb 2023), interviewing Tim Hwang. Specifically: "If you’re able to aggregate a lot of attention online, we just have this almost religious faith that there’s just some way that you’ve got to be able to turn this into money. You will become a Google. You will become a Facebook.... [T]he flip side of that [is] that if you come to a V.C. and you say, I want to do a subscription business model, they’ll say, well, I don’t know — we don’t have a whole lot of examples of that really blowing up, so why don’t you just do advertising?" <https://www.nytimes.com/2023/02/14/podcasts/transcript-ezra-...> Which is to say: unless you're planning a pure-play advertising monetisation model, which is to say, the Sidam Touch (advertising turns everything to shit), you won't get funding.


“If you pay a man a salary for doing research, he and you will want to have something to point to at the end of the year to show that the money has not been wasted. In promising work of the highest class, however, results do not come in this regular fashion, in fact years may pass without any tangible result being obtained, and the position of the paid worker would be very embarrassing and he would naturally take to work on a lower, or at any rate a different plane where he could be sure of getting year by year tangible results which would justify his salary. The position is this: You want one kind of research, but, if you pay a man to do it, it will drive him to research of a different kind. The only thing to do is to pay him for doing something else and give him enough leisure to do research for the love of it.”

-- Attributed to J.J. Thomson (although I've not been able to turn up a definitive citation -- anyone know where it comes from?)

Does make me think some people have been pondering this for a long while though.


> Loyalty needs to go two ways

This is one of the more perplexing elements of both working as an IC and a manager. I'm constantly battling the line between treating the relationship as purely transactional and one where work is more meaningful than that. The problem is that whenever you bring other things in - "loyalty", "passion", etc. - you are at very high risk that these are circumstantially dependent on the coincidental alignment of the employee's interests and those of the organisation. The minute that changes, you will find you now have an employee whose motivation is at odds with the interests of the larger organisation, and all kinds of dysfunction stems from that. The only truly stable basis for alignment is that the organisation needs work done and the employee needs money. It's great if there is more than that, but the minute anybody relies on it they are essentially rolling the dice with some odds it turns against them.


It can be done. I did it. In my mid 40s I developed a chronic illness that rendered me unable to work for a few years. It bankrupted me and sank my career before I found the right doctor and started rebuilding. I had to more or less start over from nothing.

The life I've had since then is not the one I thought I was going to have. For one thing, I have a lot less money than I thought I was going to have. My professional career went in a very different direction from what I expected. My hobbies changed drastically. My circle of friends changed, too.

Still, in many ways I think the life I have now is better than the one I was building before. I miss some things, but I'm happier with this one than the one I had before.

So don't give up. Your best days may well be ahead of you.

Get some professional help. It sounds like you're going to need to travel to do it, but try. A good therapist will teach you skills you can use to help yourself. Of course, it's then up to you to actually use them.

Until then, train your attention to focus on things that help you instead of things that hurt you. It's hard to change what you habitually think about, but it can be done. Our minds want to wander back to familiar thoughts, especially ones with a lot of emotional power, but you can persist. You can keep turning your attention to more constructive things, and it will gradually get easier to do. Just don't give up when it doesn't get instantly better. It takes time and persistence.

Telling yourself things like "I'm past my prime", "It's too late," "I let life pass me by" can't help you, but they can hurt you. So use your attention differently. Focus on what's right in front of you, not on what's behind you. Focus on what you can do, not on what you can't do. Measure your life by what you think is worth doing for its own sake, not by comparing yourself to imaginary outcomes.

Choose things to do that you respect, and do them because you respect them. Don't worry too much about the outcomes; those are mostly out of your hands, anyway. Just keep your attention on things that you respect and believe in.

When you do, sometimes what you do will help other people. Sometimes those people will want to return the favor. Sometimes those returned favors will turn into friendships. Sometimes your friends will help you.

It worked for me. It might work for you. I wish you the best of luck.


I think this is a fair assessment. I agree that Smalltalk is by far not a complete solution to the problem of building and maintaining complex systems but "only" an attempt.

I found that message passing is an elegant approach to interoperability at a very basic level. But when protocols and interactions between objects get more complex, it becomes more difficult to retain control and comprehension of the evolving system; thus, fundamentally better approaches and methods are needed than what is present in a typical Smalltalk system.

You might be interested in watching Alan Kay's seminar on object-oriented programming, in which he sketches some ideas on how to modularize an OO system, notably, using a kind of specification language to describe the functions/needs of components and letting the underlying system figure out how to hook them up and deliver messages automatically (as opposed to the direct message passing style in traditional Smalltalks). The relevant part can be found here [1], but I found the entire talk worth watching, since a whole set of issues with OOP and Smalltalk (difficulties in finding and reusing components, weak generality) is being touched upon.
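
One (very loose) way to read the "describe the needs of components and let the system hook them up" idea is a broker that matches components by the capabilities they provide and request, rather than hard-wiring senders to receivers. Below is a toy Python sketch of that reading; it is only my own interpretation, with invented names, not anything from the seminar or from an actual Smalltalk.

    # Toy interpretation of "components describe needs/provides and the
    # system wires them up" -- my own sketch with invented names.

    class Broker:
        def __init__(self):
            self.providers = {}                 # capability name -> handler

        def register(self, provides, handler):
            """A component declares what it can do."""
            self.providers[provides] = handler

        def request(self, capability, *args):
            # The requester names a need, not a receiver; the broker decides
            # which component (if any) satisfies it, so senders and receivers
            # are never hard-wired to each other.
            handler = self.providers.get(capability)
            if handler is None:
                raise LookupError(f"nothing provides {capability!r}")
            return handler(*args)

    broker = Broker()
    broker.register("spell-check", lambda text: [w for w in text.split() if w == "teh"])
    broker.register("render", lambda text: f"<p>{text}</p>")

    # A document component asks for capabilities by description, without
    # knowing which object implements them.
    typos = broker.request("spell-check", "teh quick brown fox")
    html = broker.request("render", "hello")
    assert typos == ["teh"] and html == "<p>hello</p>"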

Unfortunately, as far as I know, none of the critical ideas have been crystallized into a new kind of Smalltalk - which would be more focused on working on sets of components instead of individual objects/classes (or paraphrasing Alan Kay, making "tissues").

[1] https://www.youtube.com/watch?v=QjJaFG63Hlo&t=5775s


The illusion that we are supposed to uphold is that brilliance and serendipity can be planned for, can be scheduled and made into an academic program. Actually, research and scientific discovery should never be someone's job. It should be a side product of general activity in a field, something that is already useful even without the discovery (if you're poor). Or just leisure hobby activity that is fine if nothing comes out of it (if you're rich).

It's also part of the whole inflation and treadmill thing going on in education and titles. Generations ago, going to high school was something significant. For the next generation, high school was the default and college became a special thing. Now the majority (in the US) goes to college. Within the family it feels like satisfying progress. Parents look on their kids and feel proud that they are making it further than they themselves did. But now a Master's degree can be a bit too generic. Having a PhD can be nice in industry nowadays (though of course not required).

So what used to be a special thing for a limited circle of select few with the background and fallback that allowed failure, is now becoming a factory process. PhDs are produced by the thousands and thousands, and we pretend they all pushed science forward. There is a flood of papers accompanying this. Tiny steps forward or mixing and matching existing things, but selling a big story around it, flag planting, trying to claim a huge area when only a tiny aspect of it is actually demonstrated, drawing out huge conclusions etc.

There are many reasons for this. The general societal inflation is just one thing. The other is the overall quantification, standardization and uniformization in our zeitgeist. There must be a standard process for everything. The politicians and bureaucrats demand to have a standard process for innovation. You must know in advance exactly "how many pieces of innovation" will you produce per year, what will be the impact of it etc. Then when you tell them about your plans, all they hear is the number of buzzwords mentioned from the latest fads and tune out otherwise. Ah and they look at your publication list of course. If you churned out many papers in the past, you will probably do that as well in this round of funding, and those papers can all be attached to the funding agency reports, so that will make the agency look good towards the ministry or whatever.


Very, very nice, but to be more clear: it's not just Chrome.

The modern web is the surveillance web because the modern web is "an endpoint" for users (which means a new mainframe dumb terminal, at a far higher price and complexity than the old ones) and someone else's service, typically run in giant cloud datacenters.

To avoid that, and still have the modern standard of living PLUS MUCH MORE, we already had a solution: desktop computing. The first one on record in the modern sense AFAIK was the Xerox Star office system, but even the older NLS (1968!!!) had networking, videoconferencing and screen sharing. YES, you read that correctly. The dates are not wrong.

So what? It's a real mess because, since Xerox, except for LispM and Plan 9, we have not developed desktop computing any further. The old model might still be rediscovered; at least it still exists, but on actual iron that is very badly designed even if far more powerful in mere horsepower terms, and with essentially no modern software. The last living vestige is Emacs, which is VERY good but not evolved enough for actual end users, so we need:

- public universities, funded adequately by the government and the government only, with a very effective separation from the private sector, to bring back classic research done not for short-term business but for society;

- enough years to write a modern desktop, which means around a decade, because it's not just a matter of code: it's a matter of learning how classic systems work and designing a new one, something most of today's devs do not know and have trouble even imagining, and then time to develop it, to spread it, etc.

The other option? Well... a small Canadian TV series (Continuum) recently gave a nice idea of it, perhaps mixed with a small French movie (Virtual Revolution) for a more realistic one. Be prepared for a future where you will get to eat only if some IT giant gives food to you. Be prepared to live to work instead of working to live, because yes, I'm not joking or exaggerating; it's just a matter of observing the world today.


Thanks for asking! Hold my bong. ;)

I've never had any reason to use Wayland or desire to learn much about it, so I can't tell you anything technical or from first hand experience.

But I think we have X-Windows to thank for the fact that most Unix programmers use Macs now.

And it's way too late for Wayland to change that fact. Especially with the release of the M1 Max. That ship has sailed.

It didn't matter how much better NeWS was than X-Windows -- it still lost despite all its merits.

And I don't see Wayland as being any bigger an improvement over X-Windows than NeWS was, decades ago.

So simply "being better than X" is not sufficient to displace it, and Wayland isn't even that much better than X, and is even worse in some ways.

The fact that it didn't occur to the Wayland designers to make it extensible with an embedded scripting language like PostScript in NeWS or Lisp in Emacs or Python in Blender or Lua in Redis or JavaScript in web browsers means that it was obsolete from the start, and its designers should have considered the design and architecture of NeWS and Emacs and Blender and Redis and web browsers before designing Yet-Another-X-Windows-Clone. It's not like those ideas were secret, or patented, or hard to find, or wrong.
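
For what it's worth, here is a toy sketch (in Python, with made-up names, nothing to do with the real NeWS, X11 or Wayland APIs) of what an extension language inside the window server buys you: window management policy becomes a snippet of code downloaded into the server and evaluated next to the server's own state, instead of a separate process negotiating over an asynchronous protocol.

    # Toy sketch of "put an interpreter in the window server": policy code is
    # sent into the server and runs there, so events are handled locally with
    # no asynchronous round trip to an external window manager. Illustrative
    # only; these are invented names, not any real windowing API.

    class WindowServer:
        def __init__(self):
            self.windows = {}              # window id -> {"x": .., "y": .., "title": ..}
            self.handlers = {}             # event name -> compiled policy code

        def define_policy(self, event, source):
            """A client downloads behavior into the server, NeWS-style."""
            self.handlers[event] = compile(source, f"<policy:{event}>", "exec")

        def dispatch(self, event, **payload):
            # The policy runs inside the server process, against the server's
            # own window state: no grabs, no races with a remote WM.
            if event in self.handlers:
                exec(self.handlers[event], {"windows": self.windows, "event": payload})

    server = WindowServer()
    server.windows["w1"] = {"x": 10, "y": 10, "title": "console"}

    # The "window manager" is a few lines of script living in the server.
    server.define_policy("drag",
        "w = windows[event['window']]; w['x'] += event['dx']; w['y'] += event['dy']")

    server.dispatch("drag", window="w1", dx=5, dy=-3)
    assert server.windows["w1"] == {"x": 15, "y": 7, "title": "console"}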

The world moved up the stack a layer to the web browser, and that's where all the excitement's happening these days, not in the window system layer.

Why have X-Windows or Wayland at all, when you could just run the web browser itself directly on the hardware, as efficiently and flexibly as possible, and implement all your user interface, window management and desktop stuff with modern open standard JavaScript / WebAssembly / JSON / XML / HTML / SVG / CSS / Canvas / WebGL / WebGPU / HTTPS / WebRTC?

I've written about this numerous times before, but I'll transclude and reorganize some of it with checked and updated archive urls to save you the pointing and clicking:

https://news.ycombinator.com/item?id=5844345

>colanderman>And this is my primary issue with Wayland. I cannot fathom why anyone would think it's a sound design decision to bundle a hardware-independent component (the window manager) with a hardware-dependent component (the compositor). This hearkens back to the days of DOS video games – what fun it was to implement support for everyone's sound card! Instead now we'll get to support KMS, Quartz, whatever-the-heck BSD uses, etc.

>Just put a JavaScript (or whatever) interpreter in the window server, and program the window manager locally in that. Then you aren't fucked by synchronization issues. James Gosling did something like that with PostScript many years ago, an alternative to X11, which was then merged with X11, and it was called NeWS (and later X11/NeWS or OpenWindows):

http://en.wikipedia.org/wiki/NeWS

>I've written several window managers / user interface toolkits / tabbed window frames / menu system in PostScript for NeWS. We even wrote an X11 window manager in PostScript, complete with rooms, scrolling virtual desktop, tabbed windows, pie menus, and seamless integration of X11 and NeWS windows.

https://www.youtube.com/watch?v=tMcmQk-q0k4

https://news.ycombinator.com/item?id=5845119

>Having the window manager running as an outboard process, communicating via an asynchronous protocol, is a terrible idea, but is baked into X11 at a very low level -- with the ICCCM thrown in as a kludgy afterthought. The X11 protocol has hacks and band-aids like "mouse button grabs" and "server grabs" to mitigate the problems, but they cause a lot of other problems, complexity and inefficiencies on their own.

https://news.ycombinator.com/item?id=20182294

>That's exactly the point of using HyperLook as a window manager, which we were discussing at Sun in 1991 before they canceled NeWS.

>HyperLook, which was inspired by HyperCard, but written in PostScript for the NeWS window system, let you build specialized task-oriented user interface by assembling and customizing and scripting together components and applets into their own "stacks".

https://news.ycombinator.com/item?id=19007885

>HyperLook (which ran on NeWS, and was like a networked version of Hypercard based on PostScript) was kind of like looking outwards through windows, in the way Rob Pike described the Blit (although it was architecturally quite different).

>I think his point was that window frames should be like dynamically typed polymorphic collections possibly containing multiple differently typed components that can change over time (like JavaScript arrays), not statically typed single-element containers that are permanently bound to the same component type which can't ever change (like C variables).

>With X11, the client comes first, and the window manager simply slaps a generic window frame around it, subject to some pre-defined client-specified customizations like ICCCM properties to control the window dressing, but not much more, and not under the control of the user.

>With HyperLook, the "stack/background/card" or window frame came first, and you (the user at runtime, not just the developer at design time) could compose any other components and applications together in your own stacks, by copying and pasting them out of other stacks or warehouses of pre-configured components.

https://donhopkins.medium.com/hyperlook-nee-hypernews-nee-go...

>For example, I implemented SimCity for HyperLook as a client that ran in a separate process than the window server, and sent messages over the local network and drew into shared memory bitmaps. SimCity had its own custom frames ("screw-up windows") that looked more "mayoral" than normal windows.

https://cdn-images-1.medium.com/max/2560/0*ZM8s95LNxemc5Enz....

>There were multiple map and editor views that you could copy and paste into other stacks while they were running, or you could copy widgets like buttons or drawing editors into the SimCity stack while it was running. So you could copy and paste the "RCI" gauge into your graphics editor window to keep an eye on it while you worked, or paste a clock into your SimCity window to see when it was time to stop playing and get some sleep. And you could even hit the "props" key on the clock to bring up a PostScript graphics editor that let you totally customize how it looked!

https://cdn-images-1.medium.com/max/600/0*oHtC0F5qK83ADw1H.g...

>It had "warehouse" stacks containing collections of pre-configured component prototypes (including widgets, applets, and window management controls like close buttons, resize corners, navigation buttons, clocks, graphics editors, etc) that you could copy and paste into your own stacks, and which would automatically be listed in the "New Object" menu you get in edit mode, to create them in place without opening up the warehouse:

https://cdn-images-1.medium.com/max/600/0*sHClGU8ALljuRQKb.g...

https://cdn-images-1.medium.com/max/600/0*QwIQ_GLxQl1v968F.g...

https://cdn-images-1.medium.com/max/600/0*aWbuo6k_eJuZnUmV.g...

https://cdn-images-1.medium.com/max/600/0*zya4vNBP3libpNSA.g...

https://cdn-images-1.medium.com/max/600/1*G0LWky2iejYm4IGBsU...

>Normal X11 window managers put the cart before the donkey. The window frames are generic, stupid and unscriptable, and can't even be customized by the user, or saved and restored later. The client comes first, with just one client per generic frame. You can't move or copy a widget or panel from one frame to another, to use them together in the same window. That's terribly limited and primitive.

>We implemented a more-or-less traditional (but better than normal) ICCCM X11 window manager in PostScript for X11/NeWS, which had tabbed windows, pie menus, rooms, scrolling virtual desktop, etc, uniformly for all NeWS apps and X11 clients. But the next step (not NeXT Step) was to use HyperLook to break out of the limited one-client-per-frame paradigm.

http://www.art.net/~hopkins/Don/unix-haters/x-windows/i39l.h...

>(It's interesting to note that David Rosenthal, the author of the ICCCM specification, was also one of the architects of NeWS, along with James Gosling.)

>Our plan was to use HyperLook to implement a totally different kind of integrated scriptable X11 (and NeWS) window manager, so you could compose multiple X11 clients into the same frame along with HyperLook widgets and scripts. You could integrate multiple X11 clients and NeWS components into fully customizable seamless task oriented user interfaces, instead of switching between windows and copying and pasting between a bunch of separate monolithic clients that don't know about each other. But unfortunately Sun canceled NeWS before we could do that.

>Here's some more stuff I've written about that stuff, and how window managers should be implemented today in JavaScript:

https://news.ycombinator.com/item?id=18837730

>That's how I implemented tabbed windows in 1988 for the NeWS window system and UniPress Emacs (aka Gosling Emacs aka Evil Software Hoarder Emacs), which supported multiple windows on NeWS and X11 long before GNU Emacs did. That seemed to me like the most obvious way to do it at the time, since tabs along the top or bottom edges were extremely wasteful of screen space. (That was on a big hi-res Sun workstation screen, not a tiny little lo-res Mac display, so you could open a lot more windows, especially with Emacs.)

>It makes them more like a vertical linear menu of opened windows, so you can read all their titles, fit many of them on the screen at once, instantly access any one, and pop up pie menus on the tabs to perform window management commands even if the windows themselves are not visible.

https://news.ycombinator.com/item?id=18865242

>But now that there's another Web Browser layer slapped on top of X-Windows (and the terms "client" and "server" switched places), you're stuck with two competing mutually incompatible window managers nested inside of each other, one for the outside frames around apps, and one for the inside tabs around web pages, that don't cooperate with each other, and operate uncannily differently.

>So none of your desktop apps benefit from any of the features of the web browser (unless they include their own web browser like Electron).

>And you can't write lightweight apps in a couple pages of script like "Big Brother Eyes" that run in the shared window server, without requiring their own separate external app with a huge runtime like Electron.

https://www.donhopkins.com/home/archive/news-tape/fun/eye/ey...

>Nor can you easily integrate desktop applications and widgets and web pages together with scripts, the way HyperCard, OpenDoc, CyberDog and HyperLook enabled.

https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-g...

>Here's how the web browser and window manager should work together seamlessly:

https://news.ycombinator.com/item?id=18837730

>>Now Firefox and Chrome still need decent built-in universally supported and user customizable pie menus, but unfortunately the popup window extension API is inadequate to support them, because there's no way to make them pop up centered on the cursor, or control the popup window shape and transparency. Unfortunately they were only thinking of drop-down linear menus when they designed the API. (Stop thinking inside that damn rectangular box, people!)

>>But I remain hopeful that somebody will eventually rediscover pie menus in combination with tabbed windows for the web browser, and implement them properly (not constrained to pop up inside the browser window and be clipped by the window frame, and not just one gimmicky hard-coded menu that users can't change and developers can't use in their own applications). But the poorly designed browser extension APIs still have a hell of a lot of catching up to do with what was trivial to do in NeWS for all windows 30 years ago.
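A minimal TypeScript sketch of the core pie-menu interaction being described (my own illustration, not the NeWS or browser-extension code): the menu pops up centered on the cursor, and a slice is chosen purely by the direction of pointer movement, with a small dead zone for cancelling.

    interface PieMenu {
      items: string[];
      centerX: number;
      centerY: number;
      deadZone: number; // radius in px; releasing inside it cancels
    }

    // Pop the menu up centered exactly on the cursor position.
    function openAtCursor(items: string[], e: MouseEvent, deadZone = 12): PieMenu {
      return { items, centerX: e.clientX, centerY: e.clientY, deadZone };
    }

    // Select a slice by direction only; distance past the dead zone is free,
    // which is what makes pie menus fast and friendly to mousing ahead.
    function selectedItem(menu: PieMenu, x: number, y: number): string | null {
      const dx = x - menu.centerX;
      const dy = y - menu.centerY;
      if (Math.hypot(dx, dy) < menu.deadZone) return null;
      const angle = (Math.atan2(dx, -dy) + 2 * Math.PI) % (2 * Math.PI); // 0 = up, clockwise
      const slice = (2 * Math.PI) / menu.items.length;
      const index = Math.floor(((angle + slice / 2) % (2 * Math.PI)) / slice);
      return menu.items[index];
    }

    // e.g. selectedItem(openAtCursor(["Close", "Move", "Resize", "Iconify"], click), 140, 80)
    // would pick the slice in the direction the pointer moved from the click point.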

>And this is why a modern window manager should be written in JavaScript, leverage HTML, Canvas and WebGL, and support Accessibility APIs, as well as screen scraping, pattern recognition, screen casting, virtual desktops, overlays, drawing and image composition, input event synthesis, journaling, macros, runtime user customization and scripting, programming by demonstration, tabbed windows, pie menus, etc:

https://news.ycombinator.com/item?id=18797587

>[...] About 5 years ago I opened this issue, describing an experiment I did making the web browser in a topmost window with a transparent background to implement user interface overlays scripted in HTML.

>WebView window for HTML user interfaces like pie menus to Slate. #322:

https://github.com/jigish/slate/issues/322

>Slate used a hidden WebView for its scripting engine. So I made it un-hidden and float on top of all the other windows, and was easily able to use it to draw any kind of user interface stuff on top of all the other Mac windows. And I could track the position of windows and draw a little clickable tab next to or on top of the window title bar, that you could click on to pop up a pie menu.

>It actually worked! But I didn't take it much further, because I never got any feedback on the issue I opened, so I gave up on using Slate itself, and never got around to starting my own JavaScript window manager (like you did!). I opened my issue in June 2013, but the last commit was Feb 2013, so development must have stopped by then.

>[...] Think of it like augmented reality for virtualizing desktop user interfaces and web pages.

https://news.ycombinator.com/item?id=5861229
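A hedged sketch of the overlay experiment described above, assuming a hypothetical host call (getWindowRects()) that reports native window frames; the transparent full-screen canvas, tab drawing, and hit-testing are standard DOM TypeScript, but the host integration is invented for illustration.

    interface WinRect { id: string; title: string; x: number; y: number; w: number; h: number }

    // Hypothetical host-provided API reporting native window frames.
    declare function getWindowRects(): WinRect[];

    // A full-screen, transparent, topmost canvas acting as the overlay.
    const canvas = document.createElement("canvas");
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;
    canvas.style.cssText = "position:fixed;inset:0;background:transparent";
    document.body.appendChild(canvas);
    const ctx = canvas.getContext("2d")!;

    const TAB = { w: 24, h: 24 }; // size of the clickable tab per window

    function drawTabs(wins: WinRect[]): void {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      for (const win of wins) {
        // Draw a small tab hanging off the left edge of each title bar.
        ctx.fillStyle = "rgba(60,60,60,0.8)";
        ctx.fillRect(win.x - TAB.w, win.y, TAB.w, TAB.h);
      }
    }

    canvas.addEventListener("click", (e) => {
      for (const win of getWindowRects()) {
        const inTab =
          e.clientX >= win.x - TAB.w && e.clientX < win.x &&
          e.clientY >= win.y && e.clientY < win.y + TAB.h;
        if (inTab) {
          // A real version would pop up a pie menu of window commands here.
          console.log("window menu for", win.title);
        }
      }
    });

    setInterval(() => drawTabs(getWindowRects()), 100); // track moving windows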

>[...] So I adapted the "uwm" window manager to support pie menus, so you could define your own pie menus and linear menus in your .uwmrc file. Because a window manager could really benefit from pie menus: lots of the commands are spatially oriented and can be arranged in appropriate mnemonic directions, and they have a fixed set of common window management commands that can be thoughtfully arranged into a set of efficient pie menus, as well as a menu definition language to enable users to create custom menus for running apps, connecting with remote hosts, etc.

>[...] I had been using Mitch Bradley's "Forthmacs", which was a very nice Forth system for the 68k. (It eventually evolved into the Sun 4 Forth boot ROMs, OpenFirmware, and the OLPC boot ROMs). It was capable of dynamically linking in C code (well, not DLLs or shared libraries, but it would actually call the linker to relocate the code to the appropriate place in Forth's address space, read in the relocated code, and write Forth wrappers, so you could call C code from Forth, pass parameters, etc -- SunOS 4.2 didn't have relocatable shared libraries or lightweight threads back then, so Forth had to do a lot of the heavy lifting to plug in C code). But Forth was a really great way to integrate a library into an interactive extension language, call it from the Forth command line, build on top of C code in Forth, call back and forth between C and Forth, play around with it from the Forth command line, etc.

https://news.ycombinator.com/item?id=16839825

>Inspired by HyperCard, we (old Sun NeWS hands) also pleaded until we were blue in the face to make HyperLook (a NeWS/PostScript/network based reinterpretation of HyperCard) the window manager / editable scriptable desktop environment!

https://news.ycombinator.com/item?id=18314265

https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-g...

>Alan Kay on NeWS:

>“I thought NeWS was ‘the right way to go’ (except it missed the live system underneath). It was also very early in commercial personal computing to be able to do a UI using Postscript, so it was impressive that the implementation worked at all.” -Alan Kay

>What’s the Big Deal About HyperCard?

>"I thought HyperCard was quite brilliant in the end-user problems it solved. (It would have been wonderfully better with a deep dynamic language underneath, but I think part of the success of the design is that they didn’t have all the degrees of freedom to worry about, and were just able to concentrate on their end-user’s direct needs."

>"HyperCard is an especially good example of a system that was “finished and smoothed and documented” beautifully. It deserved to be successful. And Apple blew it by not making the design framework the basis of a web browser (as old Parc hands advised in the early 90s …)" -Alan Kay

https://news.ycombinator.com/item?id=8546507

>HyperLook was so far ahead of its time in 1989, that there still isn't anything quite like it for modern technology. Since we developed HyperLook and SimCity at the same time, that forced us to eat our own dog food, and ensure that HyperLook supported everything you needed to develop real world applications. (Not to imply that SimCity is a real world! ;)

http://www.art.net/~hopkins/Don/unix-haters/x-windows/i39l.h...

>Who Should Manage the Windows, X11 or NeWS?

>This is a discussion of ICCCM Window Management for X11/NeWS. One of the horrible problems of X11/NeWS was window management. The X people wanted to wrap NeWS windows up in X frames (that is, OLWM). The NeWS people wanted to do it the other way around, and prototyped an ICCCM window manager in NeWS (mostly object oriented PostScript, and a tiny bit of C), that wrapped X windows up in NeWS window frames.

>Why wrap X windows in NeWS frames? Because NeWS is much better at window management than X. On the surface, it was easy to implement lots of cool features. But deeper, NeWS is capable of synchronizing input events much more reliably than X11, so it can manage the input focus perfectly, where asynchronous X11 window managers fall flat on their face by definition.

>Our next step (if you'll pardon the allusion) was to use HyperNeWS (renamed HyperLook, a graphical user interface system like HyperCard with PostScript) to implement a totally customizable X window manager!

https://news.ycombinator.com/item?id=13785384

>>You can't do opengl efficiently over the network. At least xorg can't. Most applications use opengl these days.

>Sure you can, it's just called WebGL! ;)

>They just added another layer, flipped the words "server" and "client" around, and added more hardware.

>Now you run the web browser client on top of the local window system server, through shared memory, without using the network. And both the browser and GPU are locally programmable!

>And then the local web browser client accesses remote web servers over the network, instead of ever using X11's networking ability.

>One way of looking at it in the X11 sense is that a remote app running in the web server acts as a client of the local window server's display and GPU hardware, by downloading JavaScript code to run in the web browser (acting as a programmable middleman near the display), and also shader code to run in the window server's GPU.

>Trying to pigeonhole practices like distributed network and GPU programming into simplistic dichotomies like "client/server," or partition user interface programming into holy trinities like "model/view/controller," just oversimplifies reality and unnecessarily limits designs.

https://news.ycombinator.com/item?id=20183838

>Imglorp> Hi Don! Tabs aside, NeWS got so many things right, and not just at the UI but the underlying idea of code instead of bitmaps. I wonder if we were going to start afresh with Wayland/X11 now, bare hardware, what place do you think the NeWS lessons would have? Would a modern NeWS be on top of PS or is there something better now?

>Hi Imglorp!

>Simply use a standard JavaScript / WebAssembly / WebGL / Canvas / HTML based web browser as the window system itself! And use WebSocket/SocketIO/RTP/HTTP instead of the X-Windows or VNC protocols.
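A minimal sketch of that idea, under assumptions of my own: a remote app sends simple JSON draw messages over a WebSocket, and the browser acts as the "window server", giving each app a canvas-backed window. The message schema and the placeholder URL are invented for illustration.

    type DrawMsg =
      | { op: "open"; win: string; width: number; height: number; title: string }
      | { op: "rect"; win: string; x: number; y: number; w: number; h: number; color: string }
      | { op: "text"; win: string; x: number; y: number; body: string };

    const windows = new Map<string, CanvasRenderingContext2D>();
    const sock = new WebSocket("wss://example.invalid/window-server"); // placeholder URL

    sock.onmessage = (ev) => {
      const msg: DrawMsg = JSON.parse(ev.data);
      if (msg.op === "open") {
        // Each "window" is just a canvas; a real WM would add frames, tabs, menus.
        const canvas = document.createElement("canvas");
        canvas.width = msg.width;
        canvas.height = msg.height;
        canvas.title = msg.title;
        document.body.appendChild(canvas);
        windows.set(msg.win, canvas.getContext("2d")!);
        return;
      }
      const ctx = windows.get(msg.win);
      if (!ctx) return;
      if (msg.op === "rect") {
        ctx.fillStyle = msg.color;
        ctx.fillRect(msg.x, msg.y, msg.w, msg.h);
      } else {
        ctx.fillText(msg.body, msg.x, msg.y);
      }
    };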

>Microsoft kinda-sorta did a half-assed inside-out version of that with Active Desktop, but Internet Explorer wasn't powerful or stable enough to do it right, and it didn't eliminate and replace the whole Win32 / MFC layer, which misses the main point.

https://en.wikipedia.org/wiki/Active_Desktop

>There was recently this discussion about a browser based Mac window manager project (now offline), and there have been others like it (Slate), but the ideal goal is to completely eliminate the underlying window system and just use pure open web technologies directly on the metal:

>Show HN: Autumn – A macOS window manager for (Type|Java)Script hackers (sephware.com):

https://news.ycombinator.com/item?id=18794928

>Site archive:

https://web.archive.org/web/20190101121003/https://sephware....

>Comments:


https://news.ycombinator.com/item?id=18797587

>I was also quite inspired by Slate. Unfortunately there hasn't been any activity with it for about 5 years or so. It's great you're picking up the mantle and running with it, because the essential idea is great!

https://news.ycombinator.com/item?id=18797818

>Here are some other interesting things related to scriptable window management and accessibility to check out:

>aQuery -- Like jQuery for Accessibility

https://web.archive.org/web/20180317054320/https://donhopkin...

>It would also be great to flesh out the accessibility and speech recognition APIs, and make it possible to write all kinds of intelligent application automation and integration scripts, bots, with nice HTML user interfaces in JavaScript. Take a look at what Dragon Naturally Speaking has done with Python:

https://github.com/t4ngo/dragonfly

>Morgan Dixon's work with Prefab is brilliant.

>I would like to discuss how we could integrate Prefab with a JavaScriptable, extensible API like aQuery, so you could write "selectors" that used Prefab's pattern recognition techniques, bind those to JavaScript event handlers, and write high level widgets on top of that in JavaScript, and implement the graphical overlays and GUI enhancements in HTML/Canvas/etc like I've done with Slate and the WebView overlay.
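To make the shape of such an API concrete, here is a hedged TypeScript sketch of what an aQuery-style selector layer could look like; this is my own illustration, not the actual aQuery code, and the accessibility backend (queryAccessibilityTree) is a hypothetical host function.

    interface AXNode {
      role: string;         // e.g. "button", "window", "textfield"
      name: string;         // accessible label
      bounds: { x: number; y: number; w: number; h: number };
      press(): void;        // invoke the element's default action
    }

    // Hypothetical host function resolving a selector against the accessibility tree.
    declare function queryAccessibilityTree(selector: string): AXNode[];

    class AQuery {
      constructor(private nodes: AXNode[]) {}
      filter(pred: (n: AXNode) => boolean): AQuery {
        return new AQuery(this.nodes.filter(pred));
      }
      each(fn: (n: AXNode) => void): AQuery {
        this.nodes.forEach(fn);
        return this;
      }
      press(): AQuery {
        return this.each((n) => n.press());
      }
    }

    function aQuery(selector: string): AQuery {
      return new AQuery(queryAccessibilityTree(selector));
    }

    // Usage, in the spirit of "$ for accessibility":
    aQuery("window[title*='Photoshop'] button")
      .filter((n) => n.name === "OK")
      .press();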

>Web Site: Morgan Dixon's Home Page.

http://morgandixon.net/

>Web Site: Prefab: The Pixel-Based Reverse Engineering Toolkit.

https://web.archive.org/web/20130104165553/http://homes.cs.w...

>Video: Prefab: What if We Could Modify Any Interface? Target aware pointing techniques, bubble cursor, sticky icons, adding advanced behaviors to existing interfaces, independent of the tools used to implement those interfaces, platform agnostic enhancements, same Prefab code works on Windows and Mac, and across remote desktops, widget state awareness, widget transition tracking, side views, parameter preview spectrums for multi-parameter space exploration, prefab implements parameter spectrum preview interfaces for both unmodified Gimp and Photoshop:

http://www.youtube.com/watch?v=lju6IIteg9Q

>PDF: A General-Purpose Target-Aware Pointing Enhancement Using Pixel-Level Analysis of Graphical Interfaces. Morgan Dixon, James Fogarty, and Jacob O. Wobbrock. (2012). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '12. ACM, New York, NY, 3167-3176. 23%.

https://web.archive.org/web/20150714010941/http://homes.cs.w...

>Video: Content and Hierarchy in Prefab: What if anybody could modify any interface? Reverse engineering guis from their pixels, addresses hierarchy and content, identifying hierarchical tree structure, recognizing text, stencil based tutorials, adaptive gui visualization, ephemeral adaptation technique for arbitrary desktop interfaces, dynamic interface language translation, UI customization, re-rendering widgets, Skype favorite widgets tab:

http://www.youtube.com/watch?v=w4S5ZtnaUKE

>PDF: Content and Hierarchy in Pixel-Based Methods for Reverse-Engineering Interface Structure. Morgan Dixon, Daniel Leventhal, and James Fogarty. (2011). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '11. ACM, New York, NY, 969-978. 26%.

https://web.archive.org/web/20150714010931/http://homes.cs.w...

>Video: Sliding Widgets, States, and Styles in Prefab. Adapting desktop interfaces for touch screen use, with sliding widgets, slow fine tuned pointing with magnification, simulating rollover to reveal tooltips:

https://www.youtube.com/watch?v=8LMSYI4i7wk

>Video: A General-Purpose Bubble Cursor. A general purpose target aware pointing enhancement, target editor:

http://www.youtube.com/watch?v=46EopD_2K_4

>PDF: Prefab: Implementing Advanced Behaviors Using Pixel-Based Reverse Engineering of Interface Structure. Morgan Dixon and James Fogarty. (2010). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '10. ACM, New York, NY, 1525-1534. 22%

https://web.archive.org/web/20150714010936/http://homes.cs.w...

>PDF: Prefab: What if Every GUI Were Open-Source? Morgan Dixon and James Fogarty. (2010). Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '10. ACM, New York, NY, 851-854.

https://web.archive.org/web/20150714010936/http://homes.cs.w...

>Morgan Dixon's Research Statement:

https://web.archive.org/web/20160322221523/http://morgandixo...

>Community-Driven Interface Tools

>Today, most interfaces are designed by teams of people who are collocated and highly skilled. Moreover, any changes to an interface are implemented by the original developers and designers who own the source code. In contrast, I envision a future where distributed online communities rapidly construct and improve interfaces. Similar to the Wikipedia editing process, I hope to explore new interface design tools that fully democratize the design of interfaces. Wikipedia provides static content, and so people can collectively author articles using a very basic Wiki editor. However, community-driven interface tools will require a combination of sophisticated programming-by-demonstration techniques, crowdsourcing and social systems, interaction design, software engineering strategies, and interactive machine learning.


The author explains the trend of lower innovation in personal computing, which I think we all feel, as having to do with corporate greed, and offers "academics, non-profits, non-VC-funded companies, hobbyists" etc. as the solution. But this is not a great explanation, because human nature hasn't really changed since the 1980s, yet personal computing has.

These sweeping but vague explanations are tempting, but I think the cause is actually a lot more technical and random: the OS design culture that spawned Windows, UNIX and macOS pre-dated the internet. The internet was a fundamental shift that changed everything, and yet the NT and Apple design teams all traced their roots back to the pre-internet era (and Linux just made a UNIX for the PC, same thing, design rooted in the 80s). Their whole OS design thinking was fundamentally pre-internet and never caught up. Even basic tasks like uploading files to a server through a GUI STILL aren't well supported in desktop operating systems, in 2023!

What platform did fundamentally get the internet? The web - born of the internet, made of the internet. Web apps don't even start up without the internet. Whilst Microsoft snoozed, Apple thrashed around and the UNIX people spent all their efforts on cloning the past, web browsers became the new way software got distributed. The web had so many benefits over writing PC/Mac apps. Suddenly you could iterate every day, you could write the bulk of your software in nice high level scripting languages, your software was accessible by just typing in its name, you could use databases instead of designing binary file formats, app UI was automatically portable between operating systems, typography and layout were taken seriously, people could easily discover and access your software with one click on a link, the platform had a nice open easy to use protocol set up for you out of the box. And dozens more.

So many wins for developers and users in there. Unfortunately, it was never meant for this. People spent vast effort hacking the web into a barely-passable app platform because it was the closest thing to an Internet OS, and nobody else was offering that. But the web was simply never thought about by its developers in terms of empowering personal computing users. Whilst the people at PARC, NeXT, Apple, Microsoft and so on spent lots of time thinking big 1990s era thoughts about making programming easy, computers powerful, component systems, filing systems with search, AppleScripting, better window management and so on, this utopian way of thinking never entered the culture of browser developers.

There was a brief attempt at some sort of structured, higher-level planning (that open, hackable spirit) during the W3C era, but the XML stack was made by academics and, despite good intentions, was a failure. So browser makers just kept randomly flinging stuff into HTML, without the animating, forward-looking spirit of the PC era, in fact without much of a vision at all beyond trying to keep the show on the road and compete better with 'real' app platforms.

It could easily have been very different. If the people doing OS development in the 90s had fully understood the importance of being an internet OS we might have had things like a global filesystem namespace, start menus that could directly launch sandboxed OS-native apps from a domain name, internet sharable clipboards, who knows what else. Unfortunately that's not how it worked out. Today OS research is basically dead. ChromeOS is the newest and probably the least powerful, least ambitious OS ever made. On the rare occasions people get to make a new OS they never make it past micro-kernels. The kind of sweeping ambition that launched the PC era is gone.


I think a lack of understanding of data/code parity is a huge part of why so many development ecosystems are broken right now. The lack of metaprogramming in other languages has spawned this proliferation of declarative programming that results from the use of build systems (config files are declarative), filesystem tools (git, ember-cli), and pseudolanguages (SQL, jQuery selectors).

Because we’re not writing these tools in first-class language structures, not only do we have to learn 15 different “languages”, but we have to wait for each author of each tool to go through all of the stages of programming language design theory. We watch webpack.config.js slowly morph into a poorly designed Haskell, again and again.

But here’s where you expect me to descend into the classic graybeard “if only these kids would write everything in Lisp and make use of the Great Works of Academic Software” style speech.

No...

Quite the opposite.

I think both camps are wasting their lives. The graybeards are blowing smoke up their own asses about how useful their meta-control structures are, and 97% of those Great Works could be implemented with an isomorphic interface made entirely of functions and literals. The trouble is that any person with a master’s degree who smells an opportunity to do metaprogramming gets a raging hard-on that clouds their judgement until the coding is completed and a cute name is announced on a mailing list. That’s certainly what felt like the glamorous lifestyle to me growing up in the web development world.

But then the Modern coder, using layer upon layer of frameworks and “tools” on a thousand different runtimes, “getting the job done” and remarking how ergonomic the latest tool is and how much nicer things are than back in the nasty era when you had to write everything from scratch and there were no standards and no consistency... well, their codebases, brittle and calcified by the demands of their customers, are condemned to corrode and become unworkable as declarative control structures rise and fall. Because each piece of the metaprogramming logic is a completely different language, the interfaces between components can’t move as fluidly as a simple function signature written in the same language your application is written in.

So who cares if it’s Lisp or PHP. Kotlin. Lua. Whatever it is, stop using fancy control structures to prove how smart you are. And stop using fake programming languages that operate as command line parameters or config file grammars, or “Controllers” that magically pick up attributes you sprinkle about.

Then we’ll really find out what Lisp can do.
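A small TypeScript sketch of the alternative being argued for (my example, not the commenter's): the "build configuration" is nothing but values and functions in the host language, so the pieces compose through ordinary function signatures instead of a bespoke config-file grammar.

    type Asset = { name: string; source: string };
    type Step = (a: Asset) => Asset;

    // Each "plugin" is a plain function; no DSL, no magic attribute pickup.
    const minify: Step = (a) => ({ ...a, source: a.source.replace(/\s+/g, " ") });
    const banner = (text: string): Step => (a) => ({ ...a, source: `/* ${text} */\n` + a.source });

    // "Configuration" is composition, checked by the compiler like any other code.
    const pipeline = (steps: Step[]): Step => (a) => steps.reduce((acc, s) => s(acc), a);

    const build = pipeline([minify, banner("built with plain functions")]);
    console.log(build({ name: "app.js", source: "const   x = 1;\n\nconsole.log(x);" }));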


Well it sort of was boring. I was young and single and trying to chase girls. Trying but not succeeding.

I eventually got bored and wanted something else to do and since I was hanging out in bars, pool popped up as an option. Other than some screwing around as a kid, I'd never played pool. So I started learning. Turns out that pool is this neat combination of physics, chess, and a dash of human psychology. The physics part is obvious, the chess part is thinking multiple moves ahead (when you get good you know all the shots you are going to take after you break), the psychology part is messing with the other person (even if they are better and they leave you no shot you can do the same to them).

So my routine became sleep in until about 10:30 or 11am, hop on my motorcycle and go over to a taco place at 11th & Folsom in San Francisco (which is where I was living at the time), get some food, wander over to the Paradise Lounge and practice.

The Paradise Lounge was a bar that opened around noon, and during the daytime was frequented by drunk taxi drivers, hookers, and various low lifes. I hung out there because they opened up the pool tables and let you play for free during the day. And for Ron.

Ron was one of the low life types. He was from Minnesota but he had been pushed out. He got in a fight with his brother and punched him in the chest and the brother died (this is not widely enough known: in young males, in a one second time period, there are about 3 milliseconds in which, if enough of a shock is caused, they will have a heart attack and usually die). So his family threw him out.

He ended up getting a job at a pool hall and they let him sleep there. He learned pool there and practiced in a way that he got very good. In pool, when playing 8 ball, there is this thing called "8 ball choke". The 8 ball is the one you shoot last and lots of people freeze up on that shot and mess up easy shots. So Ron took all the 8 balls from all the regular tables and practiced on a snooker table (snooker tables are much larger and have smaller pockets for smaller balls. If you can make shots on a snooker table with regular balls you are extremely accurate). So he got good and no 8 ball choke for him.

By the time I met him he was something of a legend in that bar. As I came to be part of the regular crew everyone watched me play Ron. Well, watched Ron beat me. He could beat me left handed, he could beat me one handed, he could beat me one handed with his left hand, he could beat me in napkin pool (before each shot he would put a cocktail napkin on the table and if the cue ball wasn't on the napkin after his shot then it was his turn).

I kept coming back for more and I was getting better. I'd play Ron from about noon until they closed the tables and I started playing the "bridge and tunnel" people who came in to party in the evening. I dunno if this is a universal thing but in America it's tradition that if you win the game the next opponent pays for the next game. It's called "holding the table" and if you are good then you can play all night for free. I was slowly becoming that good. So long as Ron wasn't there.

Me being the nerd that I am, it's 6 months in, and I own my own cue. Which is a stupid and sort of a douche move, if you are holding anything other than a bar cue it screams "I think I'm better than you" so every pool hall hustler wants to beat you; many did. But less and less as time went on.

Somewhere around 6-7 months into this the unthinkable happened. I beat Ron fair and square. The bar full of taxi drivers and hookers and other random folk, all of whom had watched me go from completely crappy to beating Ron, burst into applause. To give you some idea of how much of a mess I was at the time, I felt like a million bucks. This bar full of people had accepted me and were rooting for me. At the time, it felt fantastic. Later I was like "WTF? You care what a bunch of losers think?" Yep, at the time I did. At the time, they were my crowd. Weird in retrospect.

Remember that I was originally chasing girls? I'd long since given up, I was never good at that until just before I met my wife. And I was becoming frustrated with playing at the Paradise Lounge in the evening because each time a new set of girls walked in you could feel them scanning the bar, checking out the guys, you could almost hear them going "nope, nope, maybe, nope ...". I always got the nopes and it would throw me off my game from time to time.

So I started branching out. Played in lots of different pool halls, joined a league, etc. One night the nope got to me so I packed up and left. It was early and the way I went home was down Market street, through the Castro, over to Noe Valley. There was a place on Market I liked to eat, that's why the weird route. As I was going down Castro street there was a bar open with a pool table. I knew this was the gay district but I thought why the hell not? No girls to throw me off my game.

Into the bar I go. Put my quarters on the table, get a beer, wait my turn. Turned out to be one of the coolest experiences of my life. The gay guys have pretty refined gaydar, or I'm just too damn ugly, because they were completely uninterested in me. It was like I was a fly on the wall watching.

The atmosphere in there was really cool, it was kind of like that playfulness you had as a little kid in a sandbox, playing with trucks and other little kids that were playing with trucks. It felt almost innocent. Except with a heavy dose of sexuality. It's hard to explain, but I was jealous of the gay guys. They get to do stuff that if I did that to a girl in a bar I'd get slapped and/or get sent to jail for assault. I could be in the middle of game, my opponent would be leaning over to make a shot, some other dude walks up, reaches between his legs, and cups his junk. The player would look back at the dude and it would be one of two reactions: "Uh, no thanks dude, I want to finish this game" or "Larry, nice to meet ya, but I gotta go, can you find someone to finish the game?" No drama, it was yes or no, the guy who came onto the dude would be cool and walk away if that was the message. You could never do that to a girl and get away with it, at least I couldn't.

So even though the gay guys have a tough life, what with the stigma, AIDS, etc, they get to play with each other in a way straight dudes could only dream about. Like I said, I was jealous.

I played at that gay bar for a few weeks or a month, it was fun, but eventually I missed the girls even if I was Mr Nope. So back to the Paradise I go and guess what? This smoking hot Mexican girl plays pool with me, she wasn't very good but she was a lot better than average, so it was fun, and I drew out that game until we were both on the 8 ball. Went out with a length-of-the-table bank shot (I had a habit of doing those when I was a lot better than the other person. A little douchey but it gave them a chance because I missed those about maybe 30% of the time). She whistled and gave me her phone number and told me to call her. I think that's maybe the only time a girl did that.

I didn't call her, I was too chicken. A couple of weeks later she's there again, walks right up to me, looks up at me and says "How come you didn't call?". "Oh, I didn't think you really wanted me to call you". "You call me, you hear?" So I did, we ended up dating for a couple of years. She was pretty crazy but man, so hot, as in the bar gets quiet and every dude is looking at her when she walks in, like that hot. So fun but it eventually ended, she had no drive and I needed to be with someone with some drive.

I eventually went back to work, Sun let me do whatever I wanted, I ended up working for Paul Borrill. Did lmbench and some networking stuff. Paul asked me if I wanted to go give a talk at Hot Interconnects on ethernet vs ATM. Hell yes I did! I hated ATM, thought it was stupid, I was pushing for 100Mbit ethernet, Paul knew all that and he knew I'd badmouth ATM, which I think he wanted but he was far too politically astute to say that out loud. So off I went to the conference, gave my rant (because that's what it was) about how Sun's $4000 (their cost at the time I believe) ATM card was never going to be as cheap as the $50 ethernet card I had bought at Fry's on my way to the talk. The room went dead silent, I think because everyone was going "but my boss says I have to work on ATM". Finally, some Indian guy in the back started clapping and then the room exploded. So I think they agreed and history has shown us all to be right on that one. And it helped get me out of my burnout funk.

Fun times, sort of, but I don't recommend getting so burned out that it takes a year of pool to pull you back. But a huge thank you to Ken Okin for letting me take that time and to Paul Borrill for nudging me back into work.

Sorry for the wall of text :)

Edit: typos mostly, and a thank you to Paul.


I've taught at 3 universities; two of those were as adjunct faculty. I've worked in an industrial research lab. I've worked as a university employee on research grant projects.

You seem to dismiss "the freedom to do research" as an important goal, treating it as deserving much lower priority than teaching.

Let me share some personal observations about both teaching and research.

Adjunct teaching is free labor these days. I love to teach but my adjunct gigs barely covered the cost of travel and food. Given the number of hours spent preparing courses, running labs, office hours, and actual teaching, the paycheck was WAY below minimum wage. I can't afford to teach.

There are some facts about research you need to consider. First, research takes time. Second, it takes money. Third, it takes focus. Fourth, research usually doesn't succeed. Fifth, we really NEED researchers. Let's look at them one at a time.

Research takes time. A research project can take 10 years to produce something that can be useful. Do you like the wonderful new AI Neural Network projects? Are you amazed at the AlphaGo program that beat the world Go champ? I worked on that exact problem...in the early 1990s...using Neural Nets. After a couple of years I gave up. The current leaders (overnight successes) didn't give up and didn't succeed until around 2010. That's the advantage of being a tenured professor.

Research takes money. There used to be government research labs (until Reagan killed them). Big companies ran big labs. Xerox had PARC (which gave us windowing, mice, and the ethernet), Bell Labs gave us microwaves (the device), and check readers (early Neural Nets), IBM gave us disk drives and scanning microscopes. Universities gave us AI and robots. ARPA (the government) gave us funding for a thousand new things.

Now? Not so much. Government labs are gone. Xerox PARC and Bell Labs are gone. IBM management is doing everything they can to starve IBM Research. Microsoft killed a research division. Google (Alphabet) is killing research efforts. ARPA became DARPA so all your research had to be for "defense".

All of the tenured professors I know spend a LOT of their time begging for money. A tenured professor's "research" these days amounts to hunting for money to pay for students to do the actual research. The military is the primary source of funding these days... and you wonder why "killer robots" are coming?

Research takes focus. IBM Research (Ralph Gomory) pointed out that a research project takes an average of 10 years to produce a result; e.g. Axiom, a computer algebra system developed at IBM Research, took over 10 years. It involved dozens of researchers and millions of dollars as well as dedicated management and lab space. It was a career project for the project lead (Richard Jenks). The Neural Network story is the same, except it happened in Canada. Tenure gives you the chance to create an area of science (e.g. Gilbert Baumslag in Infinite Group Theory).

Research rarely "succeeds". Gomory (IBM Research Pres.) pointed out that less than 10% of all research projects produce a useful result, e.g. my work on re-writable paper, similar to the Kindle paperwhite, my work on automated robot assembly from CAD drawings, etc. Unfortunately you can't predict which research projects WILL succeed. Certainly nobody would have predicted that Neural Nets would beat a Go champ (and thus, my NN-Go work would be vindicated?).

Fifth, we really NEED researchers. When my parents were born plastic did not exist. Everything was wood and stone. Look around you and count the number of plastic things you can touch. When I was born transistors didn't exist. You probably can't count the number of transistors you use in a day. When you were born self-driving cars didn't exist. In 20 years you won't know a single person who knows how to drive.

So now you propose that professors no longer get tenure. You want them to justify their job every year (they do already in the form of grant applications). Next you will propose that professors get rated on a scale of 1-5 and that the '5' performers should be fired. IBM Research did that and it killed the whole division. W. Edwards Deming (who helped restore Japan to a world economic power) debunked the whole rating idea but management won't listen.

Tenured professors are the "last man standing" in the research arena. Beware what you kill; there will be nothing left.

I have a counter-proposal... Professors should get tenure AND GUARANTEED 100k per year minimum funding for the life of the tenure. Sure, some will "retire on the job" but some will create Deep Neural Networks. Place your bets.


> While I can understand the desire to avoid misallocation, this model assumes the funder has a better idea of what needs resources than the funded.

Reminds me of a (probably the) reason we're doing standardized testing in schools. A decent teacher is able to teach and test kids much better than standardized tests, but the society needs consistency more than quality, and we don't trust that every teacher will try to be good at their job. That (arguably, justified) lack of trust leads us as a society to choose worse but more consistent and people-independent process.

I'm starting to see this trend everywhere, and I'm not sure I like it. Consistency is an important thing (it lets us abstract things away more easily, helping turn systems into black boxes which can be composed better), but we're losing a lot of efficiency that comes from just trusting the other guy.

I'd argue we don't have many PARC and MIT around anymore because the R&D process matured. The funding structures are now established, and they prefer consistency (and safety) over effectiveness. But while throwing money at random people will indeed lead to lots of waste, I'd argue that in some areas, we need to take that risk and start throwing extra money at some smart people, also isolating them from demands of policy and markets. It's probably easier for companies to do that (especially before processes mature), because they're smaller - that's why we had PARC, Skunk Works, experimental projects at Google, and an occasional billionaire trying to save the world.

TL;DR: we're being slowed down by processes, exchanging peak efficiency for consistency of output.


I've actually wondered if a True Hipster (tm) ever thought of starting a company to develop software for vintage Apple hardware emulated by one of the many open source choices available. The thinking here is that such software affords a higher degree of reliability since the platform is totally dead.

The thinking for a long time was that computer software was compute-limited, that all applications were as intensive as video games, audio/video editing, and 3D rendering. In reality, people may pay for innovative interfaces and data formats when they help organize their work in a new way. Software could be used in an "offensive" way to take on entire industries, viewed as interlocking roles within organizations. Each of these roles, save manual labor and janitorial work, benefits from education, and the nature of that education is to develop certain patterns of thought and behavior.

Software could be designed with the educational background of the user in mind. I'm speaking mostly of enterprise software here, but it could be applied to CAD shops, publishing, anyone who has an education and uses a computer. I'm not talking about Mechanical Turk here.

The idea is to take education philosophy as the common software for all education computer-enhanced roles and design software for educated people that exposes stuff like

    - the ideal way to learn the structure of the software data model

    - text and point-and-click data input and output

    - data processing language

And I still disagree with you here. It's not about me and my personal employment situation; specifically, because my situation can always be improved via a purely relative advancement that would have no net benefit if everyone did it (see: "Darwin the Market Whiz"). Also, generally, because I personally am doing decently right now: I damn well wanted to go back to academia and get a research degree, and I've lucked out to get into a damn fine research institution. I've had to turn down one offer to convert a good contract gig to full-time and turn away four recruiters to pursue that (not to mention previous interviews where I was told I was turned down because they saw that I truly fit in grad-school more than I fit in their team right this month). I'm also continuing my open-source/research project and working on a side-business idea in the meanwhile. But just because I'm ok doesn't mean everyone else is ok, once we dispose of the false assumption that I'm average.

The real problem here is that you have half a generation of 18-to-22-year-olds coming of age, trying to leave their parents' care, and finding that there's basically no demand for their labor.

Millions of people can't all have more hustle, moxie, initiative, mojo, or whatever other nigh-meaningless abstract term we've chosen to convey "the capitalistic equivalent of sex-appeal", than each other. There has to be actual demand for labor to hire these people.

Conversely, even though I'm not average, an overall bad labor market affects me. The tech sector is "recession-proof", but nothing is Great Depression-proof. An ultra-capitalist economy geared towards maximizing debts and rents for bankers, lobbyists and lawyers does, in fact, ripple out to the tech sector and affect hiring. For instance, it means that there are very few R&D labs in computing right now (though a friend of mine has been interviewing with R&D teams at Oracle and I'll be happy to have the connection!), lots of VC-funded start-ups, and much of the world's top technical and scientific talent ends up writing financial algorithms. Someone who wants to actually do hard-core technology like me finds himself really curiously starved for places to work, given how well the tech-sector is supposedly doing. Oh, and everyone is wondering when this latest start-up bubble will pop, especially after the Groupon, Facebook, and Zynga IPOs.


"like healthy eating and regular exercise everyone knows about it and yet businesses don't do it."

I think that's the meat of the problem—they're well-known and well understood ways to improve organizations, but they're difficult and against our nature. It's not that they're not respected at all, it's that they're largely not implemented, and I perceive that as a real lack of respect.

Let's put it this way: most managers and high-level executives I know of (and even ones I know personally) are very much all about blaming individuals and controlling companies through individual motivation, carrot-and-stick and hiring-and-firing. I think this is also a prevalent way of thinking in the business-school world, with very few schools teaching true statistics- and process-based management in the Deming style.

That is what I mean by lack of respect, and I stand by it: Deming has very little respect in the business world as a whole. His ideas brush up against everything stereotypical American MBAs believe to their core: that the individual is responsible for his own performance, that worker motivation comes from punishment and reward, and that fundamentally, it is individuals that drive success or failure of an organization, regardless of the system they work in.

In my opinion, now just speculating, I think this is deeply embedded in certain American Republican political and social ideologies: this idea that the individual is responsible for his own destiny and his own success, that there should be no safety nets and no dependence on outside influences. For many American executives, I think this is the prevalent way of thinking, and it's self-reinforced among the groups with which they associate. It's also severely at odds with Deming's ideas, and severely at odds with scientific facts (mathematical and social/psychological). But I won't get into that.

The sad part is that it extends to other areas of society as well; any place these ideologies infect the system. No Child Left Behind was fundamentally a way to seek out individual failure cases using pervasive testing and changed the entire system because of it. And it's done the same thing to the education system as it has been doing for quite some time to the corporate world. It sucked the life out of it and made it a social blame game with the wrong incentives, leading to the wrong results.


Yes, I think we should build more DSL toolkits. Turing-complete languages are too expressive.

If we had a meta-language like Lisp or ML that had very good capabilities to quickly develop DSLs with very clean restricted semantics for every particular component of a big system, and tooling to automate proofs for said restricted DSLs, software would be more robust and easier to develop.

Alan Kay was pursuing parts of this vision at Viewpoints Research Institute. Racket is also very focused on DSLs. Any others?
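As a concrete (if toy) illustration of restricted semantics, here is a hedged TypeScript sketch of a tiny embedded DSL with no loops or recursion in its surface language, so every program terminates and can be analyzed exhaustively; the DSL itself is invented for this example.

    type Expr =
      | { kind: "num"; value: number }
      | { kind: "field"; name: string }                       // read an input field
      | { kind: "add"; left: Expr; right: Expr }
      | { kind: "clamp"; lo: number; hi: number; body: Expr };

    // Total evaluator: every well-formed Expr terminates, a property a full
    // Turing-complete language could not promise.
    function evalExpr(e: Expr, input: Record<string, number>): number {
      switch (e.kind) {
        case "num":   return e.value;
        case "field": return input[e.name] ?? 0;
        case "add":   return evalExpr(e.left, input) + evalExpr(e.right, input);
        case "clamp": return Math.min(e.hi, Math.max(e.lo, evalExpr(e.body, input)));
      }
    }

    // A "program" in the DSL is plain data, so other tools can analyze it too.
    const bonus: Expr = {
      kind: "clamp", lo: 0, hi: 100,
      body: { kind: "add", left: { kind: "field", name: "sales" }, right: { kind: "num", value: 10 } },
    };
    console.log(evalExpr(bonus, { sales: 75 })); // 85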


Dijkstra once explained how his process for writing code is to state/solve the problem in the most obvious and natural language possible. If a compiler/interpreter existed for that language, then he's done: he now has an executable solution to the problem. If not, then he recursively solves the problem of translating each construct of this non-executable-but-convenient language down into other constructs; if those constructs are executable, well and good; if not, he has to break them down further.

Learning new programming languages, paradigms, architectures, approaches and formalisms aids this process because it gives you more mental building blocks to use in your journey from non-executable specifications to executable implementations. Even if you never write Haskell, learning it makes your brain evolve a Haskell-like pseudocode in which to think and express problems, and this can come in handy when the problem is most naturally expressed in Haskell. If a Haskell implementation exists, great; if not, transform the Haskell formulation gradually into whatever executable form is available. This, for a certain class of problems - let us rather unimaginatively call them 'Haskell problems' - is better and more enlightening than attempting to directly solve them in executable format.

------

One common saying is "You Can Write Fortran In Any Language", another one is "Any Sufficiently Complicated C Program Contains An Ad-hoc Informal Implementation Of Common Lisp".

The fundamental truth that both of those aphorisms hint at is that programmers are human compilers: the programming language they write in is their target language, the "assembly", and the description of the problem they are solving is the source language they are compiling. Here's the thing though: just like actual compilers, programmers don't have to compile the source language all in one go. Machine compilers often go through several detours and meander through different intermediate representations of the code being translated before outputting the final target. The equivalent of this for programmers is the Dijkstra process: describe your problems in a hierarchy of languages that ends, not begins, with the programming language you happen to use.

If the problem is best described as a Fortran program, write it (in your head) as a Fortran program, then implement whatever parts of Fortran are necessary in the actual available language to write the program. If the problem is best described as a Common Lisp program, then think about it in your head as a Common Lisp program and implement the necessary parts of Common Lisp in C to write the program (the full quote is only saying this is bad insofar as it happens non-deliberately and haphazardly; if it's deliberate then it's just good design). Programming languages are notations/pseudocode/ways of thought/mental models/semantic repositories of meaning; they can be useful even if there is not a single executable implementation of them in sight.
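A small worked example of that "human compiler" process in TypeScript (my example, not Dijkstra's or the author's): state the solution in a Haskell-ish notation first, then hand-compile the constructs that notation needs (iterate, takeWhile) into the language actually available.

    // Haskell-ish statement of the problem (the "source language"):
    //
    //   collatzLength n = length (takeWhile (/= 1) (iterate step n)) + 1
    //
    // TypeScript has no lazy iterate/takeWhile, so we "compile" them by hand.

    function* iterate<T>(step: (x: T) => T, seed: T): Generator<T> {
      let x = seed;
      while (true) { yield x; x = step(x); }
    }

    function* takeWhile<T>(pred: (x: T) => boolean, xs: Iterable<T>): Generator<T> {
      for (const x of xs) {
        if (!pred(x)) return;
        yield x;
      }
    }

    const step = (n: number): number => (n % 2 === 0 ? n / 2 : 3 * n + 1);

    function collatzLength(n: number): number {
      const prefix = [...takeWhile((x) => x !== 1, iterate(step, n))];
      return prefix.length + 1; // count the final 1, as in the Haskell line
    }

    console.log(collatzLength(27)); // 27 has a famously long trajectory: 112 terms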

