
Reminds me of this good old rant from Peter Welch, Programming Sucks

    ...

    Every programmer occasionally, when nobody’s home, turns off the lights, pours a glass of scotch, puts on some light German electronica, and opens up a file on their computer. It’s a different file for every programmer. Sometimes they wrote it, sometimes they found it and knew they had to save it. They read over the lines, and weep at their beauty, then the tears turn bitter as they remember the rest of the files and the inevitable collapse of all that is good and true in the world.

    This file is Good Code. It has sensible and consistent names for functions and variables. It’s concise. It doesn’t do anything obviously stupid. It has never had to live in the wild, or answer to a sales team. It does exactly one, mundane, specific thing, and it does it well. It was written by a single person, and never touched by another. It reads like poetry written by someone over thirty.

    ... 
[ https://www.stilldrinking.org/programming-sucks ]

Slightly offtopic but anyone with a dark sense of humour would do well to check out Chris Morris's stuff - I get the feeling most younger Brits haven't heard of it. The Day Today and Brass Eye, both still funny, are wonderful time capsules satirising Britain as it was thirty years ago.

But IMO his finest work was Blue Jam - the radio comedy not the TV incarnation, hour-long episodes of low-key music and surreal sketches. Absolutely brilliant even today. Archive.org has a copy at https://archive.org/details/chrismorris_bluejam. Best enjoyed late at night.

Trigger warning: basically everything. The BBC would never get away with broadcasting it now.


Very interesting essay. Reminds me of how Donald Knuth describes his job:

> Email is a wonderful thing for people whose role in life is to be on top of things. But not for me; my role is to be on the bottom of things. What I do takes long hours of studying and uninterruptible concentration. I try to learn certain areas of computer science exhaustively; then I try to digest that knowledge into a form that is accessible to people who don't have time for such study.

https://www-cs-faculty.stanford.edu/~knuth/email.html

It's an aspiration for how I want my career to go, though I haven't been very effective at moving in that direction.


:focus-within is a favourite of mine since it is one of the few CSS selectors where child element state is significant. Thus it is very nice for drop-downs. It also works with CSS transitions, so my pure CSS drop-downs have a 150ms easing in & out (tip: transition the visibility property, since display:none can't be delayed).
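
To illustrate, here's a minimal sketch of that sort of drop-down; the class names and the 150ms timing are placeholders of my own, not lifted from any real stylesheet:

```
<style>
  .menu .items {
    visibility: hidden;
    opacity: 0;
    /* transition visibility with a delay; display:none can't be delayed */
    transition: opacity 150ms ease, visibility 0s linear 150ms;
  }
  .menu:focus-within .items {
    visibility: visible;
    opacity: 1;
    transition-delay: 0s; /* open immediately, close only after the fade */
  }
</style>
<nav class="menu">
  <button type="button">Products</button>
  <ul class="items">
    <li><a href="/alpha">Alpha</a></li>
    <li><a href="/beta">Beta</a></li>
  </ul>
</nav>
```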

There is another, however: an element whose state depends on that of other elements, and it's even more general. A form element moves between :valid and :invalid based on its inputs, allowing us to use form:{in}valid with any of the descendant, child, sibling or adjacent combinators. A hidden required checkbox is sufficient. Radio inputs work too (tip: you can clear radios back to :invalid with an input type="reset").

The really, truly monstrous part of this, however, is that the associated input doesn't even have to be a child of the form. Using the form attribute (i.e., <input form="whatever">) means they can be anywhere on the page, and they can themselves be hidden and targeted from somewhere else on the page again, with a <label> element.

I once documented the horrifying potential of this in a company wiki, along with a lovely modal slideover that was wrapped in a <form> element and transitioned to visible based on form:valid via a hidden required radio button, and whose backdrop was a reset button, and this was rightly labelled NSFW and banned by popular acclaim from ever appearing in our HTML.
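
For the curious, a minimal sketch of that general shape (element names and structure are illustrative only, not the original):

```
<style>
  .modal { visibility: hidden; opacity: 0; transition: opacity 150ms, visibility 0s 150ms; }
  form:valid .modal { visibility: visible; opacity: 1; transition-delay: 0s; }
  #open-modal { display: none; } /* hidden required radio drives form:valid */
</style>
<form>
  <label for="open-modal">Open the modal</label>
  <div class="modal">
    <input type="radio" id="open-modal" name="modal" required>
    <p>A modal with no JavaScript in sight.</p>
    <input type="reset" value="Close"> <!-- clears the radio, back to :invalid -->
  </div>
</form>
```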


BoiledCabbage:

You are on the right track!

[Dedekind 1888] started out on the correct path by defining natural numbers up to a unique isomorphism.

Unfortunately, there was a long detour through 1st-order logic :-(

Powerful foundations are now urgently required to prevent successful cyberattacks.

See the following:

https://www.youtube.com/watch?v=AJP1VL7shiI


Remember that Apple is a global empire. If you have met any of their people, they learn a kind of culture language which diverts and uncenters themselves where no one person other than Cook can be seen to represent it. Understanding their brand language starts with the idea that Apple is an ideal of perfection, and beneath it are the ideals of harmony and flow. As an ideal, Apple does not have defects. Only things and maybe the past can have defects.

These other things like bugs and vulnerabilities are external to its perfection, and so they originate elsewhere, maybe in the past, maybe as something random, but certainly something devoid of meaning when compared to the ideal and its experience. Individual products are not Apple, because they are not perfection, but they align to it, and perfection is what makes it always seem just out of reach. The Apple experience exists above and over the material bounds of memory handling and input validation, so these things are external, and in the brand language, they are only ever allowed to exist in the past. By way of example, this is why their security advisories can come off as weird to analysts who confront and solve things.

The best writers speak the language of memory, and a trillion-dollar company probably has more than a few of them. Consider that 80% or more of what you believe about reality comes through one of their products, and you are in effect entranced by them. Their responsibility is to sustain this experience of hypnotic comfort and perfection. The advisory language is reduced until there is nothing left to remove, then calibrated to cause nothing more than a small ripple in your bliss.


I always use antirez's (Redis creator) `sds` and advertise it whenever I get the chance. Thanks to whoever recommended it on HN some years ago. It's a joy to use.

https://github.com/antirez/sds

The trick is that the size is stored in a header hidden just before the address of the buffer, so what you hold is still a plain pointer to the characters ("Learn this one simple trick that will change your life for ever").

From the Readme:

```

Advantage #1: you can pass SDS strings to functions designed for C functions without accessing a struct member or calling a function

Advantage #2: accessing individual chars is straightforward.

Advantage #3: single allocation has better cache locality. Usually when you access a string created by a string library using a structure, you have two different allocations for the structure representing the string, and the actual buffer holding the string. Over the time the buffer is reallocated, and it is likely that it ends in a totally different part of memory compared to the structure itself. Since modern programs performances are often dominated by cache misses, SDS may perform better in many workloads.

```


Very often, in engineering or design or even art, the best feature is actually a constraint.

I used to do research at the Shedd Aquarium. After watching a particularly clever octopus defeat every attempt to prevent him leaving his enclosure at night, I am absolutely confident that, just like certain aquatic mammals such as orcas, we only devalue their intelligence because they were unfortunate enough not to be in a situation where they could develop significant tool use and the cultural artifacts that such tool use and creation enables.

In previous threads on this topic I don't think I've explained deeply why I find their defeat of our team so impressive. This research group contained a multidisciplinary set of scientists. I was the only member that did not have a PhD; almost every team member had completed at least one significant postdoc as well (think Stanford, UChicago, Caltech, prestigious national labs, etc). We had applied science and engineering talents in addition to pure science, so this wasn't a case of not being able to develop realistic escape prevention mechanisms due to the team being too theoretical. The longest we were able to stop this clever guy from escaping with one of our implementations was 4 days. He usually made us look like idiots the very evening after we installed our new prevention device.

Not only this, the octopus made it very clear that he had an extremely well developed memory. He clearly recalled his favorite scientist, who hadn't visited in a few years: as soon as said scientist entered the room, the octopus ignored the rest of us and followed him for the entire time he was in the room. He also became what I can only describe as depressed when that individual departed once again; this was a period when he stopped eating as much, moved much more lethargically, and his escape attempts were half-hearted - this was the period when he finally took more than one day to break our attempts at keeping him in.

Further, he absolutely had a sense of humour. After seeing us crack up laughing at him wearing, as jewelry on his tentacles, the plastic rings we had put in his tank (we were setting up some sort of exam that I can no longer recall the purpose of), he would do so every time we entered the room. He ignored the rings entirely when we were not present.

I did not eat octopus prior to this but I became firmly in support of encouraging everyone to avoid eating octopus and related creatures after this experience. Not only do I think they're immensely intelligent animals, I am firmly convinced that this particular specimen was smarter than a number of humans that I have met.


it's tuples all the way down

Related, here[1] is an excellent ~hour-long talk by Kerry Davis of Valve about how much thought they had to put into doors for VR while working on Half-Life: Alyx.

1 - https://www.youtube.com/watch?v=9kzu2Y33yKM

(edit: it looks like digipen also posted that talk themselves, and theirs doesn't have the ~10-15 minute gap the VNN one has, but theirs seems to not have the slides. Take your pick! https://www.youtube.com/watch?v=8OWjxGL8PDM0)


I’m not surprised but a little disappointed that most of the older techniques aren’t mentioned:

- checkbox/radio inputs can be used to toggle state with `:checked ~ .foo` selectors (a minimal sketch follows after this list)

- `:focus` (and now `:focus-within`, and `:active` though it’s less useful) can be used similarly, but also allow child selection [Edit to add: `tabindex="-1"` makes anything focusable with a pointer input, but doesn’t capture keyboard tab or iteration with assistive tools]

- `:target` can be used similarly, paired with fragment links [Edit to add: but beware history entries, this can be a poor UX]

- `<label>` can be used to not only set those states but also trigger scrolls (including within `scroll-snap` parents) without creating navigation history entries

- the `attr()` function can be used to reference server-dynamic HTML data for display with `content` in pseudo-elements

- I have to assume CSS animations are adopted widely enough that people aren’t using JS for that where it isn’t needed; but you can also use declarative, even interactive, animations in SVG

- speaking of which, inline SVG (even `<use>` references) are part of the CSS cascade, and you can change any CSS-addressable property with `currentColor`

- and you can nest HTML in SVG with `<foreignObject>` if you want to use SVG techniques in HTML

- probably not worth mentioning but in case you don’t know... if you miss table layouts, you can use them with `display` on basically anything; if you want table semantics without tabular rendering you can override `display` as well
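
As promised, a minimal sketch of the first of these, the checkbox toggle; the IDs and class names here are illustrative:

```
<style>
  #nav-toggle, .site-nav { display: none; }
  /* the subsequent-sibling combinator picks up the checkbox state, no JS required */
  #nav-toggle:checked ~ .site-nav { display: block; }
</style>
<input type="checkbox" id="nav-toggle">
<label for="nav-toggle">Menu</label>
<nav class="site-nav">
  <a href="/docs">Docs</a>
  <a href="/about">About</a>
</nav>
```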

Alllllll of that being said, if you use these techniques check your stuff with assistive technologies!


Vincent van Gogh. Born 2043, Brabant, Netherlands. Child prodigy in arts, sciences, and engineering. Graduated from Erasmus University, Rotterdam in 2055 with joint masters in particle physics and fine arts. Exhibited at the Louvre at age 14. Discovered a universal nontoxic perpetual energy source at 16 and joined staff of the Huge Monad Collider that same year as doctoral researcher in functional cosmology. There, proved that spacetime is lazily evaluated and that speed of light arises as input latency in underlying algebra. Disappeared November 23rd, 2063, leaving behind notes for a "personal temporal debugger" and a diatribe, hidden in a stencil of rats, railing against the prevailing culture of universal citizen tracing. Last sighted in background of art documentary, Exit Through The Gift Shop (2010); current whereabouts within continuum unknown, assumed to be still at large. Eartags never found.

This suggestion put me in mind of George Romero’s seminal 1978 analysis of the potential for reallocation and repurposing of large-scale commercial property, and in 2021 the notion of high-density rehousing for communities displaced by a public health crisis seems all the more relevant.

It doesn’t have to be this way, but that’s partly a matter of culture. By aspiring to present/think/act as a monoplatform, Google risks substantially increasing the blast radius of individual component failure. A global quota system mediating every other service sounds both totally on brand, and also the antithesis of everything I learned about public cloud scaling at AWS. There we made jokes, that weren’t jokes, about service teams essentially DoS’ing each other, and this being the natural order of things that every service must simply be resilient to and scale for.

Having been impressed upon by that mindset, my design reflex is instead to aim for elimination of global dependencies entirely, rather than globally rate-limiting the impact of a global rate-limiter.

I’m not saying either is a right answer, but that there are consequences to being true to your philosophy. There are upsides, too, with Google’s integrated approach, notable particularly when you build end-to-end systems from public cloud service portfolios and benefit from consistency in product design, something AWS eschews in favour of sometimes radical diversity. I see these emergent properties of each as an inevitability, a kind of generalised Conway’s Law.


Note, this is botanically incorrect. Leguminous plants, as compared to angiosperms generally (and most starkly when compared to those with pericarpal fruits), may have differential edibility: that is, beans are not the fruit; the seed pod is the fruit (i.e. the mature ovary of the flower), and beans are an edible seed contained within the pod. Some legumes with edible pods (notably Phaseolus vulgaris, the "green bean" or haricots verts) are known as beans in the culinary vernacular, but this is solely by common name, and does not hold up to anatomical scrutiny.

Regular correspondents to this forum may also encounter Java beans, but be warned that their palatability is disputed, and ungoverned use, particularly in their enterprise form, can lead to buildup of toxic and irrevocable technical debt.


The problems with 23:59:59.9999 etc are aliasing and granularity issues, i.e. breaks and overlaps when used as intervals or inequalities, and the consequences may be anything from innocuous (calendar alarms) to catastrophic (financial reporting).

Firstly, users tend to write them as 23:59:59 or even 23:59. When used as a query, this can skip a second or even a minute of data.

Secondly, 00:00:00.0000 can match the first moment of tomorrow, which may also be wrong, and happens readily when timestamped data is imported from systems with per-second granularity.

Finally, these forms constrain any internal representation, which cannot now ever evaluate to 23:59:59.99995 lest we suffer the same category of fault. This'd limit a standard library's timestamp object to a granularity no finer than 100μs, which is pretty coarse for many timing needs.

The proper form, that is, the ideal mathematical representation, is an interval with a closed left/lower bound and an open right/upper bound. That's written like

    [00:00:00.0000, 00:00:00.0000+1day) or equivalently

    { t | 00:00:00.0000 <= t < 00:00:00.0000+1day }
and can be pronounced "all times from and including midnight onwards, until (but strictly excluding) midnight the next day". These half-open intervals correspond advantageously to the continuously linear assumptions of chronometric time, with two properties of critical relevance: they can be recorded via commonplace machine representations of timestamps; and, they may be compared, subdivided, and concatenated without inadvertent breaks and overlaps. These qualities eliminate most aliasing & granularity concerns.
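
For example, consecutive days tile the timeline exactly:

    [d, d+1day) ∪ [d+1day, d+2day) = [d, d+2day)    and    [d, d+1day) ∩ [d+1day, d+2day) = ∅

so adjacent periods neither overlap nor leave a gap - which is precisely what goes wrong when the endpoints are written as 23:59:59.9999 (a hole before midnight) or 00:00:00.0000 (a double-counted instant).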

Some (sadly not all) programming languages have such a construct available in their standard library.

I think there's a paper by Lamport recommending this form, although I couldn't find it in a quick rummage through the archives.


That article doesn't even create anything. It's a comparative summary of existing practices in the manufacturing sector. The word "scrum" appears, exactly once, as part of an incoherent rugby metaphor.

Elevating a waffling HBR feature to the status of antecedent decalogue is totally on brand for the clerical formalists that promote Scrum.

In high performance teams, Agile begins where Scrum ends, and as the remarks here extensively demonstrate, awful working environments won't be improved by a bunch of ceremonies.


Apple patents are an evergreen topic for tech journalists, but Apple never announces features or products through patent filings. If you're seeing a public filing for a patent from Apple on a thing that they aren't currently shipping, that means it's something they have no plans to ship.

It should not be surprising to anyone that long-established protocol standards are capable and versatile. DNS and LDAP are robust tree-structured replicated attribute stores; mix in Kerberos for a complete foundation of general purpose directory services. NNTP supplies a straightforward Gossip protocol for eventually-consistent distributed pub/sub. MQTT is still good for lightweight telemetry. I have services I authenticate with via Lamport's mostly forgotten scheme (S/KEY). And so on.

There is a current fad of HTTP+JSON as a generic protocol substrate, but it's almost always a mediocre fit for the problem at hand. Go read the RFC archives, there's diamonds to be found.

On the other hand, and perhaps even by the same token: using EC2 metadata isn't a completely terrible idea if it's the infrastructure you already have and the semantics line up with specific needs.


There are also up, down, charmed, and strange proxies, with obvious applications in quantum computing, nonrepudiation, and dairy farming. Strange proxies were thought to be purely theoretical until an accident involving a rubber band, a liquid lunch, and a particle accelerator collided a transparent squid with varnish at relativistic velocities, and the resulting core dump was subsequently examined for overflows.

- Terms & conditions of use

- Chocolate soufflé

- Distributed lock manager

- Autonomous war robot

- init(1) replacement

I am personally guilty of attempting four of these things without adequate preparation or expertise.


The more experienced a developer I become, the more strongly (and negatively) I feel about nils and nulls and their ilk. I have sympathy for C.A.R. Hoare, who in 2009 apologised for the invention of null references in ALGOL W (1965), calling them a "billion-dollar mistake". I've come to regard them as a data singularity, and when I design data structures and interfaces today I deliberately avoid/outlaw them: all my relational fields are NOT NULL and I choose either meaningful defaults, or EAV or equivalents instead; in method parameters I would rather something not exist than for it to accept a null reference or value. And I believe that the resulting code is more modular, more easily refactored, and more reusable as a result; errors are better handled; and the resulting data structures and calling arguments are more easily interpreted, more readily queried and destructured, and are (so far) proving generally better fitted to real-world domains.

Stanislaw Lem deserves a much wider readership. Well, I say that but actually he has a wide readership, just not so much in English-language markets.

Besides Solaris, his meditations on life and society in The Cyberiad remain some of my favourite science fiction of all time.


Ideally it would have some kind of anthropomorphized graphical avatar applicable to the context. Research out of Stanford[1] as far back as the '90s has suggested such interfaces as a means for improving human-computer interaction. If I was writing a letter, for example, perhaps an animated document fastener would be appropriate. In this case, why not an animated, anthropomorphic pizza that morphs into the Domino's logo as a paid-for branding.

[1] https://web.stanford.edu/group/cslipublications/cslipublicat...


My experience of LinkedIn has declined from a marginally diverting way to follow career progress of former colleagues (2005-2009) through a pointless recruiter circle-jerk (2010-2014) to being an unremitting fountain of scam sales-lead invites from profiles of dubious credibility (2015-).

I disabled all notifications long ago.

It is possible that this is the unavoidable fate of any professional-oriented social networking service. Nonetheless the value of LinkedIn to me is now effectively zero. I don't know anyone who respects their brand, and I'm left wondering if there's a gap in the market; cf. Facebook vs Myspace ca. 2008.


There was an interesting nugget in the comments under that article:

The crucial thing about this is that it is a pitch by the Tolkien estate rather than to them. Christopher Tolkien hated the Peter Jackson movies [...snip...]

I was an avid reader of Tolkien as a teenager and the movies were a disappointment to me as well. Too much spectacle and derring-do, whilst I always enjoyed the construction of the world and the inner lives & secrets of the characters. The omissions made were sometimes grotesque: the removal of Tom Bombadil alone made a mockery of the books. Skipping the entire homecoming chapter of the hero's journey ("The Scouring of the Shire") is a common complaint and a travesty of storytelling.

However, Christopher Tolkien is the ultimate Tolkien purist, after a lifetime of curating his father's work. In the few interviews he gives he's clearly dismayed by any deviation from the original intent. I suspect he'd likely be unhappy with any TV adaptation as well.

Like many other Tolkien fans I'd much rather see an adaptation of The Silmarillion, which if anything is the more epic book of tales.


As the anecdotes offered in these threads illustrate, it is not possible to cost or value healthcare on an individual basis.

Therefore, any market for individual purchase of healthcare (whether via insurance or directly) can only have faulty price signals.

Systems based on markets with faulty price signals are wide open to manipulation.

The perpetuation of such markets is contemptible, and particularly so in the case of healthcare since there is a straight line to be drawn from dysfunctional systems to human suffering.


Fun story: I once heard of a startup that was keeping all their data on ephemeral local storage, mirroring it between instances across AZs for "high availability", and then, er, using spot pricing for all instances to save money.

One day the spot price skyrocketed and all their databases got shot down.

And that, I presume, was the end of the company.


I once worked with a hosting provider whose battery backup was that every server was actually a second-hand thinkpad.
