amiga386's comments | Hacker News

Perl's binary brings with it the ability to run every release of the language, from 5.8 onwards. You can mix and match Perl 5.30 code with 5.8 code with 5.20 code, whatever, just say "use v5.20.0;" at the start of each module or script.

By comparison, Python can barely go one version without both introducing new things and removing old things from the language, so anything written in Python is only safe within a fragile, narrow window of versions, and anything written for it needs to keep being updated just to stay where it is.

Python interpreter: if you can tell "print" is being used as a keyword rather than a function call, in order to scold the programmer for doing that, you can equally just perform the function call.


> By comparison, Python can barely go one version without both introducing new things and removing old things from the language

Overwhelmingly, what gets removed is from the standard library, and it's extremely old stuff. As recently as 3.11 you could use `distutils` (the predecessor to Setuptools). And in 3.12 you could still use `pipes` (a predecessor to `subprocess` that nobody ever talked about even when `subprocess` was new; `subprocess` was viewed as directly replacing DIY with `os.system` and the `os.exec` family). And `sunau`. And `telnetlib`.
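
If anyone has forgotten why `pipes` could go: `subprocess` does everything it did and more. A minimal sketch (the command here is just an example) of what replaced the `os.system` DIY approach:

    import subprocess

    # The modern replacement for os.system("ls -l /tmp"): no shell
    # involved, and stdout/exit status are captured for inspection.
    result = subprocess.run(["ls", "-l", "/tmp"], capture_output=True, text=True)
    print(result.returncode, result.stdout)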

Can you show me a real-world package that was held back because the code needed a feature or semantics from the interpreter* of a 3.x Python version that was going EOL?

> Python interpreter: if you can tell "print" is being used as a keyword rather than a function call, in order to scold the programmer for doing that, you can equally just perform the function call.

No, that doesn't work because the statement form has radically different semantics. You'd need to keep the entire grammar for it (and decide what to do if someone tries to embed a "print statement" in a larger expression). Plus the function calls can usually be parsed as the statement form with entirely permissible parentheses, so you have to decide whether a file that uses the statement should switch everything over to the legacy parsing. Plus the function call affords syntax that doesn't work with the original statement form, so you have to decide whether to accept those as well, or else how to report the error. Plus in 2.7, surrounding parentheses are not redundant, and change the meaning:

  $ py2.7 
  Python 2.7.18 (default, Feb 20 2025, 09:47:11) 
  [GCC 13.3.0] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> print('foo', 'bar')
  ('foo', 'bar')
  >>> print 'foo', 'bar'
  foo bar

The incompatible bytes/string handling is also a fundamental shift. You would at least need a pragma.
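
To illustrate that shift (a quick sketch; both behaviours are the documented defaults): Python 2 silently coerces between byte strings and unicode, while Python 3 keeps them as distinct types that never mix:

    # Python 2 silently coerced: b"abc" + u"def" == u'abcdef'.
    # Python 3 refuses, because bytes and str are distinct types:
    try:
        b"abc" + "def"
    except TypeError as exc:
        print(exc)  # can't concat str to bytes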

> Can you show me a real-world package that was held back because the code needed a feature or semantics from the interpreter

That is not what I was getting at. What I was saying is that, if you write code for perl 5.20 and mark it "use 5.20.0;", then that's it, you're done, code never needs to change again. You can bring in newer perl interpreters, you can upgrade, it's almost certainly not going to break.

You can even write new code down the line in Perl 5.32 which wouldn't be possible in 5.20, and the 5.20 code wouldn't be valid in 5.32, but as they're both declaring which version of the language they're written in, they just seamlessly work together in the same interpreter.

Compared to Python's deliberate policy, which is that they won't guarantee your code will still run after two minor releases, and they have a habit of actively removing things, and there's only one version the interpreter implements and all code in the same interpreter has to be compatible with that version... it means a continual stream of having to update code just so it still runs. And you don't know what they're going to deprecate or remove until they do it, so it's not possible to write anything futureproof.

> in 2.7, surrounding parentheses are not redundant,

That is interesting, I wasn't aware of that. And indeed that would be a thorny problem, more so than keeping a print statement in the grammar.

Fun fact: the parentheses for all function calls are redundant in perl. It also flattens plain arrays and does not have some mad tuple-list distinction. These are all the same call to the foo subroutine:

    foo "bar", "baz"
    foo ("bar", "baz")
    foo (("bar", "baz"))
    foo (("bar"), "baz")
    foo (((("bar")), "baz"))

> Compared to Python's deliberate policy, which is that they won't guarantee your code will still run after two minor releases

They don't guarantee that the entire standard library will be available to you two minor releases hence. Your code will still run if you just vendor those pieces (and thanks to how `sys.path` works, and the fact that the standard library was never namespaced, shadowing the standard library is trivial). And they tell you up front what will be removed. It is not because of a runtime change that anything breaks here.
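
Concretely (a hedged sketch; `telnetlib` stands in for any removed module): the directory of the running script sits at the front of `sys.path`, so a vendored copy dropped next to your code is found before, or instead of, the standard library:

    import sys

    # sys.path[0] is the running script's directory, searched first, so a
    # local telnetlib.py shadows -- or, after 3.13, replaces -- the stdlib one.
    print(sys.path[0])
    import telnetlib  # resolves to ./telnetlib.py if one is present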

Python 3 has essentially prevented any risk of semantic changes or syntax errors in older but still 3.x-compatible code. That's what the `__future__` system is about. The only future feature that has become mandatory is `generator_stop`, since 3.7 (see https://peps.python.org/pep-0479/), which is very much a corner case anyway. In particular, the 3.7-onward annotations system will not become mandatory, because it's being replaced by the 3.14-onward system (https://peps.python.org/pep-0649/). And aside from that, again, the only issue I'm aware of (or at least can think of at the moment) is the async-keyword one.
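
That one mandatory change is easy to demonstrate (a small sketch of PEP 479's effect): a StopIteration escaping a generator used to end iteration silently; since 3.7 it is converted to a RuntimeError so the bug is visible:

    def gen():
        yield 1
        # Pre-PEP-479 this silently truncated the iteration;
        # since 3.7 it surfaces as a RuntimeError instead.
        raise StopIteration

    try:
        print(list(gen()))
    except RuntimeError as exc:
        print("PEP 479:", exc)  # "generator raised StopIteration"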

> And you don't know what they're going to deprecate or remove until they do it

This is simply untrue. Deprecation plans are discussed in public and now that they've been burned a few times, removal is scheduled up front (although it can happen that someone gives a compelling reason to undo the deprecation).

It's true that you can't make your own code, using the standard library (which is practically impossible to avoid), forwards-compatible to future standard libraries indefinitely. But that's just a matter of what other code you're pulling in, when you didn't write it in the first place. Vendoring is always an option. So are compatibility "forward-ports" like https://github.com/youknowone/python-deadlib. And in practice your users are expecting you to put out updates anyway.

And most of them are expecting to update their local Python installations eventually, because the core Python team won't support those forever, either. If you want to use old FOSS you'll have to accept that support resources are limited. (Not to mention all the other bitrot issues.)


asyncio.get_event_loop?

I seem to have messed up my italics. The emphasis was supposed to be on "from the interpreter". asyncio.get_event_loop is a standard library function.

Well, isn't that nice. The boxes I care most about are 32-bit. The perl I use is 5.0, circa 2008. Could you, amiga386, or anyone else (thank you in advance), tell me what I need in order to upgrade to perl 5.8? Is it only perl 5.8 and whatever gcc was contemporaneous? Will the rest of my Suse 11.1, circa 2008, break? Can I have two gccs on the same box/distro version, and give the path to the later one when I need it? The reason I am still on Suse 11.1 is that later releases broke other, earlier things I care about, and I could not fix them.

suse 11.1 includes perl 5.10: https://ftp5.gwdg.de/pub/opensuse/discontinued/distribution/...

perl 5.8 was released in 2002. perl 5.000 (there is no perl 5.0) was released in 1994. I have difficulty accepting you have perl "5.0" installed.


I assumed 5.0 was referring to using any of the 5.00x series.

The Python approach seems better for avoiding subtle bugs. TIMTOWTDI vs "there should be one obvious way to do it" again.

I was already using Tor Browser for sites that UK ISPs are banned from letting me access.

I continue to use Tor Browser for entirely innocuous sites that are collateral damage of the OSA.

For example, the Interactive Fiction Archive. All its game files are voluntarily blocked in the UK by its well-meaning but stupid operators. Even games intended for children. They should stop complying and just serve up all their files to everyone. If a teenager learns what a .z5 file even is, they deserve to be able to play it.

Any reddit thread where someone said naughty words? "Oh, we're going to need your phone number and a facial scan." I don't think so, Mr Data Harvester. Click on URL, Ctrl+C, Alt-Tab to Tor Browser, Ctrl+V, "Are you over 18?" Yes I am. See how easy that is?

I hate my government.


I use Orbot on Android - which provides a Tor connection. I'm not sure why people are paying for VPNs for the small number of sites that are blocked in the UK.

Sadly, Orbot does not work from China.

Not only your government. The Conservatives who proposed it. Labour who provided no opposition. But most importantly Ofcom, who comprehensively failed to implement a competent and reasonable solution.

Everyone could have done a lot better, and could have achieved the stated aims without so much damage.


Labour actively supported it and still do. I got this from my Labour MP:

The UK has a strong tradition of safeguarding privacy while ensuring that appropriate action can be taken against criminals, such as child sexual abusers and terrorists. I firmly believe that privacy and security are not mutually exclusive—we can and must have both.

The Investigatory Powers Act governs how and when data can be requested by law enforcement and other relevant agencies. It includes robust safeguards and independent oversight to protect privacy, ensuring that data is accessed only in exceptional cases and only when necessary and proportionate.

The suggestion that cybersecurity and access to data by law enforcement are at odds is false. It is possible for online platforms to have strong cybersecurity measures whilst also ensuring that criminal activities can be detected.


> in some parts of Europe, we have national healthcare, so basically people don't think they are paying for their medications, like there was some magic money.

Europe is a big place, buddy. Which particular part are "we" from today?

NHS England has NICE (National Institute for Health and Care Excellence), which does the cost-benefit analysis for all medicines prescribed, nationally. It frequently decides medicines aren't worth the money. If you, as a private citizen, want that particular medicine, you can waste your own money on it. NHS England does not have a moral hazard problem.

The NHS also spends money trying to convince people to exercise, eat well, lose weight, not smoke, look for early signs of cancer, etc., because they find that relatively tiny amounts of money on these campaigns results in massive, massive savings from not having to treat so much preventable disease later in life.


> And in the case of many others, it makes a very significant difference.

This is very true, but we're talking about an entertainment provider's choice of codec for streaming to millions of subscribers.

A security recording device's choice of codec ought to be very different, perhaps even regulated to exclude codecs that could "hallucinate" high-definition detail not present in the raw camera data, and the limitations of the recording medium need to be understood by law enforcement. We've had similar problems since the introduction of tape recorders, VHS and so on; they always need to be worked out. Even the Phantom of Heilbronn (https://en.wikipedia.org/wiki/Phantom_of_Heilbronn) turned out to be DNA contamination of swabs by someone who worked for the swab manufacturer.


I don't understand why it needs to be a part of the codec. Can't Netflix use relatively low bitrate/resolution AV1 and then use AI to upscale or add back detail in the player? Why is this something we want to do in the codec and therefore set in stone with standard bodies and hardware implementations?

We're currently indulging a hypothetical: the idea of AI being used either to improve the quality of streamed video, or to provide the same quality at a lower bitrate. So the focus is on what both ends of the codec could agree on.

The coding side of "codec" needs to know what the decoding side would add back in (the hypothetical AI upscaling), so it knows where it can skimp and still get a good "AI" result, versus where it has to be generous in allocating bits because the "AI" hallucinates too badly to meet the quality requirements. You'd also want it specified, so that any encoding displays the same on any decoder, and you'd want it in hardware, as most devices that display video rely on dedicated decoders to play it at full frame rate and/or not drain their battery. If it's not in hardware, it's not going to be adopted. It is possible to have different encodings, so a "baseline" encoding could leave out the AI upscaler, at the cost of needing a higher bitrate to maintain quality, or switching to a lower quality if the bitrate isn't there.

Separating the codec from the upscaler, and having a deliberately low-resolution / low-bitrate stream be naively "AI upscaled", would, IMHO, look like shit. It's already a trend in computer games to render at lower resolution and have dedicated graphics card hardware "AI upscale" it (DLSS, FSR, XeSS, PSSR), because 4k resolutions are just too much work to render modern graphics at consistently 60fps. But the result, IMHO, noticeably and distractingly glitches all the time.


> how do you switch DNS over when your DNS provider is down?

You list multiple nameservers.

    yoursite.com. 86400 IN NS ns1.yourprovider.com.
    yoursite.com. 86400 IN NS ns2.yourprovider.com.
    yoursite.com. 86400 IN NS ns1.yourotherprovider.com.
    yoursite.com. 86400 IN NS ns2.yourotherprovider.com.
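
Resolvers spread queries across that set and retry, so one provider's outage only matters if every listed server is unreachable. A toy model of that behaviour (a hypothetical stub, not a real DNS client):

    import random

    NAMESERVERS = [
        "ns1.yourprovider.com", "ns2.yourprovider.com",
        "ns1.yourotherprovider.com", "ns2.yourotherprovider.com",
    ]
    DOWN = {"ns1.yourprovider.com", "ns2.yourprovider.com"}  # simulated outage

    def query(ns, qname):
        # Stand-in for a real UDP query to one nameserver.
        if ns in DOWN:
            raise TimeoutError(ns)
        return "192.0.2.1"

    def resolve(qname):
        # Walk the NS set in randomised order, as real resolvers do.
        for ns in random.sample(NAMESERVERS, len(NAMESERVERS)):
            try:
                return query(ns, qname)
            except TimeoutError:
                continue
        raise RuntimeError("all nameservers unreachable")

    print(resolve("yoursite.com"))  # succeeds despite one provider's outage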

In the chain of events that led to Cloudflare's largest-ever outage, code they'd rewritten from C to Rust was a significant factor. There are, of course, other factors that meant the Rust-based problem was not mitigated.

They expected a maximum config size, but an upstream error meant it was much larger than normal. Their Rust code parsed a fraction of the config, then did ".unwrap()" and panicked, crashing the entire program.
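
Transposed into Python just to show the shape of the failure (a hedged sketch with invented names; the real code is Rust): the parser correctly reports the oversize input as an error, and it's the caller that escalates the recoverable error into a crash, the moral equivalent of .unwrap():

    MAX_FEATURES = 200

    def parse_features(rows):
        # The parser does its job: oversize input is reported as an error.
        if len(rows) > MAX_FEATURES:
            return None, "too many features: %d" % len(rows)
        return rows, None

    features, err = parse_features(["f"] * 500)
    # The outage happens here: the caller turns a recoverable error
    # into a process-killing one, much like Rust's .unwrap().
    assert err is None, err  # AssertionError: too many features: 500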

This validated a number of things that programmers say in response to Rust advocates who relentlessly badger people in pursuit of mindshare and adoption:

* memory errors are not the only category of errors, or of security flaws. A language claiming a magic bullet for one thing might nonetheless be worse at another thing.

* there is no guarantee that if you write in <latest hyped language> your code will have fewer errors. If anything, you'll add new errors during the rewrite.

* Rust has footguns like any other language. If it gains common adoption, there will be doofus programmers using it too, just like the other languages. What will the errors of Rust doofuses look like, compared to C, C++, C#, Java, JavaScript, Python, Ruby, etc. doofuses?

* availability is orthogonal to security. While there is a huge interest in remaining secure, if you design for "and it remains secure because it stops as soon as there's an error", have you considered what negative effects a widespread outage would cause?


This is generally BS apologetics for C. If that had been in C, it would likely have just overrun the statically allocated buffer and resulted in a segfault.

Rust did its job and forced them to return an error from the lower function. They explicitly called a function to crash if that returned an error.

That’s not a Rust problem.


We don't know how the C program would have coped. It could equally have ignored the extra config once it reached its maximum, which would cause new problems but not necessarily cause an outage. It could've returned an error and safely shut down the whole program (which would result in the same problem as Rust panicking).

What we do know is Cloudflare wrote a new program in Rust, and never tested their Rust program with too many config items.

You can't say "Rust did its job" and blame the programmer, any more than I can say "C did its job" when a programmer tells it to write to the 257th index of a 256 byte array, or "Java did its job" when some deeply buried function throws a RuntimeException, or "Python did its job" when it crashes a service that has been running for years because for the first time someone created a file whose name wasn't valid UTF-8.
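
That last one is real, by the way (a small sketch of the failure mode; no exotic setup needed): POSIX filenames are bytes, Python 3 decodes them with surrogate escapes, and the crash only arrives later, when something re-encodes the name strictly:

    # os.listdir() hands back names decoded with surrogateescape:
    name = b"caf\xe9.txt".decode("utf-8", "surrogateescape")
    # The crash comes later, e.g. when print() re-encodes strictly:
    print(name)  # UnicodeEncodeError: ... surrogates not allowed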

Footguns are universal. Every language has them, including Rust.

You have to own the total solution, no matter which language you pick. Switching languages does not absolve you of this. TANSTAAFL.


> You can't say "Rust did its job" and blame the programmer,

You absolutely can. This is someone just calling panic in an error branch. Rust didn’t overrun the memory, which would have been a real possibility here in C.

The whole point is that C could have failed in the exact same way, but it would have taken extra effort even to get it to detect the issue and exit. For an error the programmer didn’t intend to handle, like in this case, it likely would have just segfaulted, because they wouldn’t bother to bounds-check.

> TANSTAAFL

The way C could have failed here is a superset of how Rust would. Rust absolutely gives you a free lunch, you just have to eat it.


I find the same too. I find gcc and clang can inline functions, but can't decide to break apart a struct used only among those inlined functions, make every struct member a local variable, and then decide that one or more of those local variables should be allocated to a register for the full lifetime of the function rather than spill onto the local stack.

So if you use a messy solution, where something that should be a struct operated on by functions is instead just a pile of local variables within a single function, and you use macros operating on those local variables instead of inlineable functions operating on structs, you get massively better performance.

e.g.

    /* slower */
    struct foo { uint32_t a,b,c,d,e,f,g,h; };
    uint32_t do_thing(struct foo *foo) {
        return foo->a ^ foo->b ^ foo->c ^ foo->d;
    }
    void blah() {
        struct foo x;
        for (...) {
            x.e = do_thing(&x) ^ x.f;
            ...
        }
    }

    /* faster */
    #define DO_THING (a^b^c^d)
    void blah() {
        uint32_t a,b,c,d,e,f,g,h;
        for (...) {
            e = DO_THING ^ f;
            ...
        }
    }

The nice thing about godbolt is that it can show you that clang not only can do this in theory, but also does it in practice:

https://aoco.compiler-explorer.com/#g:!((g:!((g:!((h:codeEdi...

The ability to turn stack-allocated variables into locals (which can then be put in registers) is one of the most important passes in modern compilers.

Since compilers use SSA, where locals are immutable, while lots of languages like C have mutable variables, some compiler frontends put locals onto the stack and let the compiler figure out what can be put into locals, and how.


That's really good; clearly I haven't looked at more recent versions. The magic seems to happen in your link at SROAPass, "Scalar Replacement Of Aggregates". Very cool!

According to https://docs.hdoc.io/hdoc/llvm-project/r2E8025E445BE9CEE.htm...

> This pass takes allocations which can be completely analyzed (that is, they don't escape) and tries to turn them into scalar SSA values.

That's actually a useful hint to me. When I was trying to replace locals and macros with a struct and functions, I also used the struct directly in another struct (which was the wider source of persistence across functions), so perhaps this pass thought the struct _did_ escape. I should revisit my code and see if I can tweak it to get this optimisation applied.


I guess the chances of the compiler doing something smart increase with link-time optimizations and when keeping as much as possible inside the same "compilation unit" (in practice, in the same source file).

Let us see if Replicate and Cog are shut down, and it becomes an Incredible Journey: https://ourincrediblejourney.tumblr.com/post/89180616013/wha...

> An incredible journey is: One company buying another and closing its services down. This is a purchase of the second company’s staff, rather than their product. An acquihire.

> This is what is galling. A company that can afford to pay millions for some new staff but not for what those staff built. The people who used the service, and invested their belief and time in uploading photos, or forming friendships, or logging data, are left to find new virtual homes while their former hosts enjoy a nice (if possibly delayed) payday.

> This repeated pattern only encourages more people to create flashy services that have no hope of being sustainable businesses in their own right, but may survive long enough, with VC funding, to attract the attention of a large company eager for new ideas and staff.

The last paragraph is what gets me -- it makes sense to me to found a startup in hopes of being acquired (continuing its work with the support of a big company), but founding with the intention of abandoning your users? Yuck.


> But what I also dislike is cheaters in my online games and I genuinely do not have a better suggestion on what to do.

You can't suggest "run online games as close-knit social groups, with social exclusion punishments for cheaters", which is how most online games used to be run. How old are you?

Game vendors used to be happy letting us host and run our own multiplayer games, until they realised they could get more money out of us -- "battle passes", microtransactions, the ability to forcibly turn off multiplayer for an older game when a newer remake comes out -- and now they've made themselves a mandatory part of your online experience. You have to use their matchmaking and their servers. So now it's down to them to solve the problem of cheaters, enabled by their centralised matchmaking... and their only solution is remote attestation of your machine and yet more data collection?


Ironically, X11 supports "diverse" people better than Wayland does.

https://fireborn.mataroa.blog/blog/i-want-to-love-linux-it-d...

