sjrd's comments | Hacker News

I genuinely did that a few times. Using an ssh client to fix a commit failing CI, for example. Even launching release builds remotely. Notably once when I was on vacation and half the Scala ecosystem was waiting for me.

JavaScript engines do optimize integers. They usually represent integers up to +-2^30 as integers and apply integer operations to them. But of course that's not observable.


I think it's up to 2^53.


You are half correct: 2^53-1 (around 9 quadrillion) is indeed used; it is the largest integer a 64-bit float can represent safely. JS even exposes it as `Number.MAX_SAFE_INTEGER`.

That said, floats only come into play in the rare cases where your number exceeds around 1 billion.

JS engines use floats only when they cannot prove/speculate that a number fits in an i32. They only use 31 of the 32 bits for the number itself, with the last bit used for tagging. i32 takes fewer cycles to do calculations with (even accounting for the tag bit) than f64. You fit twice as many i32s in a cache line (which helps prefetching), and i32 uses half the RAM (using half the cache increases the hit rate). Finally, it takes far more energy to load two numbers into the ALU/FPU than to perform the calculation itself, so cutting the size in half also reduces power consumption. The maximum length of a JS array is also capped at 2^32 - 1.

JS also has BigInt for arbitrary-precision integers, and that is probably what you should use if you expect to exceed the 2^31-1 limit: hitting a number that big generally means you have something unbounded, which might eventually exceed the 2^53-1 limit too.
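The 2^53 boundary is a property of 64-bit floats rather than of JS itself, so it can be checked from any language whose doubles are IEEE 754 binary64; a minimal sketch in Scala:

    val maxSafe = 9007199254740991L                  // 2^53 - 1, i.e. Number.MAX_SAFE_INTEGER
    println(maxSafe.toDouble.toLong == maxSafe)      // true: still exactly representable
    println((maxSafe + 2).toDouble == (maxSafe + 1).toDouble)
    // true: 2^53 + 1 does not fit in a 64-bit float and rounds down to 2^53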


In Scala you can do it, because you can define your own operators (which are nothing but method names), and you can extend types you don't control. You are a bit constrained by the operator precedence rules, but it's usually good enough.

It's bad practice to make DSLs left and right, obviously. But when one is warranted, you can.

For example, here you could have:

    "x" --> "y" | "hello world"

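A minimal Scala 3 sketch of how such a DSL could be wired up (the `Rule` type and its fields are made up for illustration):

    case class Rule(from: String, to: String, alternatives: List[String])

    extension (from: String)
      def -->(to: String): Rule = Rule(from, to, Nil)

    extension (rule: Rule)
      def |(alt: String): Rule = rule.copy(alternatives = rule.alternatives :+ alt)

    // `-->` starts with `-`, which binds tighter than `|`, so this
    // parses as ("x" --> "y") | "hello world":
    val rule = "x" --> "y" | "hello world"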

I am one of the maintainers of the Scala compiler, and this is one of the things that immediately jumps out at me when I review code containing any casing operation. Always explicitly specify the locale. However, unlike TFA and other comments, I don't suggest `Locale.US`. That's a little too US-centric. The canonical locale is in fact `Locale.ROOT`. Granted, in practice it's equivalent, but I find it a little bit more sensible.
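The classic trap is the Turkish dotted/dotless i; a quick sketch on the JVM:

    import java.util.Locale

    val id = "file_id"
    id.toUpperCase                               // "FILE_ID"... unless the default locale is Turkish
    id.toUpperCase(Locale.forLanguageTag("tr"))  // "FİLE_İD", with dotted capital İ
    id.toUpperCase(Locale.ROOT)                  // "FILE_ID", independent of the system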

Also, this is the last remaining major system-dependent default in Java. They made strict floating point the default in 17, and UTF-8 the default encoding in 18; only the locale remains. I hope they make ROOT the default in an upcoming version.

FWIW, in the Scala.js implementation, we've been using UTF-8 and ROOT as the defaults forever.


I agree that Locale.ROOT is the canonical choice. But in this case, Locale.US also makes sense: it isn't some abstract "the US is some kind of global default", it is saying "we know we are upcasing an English word".


Wouldn't the British locale make more sense then?


> However, unlike TFA and other comments, I don't suggest `Locale.US`. That's a little too US-centric. The canonical locale is in fact `Locale.ROOT`. Granted, in practice it's equivalent, but I find it a little bit more sensible.

I have no idea what `Locale.ROOT` refers to, and I'd be worried that it's accidentally the same as the system locale or something, exactly the sort of thing that will unexpectedly change when a Turkish-speaker uses a computer or what have you.


> I'd be worried that it's accidentally the same as the system locale or something

The API docs clearly specify that Locale.ROOT “is regarded as the base locale of all locales, and is used as the language/country neutral locale for the locale sensitive operations.”
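Concretely, `Locale.ROOT` has empty language and country fields, so nothing about it can depend on the system; a quick check:

    import java.util.Locale

    Locale.ROOT.getLanguage   // "" -- no language
    Locale.ROOT.getCountry    // "" -- no country
    Locale.getDefault()       // the system locale; ROOT never aliases it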


> However, unlike TFA and other comments, I don't suggest `Locale.US`. That's a little too US-centric. The canonical locale is in fact `Locale.ROOT`. Granted, in practice it's equivalent, but I find it a little bit more sensible.

Isn't it kind of strange to say that Locale.US is too US-centric, and therefore we'll invent a new, fictitious locale, the contents of which are all the US defaults, but which we'll call "the base locale of all locales"? That somehow seems even more US-centric to me than just saying Locale.US.

Setting the locale as Locale.US is at least comprehensible at a glance.


I guess it's one way to look at it. I see it as: I want a reproducible locale, independent of the user's system. If I see US, I'm wondering if it was chosen to be English because the program was written in English. When I localize the program, should I make that locale configurable? ROOT communicates that it must not be configurable, and never dependent on the system.


I am surprised to find Java's Locale.ROOT is not American.

    import java.text.DateFormat;
    import java.text.NumberFormat;
    import java.util.Date;
    import java.util.Locale;

    DateFormat dateFormat = DateFormat.getDateInstance(DateFormat.DEFAULT, Locale.ROOT);
    System.out.println(dateFormat.format(new Date()));

    dateFormat = DateFormat.getTimeInstance(DateFormat.DEFAULT, Locale.ROOT);
    System.out.println(dateFormat.format(new Date()));

    NumberFormat numberFormatter = NumberFormat.getNumberInstance(Locale.ROOT);
    System.out.println(numberFormatter.format(12.34));

    NumberFormat currencyFormatter = NumberFormat.getCurrencyInstance(Locale.ROOT);
    System.out.println(currencyFormatter.format(12.34));

which prints:

    2025 Oct 13
    10:12:42
    12.34
    ¤ 12.34

Even POSIX C is less American than I expected, with a metric paper size and no currency symbol defined (¤ isn't in ASCII). Only the American date format.


That's not the American date format either, which would be Oct 13, 2025.


I assume that Locale.ROOT will stay backwards-compatible, whereas theoretically Locale.US could change. What if it changes its currency in the future, for example, or its date format?


It is a programming-language-agnostic equivalent of the POSIX C locale, with Unicode enhancements.


Or Scala. Or Kotlin. Or any of the other languages that had most of these features years if not decades before Java. ;)


Yeah that's very much an explicit design philosophy of Java, dating way back. Let other languages experiment, and adapt what proves useful.

It hasn't worked out in terms of delivering perfect language design, but it has worked out in the sense that Java has an almost absurd degree of backward compatibility. There are libraries that have had more breaking changes this year than the Java programming language has had in the last 17 releases.


What other language made them think checked exceptions were a good idea?


They are a good idea. They solve the problem that you don't know where an exception is coming from (the "invisible control flow" complaint), and let the compiler help you to avoid mistakes when refactoring. There is zero valid reason to hate on checked exceptions.


The only problem with them is that they don’t work well with lambdas (and related features like Stream). If you need to call a method that throws a checked exception inside of a Stream, there’s no good way to pass it up other than re-throwing it as unchecked or some other hack (like collecting all the thrown exceptions in a separate loop).

A different implementation of lambdas that allowed for generic exceptions would probably solve it, but that introduces other issues with the type system.

My other complaint is that the standard library didn't have enough pre-made exceptions to cover common use cases.


When you use lambdas you lose control over when and how often your code gets executed. Since checked exceptions are very often thrown by code that has side effects, I'd consider the friction to be a feature.

> collecting all the thrown exceptions in a separate loop

It's really not comfortable to do so in Java since there is no standard `Either` type, but this is also doable with a custom collector.
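For contrast, a sketch of what that pattern looks like in Scala, where `Either` is standard and `partitionMap` does the collecting (the `parsePort` helper is made up):

    import scala.util.Try

    def parsePort(s: String): Either[Throwable, Int] =
      Try(s.toInt).toEither

    // One pass: failures go left, successes go right.
    val (failures, ports) =
      List("80", "443", "oops").map(parsePort).partitionMap(identity)
    // failures: one NumberFormatException (for "oops"); ports: List(80, 443)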


> Since checked exceptions are very often thrown by code that has side effects, I'd consider the friction to be a feature.

This is true, but I think it's partly true because checked exceptions are cumbersome here. In my ideal world, the majority of functions would throw exceptions, covering cases that today are either missed or thrown as unchecked exceptions.


Some inspiration came from C++, Modula-3, CLU, etc. (note: inspiration, not validation of the idea).

They have existed since v1, which had a very different philosophy than the Java of the 2010s-2020s. The 1990s were an interesting time in language design and software engineering. People started reflecting on previous experiences of building software and trying to figure out how to build better, faster, and with higher quality. At the time, checked exceptions were an untested idea: it felt wrong not to have them based on previous experience with exceptions in C++ codebases, but there were no serious arguments against them.


I assume it was the other way around: a slight twist to exceptions, enforced only by the compiler (the JVM doesn't care about checked vs. unchecked), probably seemed a cheap and reasonable way to implement explicit error handling, given that the Java ergonomics of the time didn't offer any convenient and performant way to return multiple values instead.


I always thought of it as a reaction to the “your program can throw anywhere for any reason due to an exception” nature of C++.

So they added checked exceptions. That way you can see that a function will only ever throw these two types of exceptions. Or maybe it never throws at all.

Of course a lot of people went really overboard early on creating a ton of different kinds of exceptions making everything a mess. Other people just got into the habit of using RuntimeExceptions for everything since they’re not checked, or the classic “throws Exception“ being added to the end of every method.

I tend to think it's a good idea and useful. And I think a lot of people got a bad taste in their mouth early on. But if you're going to have exceptions, and you're not going to give some better way of handling errors, I think we're probably better off than if there were no checked exceptions at all.


They are a good idea. Checked errors are so important for correctness. HN’s darling Rust exclusively uses checked errors.


Rust has panic, although intended for “unrecoverable” errors only.


Checked exceptions are a good idea.


Indeed, `let`s and `const`s incur a significant performance penalty. This is also why the Scala.js compiler emits `var`s by default, even when targeting very recent versions of ECMAScript.

The good news is that we can still write our Scala `val`s and `var`s (`const` and `let`) in the source code, enjoying good scoping and good performance.


> `let`s and `const`s incur a significant performance penalty.

Is that still true? Early versions of V8 would do scope checks for things that weren't declared with var but it doesn't do that any more. I think const and let are lowered to var representation at compile time now anyway, so when the code is running they're the same thing.


I'm sure it can do that in many cases. But if the scopes are a bit complicated, and in particular when variables are captured in lambdas, it's just not possible. The semantics require the TDZ behavior. If you can statically analyze that the TDZ won't be triggered, you can lower to `var`, but otherwise you have to keep the checks.


And here I was, thinking I should finally embrace const and let for performance reasons... Shouldn't the compiler, in theory, have more room for optimization if it knows the variable won't be changed? Or is all the scope checking apparently more expensive?


I wonder how many companies are still using Scala.js. Scala was fun to work with; I wish it were more popular these days.


Usage of Scala.js is steadily growing. Several indicators suggest that 1 in 5 Scala developers use Scala.js at this point. It's regularly brought up as one of Scala's strongest suits.

Usage of Scala itself is less shiny if you look at market share. But I believe it's still growing in absolute numbers, only quite slowly.


>Usage of Scala.js is steadily growing. Several indicators suggest that 1 in 5 Scala developers use Scala.js at this point.

Wow! 4 out of 20 total ain't bad!

Joking aside, the starting set (Scala developers) is already small, so it's not like Scala, much less Scala.js, is going to be a major player anytime soon.


I'm not surprised. I have introduced several people to gaming, both adults and children. I let them all start with the default settings, and I don't even tell them there are settings. Then I observe their movements. I observe whether they consistently (or very often) start looking the wrong way before correcting. If they do that a lot, I change the settings, and it's smooth sailing from there.

So from my anecdotal perspective, explanations based on previous experience make no sense. It had to be something more innate, more related to how our brains are "wired".

Some people invert Y but not X. This is the most surprising to me. Most I've seen invert both. I don't remember having seen someone invert X but not Y.

Personally I invert both, except for games with a mouse to aim (like 3rd person shooters). In that case I invert neither. Go figure.


> Some people invert Y but not X. This is the most surprising to me. Most I've seen invert both. I don't remember having seen someone invert X but not Y.

Interesting, because I've never seen someone invert X. They either invert Y, or neither. Personally, I invert Y only in flight games; anything else feels wrong to me.


If your language and its compiler use JS String Builtins (part of Wasm 3.0) for their strings, then there is no cost to give them to JS and the DOM.


Wasm 3.0, with its GC and exception support, contains everything you need. The rest is up to the source language to deal with. For example, in Scala.js [1], which is mentioned in the article, you can use the full extent of JavaScript interop to call DOM methods right from inside your Scala code. The compiler does the rest, transparently bridging what needs to be bridged.

[1] https://www.scala-js.org/doc/project/webassembly.html
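For instance, a small hand-written facade is enough to poke at the DOM; a sketch (real projects would typically use the scalajs-dom library instead):

    import scala.scalajs.js
    import scala.scalajs.js.annotation._

    // Minimal facade for the one DOM entry point we need.
    @js.native
    @JSGlobal("document")
    object document extends js.Object {
      def getElementById(id: String): js.Dynamic = js.native
    }

    def main(): Unit = {
      document.getElementById("app").textContent = "Hello from Scala.js"
    }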


Binaryen has a lot of baggage from Wasm's early days, when it was still a strict AST. Many newer features are difficult to manipulate in its model.

In our compiler (featured in TFA), we chose to define our own data structure for an abstract representation of Wasm. We then wrote two emitters: one to .wasm (the default, for speed), and one to .wat (to debug our compiler when we get it wrong). It was pretty straightforward, so I think the instruction set is quite nice. [1]

[1] https://github.com/scala-js/scala-js/tree/main/linker/shared...
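To give an idea of the shape, a hypothetical miniature of such a data structure (not the actual scala-js sources linked above):

    // A tiny slice of a Wasm instruction ADT, plus a .wat emitter.
    sealed trait Instr
    final case class I32Const(value: Int) extends Instr
    case object I32Add extends Instr
    final case class LocalGet(index: Int) extends Instr

    def toWat(instr: Instr): String = instr match {
      case I32Const(v) => s"i32.const $v"
      case I32Add      => "i32.add"
      case LocalGet(i) => s"local.get $i"
    }
    // A second emitter writes the same tree as binary .wasm opcodes.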

