Computers Are Fast (thundergolfer.com)
3 points by thunderbong on July 13, 2024 | 1 comment


The variability is a bit too high for this test to make sense.

Consider the first Python example:

  for _ in range(n):
    pass
On my Mac using Python 3.12, I get a factor-of-two difference depending on whether the example is run as written (at module level) or inside a function:

  >>> import time
  >>> if 1:
  ...   t1 = time.time()
  ...   for _ in range(100_000_000):
  ...     pass
  ...   print(time.time() - t1)
  ...
  2.442094087600708
  >>> def f():
  ...   t1 = time.time()
  ...   for _ in range(100_000_000):
  ...     pass
  ...   print(time.time() - t1)
  ...
  >>> f()
  1.2900230884552002
  >>> f'{100_000_000 / 2.44:,}'
  '40,983,606.55737705'
  >>> f'{100_000_000 / 1.29:,}'
  '77,519,379.84496124'
If any answer within an order of magnitude is accepted, then both 10M and 100M should count as correct.

But the test author measured 113 million/second, nearly 3x what I get running the test as written. And because answers are quantized to powers of 10, only 100M and 1G are accepted as correct.
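To make the quantization concrete, here is a small sketch, assuming the quiz scores by flooring the measured rate to its power of 10 (my reading of the scoring behavior, not confirmed from the quiz's code). The two measured rates land on different powers:

```python
import math

def floor_power_of_10(rate):
    # Quantize a rate to the power of 10 at or below it
    # (hypothetical scoring rule matching the behavior described above).
    return 10 ** math.floor(math.log10(rate))

print(floor_power_of_10(113_000_000))  # 100000000 -- the author's measurement
print(floor_power_of_10(40_983_606))   # 10000000  -- my as-written measurement
```

So 113M quantizes to 100M while my 40M quantizes to 10M, even though the raw rates differ by less than 3x.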

The higher number comes from the actual benchmark code (I verified this in the repo), which times the loop inside a function rather than at module level as the question shows.
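The scope effect can be reproduced in one script. This is a rough sketch, not the quiz's benchmark: it assumes that running the loop via exec() with a plain dict behaves like module-level code (the loop variable becomes a dict-backed global, as in the REPL), while the function version uses fast locals:

```python
import time

N = 10_000_000  # smaller than the 100M above, to keep runtime short

def function_scope_rate(n):
    # Here the loop variable is a fast local.
    t0 = time.perf_counter()
    for _ in range(n):
        pass
    return n / (time.perf_counter() - t0)

def module_scope_rate(n):
    # exec() with a dict compiles the loop at module scope,
    # where the loop variable is a dict-backed global (like the REPL).
    src = (
        "t0 = time.perf_counter()\n"
        "for _ in range(n):\n"
        "    pass\n"
        "rate = n / (time.perf_counter() - t0)\n"
    )
    g = {"time": time, "n": n}
    exec(src, g)
    return g["rate"]

print(f"function scope: {function_scope_rate(N):,.0f} iterations/s")
print(f"module scope:   {module_scope_rate(N):,.0f} iterations/s")
```

On CPython the function-scope rate is typically well above the module-scope rate, which is exactly the gap between my two REPL measurements.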

So three factors stack up: 113M sits just above a power-of-10 scoring boundary, the benchmark machine appears to be faster than mine, and the benchmark times an easier-to-optimize form of the loop than the one shown. As a result, my answer of 10M was marked incorrect, even though my measured 40M is within an order of magnitude of 113M.



