Hacker News | new | past | comments | ask | show | jobs | submit | 0x445442's comments

The co-founder of Wikipedia disagrees with you.

https://larrysanger.org/2025/08/on-the-cybersecurity-subcomm...


And Peter Thiel is an anagram of The Reptile.


My guess is higher run totals.


Perhaps surprisingly, total plate appearances per game are relatively stable: https://www.baseball-reference.com/leagues/majors/misc.shtml

Pitching has been dominating hitting the last few years, and both runs and batting averages are relatively low at the moment.

Total pitchers used, however, is up.


I find this dubious, since the effect she was describing is caused by resonance at a particular frequency. Since, in the example provided, the source is an amplified speaker pushing air in both cases, the outcome should be the same. The more famous demonstration of this principle is shattering a glass, and I would be surprised if that hadn't been done with digital signal inputs.


> I find this dubious

I agree. In both cases a continuously varying voltage is driving speaker cone deflection. If the voltages of two different signals vary in precisely the same way, the cone will deflect to exactly the same degree and the resulting pressure wave will generate the same resonant response from any surface it encounters. When properly implemented, today's high-end, esoteric ADCs and DACs have bandwidth, frequency response and fidelity far exceeding these requirements.

Some of the confusion comes from the fact that back when consumer audio transitioned to digital and these production workflows were new, some early digital recordings were incorrectly engineered or mastered, creating artifacts such as aliasing which critical listeners could hear. Some people assumed the artifacts they heard were innate to all digital audio rather than just an incorrect implementation of a new technology. Even today it's possible to screw up the fidelity of a digital master, but it's rarely an issue because workflows are standardized and modern tooling has default presets based on well-validated audio science (for example: https://en.wikipedia.org/wiki/Noise_shaping#Dithering).

But even in the analog era it was always a truism in audio and video engineering that "there are infinite ways to screw up a signal but only a few ways to preserve it." That remains true today. To me, one of the best things about modern digital tooling is that it's much easier to verify correctness in the signal chain.
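To make the "same voltage, same wave" argument concrete, here's a rough sketch (the function names are invented for illustration, not from any real audio library) of why correctly implemented 16-bit quantization bounds the round-trip error to half of one quantization step, far below audibility:

```typescript
// Sketch: 16-bit PCM quantization keeps the round-trip error within
// half of one least-significant bit. Names are illustrative only.

const FULL_SCALE = 32767; // max positive value for signed 16-bit PCM

function quantize16(sample: number): number {
  // Clamp to [-1, 1], then round to the nearest 16-bit step.
  const clamped = Math.max(-1, Math.min(1, sample));
  return Math.round(clamped * FULL_SCALE);
}

function dequantize16(code: number): number {
  return code / FULL_SCALE;
}

// Measure the worst-case round-trip error over one second of a 440 Hz
// sine wave sampled at 44.1 kHz.
let maxError = 0;
for (let n = 0; n < 44100; n++) {
  const x = Math.sin((2 * Math.PI * 440 * n) / 44100);
  const err = Math.abs(dequantize16(quantize16(x)) - x);
  maxError = Math.max(maxError, err);
}

// Half an LSB at 16 bits is 0.5 / 32767 ≈ 0.0000153; allow a tiny
// slack for floating-point rounding.
console.log(maxError <= 0.5000001 / FULL_SCALE);
```

The error never exceeds half a step regardless of the signal's shape, which is why a properly dithered 16-bit chain reproduces the same cone deflection as its analog counterpart.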


We used to have a very nice option called Blackberry. Oh how I miss those phones.


Dozens of microservices? Oh, one should be so lucky. Try hundreds, where the complexity of the whole system rises non-linearly as a function of each microservice.


There is good content on the site, but it's nearly impossible to get at it conveniently with the UI options. I've glanced at their APIs, and it looks possible to build clients that present only what you want, in chronological order and filterable by account, but it would take work.



Weak/Dynamic vs. Strong/Static typing.

I used to complain about the latter, then I grew up.


I am a convert as well. I wouldn't say that I gave in or grew up, but that I was arguing from ignorance. On a group project someone proposed trying TypeScript, I jumped in, and completely fell in love.

My only arguments against it were the slower development imposed by a build/compile step, and that runtime error reports would not map back to the source code.


Keyboard typing speed and compile times aren't really the bottleneck for the overwhelming majority of software engineers. For them, those times are dwarfed by the time it takes ServiceNow access tickets to get fulfilled.


Keyboard typing is not a bottleneck. I always cringe when I hear people cry about long method names with those really big tears.

I did find that build times impacted my concentration during rapid experimentation or troubleshooting. When I first got into TypeScript my compile times were only 8 seconds. Then, on a dramatically larger project of around 100 files and more than 50k lines of code, they climbed to 30+ seconds. I switched to SWC and they dropped to about 8 seconds, before creeping back up to about 11. On my current, smaller project, compiles with SWC took about 3 seconds. I recently dropped SWC for Node's native TypeScript support and have no compile step at all; Node appears to take about 1.5 seconds to strip types and load the project into memory, and I'm comfortable with that.

I know those numbers sound small, but they still interrupt my concentration compared to the near-zero time it takes to just run plain JavaScript. You also have to understand I'm into extreme performance: one app I wrote, the UI portion of an operating system, loads in the browser with full state restoration within 80ms of the page request.


30 seconds for 100 files sounds horrible. Did that time include tests?


What were your reasons for complaining then?


Oh, the typical ones, like all this type checking is stifling and slowing me down.


If you're checking and validating inputs into a method, and you're writing web applications where everything is text over HTTP, then type-checking annotations and the like are a bit overkill.


If you use a statically typed language, it does all that for you automatically.
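A minimal TypeScript sketch of that point (the interface and function names are invented for illustration): once the shape is declared, the compiler rejects malformed calls, so the method body carries no hand-written validation.

```typescript
// A declared shape: the compiler enforces it at every call site.
interface CreateUserRequest {
  name: string;
  age: number;
}

function createUser(req: CreateUserRequest): string {
  // No "is name a string?" / "is age present?" checks needed here;
  // malformed calls fail at compile time, not at run time.
  return `${req.name} (${req.age})`;
}

console.log(createUser({ name: "Ada", age: 36 })); // compiles and runs

// createUser({ name: "Ada" });          // compile error: 'age' is missing
// createUser({ name: 42, age: "36" });  // compile error: types are swapped
```

Strictly speaking, this guards call sites inside the program; data arriving as text over HTTP still needs one runtime parse at the boundary, after which the static types take over.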


Don’t beat yourself up, we all had that phase!


Are individual agents deployable on their own or does the entire "app" of agents need to be deployed as a single group? If individually deployable, what does this look like from a version control and a CI/CD perspective?


Conceptually, individual subtrees are deployable on their own. In practice, it generally makes sense to mark some subtrees as intended for individual deployment, creating what's known in the BEAM community (Erlang, Elixir and friends) as an "umbrella app". Your umbrella app launches a tree of sub-apps, which are themselves subtrees. Depending on your view on monorepos, each individual sub-app could be its own repo or just a subdivision of a single large repo. You basically take the same approach you would with microservices, but the orchestration is built into the language.


To the best of my knowledge: yes, individual parts are deployable separately, within reason. No, there is explicitly no need to deploy the whole thing at once, and especially not to shut it all down at once.

Erlang works by message passing and duck typing, so, as long as your interfaces are compatible (backwards or forwards), you can alter the implementation, and evolve the interfaces. Think microservices, but when every function can be a microservice, at an absolutely trivial cost.
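The "compatible interfaces" point can be sketched outside the BEAM too. Here's a rough TypeScript stand-in (all names hypothetical, not Erlang APIs): a receiver that reads only the fields it understands keeps working when a newer sender adds fields, which is the property that makes rolling upgrades safe.

```typescript
// Messages are plain data; the handler only inspects the fields it
// understands, so a newer sender can add fields without breaking it.
type Message = { kind: string; [extra: string]: unknown };

function handle(msg: Message): string {
  switch (msg.kind) {
    case "ping":
      return "pong";
    case "echo": {
      const body = msg["body"];
      return typeof body === "string" ? body : "";
    }
    default:
      // Unknown kinds are ignored rather than crashing the receiver.
      return "ignored";
  }
}

// An old-style message and a newer one carrying an extra field both work.
console.log(handle({ kind: "ping" }));                            // pong
console.log(handle({ kind: "echo", body: "hi", traceId: "t1" })); // hi
```

In Erlang the same tolerance usually comes from pattern matching on just the tuple elements or map keys a process cares about, so old and new message shapes can coexist during an upgrade.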

