Hacker News | intothemild's comments

You're right. Trump isn't Hitler...

Hitler's dead.


[flagged]


I envy your willingness to debate here. It's a pretty hostile place for a position you're defending.

100%. It goes to show you that intelligence is not always deployed with integrity.

I cannot understand the idea of using JavaScript that compiles into "native code".

At what point do JavaScript developers realise that this is all convoluted and begin to use languages better suited to the job?

You want something that can compile into a binary with multiple architectures, multithreading, types, etc? Please use a different language that's built from the ground up to achieve that.

You want something that was designed to add some sugar onto a website? Then yes, JavaScript is probably best there.

Not everything needs to be written in this one language that's not designed for it. Just because you can, doesn't mean you should.


Actually, TypeScript is an excellent language (in my view) for targeting native code. It reads cleaner than Java, C#, and even Go in many cases - at least to me.

For example, JS/TS's file-path based imports are more intuitive: several languages require explicit namespaces even though well-written code is already organized into directories. Of course, all of these design choices are subjective. In fact, disagreement with a few people in the C# user community is one of the reasons I started this project.

Another example: top-level functions, and being able to export them trivially.
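As a small illustration of that point (my own sketch, not code from the project), a TypeScript file can export a plain function directly, with no enclosing class or namespace declaration, and consumers import it by file path:

```typescript
// A top-level function, exported directly from the file.
// No wrapper class or explicit namespace is needed.
export function clamp(value: number, lo: number, hi: number): number {
  return Math.min(Math.max(value, lo), hi);
}

// Elsewhere, a consumer would import it by file path:
//   import { clamp } from "./clamp";
console.log(clamp(15, 0, 10)); // prints 10
```

In Java or C# the equivalent would have to live inside a class, which is exactly the kind of ceremony being contrasted here.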

> At what point do JavaScript developers need to realise that this is all convoluted, and begin to use languages better suited for the job.

I'd like to know what makes TS convoluted. Here's an example of multi-threading: https://github.com/tsoniclang/proof-is-in-the-pudding/blob/m...
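The linked repo isn't reproduced here, but as a generic illustration (my own sketch, using Node's standard `worker_threads` module rather than anything project-specific), multi-threading in TypeScript can be as simple as:

```typescript
import { Worker } from "node:worker_threads";

// Run a computation on a separate OS thread using an inline (eval-mode)
// worker script. The worker squares its input and posts the result back.
function squareInWorker(n: number): Promise<number> {
  const source = `
    const { parentPort, workerData } = require("node:worker_threads");
    parentPort.postMessage(workerData * workerData);
  `;
  return new Promise((resolve, reject) => {
    const worker = new Worker(source, { eval: true, workerData: n });
    worker.once("message", resolve);
    worker.once("error", reject);
  });
}

squareInWorker(12).then((result) => console.log(result)); // prints 144
```

Not zero boilerplate, but hardly "convoluted" either.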


The comment was specifically about TypeScript. Unlike the ancient JavaScript versions best used for web sugar that you're possibly thinking of, it's a highly pragmatic and well-designed general-purpose programming language with a unique and very powerful type system.

I guess it's because of the already-available packages.

Typescript is an excellent language, though. I don't blame people for wanting to hang on to it.

If I remember correctly, the system was broken into trivially. There was supposed to be some random value, but for some reason it was always the same value, 7 or something.

Nobody tried to hack it; everyone assumed it was impossible. But when they removed Linux, people tried, and it was broken very quickly.


> what problem this actually solves.

Exactly. Meta also decided to make some changes: from what I understand they cut staff, and have signalled that the hardware will live longer between models. They also seem to be signalling that the next model will prioritise gaming, which seems the most logical choice for VR/AR.

Spatial computing just doesn't seem like something that's solving a problem.


Just to add to this, I would personally use ABS/ASA, or a fibre variant of those, like carbon or glass fibre (especially if you don't have an enclosed printer). PA (like PA612) is also a good option. Basically, anything that can handle higher temps.

PETG starts to deform at ~75-85 °C. The upper end of that should be fine, but at the lower end, certain things can get close to that temp. So if you've got good airflow, and nothing is passively reaching those temps, you're probably good.

The best thing might be a design with a core that holds the components: print that in a higher-temp filament, and print the outer shells in something a little more aesthetically pleasing.

Also, remember to check whether the filament you're using has any electrical conductivity.


Yes, I would have loved to use a 'better' material, but my printer is not enclosed and I'm not sure it can print CF filaments (I think I would need to change the hot end at least). But soon I will hopefully get my hands on a new BambuLab printer which should let me play with those materials!


Actually you can. You'd need a hardened steel nozzle and that's about it. CF/GF filaments are designed to be easier to print, even on an open printer. It's almost the main reason they're popular.


Yep, you nailed it.


Or especially... stopping at season 2 of this show.


In some ways, Firefly being canceled was the best thing that ever happened to it.


This is the way.


even better


I can't tell if this is sarcasm or not.

They didn't have this kind of compute back when the article was written. Which is the point in the article.


The article was written exactly because they had machines capable enough at the time. But the software worked against it on every level.


I mean, yes and no. It was a software challenge to hit the hardware limit, but the hardware limits were also much lower. My team stopped optimizing when we maxed out the PCI bus in ~2001.


I don't see how you could have read the article and come to this conclusion. The first few sentences of the article even go into detail about how a cheap $1200 consumer grade computer should be able to handle 10,000 concurrent connections with ease. It's literally the entire focus of the second paragraph.

2003 might seem like ancient history, but computers back then absolutely could handle 10,000 concurrent connections.
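For context (my own minimal sketch, not code from the article, which predates Node), the event-driven model the C10K paper advocates is exactly what modern runtimes provide out of the box: one thread, one event loop, and each connection is just a small object plus a socket registered with the OS poller.

```typescript
import * as net from "node:net";

// One process, one event loop: every accepted connection is handled by a
// callback on the same thread, with the OS poller (epoll/kqueue) multiplexing
// the sockets. No thread or process is spawned per connection.
const server = net.createServer((socket) => {
  socket.end("hello\n"); // reply and close; the event loop moves on
});

// Port 0 asks the OS for any free port.
server.listen(0, () => {
  const { port } = server.address() as net.AddressInfo;
  console.log(`listening on port ${port}`);
});
```

Scaling this to 10,000 open sockets is mostly a matter of file-descriptor limits, which is the article's point: the hardware was never the bottleneck.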


In spring 2005 Azul introduced a 24-core machine tuned for Java. A couple of years later they were at 48, and then jumped to an obscene 768 cores, which seemed like such an imaginary number at the time that small companies didn't really poke them to see what the prices were like. Like it was a typo.


Before clusters with fast interconnects were a thing, there were quite a few systems that had more than a thousand hardware threads: https://linuxdevices.org/worlds-largest-single-kernel-linux-...

We're slowly getting back to similarly-sized systems. IBM now has POWER systems with more than 1,500 threads (although I assume those are SMT8 configurations). This is a bit annoying because too many programs assume that the CPU mask fits into 128 bytes, which limits the CPU (hardware thread) count to 1,024. We fixed a few of these bugs twenty years ago, but as these systems fell out of use, similar problems are back.


> Driven by 1,024 Dual-Core Intel Itanium 2 processors, the new system will generate 13.1 TFLOPs (Teraflops, or trillions of calculations per second) of compute power.

This is equal to the combined single precision GPU and CPU horsepower of a modern MacBook [1]. Really makes you think about how resource-intensive even the simplest of modern software is...

[1] https://www.cpu-monkey.com/en/igpu-apple_m4_10_core


Note that those 13.1 TFLOPs are FP64, which isn't supported natively on the MacBook GPU. On the other hand, local/per-node memory bandwidth is significantly higher on the MacBook. (Apparently, SGI Altix only had 8.5 to 12.8 GB/s.) Total memory bandwidth on larger Altix systems was of course much higher due to the ridiculous node count. Access to remote memory on other nodes could be quite slow because it had to go through multiple router hops.


My Apple Watch can blow the doors off a Cray 1. It’s crazy.


Half serious. I guess what I was saying is that this kind of science is still very useful, but more to nginx developers themselves. Most users now don't have to worry about this anymore.

Should have prefixed my comment with "nowadays".


This doesn't add anything to the discussion.

Also, someone whose name is Earl (Norse-Germanic for "king") and King... you seem to be hell-bent on an anti-European slant, whilst having a pretty European name.


I think you'll find that people who are against algo-feeds are against that being the only choice.


Personally speaking, I think the issue is personalized algorithmic feeds.

I want the algorithm to analyze spammers' behavior and filter them out for everyone, not to analyze my behavior to filter content for me.


Yes! Exactly.

