Hacker News | Flow's comments

Looks perfect here. iOS Safari

Stack Overflow hasn't felt like a welcoming and humane place for the last 10+ years, at least to me.

Actually I think it never did.

It started when I was new there and couldn't write answers, only comments, and then got blasted for posting answer-like comments. What was I supposed to do? I engaged less and less and finally asked them to remove my account.

And then it seems like the power-users/moderators just took over and made it even more hostile.

I hope Wikipedia doesn't end up like this despite some similarities.


I don't think the reputation system ever worked that way - new users could always answer questions, but comments required more reputation.


OK, you might be right and I got it backwards. It still felt wrong at the time before I got enough points.


I don't get why Nvidia can't do both. Is it because of the limited production capacity at the fabs?


Yes. If you're bottlenecked on silicon and secondaries like memory, why would you want to put more of those resources into lower margin consumer products if you could use those very resources to make and sell more high margin AI accelerators instead?

From a business standpoint, it makes some sense to throttle the gaming supply some. Not to the point of surrendering the market to someone else probably, but to a measurable degree.


We will have to wait and see, but my bet is that Nvidia will move to the leading-edge N2 node earlier now that they have the margin to work with. Both Hopper and Blackwell were too late in the design cycle. The AI market will continue to buy the latest and greatest, leaving gaming on a mainstream node.

Nvidia using a mainstream node has always been the norm, considering most fab capacity goes to mobile SoCs first. But I expect the internet/gamers will be angry anyway because Nvidia doesn't provide them with the latest and greatest.

In reality, the extra R&D cost of designing on a leading-edge node will be amortised by all the AI orders, which gives Nvidia a competitive advantage at the consumer level when they compete. That is assuming there is competition, because the most recent data shows Nvidia owning 90%+ of the discrete GPU market share, with 9% for AMD and 1% for Intel.


Ever thought of writing an emulator? On Reddit there's /r/EmuDev, which is a nice place.

For example, you could start by writing a CHIP-8 emu, then a Space Invaders emu. After Space Invaders most people write a Game Boy emu (almost the same CPU as Space Invaders, and the hardware is well documented), but you could try an 8086 PC if you want to know more about "real" computers.

There are free BIOSes you can use, and FreeDOS, and the rest of the machine is pretty well documented.
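The CHIP-8 suggestion is popular precisely because the interpreter core fits in a few lines. A minimal Python sketch of the fetch/decode/execute loop, handling just two illustrative opcodes (6XNN: set register, 7XNN: add to register) — the class name and structure are my own, not a reference implementation:

```python
# Minimal CHIP-8 core sketch (illustrative, not a full emulator):
# 4 KB of memory, 16 8-bit registers, programs load at 0x200.
class Chip8:
    def __init__(self):
        self.memory = bytearray(4096)
        self.v = [0] * 16       # registers V0..VF
        self.pc = 0x200         # programs start at address 0x200

    def load(self, rom: bytes):
        self.memory[0x200:0x200 + len(rom)] = rom

    def step(self):
        # Fetch: every opcode is 2 bytes, big-endian.
        op = (self.memory[self.pc] << 8) | self.memory[self.pc + 1]
        self.pc += 2
        x = (op >> 8) & 0xF     # register index from the opcode
        nn = op & 0xFF          # 8-bit immediate value
        # Decode/execute only two opcodes, for illustration.
        if op >> 12 == 0x6:     # 6XNN: VX = NN
            self.v[x] = nn
        elif op >> 12 == 0x7:   # 7XNN: VX += NN (wraps at 8 bits)
            self.v[x] = (self.v[x] + nn) & 0xFF

cpu = Chip8()
cpu.load(bytes([0x60, 0x05, 0x70, 0x03]))  # V0 = 5; V0 += 3
cpu.step()
cpu.step()
print(cpu.v[0])  # → 8
```

A real emulator adds the remaining ~33 opcodes, a 64×32 display, timers, and input, but the dispatch skeleton stays this shape.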


I started a SvelteKit project last year.

The compiler and preprocessors are very picky about accessibility issues like these.

It was annoying in the beginning, but I tried to follow the rules it set out. It feels very professional, and I’m thankful for the guidance.

What’s annoying now is when my coworkers think I’m making things unnecessarily complicated by trying to follow these rules and guidelines. A <div> with an onclick is so easy compared to a <button> with extra styling and sometimes a call to preventDefault().
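To illustrate the tradeoff with a hypothetical snippet (not from my project): the clickable <div> only works for mouse users, and matching real button behavior by hand takes several extra attributes and handlers that the native element provides for free.

```html
<!-- The tempting shortcut: invisible to keyboard and screen-reader users -->
<div onclick="save()">Save</div>

<!-- What the div actually needs to behave like a button -->
<div role="button" tabindex="0"
     onclick="save()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') save()">
  Save
</div>

<!-- The element that does all of this natively; restyle as needed -->
<button type="button" onclick="save()">Save</button>
```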


I had that experience working with eslint-angular. It flagged quite a few things I realised I hadn't been doing properly before.


Funny, and the persistent memory image that everyone outside the Smalltalk/Lisp community seems to hate is basically your normal filesystem now.


The irony is that anyone using IDEs is using the same idea, as all of them use a virtual filesystem layer to simulate the same capabilities as the image approach.


Not really, because you still recompile and start the program from scratch, rather than modifying the code that executes on the still-existing data structures.

Edit: rather than just naysaying, it occurred to me to reference the notion of Orthogonal Persistence, which the image-based approach provides (not without drawbacks) but IDEs typically don’t. Previous HN discussion: https://news.ycombinator.com/item?id=39615228


Kind of, see the Cadillac model for Energize C++, born out of Lucid Lisp.

https://dreamsongs.com/Cadillac.html


Did you ever play Nebulus? The ZX Spectrum version looks as good as the C64 version, just with fewer colors. Nebulus is from 1988.

https://www.youtube.com/watch?v=ZAud8w5mTa4


I doubt that. It takes more or less 100% of the CPU on the affected scanlines.


This was damn cool. Watching and listening to it, I wonder which is harder: producing video with a sound chip, or producing audio with a video chip. Fun stuff.


After reading the article it's clear that the former is much harder, because video needs much higher bandwidth than sound chips are designed to produce, and the hardware even contains extra hurdles, like a bandpass filter that removes higher frequencies even if you manage to hack the chip into producing them.

That's why the video basically happens in only one dimension instead of two.
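A rough back-of-envelope comparison shows the size of that bandwidth gap. The figures below are my own assumptions for a PAL C64 (≈7.88 MHz dot clock; one sound-register write per scanline as a generous audio-path update rate), not numbers from the article:

```python
# Back-of-envelope: why video-out-of-a-sound-chip is the hard direction.
# Assumed PAL C64 figures (my assumptions, not from the article).
pixel_clock_hz = 7_880_000        # PAL C64 dot clock, ~7.88 MHz
scanlines_per_frame = 312         # PAL raster lines
frames_per_second = 50            # PAL refresh rate

# Hammering a sound register once per scanline gives this update rate:
audio_updates_hz = scanlines_per_frame * frames_per_second  # 15,600/s

ratio = pixel_clock_hz / audio_updates_hz
print(f"video needs ~{ratio:.0f}x the rate the sound path can deliver")
```

A gap of several hundred times is consistent with the demo only being able to modulate one value per scanline — i.e., video in one dimension.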


I like good pixel dithering. The C64 demo scene has become really good at it. Just look at the girl picture in the frozen intro screen. It shows really good taste in picking colors from that weird palette, too.

https://www.youtube.com/watch?v=P5E6z7AMxIU
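For readers unfamiliar with the technique: the simplest algorithmic form, ordered (Bayer) dithering, can be sketched in a few lines. This is only an illustration of the general idea — as discussed below, demo artists typically place these pixels by hand rather than running a filter:

```python
# Ordered (Bayer) dithering sketch: reduce 8-bit grayscale to 1-bit
# using a 2x2 position-dependent threshold matrix.
BAYER_2X2 = [[0, 2],
             [3, 1]]

def dither(pixels):
    """pixels: list of rows of 0..255 values; returns rows of 0/1."""
    out = []
    for y, row in enumerate(pixels):
        out_row = []
        for x, value in enumerate(row):
            # The threshold varies with pixel position, trading spatial
            # resolution for apparent intermediate gray levels.
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4
            out_row.append(1 if value / 255 > threshold else 0)
        out.append(out_row)
    return out

# A flat 50% gray dithers to a checkerboard:
for row in dither([[128] * 4 for _ in range(4)]):
    print(row)
```

Larger Bayer matrices (4×4, 8×8) give more apparent gray levels; hand dithering beats all of them by adapting the pattern to the image content.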


Is that real-time dithering the demo authors implemented though?

It seems like a static image they could use any graphics program to produce...


This would've been meticulously dithered by hand by the artist to achieve a particular look, and the quality of that is what the GP was praising, if I understand correctly.

