It's funny how ideas come and go. I made this very comment here on Hacker News probably 4-5 years ago and received a few downvotes for it at the time (though I was thinking of computers in general).
It would take a lot of work to make a GPU do current CPU-type tasks, but it would be interesting to see how it changes parallelism and our approach to logic in code.
> I made this very comment here on Hacker News probably 4-5 years ago and received a few downvotes for it at the time
HN isn't always very rational about voting. It would be a loss to judge any idea on that basis.
> It would take a lot of work to make a GPU do current CPU-type tasks
In my opinion, that would be counterproductive. The advantage of GPUs is that they have a large number of very simple GPU cores. Instead, just put a few separate CPU cores on the same die, or on a separate die. Or you could even have a forest of GPU cores with a few CPU cores interspersed among them - sort of like how modern FPGAs have logic tiles, memory tiles and CPU tiles spread across the die. I doubt it would be called a GPU at that point.
GPU compute units are not that simple; the main difference from CPUs is that they generally use a combination of wide SIMD and wide SMT to hide latency, as opposed to the power-intensive out-of-order processing used by CPUs. Performing tasks that can't take advantage of either SIMD or SMT on GPU compute units might be a bit wasteful.
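To illustrate the difference, here's a rough numpy analogy of data-parallel work vs. branchy serial work - not actual GPU code, and the arrays/values are made up:

```python
import numpy as np

# SIMD-friendly: the same operation applied independently to every element.
# Wide hardware (GPU lanes, or a CPU's vector unit) can do this across many
# elements at once.
a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
simd_friendly = a * 2.0 + b          # one wide operation, no per-element branching

# SIMD-hostile: each step depends on the previous result and branches on data.
# Lanes would diverge and serialize, so the wide hardware sits mostly idle.
def simd_hostile(values):
    total = 0.0
    for v in values:
        if total > 100.0:            # data-dependent branch
            total -= v
        else:
            total += v * v           # each iteration depends on the last
    return total
```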
Also, you'd need to add extra hardware for various OS support functions (privilege levels, address space translation/MMU) that are currently missing from the GPU. But the idea is otherwise sound; you can think of the proposed 'Mill' CPU architecture as one variety of it.
Perhaps I should have phrased it differently. CPU and GPU cores are designed for different types of loads. The rest of your comment seems similar to what I was imagining.
Still, I don't think that enhancing the GPU cores with CPU capabilities (OoO execution, privilege rings, an MMU, etc. from your examples) is the best idea. You may end up with the advantages of neither and the disadvantages of both. I was suggesting that you could instead have a few dedicated CPU cores distributed among the numerous GPU cores. Finding the right balance of GPU to CPU cores may be the key to achieving the best performance on such a system.
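As a back-of-the-envelope way to think about that balance, something like Amdahl's law works; all of the numbers below are made-up assumptions, not measurements:

```python
# Hypothetical workload: fraction p parallelizes across the GPU cores,
# the remaining (1 - p) runs serially on a CPU core.
def speedup(p, gpu_cores, gpu_core_speed=0.1, cpu_core_speed=1.0):
    """Relative throughput vs. running everything on one CPU core.

    gpu_core_speed: assumed per-core speed of a simple GPU core
    relative to a big CPU core.
    """
    serial_time = (1 - p) / cpu_core_speed
    parallel_time = p / (gpu_cores * gpu_core_speed)
    return 1.0 / (serial_time + parallel_time)

# With 90% parallel work, piling on GPU cores helps a lot...
print(speedup(p=0.90, gpu_cores=1024))   # ~9.2x: capped by the serial 10%
# ...but with 50% serial work, the lone CPU core dominates the runtime.
print(speedup(p=0.50, gpu_cores=1024))   # ~2.0x
```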
As I recall, Gartner made the outrageous claim that upwards of 70% of all computing will be "AI" within some number of years - nearly the end of CPU workloads.
I'd say over 70% of all computing has already been non-CPU for years. If you look at your typical phone or laptop SoC, the CPU is only a small part. The GPU takes the majority of the area, with other accelerators also taking significant space. Manufacturers would not spend that money on silicon if it were not already being used.
> I'd say over 70% of all computing has already been non-CPU for years.
> If you look at your typical phone or laptop SoC, the CPU is only a small part.
Keep in mind that the die area doesn't always correspond to the throughput (average rate) of the computations done on it. That area may be allocated for higher computational bandwidth (peak rate) and lower latency. In other words, to get the results of a large number of computations faster, even if it means that the circuits idle for the rest of the cycles. I don't know the situation on mobile SoCs with regard to those quantities.
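A toy example of that peak-vs-average-rate distinction, with purely invented numbers:

```python
# Invented numbers for illustration: a block with a high peak rate
# that is only busy a small fraction of the time.
peak_rate_gflops = 1000.0     # what the die area buys you: peak rate
busy_fraction = 0.05          # but it only runs 5% of the cycles (bursty workload)

average_rate_gflops = peak_rate_gflops * busy_fraction
print(average_rate_gflops)    # 50.0: big area, modest average throughput

# A smaller block that is nearly always busy can do more total work
# despite a much lower peak rate.
small_peak_gflops = 100.0
small_busy_fraction = 0.9
print(small_peak_gflops * small_busy_fraction)  # 90.0
```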
This is true, and my example was a very rough metric. But the computation density per area is actually way, way higher on GPUs compared to CPUs. CPUs only spend a tiny fraction of their area doing actual computation.
> If you look at your typical phone or laptop SoC, the CPU is only a small part
In mobile SoCs, a good chunk of this is power efficiency. On a battery-powered device, there's always going to be a tradeoff between spending die area making something like 4K video playback more power efficient versus general-purpose compute.
Desktop-focussed SKUs are more liable to spend a metric ton of die area on bigger caches close to your compute.
Going by raw operations performed, that's probably true for computers/laptops if the given workload uses 3D rendering for the UI. Watching a YouTube video is essentially the CPU pushing data between the internet, the GPU's video decoder, and the GPU-accelerated UI.
Looking at home computers, most of "computing" when counted as FLOPS is done by GPUs anyway, just to show more and more frames. Processors are only used to organise all that data to be crunched by the GPUs. The rest is browsing webpages and running Word or Excel several times a month.
Is there any need for that? Just have a few good CPUs there and you’re good to go.
As for what the HW looks like, we already know. Look at Strix Halo as an example. We are just getting bigger and bigger integrated GPUs. Most of the FLOPS on that chip are in the GPU part.
HN in general is quite clueless about topics like hardware, high performance computing, graphics, and AI performance. So you probably shouldn't care if you are downvoted, especially if you honestly know you are correct.
Also, I'd say if you buy, for example, a MacBook with an M4 Pro chip, it already is a big GPU attached to a small CPU.
Yeah, I played a lot of StarCraft 2: by myself, 2v2 with a really talented friend, and 3v3 with two other friends who were total beginners whom I could beat 1v2.
From the bottom up to the upper-mid level, all you need to win is to figure out the macro game of building construction while also producing enough workers and units. With enough of that, no micro is needed; just attack-moving into the enemy is more than enough.
Then at the upper-mid level you're going to run into people who often don't build as effectively, but they'll micro every unit, constantly raid when you don't expect it, scout better than you, and/or just understand which units beat which so as to counter you.
From that point on it becomes much more of an effort to play the game, because then you need to get better in all of those areas while also becoming faster. But to be honest, that point is probably two-thirds of the way up the ladder of all the people playing.
For those who don't know the background, she's one of the favourite daughters of an ex-president under whom corruption soared to new heights. He was really pro-Russia during his time, but eventually several criminal charges against him piled so high that the ruling party decided he had to be replaced. He got angry and left, formed his own party, and was joined by family members and some other loyalists.
A lot of other events transpired, such as him being found guilty of several things for which he had to go to prison. He did briefly go in, but it wasn't to his liking, so he started pressuring all his old friends. Nationwide unrest then erupted, with mobs burning and looting all over, which is another thing his daughter is currently implicated in and will probably go to trial over.
It got so bad that eventually he was pretty much just released (for medical reasons). After that, they just swept it under the rug as if it never happened.
So back to the Russian angle: he's gone there several times for medical reasons, and maybe this was some kind of test run to see if they could get a large number of men over there to fight. But as OP says, if you have to trick people into slavery, then the numbers just don't make any sense.
There are a lot of very poor people here in South Africa; just promising them land (even if it doesn't exist) probably means they could raise 100k or so people pretty easily. BUT the actual government isn't going to like him making big moves behind their back, so it might have been a test run to see what exactly they could or could not do.
There's a bit by a comedian asking what the difference is between grave robbers and archaeologists, but basically it boils down to a question of time.
Those descriptions themselves would be a major archaeological find if they were preserved at all. But chances are those detailed descriptions would have been lost even if the original artifacts had not been looted and were still preserved.
A key fossil, journal entry, or bit of clothing that would help explain "X" is going to stay mute if sold on the black market and kept on someone's shelf. Maybe we'll get lucky and learn about it someday from the heirs - but probably not.
a. Click on a directory in my File Explorer and it opens immediately, always shows the correct headers, and sorting on any column is nearly instant (up until somewhere around XP, probably)
b. Where I am now in Windows 10, sorting can take forever, and because I haven't re-installed in ages it refuses to remember folder views and will constantly change them to whatever it wants
c. In the future saying
- "Winny, open folder ABC and sort it by DEF please"
- "Folder ABC deleted, except for def.txt"
- "NO, I said open it, not delete it! Get it back!"
I myself also tend to do that, but that is behavior the majority of "normal" people see as antisocial, unless you already know them very well or you are the one initiating the conversation.
Listening to people means that you are actively listening to and supporting them in their conversation, not bringing up your own angles to it. When you do that it is perceived by most people as you trying to one-up them in the conversation, instead of what you're actually doing.
In your example it's fine because you started the interaction, but let's turn it around and say you walked into a conversation where people are talking about downtown in ABC. You want to participate and remember that there was a blizzard there in '96, so you bring that up.
Most people will see that as severe ADHD: why are we now talking about a blizzard from 1996? We were just talking about how DEF is happening in ABC later this month.
Pivoting has the same problem; there are social cues that signal your role in the group. Just walking into a conversation while trying to pivot it to your interests is in general quite rude.
> When you do that it is perceived by most people as you trying to one-up them in the conversation
This depends entirely on the content of your reply and how well-trained you are in social cues as well as other unspoken parts of conversation.
It's also not comprehensive advice. Of course you should first help the person on the other side of the conversation reach where they're intending to go in what they're saying.
My advice is more applicable to the "sequence points" of a conversation.
> Just walking into a conversation while trying to pivot it
Doing this would be foolish. You have to read the cues for when the time is right. You also need to develop the right conversational demeanor to pull it off. This necessitates practice.
I'm not sure I agree. When we humans only lived for around 50 years, there was no time to develop the range of other diseases that we are experiencing now in later life, so even if you cure A there will definitely be many, many opportunities to treat anything from B to Z.
And that was why games back in the day came with a little key-binding tool that helped you figure out which keys could be pressed together, and which could not.
Of course, back then it was a different hardware interface and the problem was usually the buffer size; arrow/keypad keys had much longer scan codes and locked things up a lot more often.
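For the curious, a minimal modern equivalent of such a tester can be sketched in Python, assuming pygame is installed - just an illustration of the idea, not the tool those games actually shipped:

```python
import pygame

# Tiny key-rollover tester: hold key combinations and see which ones the
# keyboard/OS actually reports as held down at the same time.
pygame.init()
pygame.display.set_mode((400, 100))   # a window is needed to receive key events
pygame.display.set_caption("Key rollover test - press Esc to quit")

held = set()
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_ESCAPE:
                running = False
            else:
                held.add(pygame.key.name(event.key))
                print("Currently held:", ", ".join(sorted(held)))
        elif event.type == pygame.KEYUP:
            held.discard(pygame.key.name(event.key))
    pygame.time.wait(10)              # don't spin the CPU

pygame.quit()
```

If a combination never shows up in the printed list while you hold it, the keyboard couldn't report it - the same ghosting problem those old key-binding tools were working around.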
I googled a bit and I'm not sure I would follow the new advice, simply because it totally depends on getting help to you fast enough that they can determine whether it's a heart attack or something else.
In the writer's case that help never came, so personally, if I had to choose, I'd rather take the risk of guessing the symptoms wrong and making things some percentage worse versus possible death.