Let’s be honest here: it was never more than a band-aid thrown together in an attempt to keep up with chiplets. Intel is in serious trouble because they still can’t compete with AMD on that front, and chiplets afford AMD a level of production scalability Intel can currently only dream of.
That’s not entirely true. Intel’s latest laptop chips are more advanced than AMD’s in some regards, specifically when it comes to dividing different workloads amongst different chiplets. But that hasn’t led to chips that are actually better for users yet, and on the desktop they still have a long way to go.
Would you happen to be including AMD’s new Strix Point mobile CPU in that comparison? It seems to be at the very top for mobile CPUs currently.
If so, what workloads is Intel still better at?
Absolutely. Strix Point is great, but it’s a monolithic chip; no chiplets are used. Intel’s Meteor Lake and Arrow Lake use several different chiplets, called tiles: separate ones for compute, GPU, SoC (with the RAM controllers, display engine, and a few ultra-low-power E-cores so the compute tile can be switched off completely at idle), and I/O. Different tiles are produced on different process nodes to optimize for cost and performance as needed.
On paper they’re very impressive designs, but that hasn’t translated into chips that are actually faster or more efficient than AMD’s offerings. I’d still choose AMD for a laptop right now, so even with all that impressive tech Intel is lagging behind.
Oh wow, I didn’t realize Strix was monolithic. I just assumed it was multi-die due to the Zen 5c cores.
I’d be curious what games actually utilize multithreading
Basically every one of them made in the past 4 or 5 years?
Some are better than others. CP2077, for example, will happily use all 16 threads on my 7700X, but something crusty like WoW only uses about 4, and Fortnite around 3 unless you’re doing shader compilation, where it’ll use all of them. Still, it’s not 2002 anymore.
The issue is that most games won’t use nearly as many cores as Intel is stuffing onto a die these days. For gaming, having 32 threads via E-cores or whatever is utterly pointless, but having 8 full-fat cores with 16 threads is very much useful.
Many games use multiple threads, but they don’t do so very effectively.
The vast majority of games use Unreal or Unity, and those engines (as products) are optimized to make the developer experience easy - notably NOT to make the end product performant.
It is pretty common to have one big thread that handles rendering and another for most game logic; that’s how Unreal does it ‘out of the box’. Unreal also splits the physics calculations off into multiple threads semi-automatically.
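Roughly, that split looks like the sketch below. This is a minimal, generic illustration in plain C++, not Unreal’s actual code; all the names (FrameSnapshot, GameThread, RenderThread) are made up. The game thread publishes each finished frame’s data under a lock, and the render thread consumes the newest one:

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical: the data the game thread hands to the render thread each frame.
struct FrameSnapshot {
    std::vector<float> transforms;  // e.g. flattened world matrices
};

std::atomic<bool> running{true};
std::mutex frameMutex;
FrameSnapshot latestFrame;  // written by the game thread, copied by the render thread

void GameThread() {
    while (running) {
        FrameSnapshot next;
        // ... game logic, AI, animation state updates would go here ...
        next.transforms.assign(1000, 1.0f);  // placeholder work
        std::lock_guard<std::mutex> lock(frameMutex);
        latestFrame = std::move(next);  // publish the completed frame
    }
}

void RenderThread() {
    while (running) {
        FrameSnapshot frame;
        {
            std::lock_guard<std::mutex> lock(frameMutex);
            frame = latestFrame;  // grab the newest completed frame
        }
        // ... build and submit draw calls from 'frame' ...
    }
}

int main() {
    std::thread game(GameThread);
    std::thread render(RenderThread);
    std::this_thread::sleep_for(std::chrono::seconds(1));  // stand-in for the real game loop
    running = false;
    game.join();
    render.join();
}
```

Note the consequence: if game logic takes too long, the render thread just keeps redrawing the same stale frame, which is exactly the single-thread bottleneck described next.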
Having a lot of moving characters around is taxing because all the animation states have to go through the main thread, which is also doing pathfinding for all the characters and running any AI scripts. Often you can’t completely separate these things, since where a character wants to move may determine whether they walk/run/jump/fly/swim, and those need different animations.
This often leads to the scenario where someone with an older 8+ core chip wonders why the game is stuttering when ‘it is only using 10% of my cpu’: the render thread or game logic thread is saturated, pinning one core at 100% while the other 15 or so threads sit mostly idle, so overall utilization looks tiny.
Effective concurrency requires designing for it very early, and most games are built in iterative refinements with the scope and feature list constantly changing - not conducive to solving the big CS problem of splitting each frame’s calculations into independent chunks.
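To make that trade-off concrete: when per-entity work genuinely has no cross-entity dependencies, the split is easy, as in this toy sketch (plain C++; Entity and UpdateEntity are hypothetical names). The hard part described above is that animation, pathfinding, and AI usually do depend on each other, so real frames can’t be chunked this cleanly:

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Hypothetical per-entity update with no dependencies on other entities --
// the precondition that makes this safe to parallelize at all.
struct Entity { float x = 0, vx = 1; };
void UpdateEntity(Entity& e, float dt) { e.x += e.vx * dt; }

// Chop the entity list into one contiguous chunk per hardware thread.
void ParallelUpdate(std::vector<Entity>& entities, float dt) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    size_t chunk = (entities.size() + n - 1) / n;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        size_t begin = t * chunk;
        size_t end = std::min(entities.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&entities, begin, end, dt] {
            for (size_t i = begin; i < end; ++i)
                UpdateEntity(entities[i], dt);
        });
    }
    for (auto& w : workers) w.join();
}

int main() {
    std::vector<Entity> entities(10000);
    ParallelUpdate(entities, 1.0f / 60.0f);  // one simulated frame at 60 fps
}
```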
Unreal is trying to move animation off the main thread but it will take a while to become standard. You can do it today but it’s not the default.
It’s definitely a hard problem to solve as an “out of the box” solution.
The concept is used by pretty much all games now. It’s just that during the gilded days of Intel, everybody and their mother hardcoded around a max of 8 threads. Now that core counts are significantly higher, game devs opt for dynamic threading instead of fixed threading, which makes Intel’s imbalanced core performance more and more of a detriment. Doom Eternal, for example, uses as many threads as you have available and loads them pretty evenly.
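The fixed-vs-dynamic difference in one small sketch (standard C++; an illustration of the idea only, not how Doom Eternal actually does it):

```cpp
#include <iostream>
#include <thread>

int main() {
    // Fixed threading, common in the quad/octa-core era: a hardcoded cap.
    const unsigned fixedWorkers = 8;

    // Dynamic threading: size the worker pool to whatever the CPU reports.
    // hardware_concurrency() may return 0 if it can't tell, so fall back.
    unsigned reported = std::thread::hardware_concurrency();
    unsigned dynamicWorkers = reported ? reported : fixedWorkers;

    std::cout << "fixed pool: " << fixedWorkers
              << " threads, dynamic pool: " << dynamicWorkers << " threads\n";
}
```

On a 24-core chip the fixed version leaves two thirds of the machine idle; the dynamic version scales with whatever hardware it lands on.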
Honestly, if we’re talking modern games I think games that don’t utilize multithreading to at least some degree would be a significantly shorter list.
All games use it to some extent; the ones that use and need it the most are typically online games where several players are on the same map.
Battlefield and Battlefield-adjacent games, for example, have historically pelted the CPU because they often have massive player counts.
If a game is doing its loading on the game thread, then someone messed up.