Apple is making surprisingly small improvements, considering how people thought the M1 was like the second coming of Christ.
Looks like M1 to M3 is going to be roughly +20-30%. In that same span, AMD is going from Ryzen 5000 to Ryzen 8000.
Even Intel got a significantly bigger improvement than that going from the 10900K to the 14900K, even though they had one trash generation and the 14900K is basically just a 13900K from 2022.
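As a rough back-of-envelope (using the +20-30% figure quoted above, nothing measured), spreading that gain over two generations implies roughly 10-14% per generation:

```python
# Back-of-envelope: implied per-generation uplift if M1 -> M3 is +20-30%
# cumulative over two generations (figures quoted above, not measured).
for total_gain in (0.20, 0.30):
    per_gen = (1 + total_gain) ** 0.5 - 1  # geometric mean over 2 generations
    print(f"+{total_gain:.0%} total over M1->M3  ->  ~+{per_gen:.1%} per generation")
```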
M1 kinda was the “second coming of Christ” in regards to many efficiency metrics in the notebook space.
Its idle efficiency and its power usage in tasks like local video playback or Microsoft Teams completely set a new bar for the likes of Intel and AMD.
Both of them still haven’t matched the original M1 in those efficiency metrics.
They certainly caught up in performance metrics and perf/watt under load, but not in the “lowest power usage possible” metric.
Even Apple’s LPDDR memory PHY is more efficient than Intel’s or AMD’s, because Apple is bigger than both of them and has THE best low-power engineers on the planet.
The CPU cores Apple makes are great, but they are quite large area-wise, and Intel and AMD can compete pretty well at making a core.
Their SoCs are best in class however.
When you set the bar that high with M1, there isn’t really all that much room to improve in the SoC, and what’s left are the cores themselves where Apple is going to be innovating similarly to AMD and Intel.
M1 was Apple’s equivalent to AMD’s Zen 1.
A fresh-start product where every low-hanging fruit was implemented, resulting in massive gains over its predecessor.
Apple seems to be focused more on efficiency. They are stating up to 22 hours of battery life on the new MacBooks.
I think the reality is, since the M1 these machines are so fast that 99% of users don’t need any more performance. Until there is something new in computing that requires more performance, we really don’t need faster CPUs.
Yeah I think we’re at a point where there are so few edge cases where dumping more raw performance improves the workflow in any significant way.
Yes, you can upgrade your CPU to save 20 minutes on your 3-hour render. What does that get you, really? You’re either doing this at the end of the day or overnight, so it makes very little difference, except in the few edge cases where someone needs to push out as many of these renders in a day as possible (rough numbers below).
It’s time for the software and workflows to evolve. I’ve been a longtime Windows user and am seriously considering moving over to a MacBook Pro that I can use as a single computer, which I would just plug into a dock when I get to the office and use in place of my current desktop setup. I just don’t want to hear the fans spinning and don’t need 300-400 W constantly being pumped into my room. I’ll happily take a 5% performance hit in exchange for a 90% reduction in power usage, a 100% reduction in noise, and great portability, without having to constantly sync my laptop and my desktop.
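To put rough numbers on the render example above (the 20-minute and 3-hour figures are the ones quoted; the 24-hour batch window is hypothetical):

```python
# Rough arithmetic for the render example: 20 minutes saved on a 3-hour render.
old_min = 180   # 3-hour render, as quoted above
new_min = 160   # 20 minutes saved by the faster CPU
print(f"speedup: {old_min / new_min:.2f}x  (~{1 - new_min / old_min:.0%} less time)")

# It only matters if you chain renders back to back, e.g. a hypothetical 24h batch:
print(f"renders per 24h, old CPU: {24 * 60 // old_min}")
print(f"renders per 24h, new CPU: {24 * 60 // new_min}")
```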
Not an Apple use case per se, but if you play economic strategy games (transport simulations, city builders, complex town builders) with heavy modding, even modern CPUs can start to buckle in the late game on large maps; this is mostly down to single-threaded (ST) performance.
One (albeit niche) example is Cities in Motion 1 (2011). It has a custom engine that is single-threaded. If you mod in the largest map size, enable free look, and start building out a large network, you will see low FPS at 1440p resolution even on modern CPUs.
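A minimal sketch of why extra cores don’t rescue that case, via Amdahl’s law; the 90% serial share is an assumed number for illustration, not a measurement of the game:

```python
# Amdahl's law: if a fraction `serial` of each frame is single-threaded
# (e.g. the simulation tick in an old custom engine), more cores only
# accelerate the remaining parallel part of the frame.
def amdahl_speedup(serial: float, cores: int) -> float:
    return 1.0 / (serial + (1.0 - serial) / cores)

# Assumed 90% serial frame time, purely for illustration.
for cores in (1, 4, 8, 16):
    print(f"{cores:>2} cores -> {amdahl_speedup(0.90, cores):.2f}x frame-rate gain")
```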
“People” were wrong. The M1 was decently competitive with, like, the Ryzen 5 5500U and 5600U (albeit the battery life was genuinely amazing), but it was nothing even remotely approaching a flagship chip, and at Apple pricing, it has to be to justify its existence.
From 2014 to 2020, Intel was on 14nm. Apple’s M-series was on 5nm from Nov 2020 to Sep 2023, and moved to 3nm in Oct 2023.
M1 was 5nm; M3 is 3nm.
Ryzen 5000 was 7nm; Ryzen 8000 is going to be 4nm (part of the 5nm family).
Intel did go from 14nm++++ to “Intel 7”, so that was roughly two node jumps, I think.