So is the 13 inch officially dead? Sad, 13 inch is the perfect size for a laptop, anything larger should be illegal.
Apple is making surprisingly small improvements, considering how people thought the M1 was like the second coming of Christ.
Looks like M1 to M3 is going to be roughly +20-30%. Over the same period, AMD is going from Ryzen 5000 to Ryzen 8000.
Even Intel got a significantly bigger improvement than that, going from the 10900K to the 14900K, and they had one trash generation, and the 14900K is basically just a 13900K from 2022.

M1 kinda was the “second coming of Christ” in regards to many efficiency metrics in the notebook space.
Its idle efficiency and power usage during tasks like local video playback or Microsoft Teams completely set a new bar for the likes of Intel and AMD. Both of them still haven’t matched the original M1 in those efficiency metrics.
They certainly caught up in performance metrics and perf/watt under load, but not in the “lowest power usage possible” metric.
Even Apple’s LPDDR memory PHY is more efficient than Intel’s or AMD’s, because Apple is bigger than both of them and has THE best low-power engineers on the planet.
The CPU cores Apple makes are great, but they are quite large area-wise, and Intel and AMD can compete pretty well at making a core.
Their SoCs are best in class however. When you set the bar that high with M1, there isn’t really all that much room to improve in the SoC, and what’s left are the cores themselves where Apple is going to be innovating similarly to AMD and Intel.
M1 was Apple’s equivalent to AMD’s Zen 1: a fresh-start product where every low-hanging fruit was implemented, resulting in massive gains over its predecessor.
> Apple is making surprisingly small improvements, considering how people thought the M1 was like the second coming of Christ.
2014-2020 had Intel on 14nm.
Nov 2020 to Sep 2023 was 5nm.
Oct 2023 is 3nm.

M1 was 5nm. M3 is 3nm.
Ryzen 5000 was 7nm. Ryzen 8000 is going to be 4nm (5nm family).
Intel did go from 14++++++ to “Intel 7”, so that was like 2 node jumps, I think.
“People” were wrong. The M1 was decently competitive with, like, the Ryzen 5 5500U and 5600U (albeit the battery life was genuinely amazing), but it was nothing even remotely approaching a flagship chip, and at Apple pricing, it has to be to justify its existence.
Apple seems to be focused more on efficiency. They are stating up to 22 hours of battery life on the new MacBooks.
I think the reality is that, since the M1, these machines are so fast that 99% of users don’t need any more performance. Until there is something new in computing that requires more performance, we really don’t need faster CPUs.
> Until there is something new in computing that requires more performance, we really don’t need faster CPUs.
Yeah I think we’re at a point where there are so few edge cases where dumping more raw performance improves the workflow in any significant way.
Yes, you can upgrade your CPU to save 20 minutes out of your 3-hour render. What does that get you, really? You’re either doing this at the end of the day or overnight, so it makes very little difference outside of the few edge cases where someone needs to push out as many of these renders in a day as possible.
It’s time for the software and workflows to evolve. I’ve been a longtime Windows user and am seriously considering moving over to a MacBook Pro that I can use as a single computer, which I would just plug into a dock when I get to the office and work on what is currently my desktop setup. I just don’t want to hear the fans spinning and don’t need 300-400W constantly being pumped into my room. I’ll happily take a 5% performance hit in order to have a 90% reduction in power usage, a 100% reduction in noise, and great portability, without having to constantly sync my laptop and my desktop.
Not an Apple use case per se, but if you play economic strategy games (transport simulators, city builders, complex town builders) with heavy modding, even modern CPUs can start to buckle in the late game on large maps; this is mostly due to ST performance.
One (albeit niche) example is Cities in Motion 1 (2011). It has a custom engine that is single-threaded. If you mod in the largest map size and free look, and start building out a large network, you will see low FPS at 1440p even on modern CPUs.
Looks like a decent upgrade over the M2; the pricing is bonkers though. The base M3 is over 2000€ and comes with 8GB of RAM, and simply going to 16GB puts you at 2300€.
Seems cool. I hope whoever can afford things like this can tell us all about it!
I am a bit amazed at how often people comment on Apple’s engineering without taking into account that they pay for bleeding-edge fab processes. The energy efficiency is amazing, don’t get me wrong, but some of the bump is from moving to a new, smaller process.
It’s the combination of a lot of things, even on the software end.
True. I also see people comparing efficiency at peak performance a lot more than at lower clocks, which is misleading for these kinds of machines.
What makes Macs soooo much better for battery is their low consumption at low load.

Which makes the M2 Pro vs M3 Pro CPU comparison look substantially worse. If N3B was any good, it should have had a nice uplift. It doesn’t appear to.
Comparisons:

M3 base:

- CPU: 20% faster than M2 base
- GPU: 20% faster than M2 base

M3 Pro:

- CPU: Undisclosed performance vs M2 Pro, 20% faster than M1 Pro
- GPU: 10% faster than M2 Pro

M3 Max:

- CPU: 50% faster than M2 Max
- GPU: 20% faster than M2 Max
It seems the biggest improvements were on the M3 Max. The whole M3 family, though, gets an upgraded screen brightness (from 500 nits to 600), hardware-accelerated ray tracing, and hardware-accelerated mesh shading.
Didn’t they say there was a big leap in GPU? 20% is tiny.
I am surprised. Is this data really accurate?

On another note…
- Memory bandwidth is down. M2 Pro had 200GB/s; M3 Pro only has 150GB/s. M3 Max only has 400GB/s on the higher-binned part.
- Just like the low-spec M3 14" has one fewer Thunderbolt port, it also doesn’t officially support Thunderbolt 4 (unlike M1/M2 Pro before it).
- The M3 Pro loses the option for an 8TB SSD, likely because it was a low-volume part at that spec.
- The M3 Pro actually has more E-cores than the Max (6 vs 4). Interesting to see them take this away on a higher-specced part; seems like something Intel wouldn’t do.
> Memory bandwidth is down. M2 Pro had 200GB/s; M3 Pro only has 150GB/s. M3 Max only has 400GB/s on the higher-binned part.
This really puzzles me. One of the impressive things about the M2 Max and Ultra was how good they were at running local LLMs and other AI models (for a component not made by Nvidia and only costing a few grand). Mostly because of their high memory bandwidth, since that tends to be the limiting factor for LLMs over raw GPU TFLOPS. So for LLM use, this is *really* shooting themselves in the foot. Guess I better buy an M2 Ultra Mac Studio before they get around to downgrading it to the M3 Ultra.
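To make the bandwidth point concrete, here’s a rough back-of-the-envelope sketch. For autoregressive decoding, every generated token has to stream roughly all of the model weights from memory, so bandwidth sets a hard ceiling on tokens per second. All numbers are illustrative assumptions (a hypothetical ~70B-parameter model quantized to ~4 bits per weight, so ~35GB), not benchmarks:

```python
# Back-of-envelope: decode speed of a memory-bandwidth-bound LLM.
# Each generated token streams (roughly) the whole model from RAM,
# so tokens/s <= bandwidth / model size. Figures are illustrative.

def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound decode rate for a bandwidth-bound model."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical ~70B-parameter model at ~4 bits/weight -> ~35 GB of weights.
model_gb = 35.0

for name, bw in [("M2 Max (400 GB/s)", 400.0),
                 ("M2 Ultra (800 GB/s)", 800.0),
                 ("M3 Pro (150 GB/s)", 150.0)]:
    print(f"{name}: ~{tokens_per_second(bw, model_gb):.1f} tokens/s ceiling")
```

Under those assumptions the M3 Pro’s 150GB/s caps out around 4 tokens/s where an M2 Ultra could in principle reach ~23, which is why the bandwidth cut stings for this use case.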
That is not true. The SoC does not have enough Neural Engine cores to run AI training on its own. And for AI inference, it’s not I/O-centric.
What locally hosted models were people running?
I’m not incredibly knowledgeable when it comes to hardware, but a 50% CPU and 20% GPU increase does not seem insignificant for a product upgrade (M2 Max --> M3 Max) in less than a year.
Only the highest-end M3 Max (highest bin, 12 P-cores + 4 E-cores), while the M2 Max has 8+4. But you have to pay a lot more to get those M3 Max laptops. I would wait for independent benchmarks across applications to see the actual improvements.
People would kill if Intel increased their HEDT or i9 performance by 50%.
> It seems the biggest improvements were on the M3 Max.
/u/uzzi38 any indication if the e-cores are less useless at all, with the renewed focus on gaming and not just pure background work?
What are the overall cache structure changes, especially in the GPU etc.? Enough to compensate for the bandwidth reduction? Things like cache structure or delta compression can definitely make a difference; we have seen memory performance ratios soar since Kepler. But it definitely seems more tiered than M1/M2.
Obviously this all exists in the shadow of the N3 trainwreck… N3B vs N3E and the like. Any overall picture of the core structure changes here?
It’s all just so much less interesting than an ARM HEDT workstation would be right now.
Apple’s E-cores were never useless, they’re easily best in class by a large margin. They’ve got the best perf/W in the industry by a country mile really, the things sip power, and while they’re not the fastest little cores, they are still extremely potent. I don’t see them changing away from that core structure any time soon.
As for the GPU, idk off the top of my head, but the IP is likely similar to A17. I wouldn’t expect much - the main advantage is the addition of hardware RT support, but from what we’ve seen the RT capabilities aren’t hugely improved over shaders. Definitely going to be a more modest improvement than prior iterations here.
I was thinking about buying a new MacBook. Now I’m torn between:

M3 Pro chip with 12‑core CPU, 18‑core GPU, and 16‑core Neural Engine

OR

M2 Max chip with 12‑core CPU and 38‑core GPU

Any suggestions?
Found it interesting how the M3 Pro starts with 18GB; that’s not a very common RAM amount. Also, that move to 6P+6E was unexpected.
I find it disappointing 8GB is still the base RAM. Apple usually supports Macs for ~7 years, so are they really planning to support 8GB of RAM in 2030? 8GB was the base RAM included on the 2012 MacBook Pro, 11 years later and it’s the same. The MacBook Air has had 8GB standard since 2016.
12GB should be standard in 2023. There really is no excuse.
LPDDR5 comes in oddball capacities (6, 8, 12GB, etc.), and the Pro has 3 memory chips on package: 3 × 6GB is how you get 18GB.
12 performance cores sounds like waaay too much, except in productivity apps.
How does the M3 Max compare to something like a 4080 or 4090? I specifically work with CAD software and 3D rendering.
Let’s put some science to it, shall we?

Using Digital Foundry’s video as the main performance-orientation source for ballpark estimates, it seems that in gaming applications, depending on the game, the M1 Max is anywhere from 2.1 to a staggering 4.5 times slower than a desktop 3090 (a 350W GPU), with the geomean sitting at an embarrassing 2.76. In rendering pro apps, on the other hand, using Blender as an example, the difference is quite a bit smaller (even though still huge): 1.78.

From Apple’s event today, it seems pretty clear that information on the generic slides pertains to gaming performance, and on the dedicated pro-app slides, to pro apps (with ray tracing). M3 Max / M1 Max in gaming applications therefore appears to be 1.5x, which would put the M3 Max at 1.84x slower still than the 3090 (2.76 / 1.5 ≈ 1.84). Looks like it will take an M3 Ultra to beat the 3090 in games.

In pro apps (rendering), however, M3 Max / M1 Max is declared to have a staggering 2.5x advantage, moving the M3 Max from being 1.78x slower than the 3090 to being 1.4x faster than it (2.5 / 1.78 ≈ 1.4), or alternatively, the 3090 delivering only 0.71x of the M3 Max’s performance.
Translating all of this to the 4000 series using TechPowerUp ballpark figures, it appears that in gaming applications the M3 Max is going to be only very slightly faster than… a desktop 4060 (non-Ti; 115W). At the same time, the very same M3 Max is going to be a bit faster than a desktop 4080 (a 320W GPU) in ray-tracing 3D rendering pro applications (like Redshift and Blender).

An added detail: a desktop 4080 is a 16GB VRAM GPU, and the largest consumer-grade card, the 4090, has 24GB of VRAM, while the M3 Max can be configured with up to 128GB of unified RAM even in a laptop enclosure. That will probably make about 100GB or so available as VRAM, roughly 5x more than on the Nvidia side, which, like the other (unjustly downvoted) commenter said, makes a number of pro tasks that are comically impossible (do not run) on Nvidia very much possible on the M3 Max.

So: anywhere from a desktop 4060 to a desktop 4080 depending on the application. In games, a 4060; in pro apps, “up to a 4080” depending on the app (and a 4080 in at least some of the ray-tracing 3D rendering applications).

Where does that put a CAD app? I’ve no idea; probably something like 1/3 of the way from games toward pro apps? Like 1.45x slower than a desktop 3090? Which puts it somewhere between a desktop 4060 Ti and a desktop 4070.
I’m sure you can find how to translate all of that from the desktop Nvidia cards used here to their laptop variants (which are very notably slower).

I have to highlight for the audience once again the absolutely massive difference in performance improvement between games and 3D rendering pro apps: M3 Max / M1 Max, as announced by Apple today, is 1.5x in games but 2.5x in 3D rendering pro apps, and the M1 Max already was noticeably slower in games than it presumably should have been, given how it performed in 3D rendering apps relative to Nvidia.
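To make the ratio chaining above explicit, here’s a small sketch reproducing the arithmetic. Every input is a ballpark figure from this comment (Digital Foundry estimates and Apple’s own claims), not a measurement:

```python
# Chaining Apple's relative-performance claims against Digital Foundry's
# M1 Max vs desktop RTX 3090 estimates. All inputs are ballpark figures
# from the comment above, not measurements.

rtx3090_vs_m1max_games   = 2.76  # geomean: 3090 ~2.76x faster in games
rtx3090_vs_m1max_blender = 1.78  # 3090 ~1.78x faster in Blender

m3max_vs_m1max_games  = 1.5  # Apple's claimed gen-on-gen gaming uplift
m3max_vs_m1max_render = 2.5  # Apple's claimed pro-app/RT rendering uplift

# Games: how far ahead the 3090 remains relative to M3 Max.
games_gap = rtx3090_vs_m1max_games / m3max_vs_m1max_games
print(f"3090 vs M3 Max in games: {games_gap:.2f}x")        # ~1.84x ahead

# Rendering: M3 Max relative to the 3090.
render_gap = m3max_vs_m1max_render / rtx3090_vs_m1max_blender
print(f"M3 Max vs 3090 in rendering: {render_gap:.2f}x")   # ~1.40x ahead
```

The two printed ratios are exactly the 1.84x (games) and 1.4x (rendering) figures quoted above; the whole argument is just dividing one claimed uplift by another.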
4080: 9558.15. M2 Max: 1916.93.

So the M3 Max would need nearly 5x the rendering performance of the M2 Max (9558 / 1917 ≈ 5) to catch a 4080.

Okay.
About pure CPU rendering, Mac vs. PeeCee, it is really simple:
An AMD 7950X in 105W “ECO MODE” (!!!) gets exactly the same result in the Cinebench 2024 (multi-core) benchmark…
…as the fastest, most expensive M2 Ultra Mac.
3000 euros for a decent 7950X box, 128GB, 4TB etc.
The most expensive M2 Ultra Mac, same 128GB / 4TB setup: about 6900 euros.
Ryzen 7950X vs the current M2 Max, the Ryzen is 100% faster:
https://www.cpu-monkey.com/en/cpu_benchmark-cinebench_2024_multi_core
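Taking the comment’s own numbers at face value (the score parity and the prices are the commenter’s claims, not verified figures), the value gap works out like this:

```python
# Perf-per-euro under the commenter's claims: equal Cinebench 2024
# multi-core scores for a 7950X box (~3000 EUR) and an M2 Ultra Mac
# (~6900 EUR), both configured with 128 GB RAM / 4 TB storage.
# Prices and the score-parity claim are assumptions from the comment.

pc_price_eur  = 3000
mac_price_eur = 6900

# With equal scores, relative value is just the inverse price ratio.
value_ratio = mac_price_eur / pc_price_eur
print(f"PC delivers ~{value_ratio:.1f}x the multi-core performance per euro")
# -> ~2.3x
```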
It won’t compare to a high-end current-gen GPU like the 4090, but as someone who uses a Mac Studio M2 Max at work and an RTX 2070 at home, my Blender projects run comparably on both.
For 3D rendering, it would be quite the ask to expect the M3 Max to beat a 4090, imo…
I mean, you could put a 4090 in, what, a 96-core Threadripper system? An Epyc with 192 cores? ~16 RAM channels?
No chance; a 4080/4090 alone is the size of the whole laptop lol
I just ordered the 14" M3 Pro (11-core CPU, 14-core GPU) with 36GB RAM. I feel like I might have made a mistake. Should I return it and get the (12-core, 18-core) version? It’s not that much more. My reasoning was that I wanted better battery life and figured the extra cores might reduce it, but now I’m not so sure.
Would having 4 fewer GPU cores not give me the ray tracing benefits, which are a big upgrade this year?
Any advice on what I should do would be much appreciated!!
So do they beat the Snapdragon X Elite?
The M3 Max does. At about half(!?) the power consumption. And the M3 should beat the 14900K in single core.
Yeah, they showed a graph claiming half the power consumption compared to an anonymous 12-core PC chip.
I doubt the said 12-core chip is the Snapdragon X Elite, but we will have to see.
For sure. It was specified that the 12-core chip in question was the 1360P, which is substantially less efficient than the Qualcomm chip. It’s likely the M3 Max will be faster than the Snapdragon X Elite, but “half the power consumption for better performance” doesn’t hold against the Qualcomm part.
Apple compared it against the $1,299 MSI Prestige 13Evo A13M-050US, mentioned at the bottom of the slide.