I mean you need to create a selling point
Joke’s on you, my intel 12100 doesn’t have any E-cores!
I also went for a 12400 over more expensive options at the time because not only was it good value, I also wasn’t interested in the experience of being an early adopter for mixing different cores on Wintel.
AVX-512 still disabled though?
Has anyone else seen those videos where people change the frequency (I believe) of how often Windows fires an interrupt request to check the system's power, in order to reduce overall system latency?
For whatever reason, Windows checks this every 15ms, but people are changing it to the maximum setting of 5,000ms, which reduces latency for the CPU considerably… apparently fiddling with this setting is particularly bad for AMD’s X3D chips.
What are the pros and cons to this? Has any reputable journalist looked into this?
It works. Set to 5000ms, which is the max value.
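A back-of-the-envelope sketch of why the interval matters, assuming (as the comments above describe, not verified) that the tweak changes a periodic power check from a ~15ms default to the 5000ms maximum:

```python
# Illustrative arithmetic only: how often a periodic check fires at a
# given interval. The 15ms/5000ms values come from the thread above.

def checks_per_second(interval_ms: float) -> float:
    """How many times per second a check at `interval_ms` fires."""
    return 1000.0 / interval_ms

default_rate = checks_per_second(15)    # ~66.7 checks per second
tweaked_rate = checks_per_second(5000)  # 0.2 checks per second

print(f"default: {default_rate:.1f}/s, tweaked: {tweaked_rate:.1f}/s")
print(f"reduction: {default_rate / tweaked_rate:.0f}x fewer wakeups")
```

So the tweak cuts those periodic wakeups by a factor of a few hundred, which is presumably where the claimed latency benefit comes from.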
It’s garbage that end users need to do any tweaking at all.
A good number of tweaks are unproven and famously just bog down the system even more.
As a casual user myself, I wouldn’t even know if changing one setting, let alone dozens of settings, makes a difference. I’m not qualified to test, so on some of these “fixes” I just blindly follow the advice of the tutorial.
But disabling E-cores and changing that interval from 15ms to 5000ms have helped me.
I've also subscribed to the LatencyMon optimizations, like setting interrupt affinity masks for my GPU, Ethernet, and USB host controller.
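For anyone wondering what an interrupt affinity mask actually is: it's just a bitmask with one bit per logical CPU, usually entered in hex. A minimal sketch (the CPU numbers are examples, not a recommendation):

```python
# Build the bitmask selecting a set of logical CPU indices. This is the
# value interrupt-affinity tools expect, typically shown in hex.

def affinity_mask(cpus) -> int:
    """Return a bitmask with one bit set per logical CPU index."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

# e.g. pin a device's interrupts to logical CPUs 2 and 3:
print(hex(affinity_mask([2, 3])))  # 0xc
```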
12600k owner. i'm so frustrated. big.LITTLE has never delivered on the behavior they promised, and now i'm being locked out of the fix. forcing me over to windows 11 was not a fix, it was just aggravation.
i early adopted the new arch because i really wanted to use an optane accelerator. intel quietly software locked 12th gen out of optane support, so when i built my system i spent an hour poring through the bios trying to figure out how to get it running and wondering why intel’s web instructions weren’t working for me.
overall it’s been a pretty bad experience, and one intel curated for me. based on my 12600k experience i’ll be very reluctant to adopt intel proprietary technologies in the future.
at least 12th and 13th gen doesnt have amdip
Ah, right, that’s why I wouldn’t have bought Intel. My fault for forgetting.
Isn’t this basically a thread scheduler fix that makes E cores do what they are actually supposed to do?
And they are reserving this fix for 14th gen only for, seemingly, no reason? With a good chance that they had this fix for a while, but management decided to reserve it for 14th gen?
This is what I’m reading from their reply to HUB.
Well, it does look like it’s just a scheduler fix at the very surface level. On the other hand it does seem to need some firmware support and presumably there is some reason why it only supports 2 games. So maybe it is something more complicated?
Gotta sell all those 14th gen CPUs somehow.
This feature is a lot like DLSS1; was cool to follow and for some of us early adopters to trial, but not something anyone should be basing any serious discussion/evaluation on
- from someone who updated every RTX gen especially for DLSS
It's insane that after 3 generations, the Windows kernel still can't prioritise P/E-core usage between games and background desktop tasks; parking the E-cores still gives better results. AMD's cache split was kinda acceptable in the 7950X3D vs 7800X3D debait because games can't utilise that many cores anyway.
And then there are all the BIOS and mobo hoops you have to jump through just to be compatible, for 2 titles.
Intel has mostly abandoned ship on any gaming competitiveness. The clock speeds and high TDP at least have their use in workloads.
The quality and validity of your post can be evaluated by the use of the word “debait”
This could be Rudy Giuliani's gamer account
LoL
For all we know English is their second language.
I am not English either and would not point that out if the comment was remotely making any sense at all ;)
A typo doesn’t invalidate what someone says.
almost like intel majorly fucked up the implementation of everything
Intel at this point abandoned ship on any gaming competitiveness.
These takes are insane. Like what kind of thermal paste are you eating?
Zen 4 vs 13th/14th gen, Intel wins in gaming. It’s only Zen 4 x3D that edges Intel out, and only by a few percent at 720p and 1080p. Saying they abandoned gaming competitiveness is not even remotely true.
https://www.reddit.com/r/hardware/comments/17ej64v/intel_raptor_lake_refresh_14th_core_gen_meta/
*By a few percent while sipping 100W less power.
Funny how this matters more in CPUs than GPUs
GPUs are piss easy to cool; the giant bare dies make it so much easier to extract heat compared to CPUs, which have relatively tiny dies with awful thermal paste transferring the heat to a stupid heat spreader before it finally makes it into the cooler.
GPUs have their own cooler to compensate, priced into the MSRP. And AMD this generation offers much better pricing and performance (render) with a 50-watt power delta. Intel just has worse performance and worse thermal performance. Also, GPUs usually idle and throttle close to 80 degrees, while CPUs get very hot spots.
And then there are all the BIOS and mobo hoops you have to jump through just to be compatible, for 2 titles.
Even for older CPUs w/o E cores we still have games that run considerably better on my 9900K when I turn HT off, which is kind of a fail.
I have no idea how it works, but it's probably moving everything that isn't the game itself off the P-cores and keeping the game restricted to the P-cores.
Question is, why doesn't Windows have that option? Instead of core affinity, just restrict cores to a manually defined task and forbid everything else. Because Microsoft, the biggest software company in history, cannot make good software.
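At the affinity level, "game on P-cores, everything else on E-cores" boils down to two complementary bitmasks. A sketch under an assumed layout modelled on an 8P+8E 12th-gen part (P-cores expose logical CPUs 0-15 with HT, E-cores 16-23 without):

```python
# Hypothetical topology for an 8P+8E hybrid CPU; actual logical CPU
# numbering varies by system, so check yours before using such masks.
P_LOGICAL = range(0, 16)   # P-core logical CPUs (8 cores x 2 HT threads)
E_LOGICAL = range(16, 24)  # E-core logical CPUs (8 cores, no HT)

def mask_of(cpus) -> int:
    """Bitmask with one bit set per logical CPU index."""
    m = 0
    for c in cpus:
        m |= 1 << c
    return m

game_mask = mask_of(P_LOGICAL)        # game pinned to P-cores only
background_mask = mask_of(E_LOGICAL)  # background work pushed to E-cores

print(hex(game_mask), hex(background_mask))
```

On Windows these values would be applied per-process with something like SetProcessAffinityMask or a tool like Process Lasso; that step isn't shown here.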
Intel's E-cores doing what they're supposed to on 2 games, 2 years after their debut, and only on their newest CPU lineup. Peak Intel engineering right here.
I mean unless you’re Apple and have full top to bottom control of your hardware and software stack it takes some time for software to catch up with the hardware.
Took a while for games to use MMX, SSE, AVX. Stuff that uses AVX512 can probably be counted on one hand.
Good ray traced games are becoming mainstream just now, two whole generations after GeForce 20 series.
I do begrudge Intel for holding this back from 12th and 13th gen users though.
Took a while for games to use MMX
Even Intel’s 1st iteration of MMX was a kludge, as it used the floating point unit, so you could either use FP, or MMX, but not both simultaneously o.O
Took a while for that to be separated so you could gain the benefits of both together.
Intel also added 57 new instructions specifically designed to manipulate and process video, audio, and graphical data more efficiently.
These instructions are oriented to the highly parallel and often repetitive sequences often found in multimedia operations.
Highly parallel refers to the fact that the same processing is done on many different data points, such as when modifying a graphic image.
The main drawbacks to MMX were that it only worked on integer values and used the floating-point unit for processing, meaning that time was lost when a shift to floating-point operations was necessary.
These drawbacks were corrected in the additions to MMX from Intel and AMD.
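The "highly parallel, same operation on many data points" idea the quoted article describes can be illustrated in plain Python: four 16-bit integers packed into one 64-bit word and added lane-by-lane in a single conceptual step, roughly what MMX's PADDW instruction does (including wraparound on overflow). This is a simulation for illustration, not real SIMD:

```python
# Simulate a packed 16-bit add across a 64-bit word, MMX PADDW-style.

def pack16(lanes):
    """Pack four 16-bit values (lane 0 = lowest bits) into a 64-bit word."""
    word = 0
    for i, v in enumerate(lanes):
        word |= (v & 0xFFFF) << (16 * i)
    return word

def unpack16(word):
    """Split a 64-bit word back into its four 16-bit lanes."""
    return [(word >> (16 * i)) & 0xFFFF for i in range(4)]

def paddw(a, b):
    """Lane-wise 16-bit add with wraparound, like MMX PADDW."""
    return pack16([(x + y) & 0xFFFF
                   for x, y in zip(unpack16(a), unpack16(b))])

result = paddw(pack16([1, 2, 3, 4]), pack16([10, 20, 30, 40]))
print(unpack16(result))  # [11, 22, 33, 44]
```

One instruction doing four adds at once is the whole pitch; the catch, as noted above, was that those 64-bit MMX registers aliased the x87 floating-point registers.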
https://www.informit.com/articles/article.aspx?p=130978&seqNum=7
More like peak Microsoft engineering, since this is something that was always supposed to be done by the operating system. Microsoft is so awful Intel had to do it themselves.
and they refuse to let people buy CPUs without them; can't let AMD win every benchmark that the vast majority of gamers will never use
It’s the exact opposite of what you’re saying.
Intel's E-cores + Thread Director work perfectly fine 98% of the time, but there are edge cases where the Windows scheduler can't get it right, even with the hints from Thread Director, and that's where APO comes in, to manually force the correct scheduling.
Also, let's not pretend that AMD isn't suffering scheduling issues themselves: the 7950X3D and 7900X3D are shunned because they have WORSE scheduling in games, as they rely on the Windows scheduler to just try and figure things out itself, and that doesn't usually work with 2 CCDs where one has a higher frequency and the other more cache.
More importantly, do you think the fix will come for 12th/13th gen Intel? You seem to know what you're talking about.
Fine wine baby! Oh, wrong company.
Which just shows the scheduler is wrong, which people who cared to put the effort in already fixed manually with Lasso. The only missing piece is random kernel threads jumping onto P-cores. AMD's scheduler isn't perfect either, and both companies are going big/little, so there's plenty of room to keep improving.
correction: it shows that Intel Thread Director is wrong, and that the scheduler shouldn’t trust it.
Thread Director doesn't do any directing; it's a set of new registers the OS scheduler is supposed to read for feedback on how well a thread is running on a core. If APO can do it right, it means the scheduler is wrong.
15.6 HARDWARE FEEDBACK INTERFACE AND INTEL® THREAD DIRECTOR
Intel processors that enumerate CPUID.06H.0H:EAX.HW_FEEDBACK[bit 19] as 1 support Hardware Feedback Interface (HFI). Hardware provides guidance to the Operating System (OS) scheduler to perform optimal workload scheduling through a hardware feedback interface structure in memory.
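Decoding the feature bit the quoted SDM text describes (CPUID leaf 06H, EAX bit 19) is just a shift and mask. A sketch, with made-up sample EAX values for illustration:

```python
# HFI / Thread Director support is enumerated as CPUID.06H:EAX bit 19
# per the SDM excerpt above. Reading CPUID itself needs platform-specific
# code, so here we just decode a raw EAX value.

HW_FEEDBACK_BIT = 19

def has_hfi(eax: int) -> bool:
    """True if the HW_FEEDBACK bit is set in a CPUID leaf 06H EAX value."""
    return bool((eax >> HW_FEEDBACK_BIT) & 1)

print(has_hfi(1 << 19))  # True: bit 19 set
print(has_hfi(0))        # False
```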
facepalm are you daft?
How the scheduler gets information from ITD doesn't change what ITD does.
It’s not even engineering. It’s a software lock. Absolute bonkers.
Screw intel as an owner of a 13700k
From what I’m seeing, even with APO enabled, only 4 E-Cores are actually doing anything. The rest of the cluster is parked, doing absolutely nothing.
Actually, that's false. They're still consuming power, however minuscule it may be!
And that’s one of the many reasons I don’t understand why Intel is stuffing so many E-Cores into their CPUs. Their practicality in real-world scenarios is mostly academic from the perspective of most users.
A quad-core or - at most - an octa-core cluster of E-Cores should be more than enough for handling ‘mundane’ background activity while the P-Cores are busy doing all the heavy-lifting.
Frankly, I just can't help but feel like the purpose of this plethora of little cores is to artificially boost scores in multi-core synthetic benchmarks! After all, there are only a handful of 'consumer-grade' programs parallel enough to actually make use of a CPU with 32 threads.
Anyhow, fingers crossed for Intel’s mythical ‘Royal Core.’ A tile-based CPU architecture sans hyper-threading sounds pretty interesting… at least on paper.
More E cores aren’t for “mundane background tasks”. They’re to maximize MT performance in a given die space.
It’s why 8+16 14900K competes with 7950X in MT applications, but would clearly lose if it was the alternative 12+0.
Most people, myself included, would struggle to really utilize 32 threads. But the 7950X and 14900K exist for those that can or may be able to.
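The "MT performance per die space" argument is easy to put numbers on. The ratios below are rough assumptions for illustration, not measured values: suppose an E-core takes ~0.25x the die area of a P-core and delivers ~0.5x its multithreaded throughput.

```python
# Illustrative die-area arithmetic, all in "P-core units".
# ASSUMED ratios, not real measurements:
E_AREA = 0.25        # E-core area relative to a P-core
E_THROUGHPUT = 0.5   # E-core MT throughput relative to a P-core

def config(p_cores: int, e_cores: int):
    """Return (area, MT throughput) for a given core mix."""
    area = p_cores + e_cores * E_AREA
    throughput = p_cores + e_cores * E_THROUGHPUT
    return area, throughput

hybrid = config(8, 16)    # an 8P+16E layout, like a 14900K
big_only = config(12, 0)  # what fits in the same area with only P-cores

print(hybrid)    # (12.0, 16.0)
print(big_only)  # (12.0, 12.0)
```

Under these assumed ratios, 8P+16E and 12P+0E occupy the same area, but the hybrid mix delivers about a third more multithreaded throughput, which is the point being made about a 12+0 alternative losing in MT.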
They’re to maximize MT performance in a given die space.
And I never said otherwise.
I explicitly mentioned that more E-Cores can boost scores in multi-threaded synthetic benchmarks and, in turn, any parallel workload.
You think e cores are only for synthetics? What if I show you 6p+6e or 6p+8e can defeat 8p in real world applications?
Well, applications are definitely getting optimized for 8C/16T as of late so it won’t be all that surprising.
Hyper-threaded threads (hyper-threads?) can’t match an actual core by design, after all.
However, I'm merely questioning the addition of 8+ E-Cores in Intel's high-end SKUs. I believe I explicitly mentioned that I can see the potential of integrating 4 to 8 E-Cores into a CPU.
It’s perfectly reasonable for high-end SKUs.
You either have single-threaded workloads or games that might use 6-8 threads at most. Or you have “embarrassingly parallel” workloads like rendering or all sorts of scientific computing that will use as many cores as you have.
If you literally only game on your PC then I guess just disable the e-cores.
What if I showed you Intel 12th 6p+6e was able to defeat amd’s 8p in real world applications 2 years ago?
A quad-core or - at most - an octa-core cluster of E-Cores should be more than enough for handling ‘mundane’ background activity while the P-Cores are busy doing all the heavy-lifting.
The 10900K was the last best designed intel CPU. Just straight up 10 powerful cores. That’s how a CPU should be.
ah yes who could forget the absolute TRIUMPH of the same tired architecture recycled for the 4th time in a row, on the same tired process recycled for the 5th time in a row.
Yeah just don’t run Minecraft bro
"I asked them if there is a technical reason for why 12th and 13th gen parts aren't supported and, if not, will they be included in the future? Their response to that question was as follows: Intel has no plans to support prior generations of products with application optimization. That's a really garbage response, to be perfectly blunt about it."
Yeah, let’s have people rush to upgrade to 14th gen when it already had questionable value to upgrade. This APO feature will die in obscurity since Intel will realize 14th gen is not being adopted and unless they want a repeat of XeSS, they will cut their losses and decide not to invest resources into a feature that barely anyone uses.
unless they want a repeat of XeSS, they will cut their losses and decide not to invest resources into a feature that barely anyone uses.
XeSS is in close to 100 games now; more people are using XeSS than even own Arc GPUs, since it has better quality than FSR and works on AMD and Nvidia GPUs too. Also, Intel has already marketed Meteor Lake + XeSS, and they're expecting around 100 million people to buy MTL in 2024.
If anything XeSS has been the most successful part of Intels consumer GPU push.
as it has better quality than FSR
*Depending on the game. Spider-Man and Hogwarts Legacy, for instance, have much worse ghosting with XeSS than FSR, so it's kinda useless for those.
Not being adopted? Dell, HP, Lenovo, will slowly stop selling 13th gen and move on to 14th gen, like they do every year. Businesses will buy the computers with the biggest number gen, as they do. Gamers on reddit aren’t the huge market you may think it is for these companies.
Incredible how people don't realize the gamer isn't the average consumer, innit?
Not if the game library keeps increasing and APO is supported on all future Intel CPUs.
It really seems to be like a software optimization to better leverage E cores in gaming to improve performance. I don’t see how that feature is going to die as Intel seems to be committed to hybrid for the foreseeable future.
I can't get this to work on an ASUS Z790-E board. I tried the ASUS DTT drivers, and someone suggested trying the ASRock DTT drivers. The ASRock ones installed just fine, but the APO app still says "failed to connect".
The most interesting thing is that APO dropped the power from 190W to 160W while increasing the performance.
You can now also get 803 fps instead of 734 in Rainbow Six, looks like a big win!
Looks like we’re gonna need Arrow Lake for that 1khz gaming experience.
800hz monitor sold!
I mean, my 12700K can't deliver a consistent 360+ FPS outside of the in-game benchmark, so any boost is nice. The 7800X3D still looks more appealing as an upgrade for me, though.
You’re probably fine with the 12700k for a bit longer.
Might be worth jumping onto X3D version of Zen 5 though, but that’s likely 6-12 months out.