In my video test I used the RivaTuner overlay to show CPU usage. One core sits at 80-100% usage while the GPU is not fully utilized. I had HT disabled on my 6-core CPU. The more cores you have, the more apparent it is that one thread has very high CPU usage, since the other threads are spread across the extra cores.
A long time ago I ran some tests against the original benchmark with cores disabled and found that more than 4 cores hardly increased the overall score or the minimum frame rate.
Clock speed, however, did increase the minimum frame rate.
It's most likely an API issue; DX11 will alleviate the problem to a certain extent.
This article gives a simplified explanation of how the APIs work:
http://www.littletinyfrogs.com/artic...oversimplified
To summarize:
DirectX 11: Your CPU communicates with the GPU one core at a time. It is still a big boost over DirectX 9, where only 1 dedicated thread was allowed to talk to the GPU, but it’s still only scratching the surface.
DirectX 12: Every core can talk to the GPU at the same time and, depending on the driver, I could theoretically start taking control and talking to all those cores.
That’s basically the difference. Oversimplified to be sure but it’s why everyone is so excited about this.
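As a toy illustration of that difference (this is my own sketch using Python threads to stand in for CPU cores, not real DirectX code), the DX11-style path funnels all submission through one thread, while the DX12-style path lets every "core" record its own command list in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def record_commands(core_id, n):
    # Recording a command list is cheap CPU work each core can do itself.
    return [f"core{core_id}-cmd{i}" for i in range(n)]

def dx11_style(num_cores, cmds_per_core):
    # One thread talks to the GPU: everything funnels through this loop.
    submitted = []
    for core in range(num_cores):
        submitted.extend(record_commands(core, cmds_per_core))
    return submitted

def dx12_style(num_cores, cmds_per_core):
    # Every "core" records its own command list concurrently; the lists
    # are then handed off without one thread being the bottleneck.
    with ThreadPoolExecutor(max_workers=num_cores) as pool:
        lists = pool.map(record_commands, range(num_cores),
                         [cmds_per_core] * num_cores)
    return [cmd for lst in lists for cmd in lst]
```

Both paths submit the same work; the difference is that in the second one the recording cost is spread across cores instead of pegging a single thread, which is exactly the one-hot-core pattern described above.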
Last edited by Pseudopsia; 05-29-2015 at 12:05 AM.
Ya... I've seen leaked benchmarks anywhere from 4-8% up to 15% IPC gain over Haswell. I'm a little mixed on continuing with the -E platform: I haven't seen increased gaming performance going from 4 to 6 cores, it's expensive to upgrade frequently, and it's nearly a year behind the latest architecture. DX12 games may take advantage of the extra cores. Depending on the benchmarks I see with DX12 and Skylake, if I'm not impressed I may wait it out until Skylake-E, which will probably arrive around the 2nd half of 2016.
That is interesting indeed. My memory is a bit slower than yours (2x 4GB sticks, DDR3-1600 CAS-11, in dual channel) and comes in about 1GB/s slower than yours. In order to get north of 50FPS in MD, I have to cap the objects/characters displayed. Unfortunately it's a laptop that won't let me tweak the timings (the RAM is capable of 1700 at CAS-11). Regardless, it holds up well for an old i7 (3630QM, 3.2GHz) with just a GTX 670MX for the GPU.
Wonder if the guys able to lock 60FPS with higher quality settings are clocking in north of 26GB/s like my PC does. The PC manages MD well, but the FSB and MEM clocks are ramped up to the brink of failure too (which is why I don't game on that rig at all in the summer; it runs too freaking hot).
I built a rig with the G3258. It ran the game pretty well overclocked to 4.2GHz, but the CPU was constantly pegged at 99%, which made the FPS bounce around quite a bit and some areas pretty choppy.
I'd recommend getting at least an i5 even if it means spending less on the GPU. You can always turn down settings with a cheaper GPU, but that little dual core can only work so hard. I doubt DX11 will make much difference either, since both cores are already maxed out on DX9.
Edit: unless someone can chime in on how an i3 would fare, as I've never tried one in this game and they're pretty good value as well.
----------------------------------------------------------------
I did try overclocking my RAM from 1600MHz to 2000MHz, which gave me about a 5 FPS gain when looking across the market, but it also pushed my CPU to 90% usage in those instances (compared to ~74%).
So for a steady 60 FPS I guess it would take the i7's hyper-threading plus fast RAM.
----------------------------------------------------------------
4670K @ 4.0GHz -> Memory 2000MHz 9-11-11-27
> Running: System memory performance assessment ''
> Run Time 00:00:05.13
> Memory Performance 26033.07 MB/s
> Total Run Time 00:00:06.19
4670K @ Stock -> Memory 1600MHz 8-9-9-21
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: System memory performance assessment ''
> Run Time 00:00:05.13
> Memory Performance 22455.55 MB/s
> Total Run Time 00:00:06.41
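Those WinSAT numbers line up with the theoretical peak for dual-channel DDR3. This quick calculation is my own addition, not part of the benchmark output: each 64-bit channel moves 8 bytes per transfer, so peak bandwidth is transfer rate x 8 bytes x channels.

```python
def ddr3_peak_bandwidth_gbs(transfer_rate_mts, channels=2):
    # Each 64-bit DDR3 channel moves 8 bytes per transfer.
    return transfer_rate_mts * 8 * channels / 1000.0

# DDR3-1600 dual channel: 25.6 GB/s peak vs ~22.5 GB/s measured (~88%)
print(ddr3_peak_bandwidth_gbs(1600))  # 25.6
# DDR3-2000 dual channel: 32.0 GB/s peak vs ~26.0 GB/s measured (~81%)
print(ddr3_peak_bandwidth_gbs(2000))  # 32.0
```

So the measured results above sit at roughly 80-88% of the theoretical peak, which is about what you'd expect from a real-world memory benchmark.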
Last edited by Judge_Xero; 05-29-2015 at 07:26 AM.
"I don't always drink beer, but when I do, it's often."
Temp Forum Ban - July 7th 2016 *** I promise to never call out scrub players again due to it causing a toxic community
It does not need to support SLI. As long as your mobo has at least a 2nd PCI-E 3.0 slot that supports at least x8 speed, you can plug it in there and try.
The thing is, -E series chips aren't primarily for gaming. In fact the i5-4690K is perfectly sufficient. Most games depend on single-core performance rather than multi-core performance, so -E chips won't beat Devil's Canyon in single-core speed. Haswell-E generally can't be clocked as high as the i7-4790K.
As a matter of fact, my i7-4790K is a golden chip and was able to run at 4.8GHz @ 1.24V stable. On the other hand, I didn't get lucky at all with my i7-5820K (which I use for content creation), where I had to settle for 4.2GHz @ 1.24V.
Last edited by Ooshima; 05-29-2015 at 10:33 AM.
From my research before exchanging my mobo, only SLI-capable boards have the 2nd PCI-E x16 slot wired with 8 lanes.
Boards without SLI support (including ones that support Crossfire) only have 4 lanes on the second slot.
There is a way to modify the driver to activate SLI anyway, but the 4-lane connection on the second card is very noticeable.
Videos with the main story and selected side quest lines (in German): https://www.youtube.com/user/KSVideo100
Typically, if you have 2 full PCIe slots, each will run at x16 or x8 (whichever they are designed for; most Intel boards are moving to x8 PCIe 3.0, whereas AMD boards still have native x16 with PCIe 3.0).
But what typically happens is this:
Slot 1 x16 - No GPU
Slot 2 x16 - No GPU
You can put a GPU in either slot and have it run at x16 PCIe.
------------------------
Slot 1 x8 - GPU present
Slot 2 x8 - GPU present
With both slots populated, both shift to x8, whereas individually each is capable of x16.
---------------------------------------
The only native x4 PCIe lanes are on those little short slots that you would plug a wireless/SSD/video/network card into (and which can also be native PCIe x1 on budget motherboards).
The motherboard does not need to be Crossfire/SLI capable for each individual slot to run at x8 or x16; it just depends on how many slots are populated and how the available lanes are divided among them.
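To put rough numbers on why an x4 second slot hurts (my own back-of-the-envelope figures, assuming PCIe 3.0's ~985 MB/s of usable per-lane bandwidth after 128b/130b encoding):

```python
PCIE3_GBS_PER_LANE = 0.985  # ~985 MB/s usable per PCIe 3.0 lane

def pcie_bandwidth_gbs(lanes, gbs_per_lane=PCIE3_GBS_PER_LANE):
    # Total one-direction link bandwidth scales linearly with lane count.
    return lanes * gbs_per_lane

for lanes in (16, 8, 4):
    print(f"x{lanes}: {pcie_bandwidth_gbs(lanes):.2f} GB/s")
```

That works out to roughly 15.8 GB/s at x16, 7.9 GB/s at x8, and 3.9 GB/s at x4, which is why two cards at x8/x8 barely lose anything while a second card on an x4 slot is noticeably starved.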
Last edited by Judge_Xero; 05-29-2015 at 09:52 PM.
"I don't always drink beer, but when I do, it's often."
Temp Forum Ban - July 7th 2016 *** I promise to never call out scrub players again due to it causing a toxic community