Is the i9-9900K overkill for music production?

Configure and optimize your computer for audio.

Post

I just replaced my 3930K with a 9900K. I wanted the low latency and solid Thunderbolt support (UAD hardware), and it delivered both.

Admittedly, I also use my PC for software development and games, so I wanted something that was all-round performance-wise, and I'm not interested in overclocking (been building my own PCs for near-on 30 years now, and the novelty of overclocking has worn off).

I'm very happy with the result.

Post

Koalaboy, do you have any estimates of ASIO performance in a direct comparison between the 9900K and the 3930K? Any specifics (performance related) as to buffer settings and audio card preference are appreciated. I use UAD-2 cards here, but I am on RME RayDATs as my main interface via ADAT convertors. The 9900K is my likely next choice if the new Ryzen 12-core/24-thread CPU has issues with low-latency audio.

Post

I haven't done direct comparisons, as I also switched from an RME Babyface over USB to a UAD Apollo Twin over Thunderbolt. I've not noticed any issues with low latency, though - the discussions on various forums suggested Intel was the safe bet for that anyway.

Post

OK, thanks Koalaboy. I am hoping for some real-world comparison between processors, so I know what I can expect relative to my 3930K, which really has been an excellent processor and remains competitive.

Post


Yes, this is the one I'm waiting for... hoping that Scan Audio can confirm that the latency issues and the random dropouts under load reported on earlier Ryzens and Threadrippers have been solved. The 16-core/32-thread part is the one I want, but the 12-core/24-thread will do if it has solved the problems that hinder audio production. Otherwise it's back to Intel, or wait until the pressure from AMD forces them to drop the cost.

Post

The 12-core model is better, since it has a faster default clock speed. A higher default clock speed on a single core matters more than extra cores - especially if you are comparing the 12- and 16-core options. For real-time audio, single-core performance is most important. If I am not mistaken, your DAW's real-time audio bus runs on one core, and this is what drives overall system performance.
Little Black Dog - 2008-Present

Post

DAWs can use multiple cores, but the amount of plugins/processing you can do on one track is determined by the clock speed of each core.
http://www.voltagedisciple.com
Patches for PHASEPLANT ACE,PREDATOR, SYNPLANT, SUB BOOM BASS2,PUNCH , PUNCH BD
AALTO,CIRCLE,BLADE and V-Haus Card For Tiptop Audio ONE Module
https://soundcloud.com/somerville-1i

Post

Yeah - it's serial, a stream of audio.
In a multi-track project, each plugin/process on a track is processed in series (sequentially),
which I understand requires the entire stream/series to stay on one core, as the buffer cannot be handed to another core without an underrun,
while each track can be on a separate core.

This means you can put your most complicated track on a faster core.
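A toy sketch of that scheme (Python with stand-in "plugins", not real audio code - the function names and the per-track thread pool are my own illustration, not how any particular DAW is implemented): plugins on one track run strictly in series, while independent tracks can be farmed out to separate workers/cores.

```python
from concurrent.futures import ThreadPoolExecutor

def apply_gain(buf, gain=0.5):
    # stand-in "plugin": scale every sample
    return [s * gain for s in buf]

def hard_clip(buf, limit=0.8):
    # another stand-in plugin: clamp samples to +/- limit
    return [max(-limit, min(limit, s)) for s in buf]

def process_track(buf, chain):
    # plugins on one track run strictly in series:
    # each plugin consumes the previous plugin's output
    for plugin in chain:
        buf = plugin(buf)
    return buf

def process_block(tracks, chains, workers=4):
    # independent tracks can run in parallel, one whole chain per worker
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_track, tracks, chains))
```

Note the implication matches the post above: the heaviest single chain sets the floor, since no chain is split across workers.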

Post

Some plugins use multiple cores - eg U-He Diva. So streams are not necessarily running on a single core.

Post

This is the conventional understanding I have heard more than once on the internet
- D16 Lush also uses more than one core.

If trying to take this concept into account, I guess you could do the MIDI on one core, the graphics on another,
and use the other core for the crucial audio.

Urs? - can you shed more light on this? It's what people are saying now.

Post

I would also love to see feedback on this from a more experienced developer, as my understanding of channel processing is pretty much in line with yours, Nix.

Post

@nix - I think that kind of splitting would give a pretty trivial reduction in CPU load, since graphics and MIDI are minimal loads on a static synth patch. I think I have muddied the water a little: I suspect you are correct about a typical DAW track, where you have a string of plugins on the same stereo stream of audio. What a synth does to generate a stream of audio may lend itself to some sort of coordinated load sharing between cores. Kontakt also runs on multiple cores if you enable it, IIRC. The U-He developers would be among the best to clarify this question, as they have actually achieved it.

Conceivable approaches for sharing a task across cores might include time slicing - core 1 does the first half of each buffer's worth of audio, core 2 does the second half, and the results are then streamed back in the correct sequence. If you imagine the block diagram of the signal path inside Diva - is it feasible for one CPU-intensive task (e.g. generating the modulated waveform) to run on one core and another (e.g. the filter and FX) on a second? The first approach (time slicing) - if feasible - would work with any processor of audio (e.g. a reverb).

If the second approach isn't feasible, it is likely because the latency incurred when shuffling audio streams between cores creates more problems than it solves.
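For what it's worth, the time-slicing idea can be sketched in a few lines (Python toy, names are mine) - but note the catch: it only works cleanly for a *stateless* per-sample effect. Anything with memory (a filter, a reverb tail) carries state across the slice boundary, which is exactly the inter-core shuffling problem described above.

```python
from concurrent.futures import ThreadPoolExecutor

def saturate(buf):
    # stateless stand-in effect: soft-clip each sample on its own
    return [s / (1 + abs(s)) for s in buf]

def time_sliced(buf, effect, workers=2):
    # split one buffer's worth of audio into contiguous slices,
    # run each slice on its own worker, then re-join in order
    step = (len(buf) + workers - 1) // workers
    slices = [buf[i:i + step] for i in range(0, len(buf), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(effect, slices)  # preserves slice order
    out = []
    for part in parts:
        out.extend(part)
    return out
```

For `saturate` the sliced result is sample-identical to processing the whole buffer on one core; swap in an effect with internal state and it no longer is.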

Post

hmm cool, maybe we can find something out 8D

@egbert - drawing on another core could help some -
you'd be surprised how CPU-intensive GDI+ graphics are.
The more capable devs often use OpenGL, which runs on your GPU.

I like your inference about load sharing!

Post

The other obvious candidate for load sharing in the synth case: split the voices between cores. One or more voices per core could allow more polyphony at the same quality. This probably makes more sense than splitting functions as in my speculation above.
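Voice splitting is the easiest of the three ideas to sketch, because voices are independent until the final mix (toy Python, my own illustrative names - not how Diva or any real synth is written):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def render_voice(freq, frames=4, rate=8):
    # one synth voice: a bare sine oscillator at the given frequency
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(frames)]

def mix(buffers):
    # sum the per-voice buffers sample by sample
    return [sum(samples) for samples in zip(*buffers)]

def render_poly(freqs, workers=4):
    # voices don't interact until the mix stage,
    # so each voice can be rendered on a separate core
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return mix(list(pool.map(render_voice, freqs)))
```

Since the only cross-core synchronisation is the single sum at the end of each buffer, this scales better than trying to split one serial signal path.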
