Is the I9 9900K overkill for music production?
-
- KVRAF
- 2194 posts since 18 Mar, 2006 from Plymouth, UK
I just replaced my 3930K with a 9900K. I wanted the low latency and solid Thunderbolt support (UAD hardware), and it delivered both.
Admittedly, I also use my PC for software development and games, so I wanted something with good all-round performance, and I'm not interested in overclocking (I've been building my own PCs for nigh-on 30 years now, and the novelty of overclocking has worn off).
I'm very happy with the result.
-
- KVRAF
- 2932 posts since 23 Dec, 2002
Koalaboy, do you have any estimates of ASIO performance in a direct comparison of the 9900K to the 3930K? Any specifics (performance-related) on buffer settings and audio card preference would be appreciated. I use UAD-2 cards here, but I am on RME RayDATs as my main interface via ADAT converters. The 9900K is my likely next choice if the new Ryzen 12-core/24-thread CPU has issues with low-latency audio.
-
- KVRAF
- 2194 posts since 18 Mar, 2006 from Plymouth, UK
I haven't done direct comparisons, as I also switched from an RME Babyface over USB to a UAD Apollo Twin over Thunderbolt. I've not noticed any issues with low latency though - the discussions on various forums suggested Intel was the safe bet for that anyway.
-
- KVRAF
- 2932 posts since 23 Dec, 2002
OK, thanks Koalaboy. I am hoping for some real-world comparison between processors so I know what I can expect relative to my 3930K, which really has been an excellent processor and remains competitive.
-
- KVRAF
- 4205 posts since 21 Oct, 2001 from my bolthole in the south pacific
AMD Ryzen 9 3950 - 16 Core CPU:
https://www.tomshardware.com/news/amd-r ... 39615.html
https://www.theinquirer.net/inquirer/ne ... al-e3-2019
-
- KVRAF
- 2932 posts since 23 Dec, 2002
egbert wrote: ↑Tue Jun 11, 2019 11:14 am AMD Ryzen 9 3950 - 16 Core CPU:
https://www.tomshardware.com/news/amd-r ... 39615.html
https://www.theinquirer.net/inquirer/ne ... al-e3-2019
Yes, this is the one I'm waiting for... hoping that Scan Audio can confirm that the latency issues and the random dropouts under load that were reported on earlier Ryzens and Threadrippers have been solved. The 16-core/32-thread is the one I want, but the 12-core/24-thread will do if it has solved the problems that hinder audio production. Otherwise it's back to Intel, or wait until the AMD pressure forces their hand to drop the cost.
-
- KVRian
- 1360 posts since 4 Aug, 2004 from Ain't tellin' ya...
The 12-core model is better since it has a faster default clock speed. A higher default clock speed on a single core is more important than more cores, especially when comparing the 12- and 16-core options. For real-time audio, single-core performance matters most. Your DAW's real-time audio bus runs on one core, if I am not mistaken, and this is what drives overall system performance.
Little Black Dog - 2008-Present
- KVRAF
- 2147 posts since 30 Oct, 2006 from Australia, NSW
DAWs can use multiple cores, but the amount of plugins/processing you can run on one track is determined by the clock speed of each core.
http://www.voltagedisciple.com
Patches for PHASEPLANT ACE,PREDATOR, SYNPLANT, SUB BOOM BASS2,PUNCH , PUNCH BD
AALTO,CIRCLE,BLADE and V-Haus Card For Tiptop Audio ONE Module
https://soundcloud.com/somerville-1i
- KVRAF
- 5131 posts since 22 Jul, 2006 from Tasmania, Australia
yeah - it's serial, a stream of audio.
In a multi-track project, each plug/process on a track is processed in series (sequentially),
which requires the entire stream/series to stay on one core, as I understand it, since the buffer can't be handed off to another core without an underrun,
while each track can be on a separate core.
This means you can have your most complicated track on a faster core.
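To make the idea above concrete, here's a rough sketch of that scheduling model (my own toy code, not how any particular DAW is actually implemented): each track's plugin chain runs strictly in series, but independent tracks can be farmed out to separate worker threads.

```python
# Toy sketch of the model described above: serial chain per track,
# independent tracks processed in parallel. The "plugins" here are
# just functions that take and return a list of samples.
from concurrent.futures import ThreadPoolExecutor

def process_track(buffer, chain):
    """Run a track's plugins in series -- each stage feeds the next."""
    for plugin in chain:
        buffer = plugin(buffer)
    return buffer

def process_block(tracks, workers=4):
    """Process one audio block: tracks in parallel, chains in series."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: process_track(*t), tracks))

# toy "plugins": a gain stage and a hard clipper
gain = lambda buf: [s * 0.5 for s in buf]
clip = lambda buf: [max(-1.0, min(1.0, s)) for s in buf]

tracks = [([0.8, -2.0, 0.1], [gain, clip]),   # busy track: two plugins
          ([1.0, 1.0, 1.0], [gain])]          # light track: one plugin
print(process_block(tracks))
```

The point the sketch illustrates: the busiest chain sets the deadline for the whole block, which is why per-core clock speed matters more than core count once you pile plugins onto one track.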
- KVRAF
- 5131 posts since 22 Jul, 2006 from Tasmania, Australia
This is the conventional understanding I have heard more than once on the internet.
D16's Lush also uses more than one core.
If trying to take this concept into account, I guess you could do the MIDI on one core, the graphics on another,
and use the remaining cores for the crucial audio.
Urs? Can you shed more light on this? It's what people are saying now.
-
- KVRAF
- 1929 posts since 4 Nov, 2004 from Manchester
I would also love to see feedback on this from a more experienced developer, as my understanding of channel processing is pretty much in line with yours, Nix.
-
- KVRAF
- 4205 posts since 21 Oct, 2001 from my bolthole in the south pacific
@nix - I think that kind of splitting would yield a pretty trivial reduction in CPU load, since graphics and MIDI are minimal loads on a static synth patch. I think I have muddied the water a little: I suspect you are correct about a typical DAW track, where you have a string of plugins on the same stereo stream of audio. What a synth does to generate a stream of audio may lend itself to some sort of coordinated load sharing between cores. Kontakt also runs on multiple cores if you enable it, IIRC. The u-he developers would be among the best placed to clarify this question, as they have actually achieved it.
Conceivable approaches for sharing a task over cores might include time slicing - so core 1 does the first half of each buffer's worth of audio and core 2 does the second half, and then the results are streamed in the correct sequence. Alternatively, if you imagine the block diagram of the signal path inside DIVA: is it feasible that one CPU-intensive task (e.g. generating the modulated waveform) could run on one core and another (e.g. the filter and FX) on a second? The first approach (time slicing), if feasible, would work with any processor of audio (e.g. reverb).
If the second approach isn't feasible, it is likely because the latency incurred when shuffling audio streams between cores creates more problems than it solves.
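For what it's worth, the time-slicing idea can be sketched in a few lines (again, my own toy code, not anything from an actual synth). The catch it exposes: this only works cleanly for *stateless* processing; a filter or reverb carries state from sample to sample, so splitting the buffer would change the result at the seam.

```python
# Toy sketch of "time slicing": split one buffer in half, process each
# half on its own worker, then reassemble in the correct sequence.
# Only valid for stateless processing (e.g. gain); a stateful filter
# would produce a discontinuity at the split point.
from concurrent.futures import ThreadPoolExecutor

def stateless_gain(buf, g=0.5):
    return [s * g for s in buf]

def time_sliced(buf, fn):
    mid = len(buf) // 2
    with ThreadPoolExecutor(max_workers=2) as pool:
        first = pool.submit(fn, buf[:mid])    # core 1: first half
        second = pool.submit(fn, buf[mid:])   # core 2: second half
        # results() block until done, so order is preserved on reassembly
        return first.result() + second.result()

print(time_sliced([1.0, 2.0, 3.0, 4.0], stateless_gain))
```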
- KVRAF
- 5131 posts since 22 Jul, 2006 from Tasmania, Australia
hmm cool, maybe we can find something out 8D
@egbert - drawing on another core could help some;
you'd be surprised how CPU-intensive GDI+ graphics are.
The more capable devs are often using OpenGL, which runs on your GPU.
I like your inference about load sharing!
-
- KVRAF
- 4205 posts since 21 Oct, 2001 from my bolthole in the south pacific
The other obvious thing for load sharing in the synth case - split multiple voices between cores. One or more voices per core could allow more polyphony at the same quality. This probably makes more sense than splitting functions as per my speculation above.
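A quick sketch of why per-voice splitting parallelises so naturally (toy code, my own illustration): each voice renders independently with no data flowing between voices, and only the final mix needs them all.

```python
# Toy sketch of per-voice parallelism: render each voice on its own
# worker, then sum corresponding samples into one mixed buffer.
# Voices share no state, so there is no inter-core shuffling mid-stream.
from concurrent.futures import ThreadPoolExecutor
import math

def render_voice(freq, n=4, sr=8.0):
    """Render n samples of a sine at freq (a stand-in for a synth voice)."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def render_poly(freqs, workers=2):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        voices = list(pool.map(render_voice, freqs))
    # mix down: sum the corresponding sample from every voice
    return [sum(samples) for samples in zip(*voices)]

mixed = render_poly([1.0, 2.0])   # two voices, rendered in parallel
print(len(mixed))                 # one mixed buffer of n samples
```

The only serial step is the final sum, which is cheap, so this scales with core count in a way that splitting one signal chain does not.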