Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77273 is a reply to message #77260]
Wed, 20 December 2006 17:13
LaMont
Messages: 828 Registered: October 2005
Senior Member
See Dedric, everything in this life is not explainable. Although we would like
it to be 2+2=4, the reality is that sometimes 2+2=4.40. Why? Because the
math is flawed. Why is the math flawed? Because we as humans are flawed.
Say what you will about the metric system, which is a great tool. But sometimes
working in inches and 16ths and 3/4s works better.
When a guy like Roger Nichols bangs his proverbial head around this issue
of why his mixes sound different when rendered from different (and sometimes
the same) CD mastering devices, it is explainable, but the explanation does
not jibe with the science.
Are we to believe that the science we have today about digital audio is
the last word?? No.. In the future, some new science will come along
and either rebut our current science or enhance it.
What I and others say is: we drop a stereo wav file in a given DAW (unity gain)
using the same audio converter... We can hear the difference. And it's sonically
obvious..
Lynn's test is flawed because of the Roger Nichols CD mastering problem. Things
change when rendering to CD.
Hey, some people on this earth can hear and see better than others.. That's
just a fact.
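[For readers following the "cancel to complete null" argument in this thread: the claim is easy to check yourself. Below is a minimal sketch, assuming two sample-aligned 16-bit PCM wav renders of the same mix; the file names are hypothetical, and only the Python standard library is used.]

```python
import struct
import wave

def read_samples(path):
    """Read every frame of a 16-bit PCM wav file as a list of ints."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "sketch expects 16-bit PCM"
        raw = w.readframes(w.getnframes())
    return list(struct.unpack("<%dh" % (len(raw) // 2), raw))

def null_test(path_a, path_b):
    """Subtract one render from the other, sample by sample.

    Returns the largest absolute difference; 0 means the two renders
    cancel to a complete null when one is polarity-inverted and summed.
    """
    a, b = read_samples(path_a), read_samples(path_b)
    assert len(a) == len(b), "renders must be the same length and aligned"
    return max(abs(x - y) for x, y in zip(a, b))

# Hypothetical usage:
# residual = null_test("mix_daw1.wav", "mix_daw2.wav")
# A residual of 0 means any difference you hear is not in the audio data.
```

If the residual is exactly zero, the two files are bit-identical, which is the point Dedric makes below about what a null result does and does not tell you.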
"Dedric Terry" <dedric@echomg.com> wrote:
>Of course Paris sounds different on Lynn's sampler, that was audible, and
>there are technical reasons why Paris will always sound different, but I
>didn't like it better on the sampler CD, to be honest, though the
>differences were subtle. Also, we weren't talking about Acid vs. Sonar
>specifically. I don't even bother with Acid as a DAW example - it's a loop
>app. Vegas is a video app that has had life as an audio app to some degree,
>but iMovie does audio as well, yet that doesn't really put it in the same
>category as professional DAW apps like Nuendo, PTHD, Sequoia, etc. I use
>Vegas for video, but not audio.
>
>On Lynn's sampler, Samplitude, Nuendo, Fairlight and the other natives don't
>sound different and aren't different in the unity gain examples
>(even the PTHD mix cancels with these). If you hear two files sounding
>differently that cancel to complete null, an audio difference isn't what you
>are hearing. When there are differences in non-unity gain mix summing
>tests, you have an extra variable to account for - how is the gain calculated?
>Gain is non-linear (power), not adding two numbers together. So how is pan law
>factored in, and where? Are your faders exactly the same, or 0.001dB variant?
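[Editorial aside on the fader point above: dB-to-linear conversion is a small piece of math worth seeing once. A sketch follows; the -3 dB center value shown is one common pan law, not a claim about any particular DAW.]

```python
import math  # not strictly needed; shown for clarity that this is plain math

def db_to_gain(db):
    """Convert a fader value in dB to a linear amplitude multiplier."""
    return 10 ** (db / 20.0)

# Gain is logarithmic, not additive: +6 dB roughly doubles amplitude.
print(round(db_to_gain(6.0), 3))   # ~1.995

# A 0.001 dB fader mismatch leaves a tiny but nonzero residual, so two
# such mixes will not null even though they sound identical.
mismatch = db_to_gain(0.0) - db_to_gain(-0.001)
print(mismatch > 0)  # True

# One common pan law places a centered signal at -3 dB per side,
# i.e. about 0.708 of full amplitude.
print(round(db_to_gain(-3.0), 3))  # 0.708
```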
>
>Also if you drop the same stereo file in two different pro audio apps and
>hear a difference, one of the two apps is defective. There is nothing
>happening with a stereo file playback when no gain change or plugins are
>active - just audio streaming to the driver from disk. If you hear a
>difference there, I would be quickly trying to find out why. Something is
>wrong.
>
>The point I am making is that these arguments usually come up as blanket
>statements with no qualification of what exactly sounds
>different, why it might, or solid well reasoned attempts to find out why, or
>if there could be a real difference, or just a perceived one.
>
>Usually the "use your ears" comment comes up when there is no technical
>rebuttal for when the science and good
>ears agree. Of course "use your ears" first from a creative perspective,
>but if you are making a technical, scientific statement, then such comments
>aren't a good foundation to work from. It's a great motto, but a bit of a
>cop out in a technical discussion.
>
>Regards,
>Dedric
>
>"LaMont" <jjdpro@ameriech.net> wrote in message news:45897f73$1@linux...
>>
>> Hey Dedric and Neil,
>>
>> The reason I think the Summing CD tests (good intentions) were lame is
>> because... a person who can't hear the difference between a stereo wav file
>> that's in Acid vs Sonar really needs a hearing test.
>>
>> Because of my music work, I have to work with different DAWs, so I'm very
>> familiar with their sound qualities. My circle of producers and engineers
>> talk about DAW sonics all the time. It's really no big deal anymore..
>>
>> The same logic applied when Roger Nichols, a few years back in his article
>> about mastering CDs, found out that 4 different CD burners yielded
>> different sonic results. Sure, he stated that the math is the math :) but his
>> and the mastering engineers' ears told them something was different.
>> Hummm???
>>
>> Now, back to DAW sonics. I can hear the difference btw Paris and Nuendo vs
>> Pro Tools, Logic Audio.. There is no math to this, this is an ear thing.. You
>> either hear it or you don't.. Simple.
>> But good ears can hear it.
>>
>> I really think the problem is, no one wants to know that the money they've
>> spent on a given DAW has sonic limitations, or shall we say, is just
>> different..
>>
>> I like that they all sound different. It's good to have a choice when mixing
>> a song. Some DAWs, depending on the genre, will yield better or more desired
>> results than another.
>> EX: I would not mix an acoustic jazz record today with Paris.. reason: I'm
>> going for clarity at its highest level.. For that project, it's either Nuendo
>> or Pro Tools and maybe Samplitude.. Why should I fight with Paris's thick,
>> gooey sonics when I'm going for clarity? Well, Pro Tools and Nuendo/SX have
>> that sound right out of the gate.. which makes my job a lot easier. Simple.
>> This is not to say that I could not get the job done in Paris.. I could..
>> But for that acoustic jazz project, the other 2 DAWs give me what I'm looking
>> for without even touching an EQ..
>>
>> This is not all about math. As BrianT states: use your ears.. forget the
>> math.. What does knowing the math do for you anyway? Nothing, it just proves
>> that you know the math. It does not tell you diddly about the sonics.. Just
>> ask Roger Nichols..
>>
>>
>> "Dedric Terry" <d@nospam.net> wrote:
>>>
>>>I know we disagree here Lamont and that's totally cool, so I won't take
>> this
>>>beyond this one response, and this isn't really directed to you, but my
>> general
>>>thoughts on the matter.
>>>
>>>In Neil's "defense" (not that he needs it), I and others have done this
>> comparison
>>>to death and the conclusion I've come to is that people are 80% influenced
>>>by a change in environment (e.g. software interface) and 20% ears. Sorry
>>>to say it, but the difference in sound between floating point DAWs is far
>>>from real. It's just good, albeit unintentional, marketing created by users
>>>and capitalized on by manufacturers. Perceiving a "sound" in DAWs that in
>> actuality
>>>process data identically, is a bad reason to pick a DAW, but of course
>>>there
>>>is nothing wrong with thinking you hear a difference as long as it doesn't
>>>become an unwritten law of engineering at large. Preferring to work with
>>>one or the other, and "feeling" better about it for whatever reason is a
>>>great reason to pick one DAW over another.
>>>
>>>There was a recent thread claiming that Nuendo handled gain through groups
>>>differently, so I put Nuendo, Sonar 6 (both 32 and 64-bit engines) and
>>>Sequoia 8.3 to the test - identical tests, set up to the 1/100th of a dB
>>>identically - and came up with absolutely no difference, either audible or
>>>scientific. To be honest, this was the one test where I could have said,
>>>yes, there is an understandable difference between DAWs in a simple math
>>>function, and the only one in the DAW that actually might make sense, yet
>>>even that did not exist. The reason - math is math. You can paint it red,
>>>blue, silver or dull grey, but it's still the same math unless the
>>>programmer was high or completely incompetent when they wrote the code.
>>>
>>>I thought it was entirely possible the original poster had found something
>>>different in Nuendo, but when it came down to really understanding and
>>>reproducing what happens in DAW summing and gain structures accurately
>>>between each DAW,
>>>there was none, nada, nil. The assertion was completely squashed. This also
>>>showed me how easy it is for a wide range of professionals to misinterpret
>>>digital audio - whether hearing things, or just setting up a test with a
>>>single missed variable that completely invalidates the whole process.
>>>
>>>If you hear a difference, great. I've thought I heard a difference doing
>>>similar comparisons, then changed my perspective (nothing else - not
>>>converters,
>>>nothing - just reset my expectations, and switched back and forth) and
>>>could
>>>hear no difference.
>>>
>>>Just leave some room for other opinions when you post yours on this
>>>subject
>>>since it is very obvious that hearing is not as universally objective and
>>>identically referenced as everyone might like to believe, and is highly
>>>visually and environmentally affected. Some will hear differences in DAWs. There
>>>are Cubase SX 3 users claiming Cubase 4 sounds different. Sigh. Then they
>>>realize they aren't even using the same project... or at least different
>>>EQs, or etc, etc....
>>>
>>>Say what you want about published summing tests, but Lynn's tests are as
>>>accurate as it gets, and that bears out in the results (all floating point
>>>DAWs cancel and sound identical - if you are hearing a difference, you are
>>>hearing things that aren't there, or you forgot to align their gain and
>>>placement). I've worked with Lynn at least briefly enough to know his
>>>attention to detail.
>>> In the same way people will disagree about PCs and Macs until neither
>>> exists,
>>>so will audio engineers disagree about DAWs. This is one debate that will
>>>always exist as long as we have different ears, eyes, brains,... and
>>>opinions.
>>>
>>>
>>>What Neil has done is to prove that opinions are always going to differ (i.e.
>>>no consensus on the "best" mix of the ones posted). And in truth everyone
>>>has a different perception of sound in general - not everyone wants to hear
>>>things the same way, so we judge "best" from very different perspectives.
>>>There is no single gold standard. There are variations and mutated
>>>combinations, but all are subjective. That in and of itself implies very
>>>distinctly that people can and will even perceive the exact same sound
>>>differently if presented
>>>with any variable that changes the brain's interpretation, even if just a
>>>visual distraction. Just change the lights in the room and see if you
>>>perceive a song differently played back exactly the same way. Or have a cat
>>>run across a desk while listening. Whether you care to admit it or not, it
>>>is there, and that is actually the beauty of how our senses interact to
>>>create perception.
>>> That may be our undoing with DAW comparison tests, but it's also what
>>> keeps
>>>music fresh and creative, when we allow it to.
>>>
>>>So my suggestion is to use what makes you most creative, even if it's just
>>>a "feeling" working with that DAW gives you - be it the workflow, the GUI,
>>>or even the name brand reputation. But, as we all know, if you can't make
>>>most any material sound good on whatever DAW you choose, the DAW isn't the
>>>problem.
>>>
>>>Regards,
>>>Dedric
>>>
>>>"Neil" <IUOIU@OIU.com> wrote:
>>>>
>>>>That's interesting - all those DAW sonic interpretations, I
>>>>mean... I haven't had a chance to use all of those, so it's
>>>>good information.
>>>>
>>>>I still don't understand why you consider my summing
>>>>comparisons "lame", however - it was a fair set of tests;
>>>>the same mix summed in different ways. Not trying to prove a
>>>>point or to rig it so one sounded any better than the other - in
>>>>fact, if you recall the thread, different people liked different
>>>>summed versions for different reasons... there wasn't any one
>>>>version that stood out as being "the one" that everyone felt
>>>>sounded better. The only reason I didn't come right out & say
>>>>right away which version was which is so that I didn't bias
>>>>anyone's opinion beforehand by mentioning that... NOT to try
>>>>& "hide" anything or "trick" anyone, as you accused me of doing.
>>>>
>>>>Sheesh!
>>>>
>>>>Neil
>>>>
>>>>
>>>>"Lamont" <jjdpro@ameritech.net> wrote:
>>>>>
>>>>>Hey Neil,
>>>>>
>>>>>All I'm saying is: all DAW software has its own unique sound. Despite
>>>>>what those lame summing tests show..
>>>>>
>>>>>PT-HD has a very distinct sound. A very polished sound, with a nice top
>>>>>end, but with the full audio spectrum represented. The mixer/summing buss
>>>>>can be pushed, but you have to watch it.
>>>>>
>>>>>Nuendo/SX: Has a very clear, two-dimensional sound that does not hype the
>>>>>top nor the bottom end.
>>>>>
>>>>>Logic Audio: Very broad, aggressive sound that really works for Rock and
>>>>>R&B/Gospel mixes.
>>>>>
>>>>>Digital Performer: With their hardware, superb audio quality. Full-bodied
>>>>>sound.
>>>>>
>>>>>Sonar: Very flat sounding. I would say that Sonar is the most
>>>>>vanilla-sounding DAW on the market..
>>>>>
>>>>>Samplitude: A little less top end than Pro Tools. Full-bodied 3D sound..
>>>>>
>>>>>Paris: Dark sounding in comparison to the other DAWs. But it has a 3D
>>>>>sound quality that's full bodied.
>>>>>
>>>>>I feel that you're asking SX to be something it's not with some analog
>>>>>summing. Especially for your genre of music..
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>"Neil" <IUOIU@OIU.com> wrote:
>>>>>>
>>>>>>"Lamont" <jjdpro@ameritech.net> wrote:
>>>>>>>
>>>>>>>"I'd disagree with you in this instance because I happen to think the
>>>>>>>Cubase ones DO sound better."
>>>>>>>
>>>>>>>Then that SSL engineer does not know what they are doing with the board.
>>>>>>>There's no way a mix coming off of that SSL board should sound better
>>>>>>>than an ITB Cubase SX mix..
>>>>>>>
>>>>>>>Sorry, that just does not jibe. That engineer does not know how to push
>>>>>>>the SSL or is just not familiar with it.
>>>>>>
>>>>>>You're not really paying attention, are you? It was the same
>>>>>>engineer (me). And as far as whether or not I know how to use
>>>>>>that particular board, I guess that would be a matter of
>>>>>>your opinion. I don't think the SSL mixes are bad ones, I think
>>>>>>they came out good; I just think that you can hear more detail
>>>>>>in the ITB mixes in the examples I gave, and they have more
>>>>>>wideband frequency content from top to bottom.
>>>>>>
>>>>>>Anyway, my point of that particular comparison wasn't to say
>>>>>>"ITB mixes are better than using a large-format console that
>>>>>>costs somewhere in the six-figure range", the point of it was to
>>>>>>address a signal-chain suggestion that Paul had... he had
>>>>>>suggested perhaps that I needed to pick up a few pieces of
>>>>>>killer vintage gear, and I was just demonstrating that I think
>>>>>>the various signal chain components that I have here are on par
>>>>>>with most anything that can be found in heavy-hitter studios...
>>>>>>we used probably around $100k's worth of mics & pre's on the
>>>>>>PTHD/SSL mixes, plus obviously you're looking at another
>>>>>>roughly $100k for that particular console (40-channel E-series,
>>>>>>black EQ's, w/G-series Computer & Total Recall package), add in
>>>>>>the PTHD, outboard gear & whatnot, and you end up with
>>>>>>somewhere around a quarter-mil's worth of equipment involved in
>>>>>>that project. The project done at my place was done with my
>>>>>>gear, which certainly doesn't tally up to anywhere remotely
>>>>>>close to that cost & none of it bears a "vintage" stamp, but it
>>>>>>sounds competitive with the project that used all the heavy-
>>>>>>hitter stuff.
>>>>>>
>>>>>>Neil
>>>>>
>>>>
>>>
>>
>
>
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77274 is a reply to message #77265]
Wed, 20 December 2006 17:14
LaMont
Messages: 828 Registered: October 2005
Senior Member
Lol!!! :)
"TCB" <nobody@ishere.com> wrote:
>
>Well, hey, at least we can agree that Live sounds good.
>
>I'm really psyched for my Live/Scope setup. The Core Duo desktop is set up
>and running nicely (what a change from a three year old Athlon) so I'll have
>gobs of native f/x and instruments, UAD-1 plugs, and Scope synths/effects/mixing.
>Then I will officially be 100% at fault if I suck ;-)
>
>TCB
>
>"LaMont" <jjdpro@gmail.com> wrote:
>>
>>Agreed..
>>"TCB" <nobody@ishere.com> wrote:
>>>
>>>That's too bad. I think people have an instinctive thing against the sound
>>>of Live as well, just because it also loops like ACID does. Live sounds like
>>>a properly written native DAW when working with non-time-stretched tracks.
>>>The sound quality on the stretched audio is amazing, all things considered,
>>>but the non-stretched sound is indistinguishable from SX. Too bad the only
>>>really truly awful sounding app has to bring down a perfectly nice sounding
>>>one.
>>>
>>>TCB
>>>
>>>"DJ" <nowayjose@dude.net> wrote:
>>>>>with the exception of the last version of ACID I used (way back, I think
>>>>>v. 3) which did sound truly awful.<
>>>>Don't hold your breath hoping ACID will sound any better. I DL'ed v6.0 and
>>>>it sounds pretty awful too. I'm going to run it using ReWire in Cubase SX 3
>>>>and lightpipe it to Paris and see if it's the actual ACID audio engine or
>>>>the summing. I've got a feeling it's the audio engine.. just a
>>>>feeling...... because I've also got Vegas here and it sucks too.
>>>>
>>>>;o)
>>>>
>>>>"TCB" <nobody@ishere.com> wrote in message news:45894947$1@linux...
>>>>>
>>>>> I'm not convinced I can hear any difference between native systems, with
>>>>> the exception of the last version of ACID I used (way back, I think v. 3)
>>>>> which did sound truly awful. The real test on that one for me was the DAWSUM
>>>>> CD (which I purchased and dutifully scored because I was convinced 'summing'
>>>>> was the real reason PARIS sounded so good) wherein I discovered that I could
>>>>> barely tell one mix from the next even when hearing vastly different systems.
>>>>> Since then I am a skeptic, as opposed to a disbeliever, when I hear that
>>>>> one piece of software sounds greatly better, or even different, than another.
>>>>> I'm not saying some people can't tell some pieces of software from other
>>>>> pieces of software, I'm just saying I'm skeptical one system is 'bright'
>>>>> or 'sharp' or anything else until someone can produce statistically
>>>>> meaningful results in an ABY test.
>>>>>
>>>>> One of the great things about that DAWSUM CD is it has let me use the
>>>>> software that I like the most, without worrying too much about the sound.
>>>>> That would be Ableton Live most of the time, with SX as a backup if the
>>>>> editing gets more intense. For me that alone was worth the time I spent
>>>>> on the DAWSUM CD.
>>>>>
>>>>> TCB
>>>>>
>>>>> "LaMont" <jjdpro@ameritech.net> wrote:
>>>>>>
>>>>>>Thad, you really can't hear the difference?? Maybe I own too many software
>>>>>>DAWs thru the years.
>>>>>>
>>>>>>Starting on Logic Audio 3.0, then to Cakewalk, Pro Tools, DP, Paris, Acid,
>>>>>>Nuendo, Sonar, Samplitude..
>>>>>>
>>>>>>I can hear the difference with the same audio interface with the same wav
>>>>>>file(s) as soon as I import the file or files.
>>>>>>
>>>>>>These days, the genre I'm mixing determines which DAW software I'll use.
>>>>>>My circle of engineer and producer buddies all can hear the difference in
>>>>>>a second. Just the other day, we were mixing this R&B(ish) Gospel track and
>>>>>>somebody said, 'Mont, this is begging for Paris. Another track, the call
>>>>>>was for Pro Tools. And another, Nuendo..
>>>>>>I know BrianT feels and hears the same way in different DAW software. It's
>>>>>>really obvious..
>>>>>>
>>>>>>
>>>>>>"TCB" <nobody@ishere.com> wrote:
>>>>>>>
>>>>>>>Which is why ABY testing uses expert listeners instead of scopes and
>>>>>>>graphs.
>>>>>>>
>>>>>>>I'm not saying you're wrong, esp. about ITB vs external summing. One
>>>>>>>would expect that to sound at least slightly different. But I would be
>>>>>>>absolutely shocked if anyone could tell in a controlled ABY test whether
>>>>>>>they were listening to SX, Performer, or Sonar.
>>>>>>>
>>>>>>>"LaMont" <jjdpro@ameritech.net> wrote:
>>>>>>>>
>>>>>>>>The only real test is with the ears and not scopes and graphs.
>>>>>>>>
>>>>>>>>"TCB" <nobody@ishere.com> wrote:
>>>>>>>>>
>>>>>>>>>I'd like to see this proven in a controlled ABY test.
>>>>>>>>>
>>>>>>>>>"Lamont" <jjdpro@ameritech.net> wrote:
>>>>>>>>>>
>>>>>>>>>>Hey Neil,
>>>>>>>>>>
>>>>>>>>>>All I'm saying is: All DAW software have their own unique sound.
>>
>>>>>>>>>>Despite
>>>>>>>>>>what those lame summing test shows..
>>>>>>>>>>
>>>>>>>>>>PT-HD has a very distinct sound. A very polished sound, with a nice top
>>>>>>>>>>end, but with full audio spectrum represented. Mixer/Summing buss can be
>>>>>>>>>>pushed, but you have to watch it.
>>>>>>>>>>
>>>>>>>>>>Nuendo/SX: Has a very clear, two-dimensional sound that does not hype
>>>>>>>>>>the top nor the bottom end.
>>>>>>>>>>
>>>>>>>>>>Logic Audio: Very broad, aggressive sound that really works for Rock
>>>>>>>>>>and R&B/Gospel mixes.
>>>>>>>>>>
>>>>>>>>>>Digital Performer: With their hardware, superb audio quality. Full-bodied
>>>>>>>>>>sound.
>>>>>>>>>>
>>>>>>>>>>Sonar: Very flat sounding. I would say that Sonar is the most
>>>>>>>>>>vanilla-sounding DAW on the market..
>>>>>>>>>>
>>>>>>>>>>Samplitude: A little less top end than Pro Tools. Full-bodied 3D
>>>>>>>>>>sound..
>>>>>>>>>>
>>>>>>>>>>Paris: Dark sounding in comparison to the other DAWs. But it has a 3D
>>>>>>>>>>sound quality that's full bodied.
>>>>>>>>>>
>>>>>>>>>>I feel that you're asking SX to be something it's not with some analog
>>>>>>>>>>summing. Especially for your genre of music..
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>"Neil" <IUOIU@OIU.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>"Lamont" <jjdpro@ameritech.net> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>"I'd disagree with you in this instance because I happen to think
>>>>>>>>>>>>the Cubase ones DO sound better."
>>>>>>>>>>>>
>>>>>>>>>>>>Then that SSL engineer does not know what they are doing with the
>>>>>>>>>>>>board. There's no way a mix coming off of that SSL board should
>>>>>>>>>>>>sound better than an ITB Cubase SX mix..
>>>>>>>>>>>>
>>>>>>>>>>>>Sorry, that just does not jibe. That engineer does not know how to
>>>>>>>>>>>>push the SSL or is just not familiar with it.
>>>>>>>>>>>
>>>>>>>>>>>You're not really paying attention, are you? It was the same
>>>>>>>>>>>engineer (me). And as far as whether or not I know how to use
>>>>>>>>>>>that particular board, I guess that would be a matter of
>>>>>>>>>>>your opinion. I don't think the SSL mixes are bad ones, I think
>>>>>>>>>>>they came out good; I just think that you can hear more detail
>>>>>>>>>>>in the ITB mixes in the examples I gave, and they have more
>>>>>>>>>>>wideband frequency content from top to bottom.
>>>>>>>>>>>
>>>>>>>>>>>Anyway, my point of that particular comparison wasn't to say
>>>>>>>>>>>"ITB mixes are better than using a large-format console that
>>>>>>>>>>>costs somewhere in the six-figure range", the point of it was to
>>>>>>>>>>>address a signal-chain suggestion that Paul had... he had
>>>>>>>>>>>suggested perhaps that I needed to pick up a few pieces of
>>>>>>>>>>>killer vintage gear, and I was just demonstrating that I think
>>>>>>>>>>>the various signal chain components that I have here are on par
>>>>>>>>>>>with most anything that can be found in heavy-hitter studios...
>>>>>>>>>>>we used probably around $100k's worth of mics & pre's on the
>>>>>>>>>>>PTHD/SSL mixes, plus obviously you're looking at another
>>>>>>>>>>>roughly $100k for that particular console (40-channel E-series,
>>>>>>>>>>>black EQ's, w/G-series Computer & Total Recall package), add in
>>>>>>>>>>>the PTHD, outboard gear & whatnot, and you end up with
>>>>>>>>>>>somewhere around a quarter-mil's worth of equipment involved in
>>>>>>>>>>>that project. The project done at my place was done with my
>>>>>>>>>>>gear, which certainly doesn't tally up to anywhere remotely
>>>>>>>>>>>close to that cost & none of it bears a "vintage" stamp, but it
>>>>>>>>>>>sounds competitive with the project that used all the heavy-
>>>>>>>>>>>hitter stuff.
>>>>>>>>>>>
>>>>>>>>>>>Neil
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77277 is a reply to message #77273]
Wed, 20 December 2006 17:30
Jamie K
Messages: 1115 Registered: July 2006
Senior Member
To settle this, we're gonna have to get you two in the same room with
multiple DAWs in a double blind test and see if you can hear a
difference between the same exact stereo file playing back through the
same monitoring chain from different DAWs.
Cheers,
-Jamie
www.JamieKrutz.com
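[Editorial note: Jamie's double-blind proposal is the standard way to settle this, and TCB's earlier call for "statistically meaningful results" in an ABY test has a simple form: count correct identifications over n trials and ask how likely that count would be by pure guessing. A sketch follows; the 16-trial example is illustrative, not data from this thread.]

```python
from math import comb

def guess_probability(correct, trials):
    """Probability of getting at least `correct` right out of `trials`
    if the listener is guessing at random (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 or more right out of 16 happens by chance under 4% of the time,
# which clears the usual p < 0.05 bar for calling a result meaningful.
print(round(guess_probability(12, 16), 4))  # 0.0384
```

Below roughly 12/16 correct, "I can hear it" and "I'm guessing" are statistically indistinguishable, which is exactly why a sighted comparison proves so little.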
LaMont wrote:
> See Dedric, everything in this life is not explainable. Although we would like
> it to be 2+2=4, the reality is that sometimes 2+2=4.40. Why? Because the
> math is flawed. Why is the math flawed? Because we as humans are flawed.
> Say what you will about the metric system, which is a great tool. But sometimes
> working in inches and 16ths and 3/4s works better.
>
> When a guy like Roger Nichols bangs his proverbial head around this issue
> of why his mixes sound different when rendered from different (and sometimes
> the same) CD mastering devices, it is explainable, but the explanation does
> not jibe with the science.
> Are we to believe that the science we have today about digital audio is
> the last word?? No.. In the future, some new science will come along
> and either rebut our current science or enhance it.
>
> What I and others say is: we drop a stereo wav file in a given DAW (unity gain)
> using the same audio converter... We can hear the difference. And it's sonically
> obvious..
>
> Lynn's test is flawed because of the Roger Nichols CD mastering problem. Things
> change when rendering to CD.
>
> Hey, some people on this earth can hear and see better than others.. That's
> just a fact.
>
> "Dedric Terry" <dedric@echomg.com> wrote:
>> Of course Paris sounds different on Lynn's sampler, that was audible, and
>> there are technical reasons why Paris will always sound different, but I
>> didn't like it better on the sampler CD, to be honest, though the
>> differences were subtle. Also, we weren't talking about Acid vs. Sonar
>> specifically. I don't even bother with Acid as a DAW example - it's a loop
>> app. Vegas is a video app that has had life as an audio app to some degree,
>> but iMovie does audio as well, yet that doesn't really put it in the same
>> category as professional DAW apps like Nuendo, PTHD, Sequoia, etc. I use
>> Vegas for video, but not audio.
>>
>> On Lynn's sampler, Samplitude, Nuendo, Fairlight and the other natives don't
>> sound different and aren't different in the unity gain examples
>> (even the PTHD mix cancels with these). If you hear two files sounding
>> differently that cancel to complete null, an audio difference isn't what you
>> are hearing. When there are differences in non-unity gain mix summing
>> tests, you have an extra variable to account for - how is the gain calculated?
>> Gain is non-linear (power), not adding two numbers together. So how is pan law
>> factored in, and where? Are your faders exactly the same, or 0.001dB variant?
>>
>> Also if you drop the same stereo file in two different pro audio apps and
>> hear a difference, one of the two apps is defective. There is nothing
>> happening with a stereo file playback when no gain change or plugins are
>> active - just audio streaming to the driver from disk. If you hear a
>> difference there, I would be quickly trying to find out why. Something is
>> wrong.
>>
>> The point I am making is that these arguments usually come up as blanket
>> statements with no qualification of what exactly sounds
>> different, why it might, or solid well reasoned attempts to find out why, or
>> if there could be a real difference, or just a perceived one.
>>
>> Usually the "use your ears" comment comes up when there is no technical
>> rebuttal for when the science and good
>> ears agree. Of course "use your ears" first from a creative perspective,
>> but if you are making a technical, scientific statement, then such comments
>> aren't a good foundation to work from. It's a great motto, but a bit of a
>> cop out in a technical discussion.
>>
>> Regards,
>> Dedric
>>
>> "LaMont" <jjdpro@ameriech.net> wrote in message news:45897f73$1@linux...
>>> Hey Dedric and Neil,
>>>
>>> The reason I think the Summing CD tests (good intentions) were lame is
>>> because... a person who can't hear the difference between a stereo wav file
>>> that's in Acid vs Sonar really needs a hearing test.
>>>
>>> Because of my music work, I have to work with different DAWs, so I'm very
>>> familiar with their sound qualities. My circle of producers and engineers
>>> talk about DAW sonics all the time. It's really no big deal anymore..
>>>
>>> The same logic applied when Roger Nichols, a few years back in his article
>>> about mastering CDs, found out that 4 different CD burners yielded
>>> different sonic results. Sure, he stated that the math is the math :) but his
>>> and the mastering engineers' ears told them something was different.
>>> Hummm???
>>>
>>> Now, back to DAW sonics. I can hear the difference btw Paris and Nuendo vs
>>> Pro Tools, Logic Audio.. There is no math to this, this is an ear thing.. You
>>> either hear it or you don't.. Simple.
>>> But good ears can hear it.
>>>
>>> I really think the problem is, no one wants to know that the money they've
>>> spent on a given DAW has sonic limitations, or shall we say, is just
>>> different..
>>>
>>> I like that they all sound different. It's good to have a choice when mixing
>>> a song. Some DAWs, depending on the genre, will yield better or more desired
>>> results than another.
>>> EX: I would not mix an acoustic jazz record today with Paris.. reason: I'm
>>> going for clarity at its highest level.. For that project, it's either Nuendo
>>> or Pro Tools and maybe Samplitude.. Why should I fight with Paris's thick,
>>> gooey sonics when I'm going for clarity? Well, Pro Tools and Nuendo/SX have
>>> that sound right out of the gate.. which makes my job a lot easier. Simple.
>>> This is not to say that I could not get the job done in Paris.. I could..
>>> But for that acoustic jazz project, the other 2 DAWs give me what I'm looking
>>> for without even touching an EQ..
>>>
>>> This is not all about math. As BrianT states: use your ears.. forget the
>>> math.. What does knowing the math do for you anyway? Nothing, it just proves
>>> that you know the math. It does not tell you diddly about the sonics.. Just
>>> ask Roger Nichols..
>>>
>>>
>>> "Dedric Terry" <d@nospam.net> wrote:
>>>> I know we disagree here Lamont and that's totally cool, so I won't take
>>> this
>>>> beyond this one response, and this isn't really directed to you, but my
>>> general
>>>> thoughts on the matter.
>>>>
>>>> In Neil's "defense" (not that he needs it), I and others have done this
>>>> comparison to death and the conclusion I've come to is that people are 80%
>>>> influenced by a change in environment (e.g. software interface) and 20% ears.
>>>> Sorry to say it, but the difference in sound between floating point DAWs is
>>>> far from real. It's just good, albeit unintentional, marketing created by
>>>> users and capitalized on by manufacturers. Perceiving a "sound" in DAWs that
>>>> in actuality process data identically is a bad reason to pick a DAW, but of
>>>> course there is nothing wrong with thinking you hear a difference as long as
>>>> it doesn't become an unwritten law of engineering at large. Preferring to
>>>> work with one or the other, and "feeling" better about it for whatever
>>>> reason is a great reason to pick one DAW over another.
>>>>
>>>> There was a recent thread that Nuendo handled gain through groups
>>>> differently,
>>>> so I put Nuendo, Sonar 6 (both 32 and 64-bit engines) and Sequoia 8.3
> to
>>>> the test - identical tests, setup to the 1/100th of a dB identically and
>>>> came up with absolutely no difference, either audible or scientific.
> To
>>>> be honest, this was the one test where I could have said, yes there is
> an
>>>> understandable difference between DAWs in a simple math function, and
> the
>>>> only one in the DAW that actually might make sense, yet even that did
> not
>>>> exist. The reason - math is math. You can paint it red, blue, silver
> or
>>>> dull grey, but it's still the same math unless the programmer was high
> or
>>>> completely incompetent when they wrote the code.
>>>>
>>>> I thought it was entirely possible the original poster had found something
>>>> different in Nuendo, but when it came down to really understanding and
>
>>>> reproducing
>>>> what happens in DAW summing and gain structures accurately between each
>>> DAW,
>>>> there was none, nada, nil. The assertion was completely squashed. This
>
>>>> also
>>>> showed me how easy it is for a wide range of professionals to misinterpret
>>>> digital audio - whether hearing things, or just setting up a test with
> a
>>>> single missed variable that completely invalidates the whole process.
>>>>
>>>> If you hear a difference, great. I've thought I heard a difference doing
>>>> similar comparisons, then changed my perspective (nothing else - not
>>>> converters,
>>>> nothing - just reset my expectations, and switched back and forth) and
>
>>>> could
>>>> hear no difference.
>>>>
>>>> Just leave some room for other opinions when you post yours on this
>>>> subject
>>>> since it is very obvious that hearing is not as universally objective
> and
>>>> identically referenced as everyone might like to believe, and is highly
>>> visually
>>>> and environmentally affected. Some will hear differences in DAWs. There
>>>> are Cubase SX 3 users claiming Cubase 4 sounds different. Sigh. Then
>
>>>> they
>>>> realize they aren't even using the same project... or at least different
>>>> EQs, or etc, etc....
>>>>
>>>> Say what you want about published summing tests, but Lynn's tests are
> as
>>>> accurate as it gets, and that bears out in the results (all floating point
>>>> DAWs cancel and sound identical - if you are hearing a difference, you
> are
>>>> hearing things that aren't there, or you forgot to align their gain and
>>> placement).
>>>> I've worked with Lynn at least briefly enough to know his attention to
>>> detail.
>>>> In the same way people will disagree about PCs and Macs until neither
>
>>>> exists,
>>>> so will audio engineers disagree about DAWs. This is one debate that
> will
>>>> always exist as long as we have different ears, eyes, brains,... and
>>>> opinions.
>>>>
>>>>
>>>> What Neil has done is to prove that opinions are always going to differ
>>> (i.e.
>>>> no consensus on the "best" mix of the ones posted). And in truth everyone
>>>> has a different perception of sound in general - not everyone wants to
>
>>>> hear
>>>> things the same way, so we judge "best" from very different perspectives.
>>>> There is no single gold standard. There are variations and mutated
>>>> combinations,
>>>> but all are subjective. That in and of itself implies very distinctly
>
>>>> that
>>>> people can and will even perceive the exact same sound differently if
>
>>>> presented
>>>> with any variable that changes the brain's interpretation, even if just
>>> a
>>>> visual distraction. Just change the lights in the room and see if you
>
>>>> perceive
>>>> a song differently played back exactly the same way. Or have a cat run
>>> across
>>>> a desk while listening. Whether you care to admit it or not, it is there,
>>>> and that is actually the beauty of how our sense interact to create
>>>> perception.
>>>> That may be our undoing with DAW comparison tests, but it's also what
>
>>>> keeps
>>>> music fresh and creative, when we allow it to.
>>>>
>>>> So my suggestion is to use what makes you most creative, even if it's
> just
>>>> a "feeling" working with that DAW gives you - be it the workflow, the
> GUI,
>>>> or even the name brand reputation. But, as we all know, if you can't
> make
>>>> most any material sound good on whatever DAW you choose, the DAW isn't
> the
>>>> problem.
>>>>
>>>> Regards,
>>>> Dedric
>>>>
>>>> "Neil" <IUOIU@OIU.com> wrote:
>>>>> That's interesting - all those DAW sonic interpretations, I
>>>>> mean... I haven't had a chance to usee all of those, so it's
>>>>> good information.
>>>>>
>>>>> I still don't understand why you consider my summing
>>>>> comparisons "lame", however - it was a fair set of tests;
>>>>> the same mix summed in different ways. Not trying to prove a
>>>>> point or to rig it so one sounded any better than the other - in
>>>>> fact, if you recall the thread, different people liked different
>>>>> summed versions for different reasons... there wasn't any one
>>>>> version that stood out as being "the one" that everyone felt
>>>>> sounded better. The only reason I didn't come right out & say
>>>>> right away which version was which is so that I didn't bias
>>>>> anyone's opinion beforehand by mentioning that... NOT to try
>>>>> & "hide" anything or "trick" anyone, as you accused me of
>>>>>
>>>>> Sheesh!
>>>>>
>>>>> Neil
>>>>>
>>>>>
>>>>> "Lamont" <jjdpro@ameritech.net> wrote:
>>>>>> Hey Neil,
>>>>>>
>>>>>> All I'm saying is: All DAW software have their own unique sound.
>>>>>> Despite
>>>>>> what those lame summing test shows..
>>>>>>
>>>>>> PT-HD has a very distinct sound. A very polished sound, with a nice
> top
>>>>> end,
>>>>>> but with full audio spectrum represented. Mixer/Summing buss can be
>
>>>>>> pushed,
>>>>>> but you have to watch it.
>>>>>>
>>>>>> Nuendo/SX: Has a very Clear, 2 dimension sound, that does not hype the
>>>> top
>>>>>> nor bottom end.
>>>>>>
>>>>>> Logic Audio: Very Broad- Aggressive sound, that really works for Rock
>>> and
>>>>>> R & B/Gospel mixes.
>>>>>>
>>>>>> Digital Performer: With their hardware, superb audio quality. Full
>>>>>> bodied
>>>>>> sound .
>>>>>>
>>>>>> Sonar: Very flat sounding. I would say that Sonar is your most vanilla
>>>> sound
>>>>>> DW on the market..
>>>>>>
>>>>>> Samplitude : A little less top end than Pro Tools. Full bodied 3d
>>>>>> sound..
>>>>>>
>>>>>> Paris: Dark sounding in comparison to the the other DAWs. But, has a
> 3d
>>>>> sound
>>>>>> quality that's full bodied.
>>>>>>
>>>>>> I feel that you asking SX to be something it's not with some analog
>
>>>>>> summing.
>>>>>> Especialy for your genre of music..
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> "Neil" <IUOIU@OIU.com> wrote:
>>>>>>> "Lamont" <jjdpro@ameritech.net> wrote:
>>>>>>>> "I'd disagree with you in this instance because I happen to think
> the
>>>>> Cubase
>>>>>>>> ones DO sound better."
>>>>>>>>
>>>>>>>> Then that SSL Engineer does not know what they are doing with board.
>>>> There's
>>>>>>>> no way a mix coming off of that board SSL should sound better than
> a
>>>> ITB
>>>>>>>> Cubase SX mix..
>>>>>>>>
>>>>>>>> Sorry, that just does not jive. That engineer does not know how to
>
>>>>>>>> push
>>>>>>> he
>>>>>>>> SSL or just not familiar with it.
>>>>>>> You're not really paying attention, are you? It was the same
>>>>>>> engineer (me). And as far as whether or not I know how to use
>>>>>>> that particular board, I guess that would be a matter of
>>>>>>> your opinion. I don't think the SSL mixes are bad ones, I think
>>>>>>> they came out good; I just think that you can hear more detail
>>>>>>> in the ITB mixes in the examples I gave, and they have more
>>>>>>> wideband frequency content from top to bottom.
>>>>>>>
>>>>>>> Anyway, my point of that particular comparison wasn't to say
>>>>>>> "ITB mixes are better than using a large-format console that
>>>>>>> costs somewhere in the six-figure range", the point of it was to
>>>>>>> address a signal-chain suggestion that Paul had... he had
>>>>>>> suggested perhaps that I needed to pick up a few pieces of
>>>>>>> killer vintage gear, and I was just demonstrating that I think
>>>>>>> the various signal chain components that I have here are on par
>>>>>>> with most anything that can be found in heavy-hitter studios...
>>>>>>> we used probably around $100k's worth of mics & pre's on the
>>>>>>> PTHD/SSL mixes, plus obviously you're looking at another
>>>>>>> roughly $100k for that particular console (40-channel E-series,
>>>>>>> black EQ's, w/G-series Computer & Total Recall package), add in
>>>>>>> the PTHD, outboard gear & whatnot, and you end up with
>>>>>>> somewhere around a quarter-mil's worth of equipment involved in
>>>>>>> that project. The project done at my place was done with my
>>>>>>> gear, which certainly doesn't tally up to anywhere remotely
>>>>>>> close to that cost & none of it bears a "vintage" stamp, but it
>>>>>>> sounds competitive with the project that used all the heavy-
>>>>>>> hitter stuff.
>>>>>>>
>>>>>>> Neil
>>
>
|
|
|
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77278 is a reply to message #77273] |
Wed, 20 December 2006 18:23 |
Nil
Messages: 245 Registered: March 2007
|
Senior Member |
|
|
"LaMont" <jjdpro@ameritech.net> wrote:
>
>Lynns test is flawed because of the Roger Nicohls CD mastering
>problem. Things change when going to render to CD.
I don't know that that makes those tests flawed - you're
talking about another step being inserted into the process.
The additional step of mastering presents a whole different
set of issues unto itself.... doesn't necessarily mean that
tests done on steps involved prior to mastering are
automatically invalid.
>Hey some peole on this earth can hear and see better than
>others..That's just a fact
Let's separate the esoteric from the mundane here... I, for
one, am not necessarily interested in only those who can hear
better, I'm interested in those with average hearing
and "untrained" ears, too. Do I have "good ears"? Yeah,
I guess. "Golden Ears"? Probably not. Yet, in the comparison
files I did, I think it's safe to say we were ALL reduced to
an equal footing with those of average hearing & "untrained
ears", simply because they were rendered to an MP3 format,
which is incapable of reproducing the frequency range & fidelity
of the original files... yet EVERYONE heard a difference
between one clip & another. Again, some liked one version or
another for different reasons, but my point is: there was
indeed a perceivable difference between them, even though we
were all reduced to a lesser set of capabilities, hearing-wise,
than if we all had the original wav files to listen to.
THIS is what I'm talking about - not something that only makes
a difference to the "Golden Ears" types, but something that
enhances the listening pleasure for everyone, even the average
Joe who may listen to it. I don't know that subtle differences
between one Native DAW & another - whether they really exist or
not - are going to make this level of change in
what each contributes to a given mix. I've said before that
when it comes to tracking, it's all about the convertors, when
it comes to mixing, it's all about the summing... maybe that's
too broad of an empirical statement to be 100% accurate, but I
think there's a lot of truth to it.
I also don't know that whether two files "null" to 100% or not
is the only test of whether "a" must therefore sound just
like "b"... let's face it, if something nulls, all it means is
that every peak is equal in amplitude... but what's going on
BELOW the peaks? What does it sound like at 200Hz @ 5dB down
from the peak at that frequency, for example? What something is
DOING to the sound at a lower (but still audible) level, as
opposed to the amplitude at which it outputs that sound, is
something that a null test can't always address, IMO.
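On the "below the peaks" question: a null test in a DAW subtracts the files sample by sample, so it compares low-level content as well as peaks. A minimal NumPy sketch (illustrative only, not from the thread; the signals and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(44100).astype(np.float32)  # one second of a "mix"
b = a.copy()                                       # the same mix, rendered again

# Invert one copy and sum: identical data cancels at EVERY sample,
# low-level detail included, not just the peaks.
null = a + (-b)
assert np.max(np.abs(null)) == 0.0

# A copy with quiet extra content (noise around -60 dBFS) does NOT null;
# the residual exposes exactly what differs below the peaks.
c = (a + 1e-3 * rng.standard_normal(44100)).astype(np.float32)
residual_db = 20 * np.log10(np.max(np.abs(a - c)))
# residual_db sits far above a true null's floor, so the quiet
# difference is plainly visible in the residual
```

If two renders truly cancel to digital zero, there is no residual left at any level for the ear to pick up.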
So I guess I've made some points supporting both sides of the
argument... fight on! lol
Neil
|
|
|
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77279 is a reply to message #77273] |
Wed, 20 December 2006 18:17 |
Dedric Terry
Messages: 788 Registered: June 2007
|
Senior Member |
|
|
"LaMont" <jjdpro@ameritech.net> wrote in message news:4589d1aa$1@linux...
>
> See Dedric, everthing is this life is not expalinable. Although we would
> like
> it to be 2+2 =4, the rality is that sometimes 2+2=4.40..Why, becuase the
> the math is flawed. Why is the math flawed? Becuase we as humans are
> flawed.
You can wax philosophical all you want, but I think you would find a strong
argument
with a lot of very knowledgeable engineers, programmers and mathematicians
on that -
my college professors in math, engineering and digital signal processing
being among those.
Saying that not everything is explainable is saying that
software is partly unpredictable and has a mind and behavior of its own.
It doesn't in this case (no neural nets or AI going on here). Sure, you
can program audio processing differently when that is the goal, but you
can easily determine when that happens, and when it doesn't.
I don't agree that just because a guy is well known he has the last word
on the issue.
I also don't agree with your assessment that we disagree because
"some people hear better than others". You have no way of knowing what
and how well I hear, or how biased and subjective the hearing of the
engineers you quote might be. When I have more time we'll put this to
the test. I'll post something to listen to and we'll find out
what is heard and what isn't.
Also, come on, 2+2 is never 4.40. That's not even in the ballpark of being
a logical and reasonable
analogy. We might as well claim that blue sounds better than green, or the
sun only circles the earth
when the man in the moon is making cheddar cheese.
You also threw in Roger Nichols' CD mastering test as a reason to discount
Lynn's summing test,
but that has nothing to do with pulling files off of the *same* CD, hearing,
and watching them cancel
to null, so that is a poor argument as well.
Lamont, I've read some very well respected engineers, RN being one of them,
claim some absolute
crap in this realm, and back it up with highly suspect comparisons. I'm also
not
the only one who has noticed this, but out of respect we hold our tongues
and shake our heads, quietly trying
to keep the rest of the audio community from succumbing to audio myths
rather than practicing
intelligent engineering. I don't know everything - far from it - but I do
know what I hear, and how to connect
it with what I see and what I do know.
Why do I use scientific methods such as phase cancellation and isolating
variables to make my comparisons?
To rule out perception, visual distraction, and preconceived ideas built by
reading newsgroups, articles
by "famous" engineers, and peer pressure. I don't disagree that DAWs can be
designed to sound differently, but
when they aren't, *and* we verify that they aren't, sorry, there is nothing
left to be different other than the GUI,
marketing and street hype.
Maybe someday I'll win a grammy or write a hit song for a ditzy pop starlet
so my opinion will carry some weight. ;-)
In the meantime, just don't believe everything you read, and only half of
what you hear.
Dedric
> Say what you will about the metric system, which is a great tool.But,
> sometimes
> working in inches and 16ths, 3/4s works better.
>
> When a guy like Roger Nichols bangs his preverbial head around this issue
> as to why his mix sound different being rendered from different and
> sometimes
> the same cd mastering devices is expalinable, however the explanation does
> not jive with the science.
> Are we to believe that the Science we have today about digital audio is
> the the Last word?? No.. In the future, some new science will come along
> and either rebuff our current science or enhance it.
>
> We I and other say.. We drop a stereo wav file in a given daw)(unity gain)
> using the same audio converter...We can hear the diference. And it's
> sonically
> obvious..
>
> Lynns test is flawed because of the Roger Nicohls CD mastering problem.
> Things
> change when going to render to CD.
>
> Hey some peole on this earth can hear and see better than others..That's
> just a fact
>
> "Dedric Terry" <dedric@echomg.com> wrote:
>>Of course Paris sounds different on Lynn's sampler, that was audible, and
>
>>there are technical reasons why Paris will always sound different, but I
>
>>didn't like it better on the sampler CD, to be honest, though the
>>differences were subtle. Also, we weren't talking about Acid vs. Sonar
>
>>specifically. I don't even bother with Acid as a DAW example - it's a
>>loop
>
>>app. Vegas is a video app that has had life as an audio app to some
>>degree,
>
>>but iMovie does audio as well, yet that doesn't really put it in the same
>
>>category as professional DAW apps like Nuendo, PTHD, Sequoia, etc. I use
>
>>Vegas for video, but not audio.
>>
>>On Lynn's sampler, Samplitude, Nuendo, Fairlight and the other natives
>>don't
>
>>sound different and aren't different in the unity gain examples
>>(even the PTHD mix cancels with these). If you hear two files sounding
>
>>differently that cancel to complete null, an audio difference isn't what
> you
>>are hearing. When there are differences in non-unity gain mix summing
>>tests, you have an extra variable to account for - how is the gain
>>calculated? Gain
>>is non-linear (power), not adding two numbers together. So how is pan law
>
>>factored in, and where? Are your faders exactly the same, or 0.001dB
>>variant?
>>
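The quoted point about gain being non-linear can be made concrete: fader dB values map to amplitude multipliers through a power law, so even a 0.001dB fader mismatch leaves a nonzero residual. A small illustrative sketch (not from the thread):

```python
# dB-to-amplitude is a power law, not addition:
# gain = 10^(dB/20), so -6.02 dB is very nearly a halving of the signal.
def db_to_gain(db: float) -> float:
    return 10.0 ** (db / 20.0)

half = db_to_gain(-6.0206)  # ~0.5

# Two "identical" faders 0.001 dB apart do not produce identical samples:
mismatch = db_to_gain(0.0) - db_to_gain(-0.001)
# mismatch is around 0.0001 of full scale - tiny, but enough to spoil
# a null test if the faders in the two DAWs aren't matched exactly.
```

This is why summing comparisons have to match fader and pan settings exactly before any conclusion can be drawn from a residual.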
>>Also if you drop the same stereo file in two different pro audio apps and
>
>>hear a difference, one of the two apps is defective. There is nothing
>>happening with a stereo file playback when no gain change or plugins are
>
>>active - just audio streaming to the driver from disk. If you hear a
>>difference there, I would be quickly trying to find out why. Something
> is
>>wrong.
>>
>>The point I am making is that these arguments usually come up as blanket
>
>>statements with no qualification of what exactly sounds
>>different, why it might, or solid well reasoned attempts to find out why,
> or
>>if there could be a real difference, or just a perceived one.
>>
>>Usually the "use your ears" comment comes up when there is no technical
>
>>rebuttal for when the science and good
>>ears agree. Of course "use your ears" first from a creative perspective,
>
>>but if you are making a technical, scientific statement, then such
>>comments
>>aren't a good foundation to work from. It's a great motto, but a bit of
> a
>>cop out in a technical discussion.
>>
>>Regards,
>>Dedric
>>
|
|
|
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77280 is a reply to message #77278] |
Wed, 20 December 2006 18:33 |
Dedric Terry
Messages: 788 Registered: June 2007
|
Senior Member |
|
|
"Neil" <IUOIU@OIU.com> wrote in message news:4589e206$1@linux...
> I also don't know that whether two files "null" to 100% or not
> is the only test of if "a" must therefore sound just
> like "b"... let's face it, if something nulls, all it means is
> that every peak is equal in amplitude... but what's going on
> BELOW the peaks? What does it sound like at 200hz @ 5db down
> from the peak at that frequency, for example? What is something
Neil, a digital signal is represented by more than peak amplitude; the sample stream also captures the modulation, period, and phase of the waveform, so phase cancellation has to compare more than peaks. Pull up a 44.1k, 24-bit file in Cool Edit or Audition and zoom in - you'll see sample points all along the wave for anything below 20kHz. All of those sample points have to cancel for a phase-invert test to null completely.
The point at which we lose any resolution is the last bit, and only when that bit is changed. When comparing two copies of the same file, or identical files, neither with any gain change, with one inverted, we aren't changing any bits - unless the audio app is severely flawed - so even quantization noise is out. Even tests in Nuendo that change the gain of a file, make that gain up later, and compare the result against a phase-inverted copy of the original show that only quantization noise below -144dB remains, and only a few peaks reach -136dB (which is the point at which we see deviation in the last bit due to truncation in the gain-change process).
Dedric
> DOING to the sound at perhaps a lower (but still audible)
> level, as opposed to at what amplitude is it outputting the
> sound is something that a null test can't always address, IMO.
>
> So I guess I've made some points supporting both sides of the
> argument... fight on! lol
>
> Neil
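Dedric's null-test description can be run directly - a minimal sketch (NumPy assumed) using a hypothetical one-second signal rather than any DAW's actual render path:

```python
import numpy as np

# Hypothetical one-second, 24-bit signal modeled as integer sample words.
rng = np.random.default_rng(0)
fs = 44100
a = rng.integers(-2**23, 2**23 - 1, size=fs)

b = a.copy()          # an identical copy, e.g. the same file twice
residual = a + (-b)   # polarity-invert one copy and sum

# A true null means every sample cancels, not just the peaks:
print(residual.min(), residual.max())   # → 0 0
```

Since every sample word is compared, any sample that differed at all would survive as a nonzero value in the residual.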
|
|
|
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77281 is a reply to message #77280] |
Wed, 20 December 2006 19:49 |
Nil
Messages: 245 Registered: March 2007
|
Senior Member |
|
|
"Dedric Terry" <dedric@echomg.com> wrote:
>
>"Neil" <IUOIU@OIU.com> wrote in message news:4589e206$1@linux...
>
>> I also don't know that whether two files "null" to 100% or not
>> is the only test of if "a" must therefore sound just
>> like "b"... let's face it, if something nulls, all it means is
>> that every peak is equal in amplitude... but what's going on
>> BELOW the peaks? What does it sound like at 200hz @ 5db down
>> from the peak at that frequency, for example? What is something
>
>Neil, the digital signal is represented by more than peak amplitude in order
>to
>represent modulation, period and phase of the waveform, so phase
>cancellation
>has to compare more than peaks. Pull up a 44.1k, 24-bit file in Cool Edit
>or Audition
>and zoom - you'll see sample points all along the wave for anything below
>20kHz.
>
>All of those sample points have to cancel for a phase invert test to cancel
>completely.
At 44,100 sample points per second, I'll bet you can have
PLENNNNTY of samples that "miss" or don't cancel completely &
still get a "null".
What is the sound of one sample clapping?
How about two?
How about twenty-seven samples, all roughly evenly spaced
across the course of a second... can you hear that if the
difference between those 27 samples is an average of a couple of
dB each?
How far does it go in # of samples per second & difference in
dB for each one before you can hear it? We don't really know,
do we? Who's done that test? No one. Ever.
Neil
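For what it's worth, Neil's twenty-seven-sample thought experiment can be run as a sketch (assumed numbers: a 440 Hz tone, ~2 dB dips). The altered samples survive the invert test rather than hiding inside a null:

```python
import numpy as np

# Hypothetical test signal: one second of a 440 Hz tone at 44.1k.
fs = 44100
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 440 * t)
b = a.copy()

idx = np.linspace(0, fs - 1, 27, dtype=int)  # 27 evenly spaced points
b[idx] *= 10 ** (-2 / 20)                    # attenuate them ~2 dB

residual = a - b                             # invert one copy and sum
print(np.count_nonzero(residual))            # nonzero only at those points
```

The residual is nonzero exactly where the two files disagree, so by definition a file with "missed" samples does not null.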
|
|
|
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77282 is a reply to message #77281] |
Wed, 20 December 2006 19:45 |
Dedric Terry
Messages: 788 Registered: June 2007
|
Senior Member |
|
|
>
> How far does it go in # of samples per second & difference in
> db for each one before you can hear it? We don't really know,
> do we? Who's done that test? No one. Ever.
Well, that's what the software is doing when adding a phase-inverted file to its original - inverting each word (24- or 16-bit) at each sample point in the audio file and comparing it to the same sample position in the non-inverted file (i.e. one by one, 44,100 of them every second).
So yes, we've all done exactly that, every time we do a phase-invert test and bounce the output to 32-bit floating point, or watch it on a realtime analyzer set to read below -144dB, which is the limit of what 24-bit can represent. A null is nothing showing up even below that level. If you zoom in on the analyzer you will see single-sample peaks on a phase-cancellation difference file, so even if one shows up, you can see it. The tests I did were completely blank down to -200dB (far below the last bit). It's safe to say there is no difference, even in quantization noise, which by technical rights is considered below the level of "cancellation" in such tests.
As for the question of how much, or rather how little, change we can hear - that has certainly been debated. But that's where the scientific side comes into play (when our ears mislead us too easily). If it cancels in software, you probably aren't hearing a difference in reality. If software math were that unreliable, you would never be able to recall a mix and run down a "duplicate" - even without random variables such as reverb or delay, it wouldn't sound even close to the same.
Regards,
Dedric
"Nei" <IUOIU@OIU.com> wrote in message news:4589f623$1@linux...
> At 44,1000 sample points per second, I'll bet you can have
> PLENNNNTY of samples that "miss" or don't cancel completely &
> still get a "null".
>
> What is the sound of one sample clapping?
>
> How about two?
>
> How about twenty seven samples, all roughly evenly-spaced
> across the couse of a second... can you hear that if the
> difference between those 27 samples is an average of couple of
> db each?
>
> How far does it go in # of samples per second & difference in
> db for each one before you can hear it? We don't really know,
> do we? Who's done that test? No one. Ever.
>
>
> Neil
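The gain-change experiment Dedric describes can be sketched as well (assumed parameters: 6 dB down, re-quantized to the 24-bit grid by rounding, then gain made back up; the actual test may truncate instead of round):

```python
import numpy as np

# Hypothetical signal already sitting on the 24-bit grid.
rng = np.random.default_rng(1)
a = np.round(rng.uniform(-1, 1, 44100) * 2**23) / 2**23

g = 10 ** (-6 / 20)                   # -6 dB of gain
b = np.round(a * g * 2**23) / 2**23   # gain down, back onto the grid
b = b / g                             # make the gain back up

residual = a - b                      # what the invert test leaves behind
peak_db = 20 * np.log10(np.max(np.abs(residual)))
print(peak_db)   # well below -130 dB: only quantization noise remains
```

This matches the shape of the result described above: the round trip is not bit-identical, but the leftover error sits around the last bit of a 24-bit word, far below audibility.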
|
|
|
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77285 is a reply to message #77282] |
Wed, 20 December 2006 22:59 |
|
I'm sorry Dedric, but your statements below are full of Scientific hot-air baloney..
You should always trust your ears, and not some stupid math. This is music, not a science project.. Get the wax out and listen with your ears instead of your scopes and graphs.. And while you're at it, how about actually working with different DAW software to actually hear the difference. Your conclusion that it's all "perception" is hogwash..
And if you'd read RN's piece on CD mastering, you'd know he was stuck on math as well, seeing as he's an Electrical Engineer (EE). Software has a sound.
>As far as the question of how much, or rather how little change can we hear
>- that is certainly been debated. But that's where the scientific side comes
>into play (when our ears mislead us too easily).
>If you cancel it in software, you probably aren't hearing a difference in
>reality. If software math is that unreliable, then you would never be able
>to recall a mix and run down a "duplicate" - even without random variables
>such as reverb or delay, it wouldn't sound even close to the same.
"Dedric Terry" <dedric@echomg.com> wrote:
>>
>> How far does it go in # of samples per second & difference in
>> db for each one before you can hear it? We don't really know,
>> do we? Who's done that test? No one. Ever.
>
>Well, that's what the software is doing when adding a phase inverted file
to
>it's original - inverting
>each word (24 or 16 bit) at each sample point in the audio file, and
>comparing them to the same
>sample position in the non-inverted file (e.g. one by one, 44,100 of them
>every second).
>
>So, yes, we've all done exactly that, everytime we do a phase invert test
>and bounce the output to 32-bit
>floating point, or watch it on a realtime analyzer set to read below -144dB,
>which is the end of where 24-bit
>can represent. A null is nothing showing up even below that level. If
you
>zoom in on the analyzer you will
>see single sample peaks on a phase cancellation test difference file, so
>even if one shows up, you can see it.
>The tests I did were completely blank down to -200 dB (far below the last
>bit). It's safe to say there is no difference, even in
>quantization noise, which by technical rights, is considered below the level
>of "cancellation" in such tests.
>
>As far as the question of how much, or rather how little change can we
>hear - that is certainly
>been debated. But that's where the scientific side comes into play (when
>our ears mislead us too easily).
>If you cancel it in software, you probably aren't hearing a difference in
>reality. If software math is that
>unreliable, then you would never be able to recall a mix and run down a
>"duplicate" - even without random
>variables such as reverb or delay, it wouldn't sound even close to the same.
>
>Regards,
>Dedric
>
>"Nei" <IUOIU@OIU.com> wrote in message news:4589f623$1@linux...
>
>> At 44,1000 sample points per second, I'll bet you can have
>> PLENNNNTY of samples that "miss" or don't cancel completely &
>> still get a "null".
>>
>> What is the sound of one sample clapping?
>>
>> How about two?
>>
>> How about twenty seven samples, all roughly evenly-spaced
>> across the couse of a second... can you hear that if the
>> difference between those 27 samples is an average of couple of
>> db each?
>>
>> How far does it go in # of samples per second & difference in
>> db for each one before you can hear it? We don't really know,
>> do we? Who's done that test? No one. Ever.
>>
>>
>> Neil
>
>
|
|
|
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77286 is a reply to message #77285] |
Wed, 20 December 2006 22:52 |
Dedric Terry
Messages: 788 Registered: June 2007
|
Senior Member |
|
|
Lamont,
If you had read my post you would know that I do use my ears, and that I have used pretty much every DAW in this thread. You would also have read that I always recommend ears first, and have in several other threads. My ears are what I base my decisions on exclusively. You obviously don't understand the process of comparative testing, or you would have understood (as I stated) that the "scopes and graphs," as you call them, come after the ears to confirm and/or put a clearer face on the "I think it sounds like this" perception. To be clear, since you seem to be missing this point, this "testing" is for the purpose of understanding what's behind the tools we use and sorting out fact from myth - not for making mixing decisions.
You have offered nothing other than your opinion in broad, emotionally based comments throughout this thread, with no specifics to back up your claim. You asserted a technical claim with "use your ears" and "software has a sound" instead of a firm grasp of what is actually being discussed. I and others have proven your blanket suppositions wrong in other situations several times, but it isn't worth rehashing here given the direction and lack of objectivity this discussion is taking. You hear what you hear and that's fine, but I don't appreciate your condescending tone, so this will be my last post on this topic. Good luck!
Dedric
"LaMont" <jjdpro@gmail.com> wrote in message news:458a22bc$1@linux...
>
> I'm sorry Dedric, but your statements below are full of hot Scientific hot
> air balonga..
>
> You should always trust your ears, and not some stupid math. This is
> music,
> not a science project..Get the wax out and listen with your ears instead
> of your scopes and graphs.. And, while your at it, how about acually
> working
> with differnt DAW software to actually hear the difference. Your
> conclusions
> that it's all "perception" is hoggwash..
>
> And if you read RN's piece on CD mastering, you'd know he was stuck on
> math
> as well, seeing as he's an Electical Engineer(EE).Software as a sound.
>
>
> As far as the question of how much, or rather how little change can we
> hear
> - that is certainly
> been debated. But that's where the scientific side comes into play (when
> our ears mislead us too easily).
> If you cancel it in software, you probably aren't hearing a difference in
>
> reality. If software math is that unreliable, then you would never be able
> to recall a mix and run down a "duplicate" - even without random variables
> such as reverb or delay, it wouldn't sound even close to the same.
>
>
>
> "Dedric Terry" <dedric@echomg.com> wrote:
>>>
>>> How far does it go in # of samples per second & difference in
>>> db for each one before you can hear it? We don't really know,
>>> do we? Who's done that test? No one. Ever.
>>
>>Well, that's what the software is doing when adding a phase inverted file
> to
>>it's original - inverting
>>each word (24 or 16 bit) at each sample point in the audio file, and
>>comparing them to the same
>>sample position in the non-inverted file (e.g. one by one, 44,100 of them
>
>>every second).
>>
>>So, yes, we've all done exactly that, everytime we do a phase invert test
>
>>and bounce the output to 32-bit
>>floating point, or watch it on a realtime analyzer set to read
>>below -144dB,
>
>>which is the end of where 24-bit
>>can represent. A null is nothing showing up even below that level. If
> you
>>zoom in on the analyzer you will
>>see single sample peaks on a phase cancellation test difference file, so
>
>>even if one shows up, you can see it.
>>The tests I did were completely blank down to -200 dB (far below the last
>
>>bit). It's safe to say there is no difference, even in
>>quantization noise, which by technical rights, is considered below the
>>level
>
>>of "cancellation" in such tests.
>>
>>As far as the question of how much, or rather how little change can we
>>hear - that is certainly
>>been debated. But that's where the scientific side comes into play (when
>
>>our ears mislead us too easily).
>>If you cancel it in software, you probably aren't hearing a difference in
>
>>reality. If software math is that
>>unreliable, then you would never be able to recall a mix and run down a
>
>>"duplicate" - even without random
>>variables such as reverb or delay, it wouldn't sound even close to the
>>same.
>>
>>Regards,
>>Dedric
>>
>>"Nei" <IUOIU@OIU.com> wrote in message news:4589f623$1@linux...
>>
>>> At 44,1000 sample points per second, I'll bet you can have
>>> PLENNNNTY of samples that "miss" or don't cancel completely &
>>> still get a "null".
>>>
>>> What is the sound of one sample clapping?
>>>
>>> How about two?
>>>
>>> How about twenty seven samples, all roughly evenly-spaced
>>> across the couse of a second... can you hear that if the
>>> difference between those 27 samples is an average of couple of
>>> db each?
>>>
>>> How far does it go in # of samples per second & difference in
>>> db for each one before you can hear it? We don't really know,
>>> do we? Who's done that test? No one. Ever.
>>>
>>>
>>> Neil
>>
>>
>
|
|
|
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77288 is a reply to message #77282] |
Thu, 21 December 2006 05:53 |
Neil
Messages: 1645 Registered: April 2006
|
Senior Member |
|
|
"Dedric Terry" <dedric@echomg.com> wrote:
>The tests I did were completely blank down to -200 dB (far below the last
>bit). It's safe to say there is no difference, even in
>quantization noise, which by technical rights, is considered below the level
>of "cancellation" in such tests.
I'm not necessarily talking about just the first bit or the
last bit, but also everything in between... what happens on bit
#12, for example? Everything on bit #12 should be audible, but
in an a/b test, what if there are differences in what bits #8
through #12 sound like, while the amplitude is still the same on
both files at that point - you'll get a null, right? Extrapolate
that out somewhat & let's say there are differences in bits #8
through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc.,
etc., through 43,972... Now this is breaking things down well
beyond what I think can be measured, if I'm not mistaken (I
don't know of any way we could extract JUST that information
from each file & play it back for an a/b test); but wouldn't
that be enough to have two "null-able" files that do actually
sound somewhat different?
I guess what I'm saying is that since each sample in a musical
track or full song file doesn't represent a pure, simple set of
content like a sample of a sine wave would - there's a whole
world of harmonic structure in each sample of a song file - I
think (although I'll admit I can't "prove" it) that there is
plenty of room for some variables between the first bit & the
last bit while still allowing a null test to succeed.
No? Am I whacked out of my mind?
Neil
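One way to see why mid bits can't differ "silently": a sample is a single binary number, so changing any bit - first, twelfth, or last - changes its amplitude. A sketch with an arbitrary (hypothetical) 24-bit word:

```python
# An arbitrary 24-bit sample word, written out bit by bit.
sample = 0b0101_1010_1111_0000_1100_0011

flipped = sample ^ (1 << 12)   # flip bit #12 only
print(sample - flipped)        # → 4096: a real amplitude change

# Two files null only when every word matches exactly, so a bit-12
# difference at even one sample point leaves a nonzero residual.
```

There is no separate "sound" stored below the peaks: the word's bits jointly encode one amplitude value, and any bit-level difference shows up as an amplitude difference at that sample.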
|
|
|
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77289 is a reply to message #77273] |
Thu, 21 December 2006 07:06 |
TCB
Messages: 1261 Registered: July 2007
|
Senior Member |
|
|
If you think data can't be written consistently, down to the last bit, over
and over on a CD, you'd better check whether your Social Security number,
bank balances, credit card transactions, and paychecks are changing - because
they would have to be, if that's what you believe.
TCB
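TCB's point is easy to demonstrate: digital copies are verifiably bit-exact. A sketch (hypothetical file names) that writes arbitrary bytes, copies the file, and compares checksums - the same mechanism that keeps bank records, and audio data, intact:

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    # Checksum of a file's exact byte content.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

data = os.urandom(1 << 16)   # 64 KiB standing in for audio data

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "master.wav")
    dst = os.path.join(d, "copy.wav")
    with open(src, "wb") as f:
        f.write(data)
    shutil.copyfile(src, dst)
    match = sha256_of(src) == sha256_of(dst)
    print(match)   # → True: every bit identical
```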
"LaMont" <jjdpro@ameritech.net> wrote:
>
>See Dedric, everthing is this life is not expalinable. Although we would
like
>it to be 2+2 =4, the rality is that sometimes 2+2=4.40..Why, becuase the
>the math is flawed. Why is the math flawed? Becuase we as humans are flawed.
>Say what you will about the metric system, which is a great tool.But, sometimes
>working in inches and 16ths, 3/4s works better.
>
>When a guy like Roger Nichols bangs his preverbial head around this issue
>as to why his mix sound different being rendered from different and sometimes
>the same cd mastering devices is expalinable, however the explanation does
>not jive with the science.
>Are we to believe that the Science we have today about digital audio is
>the the Last word?? No.. In the future, some new science will come along
>and either rebuff our current science or enhance it.
>
>We I and other say.. We drop a stereo wav file in a given daw)(unity gain)
>using the same audio converter...We can hear the diference. And it's sonically
>obvious..
>
>Lynns test is flawed because of the Roger Nicohls CD mastering problem.
Things
>change when going to render to CD.
>
>Hey some peole on this earth can hear and see better than others..That's
>just a fact
>
>"Dedric Terry" <dedric@echomg.com> wrote:
>>Of course Paris sounds different on Lynn's sampler, that was audible, and
>
>>there are technical reasons why Paris will always sound different, but
I
>
>>didn't like it better on the sampler CD, to be honest, though the
>>differences were subtle. Also, we weren't talking about Acid vs. Sonar
>
>>specifically. I don't even bother with Acid as a DAW example - it's a
loop
>
>>app. Vegas is a video app that has had life as an audio app to some degree,
>
>>but iMovie does audio as well, yet that doesn't really put it in the same
>
>>category as professional DAW apps like Nuendo, PTHD, Sequoia, etc. I use
>
>>Vegas for video, but not audio.
>>
>>On Lynn's sampler, Samplitude, Nuendo, Fairlight and the other natives
don't
>
>>sound different and aren't different in the unity gain examples
>>(even the PTHD mix cancels with these). If you hear two files sounding
>
>>differently that cancel to complete null, an audio difference isn't what
>you
>>are hearing. When there are differences in non-unity gain mix summing
>>tests, you have an extra variable to account for - how is the gain
>>calculated? Gain
>>is non-linear (power), not adding two numbers together. So how is pan
law
>
>>factored in, and where? Are your faders exactly the same, or 0.001dB
>>variant?
>>
>>Also if you drop the same stereo file in two different pro audio apps and
>
>>hear a difference, one of the two apps is defective. There is nothing
>>happening with a stereo file playback when no gain change or plugins are
>
>>active - just audio streaming to the driver from disk. If you hear a
>>difference there, I would be quickly trying to find out why. Something
>is
>>wrong.
>>
>>The point I am making is that these arguments usually come up as blanket
>
>>statements with no qualification of what exactly sounds
>>different, why it might, or solid well reasoned attempts to find out why,
>or
>>if there could be a real difference, or just a perceived one.
>>
>>Usually the "use your ears" comment comes up when there is no technical
>
>>rebuttal for when the science and good
>>ears agree. Of course "use your ears" first from a creative perspective,
>
>>but if you are making a technical, scientific statement, then such comments
>>aren't a good foundation to work from. It's a great motto, but a bit of
>a
>>cop out in a technical discussion.
>>
>>Regards,
>>Dedric
>>
>>"LaMont" <jjdpro@ameriech.net> wrote in message news:45897f73$1@linux...
>>>
>>> Hey Dedric and Neil,
>>>
>>> I reason I think that the Summing CD test(good intentions) were lame
was
>>> because.. If a person can;t hear the difference btw a stereo wav file
>
>>> that's
>>> in Acid vs Sonar really needs a hearing test.
>>>
>>> For reason of my music work, I have to work with different DAWs, so I'm
>
>>> very
>>> familiar with their sound qualities. My circle of producers and engineers
>>> talk about the daw sonics all the time. It's really no big deal anymore..
>>>
>>> The same logic applies when Roger Nichols (a few) years back in his
>>> article
>>> about master CD's and that he found out that 4 differnt CD burners yeilded
>>> differnt sonic results. Sure, he sated that Math is the Math :) but,
his
>>> and the masering engineers Ears told them soemthing was different.
>>> Hummm???
>>>
>>> Now, back to DAW sonics. I can hear the difference btw Paris and Nuendo
>vs
>>> Pro Tools, Logic audio.. There is no math to this, this is an ear
>>> thing..You
>>> either hear or you don't.. Simple.
>>> But, good ears can hear it. .
>>>
>>> I really think the problem is, noone want to no that their money that
>
>>> they've
>>> spent on a given DAW, has sonic limitations or shall we say, just
>>> different..
>>>
>>> I like that they all sound different. It's good to have choice when mixing
>>> a song. Some DAWs, depending on the genre will yield better or the desired
>>> results and than another.
>>> EX. I would not mix a Acoustic jazz record today with Paris..reason,
I'm
>>> going for clarity at it's highest level.. For that project, It's either
>
>>> Neundo
>>> or Pro Tools and may Samplitude..Why should I fight with Paris's thick,
>
>>> gooy
>>> sonics, when I'm going for clarity. Well, Pro Tools and Nuendo/SX has
>that
>>> sound right out the gate.. Which makes my job a lot easier. simple. This
>>> is not tosay that I could not get the job done in Paris..i could..But,
>for
>>> that Acoutic Jazz project , the other 2 daws gives me what I'm looking
>
>>> for
>>> without even touching an eq..
>>>
>>> This is not all about math. As BrianT states: Use you ears..Forget the
>
>>> math..What
>>> does knowing the math do for you anyway? Nothing, it just proves that
>you
>>> know the math. Does not tell you diddly about the sonics.. Just ask Roger
>>> Nichols..
>>>
>>>
>>> "Dedric Terry" <d@nospam.net> wrote:
>>>>
>>>>I know we disagree here Lamont and that's totally cool, so I won't take
>>> this
>>>>beyond this one response, and this isn't really directed to you, but
my
>>> general
>>>>thoughts on the matter.
>>>>
>>>>In Neil's "defense" (not that he needs it), I and others have done this
>>> comparison
>>>>to death and the conclusion I've come to is that people are 80% influenced
>>>>by a change in environment (e.g. software interface) and 20% ears. Sorry
>>>>to say it, but the difference in sound between floating point DAWs is
>far
>>>>from real. It's just good, albeit unintentional marketing created by
>
>>>>users
>>>>and capitolized by manufacturers. Perceiving a "sound" in DAWs that
in
>>> actuality
>>>>process data identically, is a bad reason to pick a DAW, but of course
>
>>>>there
>>>>is nothing wrong with thinking you hear a difference as long as it doesn't
>>>>become an unwritten law of engineering at large. Preferring to work
with
>>>>one or the other, and "feeling" better about it for whatever reason is
>a
>>>>great reason to pick one DAW over another.
>>>>
>>>>There was a recent thread that Nuendo handled gain through groups
>>>>differently,
>>>>so I put Nuendo, Sonar 6 (both 32 and 64-bit engines) and Sequoia 8.3
>to
>>>>the test - identical tests, setup to the 1/100th of a dB identically
and
>>>>came up with absolutely no difference, either audible or scientific.
>To
>>>>be honest, this was the one test where I could have said, yes there is
>an
>>>>understandable difference between DAWs in a simple math function, and
>the
>>>>only one in the DAW that actually might make sense, yet even that did
>not
>>>>exist. The reason - math is math. You can paint it red, blue, silver
>or
>>>>dull grey, but it's still the same math unless the programmer was high
>or
>>>>completely incompetent when they wrote the code.
>>>>
>>>>I thought it was entirely possible the original poster had found something
>>>>different in Nuendo, but when it came down to really understanding and
>
>>>>reproducing
>>>>what happens in DAW summing and gain structures accurately between each
>>> DAW,
>>>>there was none, nada, nil. The assertion was completely squashed. This
>
>>>>also
>>>>showed me how easy it is for a wide range of professionals to misinterpret
>>>>digital audio - whether hearing things, or just setting up a test with
>a
>>>>single missed variable that completely invalidates the whole process.
>>>>
>>>>If you hear a difference, great. I've thought I heard a difference doing
>>>>similar comparisons, then changed my perspective (nothing else - not
>>>>converters,
>>>>nothing - just reset my expectations, and switched back and forth) and
>
>>>>could
>>>>hear no difference.
>>>>
>>>>Just leave some room for other opinions when you post yours on this
>>>>subject
>>>>since it is very obvious that hearing is not as universally objective
>and
>>>>identically referenced as everyone might like to believe, and is highly
>>> visually
>>>>and environmentally affected. Some will hear differences in DAWs. There
>>>>are Cubase SX 3 users claiming Cubase 4 sounds different. Sigh. Then
>
>>>>they
>>>>realize they aren't even using the same project... or at least different
>>>>EQs, or etc, etc....
>>>>
>>>>Say what you want about published summing tests, but Lynn's tests are
>as
>>>>accurate as it gets, and that bears out in the results (all floating
point
>>>>DAWs cancel and sound identical - if you are hearing a difference, you
>are
>>>>hearing things that aren't there, or you forgot to align their gain and
>>> placement).
>>>> I've worked with Lynn at least briefly enough to know his attention
to
>>> detail.
>>>> In the same way people will disagree about PCs and Macs until neither
>
>>>> exists,
>>>>so will audio engineers disagree about DAWs. This is one debate that
>will
>>>>always exist as long as we have different ears, eyes, brains,... and
>>>>opinions.
>>>>
>>>>
>>>>What Neil has done is to prove that opinions are always going to differ
>>> (i.e.
>>>>no consensus on the "best" mix of the ones posted). And in truth everyone
>>>>has a different perception of sound in general - not everyone wants to
>
>>>>hear
>>>>things the same way, so we judge "best" from very different perspectives.
>>>> There is no single gold standard. There are variations and mutated
>>>> combinations,
>>>>but all are subjective. That in and of itself implies very distinctly
>
>>>>that
>>>>people can and will even perceive the exact same sound differently if
>
>>>>presented
>>>>with any variable that changes the brain's interpretation, even if just
>>> a
>>>>visual distraction. Just change the lights in the room and see if you
>
>>>>perceive
>>>>a song differently played back exactly the same way. Or have a cat run
>>> across
>>>>a desk while listening. Whether you care to admit it or not, it is there,
>>>>and that is actually the beauty of how our sense interact to create
>>>>perception.
>>>> That may be our undoing with DAW comparison tests, but it's also what
>
>>>> keeps
>>>>music fresh and creative, when we allow it to.
>>>>
>>>>So my suggestion is to use what makes you most creative, even if it's
>just
>>>>a "feeling" working with that DAW gives you - be it the workflow, the
>GUI,
>>>>or even the name brand reputation. But, as we all know, if you can't
>make
>>>>most any material sound good on whatever DAW you choose, the DAW isn't
>the
>>>>problem.
>>>>
>>>>Regards,
>>>>Dedric
>>>>
>>>>"Neil" <IUOIU@OIU.com> wrote:
>>>>>
>>>>>That's interesting - all those DAW sonic interpretations, I
>>>>>mean... I haven't had a chance to usee all of those, so it's
>>>>>good information.
>>>>>
>>>>>I still don't understand why you consider my summing
>>>>>comparisons "lame", however - it was a fair set of tests;
>>>>>the same mix summed in different ways. Not trying to prove a
>>>>>point or to rig it so one sounded any better than the other - in
>>>>>fact, if you recall the thread, different people liked different
>>>>>summed versions for different reasons... there wasn't any one
>>>>>version that stood out as being "the one" that everyone felt
>>>>>sounded better. The only reason I didn't come right out & say
>>>>>right away which version was which is so that I didn't bias
>>>>>anyone's opinion beforehand by mentioning that... NOT to try
>>>>>& "hide" anything or "trick" anyone, as you accused me of
>>>>>
>>>>>Sheesh!
>>>>>
>>>>>Neil
>>>>>
>>>>>
>>>>>"Lamont" <jjdpro@ameritech.net> wrote:
>>>>>>
>>>>>>Hey Neil,
>>>>>>
>>>>>>All I'm saying is: All DAW software have their own unique sound.
>>>>>>Despite
>>>>>>what those lame summing test shows..
>>>>>>
>>>>>>PT-HD has a very distinct sound. A very polished sound, with a nice
>top
>>>>>end,
>>>>>>but with full audio spectrum represented. Mixer/Summing buss can be
>
>>>>>>pushed,
>>>>>>but you have to watch it.
>>>>>>
>>>>>>Nuendo/SX: Has a very Clear, 2 dimension sound, that does not hype
the
>>>>top
>>>>>>nor bottom end.
>>>>>>
>>>>>>Logic Audio: Very Broad- Aggressive sound, that really works for Rock
>>> and
>>>>>>R & B/Gospel mixes.
>>>>>>
>>>>>>Digital Performer: With their hardware, superb audio quality. Full
>>>>>>bodied
>>>>>>sound .
>>>>>>
>>>>>>Sonar: Very flat sounding. I would say that Sonar is your most vanilla
>>>>sound
>>>>>>DW on the market..
>>>>>>
>>>>>>Samplitude : A little less top end than Pro Tools. Full bodied 3d
>>>>>>sound..
>>>>>>
>>>>>>Paris: Dark sounding in comparison to the the other DAWs. But, has
a
>3d
>>>>>sound
>>>>>>quality that's full bodied.
>>>>>>
>>>>>>I feel that you asking SX to be something it's not with some analog
>
>>>>>>summing.
>>>>>>Especialy for your genre of music..
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>"Neil" <IUOIU@OIU.com> wrote:
>>>>>>>
>>>>>>>"Lamont" <jjdpro@ameritech.net> wrote:
>>>>>>>>
>>>>>>>>"I'd disagree with you in this instance because I happen to think
>the
>>>>>Cubase
>>>>>>>>ones DO sound better."
>>>>>>>>
>>>>>>>>Then that SSL Engineer does not know what they are doing with board.
>>>>There's
>>>>>>>>no way a mix coming off of that board SSL should sound better than
>a
>>>>ITB
>>>>>>>>Cubase SX mix..
>>>>>>>>
>>>>>>>>Sorry, that just does not jive. That engineer does not know how to
>
>>>>>>>>push
>>>>>>>he
>>>>>>>>SSL or just not familiar with it.
>>>>>>>
>>>>>>>You're not really paying attention, are you? It was the same
>>>>>>>engineer (me). And as far as whether or not I know how to use
>>>>>>>that particular board, I guess that would be a matter of
>>>>>>>your opinion. I don't think the SSL mixes are bad ones, I think
>>>>>>>they came out good; I just think that you can hear more detail
>>>>>>>in the ITB mixes in the examples I gave, and they have more
>>>>>>>wideband frequency content from top to bottom.
>>>>>>>
>>>>>>>Anyway, my point of that particular comparison wasn't to say
>>>>>>>"ITB mixes are better than using a large-format console that
>>>>>>>costs somewhere in the six-figure range", the point of it was to
>>>>>>>address a signal-chain suggestion that Paul had... he had
>>>>>>>suggested perhaps that I needed to pick up a few pieces of
>>>>>>>killer vintage gear, and I was just demonstrating that I think
>>>>>>>the various signal chain components that I have here are on par
>>>>>>>with most anything that can be found in heavy-hitter studios...
>>>>>>>we used probably around $100k's worth of mics & pre's on the
>>>>>>>PTHD/SSL mixes, plus obviously you're looking at another
>>>>>>>roughly $100k for that particular console (40-channel E-series,
>>>>>>>black EQ's, w/G-series Computer & Total Recall package), add in
>>>>>>>the PTHD, outboard gear & whatnot, and you end up with
>>>>>>>somewhere around a quarter-mil's worth of equipment involved in
>>>>>>>that project. The project done at my place was done with my
>>>>>>>gear, which certainly doesn't tally up to anywhere remotely
>>>>>>>close to that cost & none of it bears a "vintage" stamp, but it
>>>>>>>sounds competitive with the project that used all the heavy-
>>>>>>>hitter stuff.
>>>>>>>
>>>>>>>Neil
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>
|
|
|
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77292 is a reply to message #77288] |
Thu, 21 December 2006 09:13 |
Jamie K
Messages: 1115 Registered: July 2006
|
Senior Member |
|
|
Because digital audio is simply individual amplitude samples taken at
regular increments, if two tracks (sets of sample numbers) cancel out
completely they are, by definition, identical at every sample point.
Simple as that.
The music we hear is recreated from the sample point numbers when they
are converted back to analog. As we know, sound carried through the air
is an analog phenomenon.
The world of harmonic structure we hear comes from the combination of
waveforms at different frequencies used in the music. The ability of a
series of simple amplitude samples to accurately recreate such combined
frequency information is determined by the frequency of the regular
increments - the sample rate - how often an amplitude measurement is
recorded.
Nyquist tells us to sample at more than twice the highest frequency you want
to reproduce. But as long as the systems you are comparing use the
same sample rate, that part of the equation is removed as a variable.
The quality of the A to D and D to A converters plays a part in what we
hear, but those stages can be removed as variables depending on what and
how you test - for example by comparing the same digital file bounced
digitally from different DAWs.
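As an illustration of the digital null test Jamie describes, here is a minimal sketch (my own, not from the thread) using NumPy and made-up 24-bit sample words:

```python
import numpy as np

# Two "bounces" of the same mix: one second of made-up 24-bit sample words.
rng = np.random.default_rng(0)
a = rng.integers(-2**23, 2**23, size=48000, dtype=np.int32)
b = a.copy()  # a bit-identical bounce from a second DAW

# Invert one file and sum: identical samples cancel to exactly zero.
residual = a + (-b)
assert np.all(residual == 0)  # a complete null means identical audio

# Flip a single low-order bit in one sample and the null is no longer perfect.
b[100] ^= 1
assert np.any((a - b) != 0)
```

If the residual is all zeros, the two files carry the same numbers and therefore the same sound; any real difference, however small, leaves a nonzero residue that can be measured.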
Cheers,
-Jamie
www.JamieKrutz.com
Neil wrote:
> "Dedric Terry" <dedric@echomg.com> wrote:
>> The tests I did were completely blank down to -200 dB (far below the last
>
>> bit). It's safe to say there is no difference, even in
>> quantization noise, which by technical rights, is considered below the level
>
>> of "cancellation" in such tests.
>
> I'm not necessarily talking about just the first bit or the
> last bit, but also everything in between... what happens on bit
> #12, for example? Everything on bit #12 should be audible, but
> in an a/b test what if thre are differences in what bits #8
> through #12 sound like, but the amplutide is stll the same on
> both files at that point, you'll get a null, right? Extrapolate
> that out somewhat & let's say there are differences in bits #8
> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
> etc through 43,972... Now this is breaking things down well
> beyond what I think can be measured, if I'm not mistaken (I
> dn't know of any way we could extract JUST that information
> from each file & play it back for an a/b test; but would not
> that be enough to have to "null-able" files that do actually
> sound somewhat different?
>
> I guess what I'm saying is that since each sample in a musical
> track or full song file doesn't represent a pure, simple set of
> content like a sample of a sine wave would - there's a whole
> world of harmonic structure in each sample of a song file, and
> I think (although I'll admit - I can't "prove") that there is
> plenty of room for some variables between the first bit & the
> last bit while still allowing for a null test to be successful.
>
> No? Am I wacked out of my mind?
>
> Neil
>
>
|
|
|
Re: Neil's Dilemma (was: looking for De-esser plugin) [message #77301 is a reply to message #77292] |
Thu, 21 December 2006 21:16 |
Dedric Terry
Messages: 788 Registered: June 2007
|
Senior Member |
|
|
Hi Neil,
Jamie is right. And you aren't wacked out - you are thinking this through
in a reasonable manner, but coming to the wrong
conclusion - easy to do given how confusing digital audio can be. Each word
represents an amplitude
point on a single curve that is changing over time, and can vary with a
speed up to the Nyquist frequency (as Jamie described).
The complex harmonic content we hear is really just the moment-to-moment
variation of a single waveform; over a small stretch of time that variation
creates the sound we perceive - we don't really hear a single sample at a time,
but thousands of samples at a time (1 sample alone could at most represent a
single positive or negative peak of a 22,050Hz waveform).
If one bit doesn't cancel, especially if it's a higher-order bit than bit 24,
you will see that easily and may well hear it, and the higher the bit sits in
the dynamic range (the higher its order) the more audible the difference.
Since each bit is 6dB of dynamic range, you can extrapolate how "loud" that
bit's impact will be if there is a variation.
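Dedric's 6dB-per-bit rule of thumb can be checked with a little arithmetic; this sketch (my own illustration, with hypothetical numbering where bit 1 is the MSB and bit 24 the LSB) computes the level of a one-bit error at each position in a 24-bit word:

```python
import math

FULL_SCALE = 2**23  # peak amplitude of a signed 24-bit word

def bit_error_db(n):
    """Level of a one-bit error at position n (1 = MSB ... 24 = LSB),
    relative to full scale, in dB."""
    amplitude = 2**(24 - n)  # weight of that bit position
    return 20 * math.log10(amplitude / FULL_SCALE)

# Each step down one bit position drops the level by about 6.02 dB.
print(round(bit_error_db(1), 2))   # 0.0  (an MSB error is at full scale)
print(round(bit_error_db(12), 2))  # -66.23
print(round(bit_error_db(24), 2))  # -138.47 (an LSB error, far below audibility)
```

So a variation up at bit 8 or 12 sits at a level you could plausibly hear, while one confined to the last couple of bits is buried far below the noise floor.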
Now, obviously if we are talking about 1 sample in a 44.1k-rate song, then
it would simply be a
click (only audible if it's a high enough order bit) instead of an obvious
musical difference, but that should never
happen above bit 24 in a phase cancellation test between identical files,
unless there are clock sync problems,
driver issues, or the DAW is an early alpha version. :-)
By definition of what DAWs do during playback and record, every audio stream
has the same point in time (judged by the timeline)
played back sample accurately, one word at a time, at whatever sample rate
we are using. A phase cancellation test uses that
fact to compare two audio files word for word (and hence bit for bit since
each bit of a 24-bit word would
be at the same bit slot in each 24-bit word). Assuming they are aligned to
the same start point, sample
accurately, and both are the same set of sample words at each sample point,
bit for bit, and one is phase inverted,
they will cancel through all 24 bits. For two files to cancel completely
for the duration of the file, each and every bit in each word
must be the exact opposite of that same bit position in a word at the same
sample point. This is why zooming in on an FFT
of the full difference file is valuable: it can show any differences in
the lower-order bits that wouldn't be audible. So even if
there is no audible difference, the visual follow-up will show whether the two
files truly cancel, even at levels below hearing or
at frequencies we wouldn't perceive.
When they don't cancel, usually there will be way more than 1 bit
difference - it's usually one or more bits in the words for
thousands of samples. From a musical standpoint this is usually in a
frequency range (low freq, or high freq most often) - that will
show up as the difference between them, and that usually happens due to some
form of processing difference between the files,
such as EQ, compression, frequency-dependent gain changes, etc. That is what
I believe you are thinking through, but when
talking about straight summing with no gain change (or known equal gain
changes), we are only looking at linear, one for one
comparisons between the two files' frequency representations.
Regards,
Dedric
> Neil wrote:
>> "Dedric Terry" <dedric@echomg.com> wrote:
>>> The tests I did were completely blank down to -200 dB (far below the
>>> last
>>
>>> bit). It's safe to say there is no difference, even in
>>> quantization noise, which by technical rights, is considered below the
>>> level
>>
>>> of "cancellation" in such tests.
>>
>> I'm not necessarily talking about just the first bit or the
>> last bit, but also everything in between... what happens on bit
>> #12, for example? Everything on bit #12 should be audible, but
>> in an a/b test what if thre are differences in what bits #8
>> through #12 sound like, but the amplutide is stll the same on
>> both files at that point, you'll get a null, right? Extrapolate
>> that out somewhat & let's say there are differences in bits #8
>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>> etc through 43,972... Now this is breaking things down well
>> beyond what I think can be measured, if I'm not mistaken (I
>> dn't know of any way we could extract JUST that information
>> from each file & play it back for an a/b test; but would not
>> that be enough to have to "null-able" files that do actually
>> sound somewhat different?
>>
>> I guess what I'm saying is that since each sample in a musical
>> track or full song file doesn't represent a pure, simple set of
>> content like a sample of a sine wave would - there's a whole
>> world of harmonic structure in each sample of a song file, and
>> I think (although I'll admit - I can't "prove") that there is
>> plenty of room for some variables between the first bit & the
>> last bit while still allowing for a null test to be successful.
>>
>> No? Am I wacked out of my mind?
>>
>> Neil
>>
|
|
|
Re: (No subject) [message #77303 is a reply to message #77301] |
Thu, 21 December 2006 23:05 |
Nil
Messages: 245 Registered: March 2007
|
Senior Member |
|
|
Dedric - first of all, great explanation - esp. your 2nd
paragraph. Next, let's take a look at something in the form of
the best "graph" I can do in this NG's format... let's assume
that each dot in the simple graph below is a sample point on a
segment of a waveform, and let's further assume that each "I"
below represents four bits (I don't want to make it too
vertically large, for ease of reading) - so we're dealing with
a 16-bit wav file, with the 5th "dot" from the start point on
the left being a full-amplitude, zero-db-line 16 bit sample.
Now.... really, all I have to do to get a "null" is to have the
amplitude match at each "dot" on the waveform, yes? This, of
course, is a very simplistic graphic example, so bear with
me... but if I have each "dot" matching in amplitude &
therefore can get a null, what about the bits & content thereof
in between the extremes between the maxes & zero-line
crossings? Are you saying that there can be no variables in
sound between those sections that would still result in a null?
What about all the "I"'s that represent bits in between the
maxes & the minimums?
.
. I .
. I I I . What about the stuff in here?
. I I I I I . .....or in here????
.. I I I I I I I .
-------------------------------------
. I I I I I I I .
. I I I I I . Again, what about this region?
. I I I . ... or this region?
. I .
.
Neil
|
|
|
Re: (No subject) [message #77305 is a reply to message #77303] |
Fri, 22 December 2006 00:24 |
Dedric Terry
Messages: 788 Registered: June 2007
|
Senior Member |
|
|
Neil,
Actually what you are showing with the I's is the power of the waveform
(area under the curve),
not the bits. Only the curve itself is the actual amplitude of the wave.
The amplitude, as represented in
16-bit words, would be shown on the y-axis as one of 65,536 levels, and only
represents the vertical dimension - the amplitude of a specific
point in time, not the area under it. The x-axis is of course time - one dot
per sample point,
each sample point is represented one 16-bit word, stored at that point in
time. Each word is used to define where
on the y-axis (how far from 0 volts amplitude) that dot appears - 0000 0000
0000 0000 of course would be the
0 amplitude point. An amplitude of 0 volts would equate to a dB power
of -infinity in the digital realm if
we have an infinite number of bits to subdivide the y axis with, but in
reality 0 is effectively -144dB for 24-bit audio
and -96dB for 16 bit.
Since we work with levels in dB in a DAW we start to think of waveforms as
being defined by that quantity, but
when the y axis is in dB, it's really telling us the power of that signal
around a point in time (area under the curve
over an average distance), not the amplitude.
There aren't additional points or content beneath the outline you drew.
It's a bit non-intuitive,
but think of audio as existing only as a function of change over time and
the concept makes more sense.
Regardless of how complex the actual music is, there is still only a single
waveform that reaches our ears from a
single source - i.e. a single amplitude point at any given instant.
What we "hear" is how that waveform changes
up and down over a certain time period.
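Dedric's point that each 16-bit word is just one y-axis position per sample instant can be sketched like this (a toy 1 kHz sine at 44.1 kHz, my own illustration):

```python
import math

SR = 44100   # sample rate: one amplitude word every 1/44100 of a second
FREQ = 1000  # 1 kHz test tone

# Quantize the first few samples of a full-scale sine to signed 16-bit words.
# Each word is only a vertical position on the y-axis; there is nothing
# "underneath" the curve to hear.
words = []
for n in range(8):
    x = math.sin(2 * math.pi * FREQ * n / SR)         # analog amplitude, -1..1
    word = max(-32768, min(32767, round(x * 32767)))  # one 16-bit y-value
    words.append(word)

print(words)  # eight y-axis positions, one per sample instant
```

Joining those dots over thousands of samples is what reconstructs the waveform; no extra information lives between a dot and the zero line.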
Regards,
Dedric
"Neil" <IUOIU@OIU.com> wrote in message news:458b75af$1@linux...
>
> Dedric - first of all, great explanation - esp. your 2nd
> paragraph. Next, let's take a look at something in the form of
> the best "graph" I can do in this NG's format... let's assume
> that each dot in the simple graph below is a sample point on a
> segment of a waveform, and let's futher assume that each "I"
> below represents four bits (I don't want to make it too
> vertically large, for ease of reading) - so we're dealing with
> a 16-bit wav file, with the 5th "dot" from the start point on
> the left being a full-amplitude, zero-db-line 16 bit sample.
>
> Now.... really, all I have to do to get a "null" is to have the
> amplitude match at each "dot" on the waveform, yes? This, of
> course, is a very simplistic graphic example, so bear with
> me... but if I have each "dot" matching in amplitude &
> therefore can get a null, what about the bits & content thereof
> in between the extremes between the maxes & zero-line
> crossings? Are you saying that there can be no variables in
> sound between those sections that would still result in a null?
> What about all the "I"'s that represent bits in between the
> maxes & the minimums?
>
> .
> . I .
> . I I I . What about the stuff in here?
> . I I I I I . .....or in here????
> . I I I I I I I .
> -------------------------------------
> . I I I I I I I .
> . I I I I I . Again, what about this region?
> . I I I . ... or this region?
> . I .
> .
>
> Neil
|
|
|
Re: (No subject)...What's up inder the hood? [message #77309 is a reply to message #77301] |
Fri, 22 December 2006 07:16 |
LaMont
Messages: 828 Registered: October 2005
|
Senior Member |
|
|
Okay...
I guess what I'm saying is this:
- Is it possible that different DAW manufacturers "code" their apps differently
for different sonic results?
If the answer is yes, then the real task is to discover, or rather uncover,
what, say, Motu's vision of summing is, versus Digidesign's, versus Steinberg's,
and so on.
What's under the hood? To me and others, when Digi re-coded their summing
engine, it was obvious that Pro Tools had an obvious top-end (8k-10k) bump,
whereas Steinberg's summing is very neutral.
"Dedric Terry" <dedric@echomg.com> wrote:
>Hi Neil,
>
>Jamie is right. And you aren't wacked out - you are thinking this through
>in a reasonable manner, but coming to the wrong
>conclusion - easy to do given how confusing digital audio can be. Each
word
>represents an amplitude
>point on a single curve that is changing over time, and can vary with a
>speed up to the Nyquist frequency (as Jamie described).
>The complex harmonic content we hear is actually the frequency modulation
of
>a single waveform,
>that over a small amount of time creates the sound we translate - we don't
>really hear a single sample at a time,
>but thousands of samples at a time (1 sample alone could at most represent
a
>single positive or negative peak
>of a 22,050Hz waveform).
>
>If one bit doesn't cancel, esp. if it's a higher order bit than number 24,
>you may hear, and will see that easily,
>and the higher the bit in the dynamic range (higher order) the more audible
>the difference.
>Since each bit is 6dB of dynamic range, you can extrapolate how "loud" that
>bit's impact will be
>if there is a variation.
>
>Now, obviously if we are talking about 1 sample in a 44.1k rate song, then
>it simply be a
>click (only audible if it's a high enough order bit) instead of an obvious
>musical difference, but that should never
>happen in a phase cancellation test between identical files higher than
bit
>24, unless there are clock sync problems,
>driver issues, or the DAW is an early alpha version. :-)
>
>By definition of what DAWs do during playback and record, every audio stream
>has the same point in time (judged by the timeline)
>played back sample accurately, one word at a time, at whatever sample rate
>we are using. A phase cancellation test uses that
>fact to compare two audio files word for word (and hence bit for bit since
>each bit of a 24-bit word would
>be at the same bit slot in each 24-bit word). Assuming they are aligned
to
>the same start point, sample
>accurately, and both are the same set of sample words at each sample point,
>bit for bit, and one is phase inverted,
>they will cancel through all 24 bits. For two files to cancel completely
>for the duration of the file, each and every bit in each word
>must be the exact opposite of that same bit position in a word at the same
>sample point. This is why zooming in on an FFT
>of the full difference file is valuable as it can show any differences in
>the lower order bits that wouldn't be audible. So even if
>there is no audible difference, the visual followup will show if the two
>files truly cancel even a levels below hearing, or
>outside of a frequency change that we will perceive.
>
>When they don't cancel, usually there will be way more than 1 bit
>difference - it's usually one or more bits in the words for
>thousands of samples. From a musical standpoint this is usually in a
>frequency range (low freq, or high freq most often) - that will
>show up as the difference between them, and that usually happens due to
some
>form of processing difference between the files,
>such as EQ, compression, frequency dependant gain changes, etc. That is
what
>I believe you are thinking through, but when
>talking about straight summing with no gain change (or known equal gain
>changes), we are only looking at linear, one for one
>comparisons between the two files' frequency representations.
>
>Regards,
>Dedric
>
>> Neil wrote:
>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>> The tests I did were completely blank down to -200 dB (far below the
>>>> last
>>>
>>>> bit). It's safe to say there is no difference, even in
>>>> quantization noise, which by technical rights, is considered below the
>>>> level
>>>
>>>> of "cancellation" in such tests.
>>>
>>> I'm not necessarily talking about just the first bit or the
>>> last bit, but also everything in between... what happens on bit
>>> #12, for example? Everything on bit #12 should be audible, but
>>> in an a/b test what if thre are differences in what bits #8
>>> through #12 sound like, but the amplutide is stll the same on
>>> both files at that point, you'll get a null, right? Extrapolate
>>> that out somewhat & let's say there are differences in bits #8
>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>> etc through 43,972... Now this is breaking things down well
>>> beyond what I think can be measured, if I'm not mistaken (I
>>> dn't know of any way we could extract JUST that information
>>> from each file & play it back for an a/b test; but would not
>>> that be enough to have to "null-able" files that do actually
>>> sound somewhat different?
>>>
>>> I guess what I'm saying is that since each sample in a musical
>>> track or full song file doesn't represent a pure, simple set of
>>> content like a sample of a sine wave would - there's a whole
>>> world of harmonic structure in each sample of a song file, and
>>> I think (although I'll admit - I can't "prove") that there is
>>> plenty of room for some variables between the first bit & the
>>> last bit while still allowing for a null test to be successful.
>>>
>>> No? Am I wacked out of my mind?
>>>
>>> Neil
>>>
>
|
|
|
Re: (No subject) [message #77311 is a reply to message #77303] |
Fri, 22 December 2006 08:31 |
TCB
Messages: 1261 Registered: July 2007
|
Senior Member |
|
|
Neil,
You're using an analog waveform that is leading you to think incorrectly
about sampling. This is (very roughly) how it would look if you're working
with 16 bit samples.
0101010101010101
0101010111011101
0101110111010101
0101111111010100
0101110101111101
0111011101111101
0111110101110101
0100010111000100
0100011101010101
0001011100010101
0000010111111100
0001000001010111
0100000111110101
0111011101010000
0101011101000000
0101011111000101
0101010101010101
The easiest way to think of how the sampler works is that it looks at the
incoming voltage to the converter and asks, 'Is this in the top or bottom
half of the possible amplitudes I can measure?' If it's in the top half it
writes a 1; if it's in the bottom half, it writes a 0. The next bit asks,
'Now that I know which half of my measurable voltage I'm looking at, is the
voltage in the top half of that half or the bottom half?' That's bit number
two. Then it's on to, 'Now that I know which quarter it's in, is it in the
top or bottom half of that quarter?' And so on, sixteen times, giving it a
resolution of 2 to the sixteenth power.
In other words, asking what the bits under the sample would sound like is like
asking how the road would drive if it were 30 feet underground.
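TCB's top-half/bottom-half description is essentially successive approximation; here is a rough sketch of the idea (idealized, ignoring real converter behavior):

```python
def sar_quantize(voltage, bits=16, vmin=-1.0, vmax=1.0):
    """Successive approximation: each bit asks 'top or bottom half?'"""
    code = 0
    lo, hi = vmin, vmax
    for _ in range(bits):
        mid = (lo + hi) / 2
        code <<= 1
        if voltage >= mid:   # top half -> write a 1, keep the upper half
            code |= 1
            lo = mid
        else:                # bottom half -> write a 0, keep the lower half
            hi = mid
    return code              # an integer in 0 .. 2**bits - 1

# Sixteen halvings give 2**16 = 65536 distinct levels.
print(sar_quantize(0.0))   # 32768: mid-scale input, binary 1000000000000000
print(sar_quantize(-1.0))  # 0: bottom of the range, all zeros
```

Each pass halves the remaining voltage window, which is why the code only ever describes where the curve is at that instant, never anything "under" it.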
Now then, to get back to the original argument, people like me (and I think
Dedric but I'll let him speak for himself) get a little hacked off when someone
says, 'you have to use your ears' when it's possible using various computer
tools to check exactly how many of those samples match in two given files.
The nulling trick is just a very easy way to get a quick read on one aspect,
which is to answer the question 'do these two files match?' But there are
others and I've used them. And the sameness between properly written (by
which I mean lacking in serious bugs) audio applications is startling and
their differences so minor that other errors (analog cables, dust on the
speaker cone, humidity and temperature in the room) are far more likely to
cause a difference.
Personally I think this all stems from romanticism about music and the purity
of art. I have yet to hear someone tell me they need financial calculations
down to 25 decimal places. They need them done to (at most) five decimal
places because the smallest commonly used financial divisor is the basis
point, or one one-hundredth of a percent. So internally you calculate to five
decimal places and round up or down from there and get on with your life.
As geeky as finance guys can get, nobody ever says, 'You know, Thad, that
last basis point just isn't really punchy enough for this deal. LBO guys
need really punchy returns, so can you run that calculation out a few more
bits to get a punchier basis point?' Scientists are also extremely careful
to keep 'false precision' out of their calculations, so if one instrument
will measure to four decimal points and the others will measure to 12 they
understand that everything the higher resolution instruments measure beyond
four accurate decimal points is worthless. They usually won't even record
the data to be sure they don't claim greater precision than they have, because
that's considered a horribly embarrassing junior high school mistake. But
musicians and audio engineers think that just because the data is sound data
somehow it enters a nebulous zone where that last basis point
can be punchier. Hey, if it gets you through the day, that's fine by me,
but there are things about digital audio that can be proven true or false
using the data. For things that can't be proven true or false with the data
itself there is ABY testing, which is a controlled way to use the most precise
audio measuring instruments available (our ears, at least until bats will
wear headphones) to see if things sound different. When it's not in the data,
and it's not in the ABY, I say it doesn't exist.
TCB
"Neil" <IUOIU@OIU.com> wrote:
>
>Dedric - first of all, great explanation - esp. your 2nd
>paragraph. Next, let's take a look at something in the form of
>the best "graph" I can do in this NG's format... let's assume
>that each dot in the simple graph below is a sample point on a
>segment of a waveform, and let's futher assume that each "I"
>below represents four bits (I don't want to make it too
>vertically large, for ease of reading) - so we're dealing with
>a 16-bit wav file, with the 5th "dot" from the start point on
>the left being a full-amplitude, zero-db-line 16 bit sample.
>
>Now.... really, all I have to do to get a "null" is to have the
>amplitude match at each "dot" on the waveform, yes? This, of
>course, is a very simplistic graphic example, so bear with
>me... but if I have each "dot" matching in amplitude &
>therefore can get a null, what about the bits & content thereof
>in between the extremes between the maxes & zero-line
>crossings? Are you saying that there can be no variables in
>sound between those sections that would still result in a null?
>What about all the "I"'s that represent bits in between the
>maxes & the minimums?
>
> .
> . I .
> . I I I . What about the stuff in here?
> . I I I I I . .....or in here????
>. I I I I I I I .
>-------------------------------------
> . I I I I I I I .
> . I I I I I . Again, what about this region?
> . I I I . ... or this region?
> . I .
> .
>
>Neil
|
|
|
Re: (No subject) [message #77312 is a reply to message #77311] |
Fri, 22 December 2006 08:46 |
LaMont
Messages: 828 Registered: October 2005
|
Senior Member |
|
|
Thad, I assume that you are referring to me ("using your ears").
Look, I think we are talking about two different things here:
1) Digital data
2) Software (DAW) coding
You and Dedric have been concentrating on the laws of digital audio. That's
fine. But I'm talking about the software that we use to decode our digital
audio.
Like my previous post states, are we saying that DAW software can't be written
for certain sonic results?
"TCB" <nobody@ishere.com> wrote:
>
>Neil,
>
>You're using an analog waveform that is leading you to think incorrectly
>about sampling. This is (very roughly) how it would look if you're working
>with 16 bit samples.
>
>0101010101010101
>0101010111011101
>0101110111010101
>0101111111010100
>0101110101111101
>0111011101111101
>0111110101110101
>0100010111000100
>0100011101010101
>0001011100010101
>0000010111111100
>0001000001010111
>0100000111110101
>0111011101010000
>0101011101000000
>0101011111000101
>0101010101010101
>
>The easiest way to think of how the sampler works is that it looks at the
>incoming voltage to the converter and asks 'Is this in the top or bottom
>half of the possible amplitudes I can measure.' If it's in the top half
it
>writes a 1, if it's in the bottom half, it writes a zero. The next bit asks,
>'Now that I know which half of my measurable voltage I'm looking at, is
the
>voltage in the top half of that half or the bottom half?' That's bit number
>two. Then it's on to, 'Now that I know what quarter it's it, is it in the
>top or bottom half of that quarter?' And so on sixteen time giving it a
resolution
>of 2 to the sixteenth power.
>
>In other words, asking if the bits under the sample would sound is like
asking
>how the road would drive if it were 30 feet underground.
>
>Now then, to get back to the original argument, people like me (and I think
>Dedric but I'll let him speak for himself) get a little hacked off when
someone
>says, 'you have to use your ears' when it's possible using various computer
>tools to check exactly how many of those samples match in two given files.
>The nulling trick is just a very easy way to get a quick read on one aspect,
>which is to answer the question 'do these two files match?' But there are
>others and I've used them. And the sameness between properly written (by
>which I mean lacking in serious bugs) audio applications is startling and
>their differences so minor that other errors (analog cables, dust on the
>speaker cone, humidity and temperature in the room) are far more likely
to
>cause a difference.
>
>Personally I think this all stems from romanticism about music and the purity
>of art. I have yet to hear someone tell me they need financial calculations
>down to 25 decimal points. They need them done to (at most) five decimal
>points because the smallest commonly used financial divisor is the basis
>point, or one one hundredth of a penny. So internally you calculate to five
>decimal places and round up or down from there and get on with your life.
>As geeky as finance guys can get, nobody ever says, 'You know, Thad, that
>last basis point just isn't really punchy enough for this deal. LBO guys
>need really punchy returns, so can you run that calculation out a few more
>bits to get a punchier basis point?' Scientists are also extremely careful
>to keep 'false precision' out of their calculations, so if one instrument
>will measure to four decimal points and the others will measure to 12 they
>understand that everything the higher resolution instruments measure beyond
>four accurate decimal points is worthless. They usually won't even record
>the data to be sure they don't claim greater precision than they have, because
>that's considered a horribly embarrassing junior high school mistake. But
>musicians and audio engineers think that just because the data is sound
data
>somehow it enters a nebulous zone where that last one hundredth of a penny
>can be punchier. Hey, if it gets you through the day, that's fine by me,
>but there are things about digital audio that can be proven true or false
>using the data. For things that can't be proven true or false with the data
>itself there is ABY testing, which is a controlled way to use the most precise
>audio measuring instruments available (our ears, at least until bats will
>wear headphones) to see if things sound different. When it's not in the
data,
>and it's not in the ABY, I say it doesn't exist.
>
>TCB
>
>"Neil" <IUOIU@OIU.com> wrote:
>>
>>Dedric - first of all, great explanation - esp. your 2nd
>>paragraph. Next, let's take a look at something in the form of
>>the best "graph" I can do in this NG's format... let's assume
>>that each dot in the simple graph below is a sample point on a
>>segment of a waveform, and let's futher assume that each "I"
>>below represents four bits (I don't want to make it too
>>vertically large, for ease of reading) - so we're dealing with
>>a 16-bit wav file, with the 5th "dot" from the start point on
>>the left being a full-amplitude, zero-db-line 16 bit sample.
>>
>>Now.... really, all I have to do to get a "null" is to have the
>>amplitude match at each "dot" on the waveform, yes? This, of
>>course, is a very simplistic graphic example, so bear with
>>me... but if I have each "dot" matching in amplitude &
>>therefore can get a null, what about the bits & content thereof
>>in between the extremes between the maxes & zero-line
>>crossings? Are you saying that there can be no variables in
>>sound between those sections that would still result in a null?
>>What about all the "I"'s that represent bits in between the
>>maxes & the minimums?
>>
>> .
>> . I .
>> . I I I . What about the stuff in here?
>> . I I I I I . .....or in here????
>>. I I I I I I I .
>>-------------------------------------
>> . I I I I I I I .
>> . I I I I I . Again, what about this region?
>> . I I I . ... or this region?
>> . I .
>> .
>>
>>Neil
>
|
|
|
Re: (No subject)...What's up inder the hood? [message #77313 is a reply to message #77309] |
Fri, 22 December 2006 08:08 |
Dedric Terry
Messages: 788 Registered: June 2007
|
Senior Member |
|
|
"LaMont" <jjdpro@ameritech.net> wrote in message news:458be8d5$1@linux...
>
> Okay...
> I guess what I'm saying is this:
>
> -Is it possible that diferent DAW manufactuers "code" their app
> differently
> for sound results.
Of course it is *possible* to do this, but only if the DAW has a specific
sound-shaping purpose beyond normal summing/mixing. Users talk about wanting
developers to add a "Neve sound" or "API sound" option to summing engines,
but that's really impractical given the amount of DSP required to make a
decent emulation (convolution, dynamic EQ functions, etc.). To avoid eating
up all the CPU, it could probably only surface as a built-in EQ, which no one
wants hardwired into summing, and which anyone can add at will already.
So it hasn't happened yet and isn't likely to, as it departs from the basic
tenet of audio recording - recreate what comes in as accurately as possible.
What Digi did in recoding their summing engine was try to recover some
of the damage done by the 24-bit buss in Mix systems. Motorola 56k DSPs are
24-bit fixed-point chips, and I think the new generation (321?) still is, but
they use double words now for 48 bits. And though plugins could process at
48 bits by doubling up and using upper and lower 24-bit words for 48-bit
outputs, the buss between chips was 24 bits, so they had to dither to 24 bits
after every plugin. The mixer (if I recall correctly) also had a 24-bit buss,
so what Digi did was add a dither stage to the mixer to prevent this constant
truncation of data. 24 bits isn't enough to cover summing for more than a few
tracks without losing information in the 16-bit world, and in the 24-bit world
some information will be lost, at least at the lowest levels.
Adding a dither stage (though I think they did more than that - perhaps
implement a 48-bit double-word stage as well) simply smoothed over the
truncation that was happening, but it didn't solve the problem, so with HD
they went to a double-word path - throughout, I believe, including the path
between chips. I believe the chips are still 24-bit, but by doubling up the
processing (yes, at a cost of twice the overhead), they get a 48-bit engine.
This not only provided better headroom, but greater resolution. Higher bit
depths subdivide the amplitude with finer resolution, and that's really where
we get the definition of dynamic range - by lowering the quantization noise
floor relative to the signal (raising the signal-to-quantization-noise ratio).
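(A quick numerical sketch of the bit-depth/dynamic-range relationship described
above - this is just the textbook SQNR approximation for a full-scale sine
wave, 6.02N + 1.76 dB, not anything measured from a DAW:)

```python
def sqnr_db(bits: int) -> float:
    """Textbook signal-to-quantization-noise ratio for an N-bit word,
    assuming a full-scale sine wave: roughly 6 dB per bit."""
    return 6.02 * bits + 1.76

for bits in (16, 24, 48):
    print(f"{bits}-bit: ~{sqnr_db(bits):.1f} dB")
# → 16-bit: ~98.1 dB, 24-bit: ~146.2 dB, 48-bit: ~290.7 dB
```

Which is why a 48-bit engine has so much room below the music before anything
falls off the bottom of the word.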
With DAWs that use 32-bit floating-point math all the way through, the only
reason for the summing to differ is an error, and that's an error that would
actually be hard to make and still get past a very basic alpha stage of
testing. There is a small difference between fixed-point math and
floating-point math, or at least a theoretical difference in how each affects
audio in certain cases, but not necessarily in the result of calculating gain
in either for the same audio file. Where any differences might show up is
complicated, and I believe they only appear at levels below the 24th bit (or
in headroom, with tracks pushed beyond 0dBFS), or when/if there are any
differences in where each amplitude level is quantized.
Obviously there can be differences if the DAW has to use varying bit depths
throughout a single summing path to accommodate hardware as well as software
summing, since there may be truncation or rounding along the way, but that
impacts the lowest bit level, and hence spatial reproduction, reverb tails
perhaps, and "depth" - not the levels where most music lives - so the
differences are more often subtle than not. But most modern DAWs have
eliminated those "rough edges" in the math by increasing the bit depth to
accommodate the summing normally required for mixing audio.
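(The truncation problem can be shown with a toy model - this is not any DAW's
actual code, just a sketch: sum 32 "tracks" carrying content below the 24-bit
least significant bit, once truncating to a 24-bit grid after every addition
the way a 24-bit buss would, and once keeping full precision through the sum.)

```python
SCALE = 2 ** 23  # 24-bit signed full scale; the LSB sits near -138.5 dBFS

def truncate_24(x: float) -> float:
    """Force a [-1, 1) sample back onto the 24-bit grid, as a 24-bit buss would."""
    return int(x * SCALE) / SCALE

tiny = 10 ** (-145 / 20)  # content ~145 dB below 0 dBFS: under the 24-bit LSB

# 24-bit buss, truncating after every stage: the content never accumulates.
acc_fixed = 0.0
for _ in range(32):
    acc_fixed = truncate_24(acc_fixed + tiny)

# Higher-precision path (float engine, or a 48-bit double word): it survives.
acc_float = sum(tiny for _ in range(32))

print(acc_fixed)        # → 0.0
print(acc_float > 0.0)  # → True
```

The low-level detail (reverb tails, "depth") is exactly what lives down there.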
So with Lynn's unity-gain summing test (the A files on the CD, I believe),
DAWs were never asked to sum beyond 24 bits, at least not on the upper end of
the dynamic range, so everything that could represent 24 bits accurately would
cancel. The only ones that didn't were ones that had a different bit depth
and/or gain structure, whether hybrid or native (e.g. Paris subtracting 20dB
from tracks and adding it back on the buss). In this case, PTHD cancelled
(when I tested it) with Nuendo, Samplitude, Logic, etc., because the impact of
48-bit fixed vs. 32-bit float wasn't a factor.
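(The cancellation test itself is simple enough to sketch in a few lines - file
loading is omitted, and the -140 dBFS floor is an illustrative threshold, not
a value from the thread: invert one render, sum, and check the residual peak.)

```python
import math

def null_test(a, b, floor_dbfs=-140.0):
    """True if render b, polarity-inverted and summed with render a,
    leaves a residual peak below floor_dbfs."""
    peak = max(abs(x - y) for x, y in zip(a, b))  # a + (-b), sample by sample
    if peak == 0.0:
        return True  # bit-identical files: a perfect null
    return 20.0 * math.log10(peak) < floor_dbfs

# Two bit-identical "renders" null perfectly:
mix = [math.sin(2 * math.pi * n / 100) for n in range(1000)]
print(null_test(mix, list(mix)))  # → True

# A mere 0.1 dB gain mismatch (one mis-set fader) breaks the null:
gain = 10 ** (0.1 / 20)
print(null_test(mix, [s * gain for s in mix]))  # → False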
When trying other tests, even when adding and subtracting gain, Nuendo,
Sequoia and Sonar cancel - both audibly and visually, down to inaudible
levels - which only proves that no one is making an error when calculating
basic gain. Since a dB is well defined, and the math to add gain is simple,
they shouldn't differ. The fact that they all use 32-bit float all the way
through eliminates a difference in data structure as well, and this just
verifies that. There was a time when supposedly Logic (v3, v4?) was partly
24-bit, or so the rumor went, but it's 32-bit float all the way through now,
just as Sonar, Nuendo/Cubase, Samplitude/Sequoia, DP and Audition are (I
presume, at least). I don't know what Acid or Live use. Saw promotes a
fixed-point engine, but I don't know if it is still 24-bit, or now 48-bit.
That was an intentional choice by the developer, but he's the only one I know
of that stuck with 24-bit for summing intentionally, esp. after the Digi Mix
system mixer incident.
Long answer, but to sum up: it is certainly physically *possible* for a
developer to intentionally code something differently, but it's not really
likely, since it would mean breaking some basic fixed-point or floating-point
math rules. Where the differences really showed up in the past is with PT Mix
systems, where the limitation was significant - e.g. 24-bit with truncation at
several stages. That really isn't such an issue anymore. Given the differences
in workflow, missing something in workflow or layout differences is easy
enough to do. (E.g. Sonar doesn't have groups and busses the way Nuendo does,
as its outputs are actually driver outputs, not software busses; so busses in
Sonar are actually outputs, and sub-busses in Sonar are actually busses in
Nuendo. I haven't found the equivalent of a Nuendo group in Sonar - that
affects the results of some tests (though not basic summing) if not taken into
account, but when taken into account, they work exactly the same way.)
So, at least when talking about apps with 32-bit float all the way through,
it's safe to say (since it has been proven) that summing isn't different
unless there is an error somewhere, or a variation in how the user duplicates
the same mix in the two different apps.
Imho, that's actually a very good thing - we are approaching a more consistent
basis for recording and mixing, from which users can make all of the decisions
about how the final product will sound, rather than being required to decide
at the moment they purchase a pricey console and then having to focus their
business on clients who want "that sound". I believe we are actually closer to
the pure definition of recording now than we once were.
Regards,
Dedric
>
> I the answer is yes, then,the real task is to discover or rather un-cover
> what's say: Motu's vision of summing, versus Digidesign, versus Steinberg
> and so on..
>
> What's under the hood. To me and others,when Digi re-coded their summing
> engine, it was obvious that Pro Tools has an obvious top end (8k-10k)
> bump.
> Where as Steinberg's summing is very neutral.
>
> "Dedric Terry" <dedric@echomg.com> wrote:
>>Hi Neil,
>>
>>Jamie is right. And you aren't wacked out - you are thinking this through
>
>>in a reasonable manner, but coming to the wrong
>>conclusion - easy to do given how confusing digital audio can be. Each
> word
>>represents an amplitude
>>point on a single curve that is changing over time, and can vary with a
>
>>speed up to the Nyquist frequency (as Jamie described).
>>The complex harmonic content we hear is actually the frequency modulation
> of
>>a single waveform,
>>that over a small amount of time creates the sound we translate - we don't
>
>>really hear a single sample at a time,
>>but thousands of samples at a time (1 sample alone could at most represent
> a
>>single positive or negative peak
>>of a 22,050Hz waveform).
>>
>>If one bit doesn't cancel, esp. if it's a higher order bit than number 24,
>
>>you may hear, and will see that easily,
>>and the higher the bit in the dynamic range (higher order) the more
>>audible
>
>>the difference.
>>Since each bit is 6dB of dynamic range, you can extrapolate how "loud"
>>that
>
>>bit's impact will be
>>if there is a variation.
>>
>>Now, obviously if we are talking about 1 sample in a 44.1k rate song, then
>
>>it simply be a
>>click (only audible if it's a high enough order bit) instead of an obvious
>
>>musical difference, but that should never
>>happen in a phase cancellation test between identical files higher than
> bit
>>24, unless there are clock sync problems,
>>driver issues, or the DAW is an early alpha version. :-)
>>
>>By definition of what DAWs do during playback and record, every audio
>>stream
>
>>has the same point in time (judged by the timeline)
>>played back sample accurately, one word at a time, at whatever sample
>>rate
>
>>we are using. A phase cancellation test uses that
>>fact to compare two audio files word for word (and hence bit for bit since
>
>>each bit of a 24-bit word would
>>be at the same bit slot in each 24-bit word). Assuming they are aligned
> to
>>the same start point, sample
>>accurately, and both are the same set of sample words at each sample
>>point,
>
>>bit for bit, and one is phase inverted,
>>they will cancel through all 24 bits. For two files to cancel completely
>
>>for the duration of the file, each and every bit in each word
>>must be the exact opposite of that same bit position in a word at the same
>
>>sample point. This is why zooming in on an FFT
>>of the full difference file is valuable as it can show any differences in
>
>>the lower order bits that wouldn't be audible. So even if
>>there is no audible difference, the visual followup will show if the two
>
>>files truly cancel even a levels below hearing, or
>>outside of a frequency change that we will perceive.
>>
>>When they don't cancel, usually there will be way more than 1 bit
>>difference - it's usually one or more bits in the words for
>>thousands of samples. From a musical standpoint this is usually in a
>>frequency range (low freq, or high freq most often) - that will
>>show up as the difference between them, and that usually happens due to
> some
>>form of processing difference between the files,
>>such as EQ, compression, frequency dependant gain changes, etc. That is
> what
>>I believe you are thinking through, but when
>>talking about straight summing with no gain change (or known equal gain
>
>>changes), we are only looking at linear, one for one
>>comparisons between the two files' frequency representations.
>>
>>Regards,
>>Dedric
>>
>>> Neil wrote:
>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>> The tests I did were completely blank down to -200 dB (far below the
>
>>>>> last
>>>>
>>>>> bit). It's safe to say there is no difference, even in
>>>>> quantization noise, which by technical rights, is considered below the
>
>>>>> level
>>>>
>>>>> of "cancellation" in such tests.
>>>>
>>>> I'm not necessarily talking about just the first bit or the
>>>> last bit, but also everything in between... what happens on bit
>>>> #12, for example? Everything on bit #12 should be audible, but
>>>> in an a/b test what if thre are differences in what bits #8
>>>> through #12 sound like, but the amplutide is stll the same on
>>>> both files at that point, you'll get a null, right? Extrapolate
>>>> that out somewhat & let's say there are differences in bits #8
>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>> etc through 43,972... Now this is breaking things down well
>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>> dn't know of any way we could extract JUST that information
>>>> from each file & play it back for an a/b test; but would not
>>>> that be enough to have to "null-able" files that do actually
>>>> sound somewhat different?
>>>>
>>>> I guess what I'm saying is that since each sample in a musical
>>>> track or full song file doesn't represent a pure, simple set of
>>>> content like a sample of a sine wave would - there's a whole
>>>> world of harmonic structure in each sample of a song file, and
>>>> I think (although I'll admit - I can't "prove") that there is
>>>> plenty of room for some variables between the first bit & the
>>>> last bit while still allowing for a null test to be successful.
>>>>
>>>> No? Am I wacked out of my mind?
>>>>
>>>> Neil
>>>>
>>
>
|
|
|
Re: (No subject) [message #77317 is a reply to message #77312] |
Fri, 22 December 2006 09:25 |
TCB
Messages: 1261 Registered: July 2007
|
Senior Member |
|
|
Actually, I wasn't referring specifically to you; I hear similar things all
the time, all over the place. The first time I went through this on this forum,
a couple of years ago, was the Great CD Burning Speed Debate. In that one,
Derek and I came up with about a gazillion ways to show that you could
rip-burn-rip-burn over and over again at all kinds of different speeds and wind
up with exactly the same data or audio CD. And I mean the same as in I slurped
the whole audio file *as a string* into perl and checked the samples. Having
done that, and thereby proven it beyond a shadow of a doubt, I was told,
roughly, that clearly I couldn't hear well enough for these esoteric
discussions. 'Use your ears, dude.'
So, is it possible to write a DAW with a filter on the master bus? Yes, of
course it is. Why anyone would want to do such a thing is beyond me, since
there are conventions that are pretty much constant throughout the digital
audio world about how signals should be mixed together. So if DAW X is a
little more present in the second-to-top octave (I think you mentioned one
being so), I would call that either a bug or mistaken perception. If I could
do the export-file, flip-polarity trick and the files didn't null, I'd say,
'Interesting, let's be sure my test is good. Is there an EQ on a track in one
mix and not the other? Is there a group track that is doubling the guitars in
one mix and not the other?' If, on the other hand, the tracks did null, I'd
say, 'Hmmmmmm, maybe I'm hearing a difference where there isn't one.'
Lastly, and this is just a quirk for me, I find it odd that musicians and
audio engineers are so uninterested in taking expert opinion seriously.
This is rampant in the audiophile world, where off the record the engineers
themselves will tell you they're not sure the $3k speaker cables they used
to hook up their new speaker line make any difference. But take Neil, for
example: his mixes are 20 times better than mine for that kind of music.
If he gave me advice and opinion, I would take it very seriously. But for
some reason, people like me and Dedric, who have developed extensive
knowledge of how computers work, are very often brushed off very quickly.
Dedric isn't even a jerk about it, while I'm a jerk about it only sometimes,
so I find that reaction to be, well, odd. But like I said, whatever gets
ya through the day; I'm not looking for converts and nobody is paying me
to post here.
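(The perl check described above - reading a whole ripped file and comparing
the samples - amounts to a byte-for-byte comparison, which looks something
like this in Python. The file paths are placeholders, and this is a sketch of
the idea, not the actual script from the debate:)

```python
def files_identical(path_a: str, path_b: str) -> bool:
    """True if the two files contain exactly the same bytes."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        return fa.read() == fb.read()

# e.g. files_identical("rip_1x.wav", "rip_48x.wav") - if this returns True,
# the burn speed changed nothing in the data, whatever the ears report.
```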
TCB
"LaMont" <jjdpro@ameritech.net> wrote:
>
>Thad, I assume that you ar ereferring to me (using your ears).
>
>Look, I think we are talking about two differnt things here:
>
>1) Digital data
>
>2) Software (DAWS) coding
>
>You and Dedric have been concentrating on the laws of Digital audio. That's
>fine. But, I'm talking about the Software that we use to decode our digital
>audio.
>
>Like my previous post states, are we saying that DAW software can't be written
>for certain sonic results?
>
>
>"TCB" <nobody@ishere.com> wrote:
>>
>>Neil,
>>
>>You're using an analog waveform that is leading you to think incorrectly
>>about sampling. This is (very roughly) how it would look if you're working
>>with 16 bit samples.
>>
>>0101010101010101
>>0101010111011101
>>0101110111010101
>>0101111111010100
>>0101110101111101
>>0111011101111101
>>0111110101110101
>>0100010111000100
>>0100011101010101
>>0001011100010101
>>0000010111111100
>>0001000001010111
>>0100000111110101
>>0111011101010000
>>0101011101000000
>>0101011111000101
>>0101010101010101
>>
>>The easiest way to think of how the sampler works is that it looks at the
>>incoming voltage to the converter and asks 'Is this in the top or bottom
>>half of the possible amplitudes I can measure.' If it's in the top half
>it
>>writes a 1, if it's in the bottom half, it writes a zero. The next bit
asks,
>>'Now that I know which half of my measurable voltage I'm looking at, is
>the
>>voltage in the top half of that half or the bottom half?' That's bit number
>>two. Then it's on to, 'Now that I know what quarter it's it, is it in the
>>top or bottom half of that quarter?' And so on sixteen time giving it a
>resolution
>>of 2 to the sixteenth power.
>>
>>In other words, asking if the bits under the sample would sound is like
>asking
>>how the road would drive if it were 30 feet underground.
>>
>>Now then, to get back to the original argument, people like me (and I think
>>Dedric but I'll let him speak for himself) get a little hacked off when
>someone
>>says, 'you have to use your ears' when it's possible using various computer
>>tools to check exactly how many of those samples match in two given files.
>>The nulling trick is just a very easy way to get a quick read on one aspect,
>>which is to answer the question 'do these two files match?' But there are
>>others and I've used them. And the sameness between properly written (by
>>which I mean lacking in serious bugs) audio applications is startling and
>>their differences so minor that other errors (analog cables, dust on the
>>speaker cone, humidity and temperature in the room) are far more likely
>to
>>cause a difference.
>>
>>Personally I think this all stems from romanticism about music and the
purity
>>of art. I have yet to hear someone tell me they need financial calculations
>>down to 25 decimal points. They need them done to (at most) five decimal
>>points because the smallest commonly used financial divisor is the basis
>>point, or one one hundredth of a penny. So internally you calculate to
five
>>decimal places and round up or down from there and get on with your life.
>>As geeky as finance guys can get, nobody ever says, 'You know, Thad, that
>>last basis point just isn't really punchy enough for this deal. LBO guys
>>need really punchy returns, so can you run that calculation out a few more
>>bits to get a punchier basis point?' Scientists are also extremely careful
>>to keep 'false precision' out of their calculations, so if one instrument
>>will measure to four decimal points and the others will measure to 12 they
>>understand that everything the higher resolution instruments measure beyond
>>four accurate decimal points is worthless. They usually won't even record
>>the data to be sure they don't claim greater precision than they have,
because
>>that's considered a horribly embarrassing junior high school mistake. But
>>musicians and audio engineers think that just because the data is sound
>data
>>somehow it enters a nebulous zone where that last one hundredth of a penny
>>can be punchier. Hey, if it gets you through the day, that's fine by me,
>>but there are things about digital audio that can be proven true or false
>>using the data. For things that can't be proven true or false with the
data
>>itself there is ABY testing, which is a controlled way to use the most
precise
>>audio measuring instruments available (our ears, at least until bats will
>>wear headphones) to see if things sound different. When it's not in the
>data,
>>and it's not in the ABY, I say it doesn't exist.
>>
>>TCB
>>
>>"Neil" <IUOIU@OIU.com> wrote:
>>>
>>>Dedric - first of all, great explanation - esp. your 2nd
>>>paragraph. Next, let's take a look at something in the form of
>>>the best "graph" I can do in this NG's format... let's assume
>>>that each dot in the simple graph below is a sample point on a
>>>segment of a waveform, and let's futher assume that each "I"
>>>below represents four bits (I don't want to make it too
>>>vertically large, for ease of reading) - so we're dealing with
>>>a 16-bit wav file, with the 5th "dot" from the start point on
>>>the left being a full-amplitude, zero-db-line 16 bit sample.
>>>
>>>Now.... really, all I have to do to get a "null" is to have the
>>>amplitude match at each "dot" on the waveform, yes? This, of
>>>course, is a very simplistic graphic example, so bear with
>>>me... but if I have each "dot" matching in amplitude &
>>>therefore can get a null, what about the bits & content thereof
>>>in between the extremes between the maxes & zero-line
>>>crossings? Are you saying that there can be no variables in
>>>sound between those sections that would still result in a null?
>>>What about all the "I"'s that represent bits in between the
>>>maxes & the minimums?
>>>
>>> .
>>> . I .
>>> . I I I . What about the stuff in here?
>>> . I I I I I . .....or in here????
>>>. I I I I I I I .
>>>-------------------------------------
>>> . I I I I I I I .
>>> . I I I I I . Again, what about this region?
>>> . I I I . ... or this region?
>>> . I .
>>> .
>>>
>>>Neil
>>
>
|
|
|
Re: (No subject) [message #77322 is a reply to message #77317] |
Fri, 22 December 2006 10:03 |
LaMont
Messages: 828 Registered: October 2005
|
Senior Member |
|
|
Thad,
I think your points are valid. However, I think the reason most recording
engineers don't like to talk audio science is the "unexplainable" anomalies
that occur with sound. It matters not whether it's digital or analog, but
rather how it sounds..
We need factions like the AES who discuss such theoretical and new ideas and
advancements in audio reproduction. However, once the science is down, then
comes the art of it all.. Music is still the reason for what we are discussing.
Music is emotional, yet it is a science as well.
For your camp to continue to de-value the human side (use your ears) of the
equation is not right either.
I think both sides are right, but the science campers cannot speak to a guy
whose main tool is his "ears" and not a scope.
"TCB" <nobody@ishere.com> wrote:
>
>Actually, I wasn't referring specifically to you, I hear similar things
all
>the time all over the place. The first time I went through this on this
forum
>a couple of years ago was the Great CD Burning Speed Debate. In that one
>Derek and I came up with about a gazillion ways to show that you could rip-burn-rip-burn
>over and over again at all kinds of different speeds and wind up with exactly
>the same data or audio CD. And I mean the same as in I slurped the whole
>audio file *as a string* into perl and checked the samples. Having done
that,
>and thereby proven beyond a shadow of a doubt, I was told, roughly, that
>clearly I couldn't hear well enough for these esoteric discussions. 'Use
>your ears, dude.'
>
>So, it is possible to write a DAW with a filter on the master bus? Yes,
of
>course it is. Why anyone would want to do such a thing is beyond me since
>there are conventions that are pretty much constant throughout the digital
>audio world about how signals should be mixed together. So if DAW X is a
>little more present in the second to the top octave (I think you mentioned
>one being so) I would call that either a bug or mistaken perception. If
I
>could do the export file, flip polarity trick and the files didn't null
I'd
>say, 'Interesting, let's be sure my test is good. Is there an EQ on a track
>in one mix and not the other? Is there a group track that is doubling the
>guitars in one mix and not the other?' If, on the other hand, the tracks
>did null I'd say, 'Hmmmmmm, maybe I'm hearing a difference where there isn't
>one.'
>
>Lastly, and this is just a quirk for me, I find it odd that musicians and
>audio engineers are so disinterested in taking seriously expert opinion.
>This is rampant in the audiophile world where off the record the engineers
>themselves will tell you they're not sure the $3k speaker cables they used
>to hook up their new speaker line makes any difference. But with Neil, for
>example, his mixes are 20 times better than mine for that kind of music.
>If he gave me advice and opinion I would take it very seriously. But for
>some reason when people like me and Dedric, who have developed extensive
>knowledge into how computers work, are very often brushed off very quickly.
>Dedric isn't even a jerk about, while I'm a jerk about it only sometimes,
>so I find that reaction to be, well, odd. But like I said, whatever gets
>ya through the day, I'm not looking for converts and nobody is paying me
>to post here.
>
>TCB
>
>"LaMont" <jjdpro@ameritech.net> wrote:
>>
>>Thad, I assume that you ar ereferring to me (using your ears).
>>
>>Look, I think we are talking about two differnt things here:
>>
>>1) Digital data
>>
>>2) Software (DAWS) coding
>>
>>You and Dedric have been concentrating on the laws of Digital audio. That's
>>fine. But, I'm talking about the Software that we use to decode our digital
>>audio.
>>
>>Like my previous post states, are we saying that DAW software can't be
written
>>for certain sonic results?
>>
>>
>>"TCB" <nobody@ishere.com> wrote:
>>>
>>>Neil,
>>>
>>>You're using an analog waveform that is leading you to think incorrectly
>>>about sampling. This is (very roughly) how it would look if you're working
>>>with 16 bit samples.
>>>
>>>0101010101010101
>>>0101010111011101
>>>0101110111010101
>>>0101111111010100
>>>0101110101111101
>>>0111011101111101
>>>0111110101110101
>>>0100010111000100
>>>0100011101010101
>>>0001011100010101
>>>0000010111111100
>>>0001000001010111
>>>0100000111110101
>>>0111011101010000
>>>0101011101000000
>>>0101011111000101
>>>0101010101010101
>>>
>>>The easiest way to think of how the sampler works is that it looks at
the
>>>incoming voltage to the converter and asks 'Is this in the top or bottom
>>>half of the possible amplitudes I can measure.' If it's in the top half
>>it
>>>writes a 1, if it's in the bottom half, it writes a zero. The next bit
>asks,
>>>'Now that I know which half of my measurable voltage I'm looking at, is
>>the
>>>voltage in the top half of that half or the bottom half?' That's bit number
>>>two. Then it's on to, 'Now that I know what quarter it's it, is it in
the
>>>top or bottom half of that quarter?' And so on sixteen time giving it
a
>>resolution
>>>of 2 to the sixteenth power.
>>>
>>>In other words, asking if the bits under the sample would sound is like
>>asking
>>>how the road would drive if it were 30 feet underground.
>>>
>>>Now then, to get back to the original argument, people like me (and I
think
>>>Dedric but I'll let him speak for himself) get a little hacked off when
>>someone
>>>says, 'you have to use your ears' when it's possible using various computer
>>>tools to check exactly how many of those samples match in two given files.
>>>The nulling trick is just a very easy way to get a quick read on one aspect,
>>>which is to answer the question 'do these two files match?' But there
are
>>>others and I've used them. And the sameness between properly written (by
>>>which I mean lacking in serious bugs) audio applications is startling
and
>>>their differences so minor that other errors (analog cables, dust on the
>>>speaker cone, humidity and temperature in the room) are far more likely
>>to
>>>cause a difference.
>>>
>>>Personally I think this all stems from romanticism about music and the
>purity
>>>of art. I have yet to hear someone tell me they need financial calculations
>>>down to 25 decimal points. They need them done to (at most) five decimal
>>>points because the smallest commonly used financial divisor is the basis
>>>point, or one one hundredth of a penny. So internally you calculate to
>five
>>>decimal places and round up or down from there and get on with your life.
>>>As geeky as finance guys can get, nobody ever says, 'You know, Thad, that
>>>last basis point just isn't really punchy enough for this deal. LBO guys
>>>need really punchy returns, so can you run that calculation out a few
more
>>>bits to get a punchier basis point?' Scientists are also extremely careful
>>>to keep 'false precision' out of their calculations, so if one instrument
>>>will measure to four decimal points and the others will measure to 12
they
>>>understand that everything the higher resolution instruments measure beyond
>>>four accurate decimal points is worthless. They usually won't even record
>>>the data to be sure they don't claim greater precision than they have,
>because
>>>that's considered a horribly embarrassing junior high school mistake.
But
>>>musicians and audio engineers think that just because the data is sound
>>data
>>>somehow it enters a nebulous zone where that last one hundredth of a penny
>>>can be punchier. Hey, if it gets you through the day, that's fine by me,
>>>but there are things about digital audio that can be proven true or false
>>>using the data. For things that can't be proven true or false with the
>data
>>>itself there is ABY testing, which is a controlled way to use the most
>precise
>>>audio measuring instruments available (our ears, at least until bats will
>>>wear headphones) to see if things sound different. When it's not in the
>>data,
>>>and it's not in the ABY, I say it doesn't exist.
>>>
>>>TCB
>>>
>>>"Neil" <IUOIU@OIU.com> wrote:
>>>>
>>>>Dedric - first of all, great explanation - esp. your 2nd
>>>>paragraph. Next, let's take a look at something in the form of
>>>>the best "graph" I can do in this NG's format... let's assume
>>>>that each dot in the simple graph below is a sample point on a
>>>>segment of a waveform, and let's futher assume that each "I"
>>>>below represents four bits (I don't want to make it too
>>>>vertically large, for ease of reading) - so we're dealing with
>>>>a 16-bit wav file, with the 5th "dot" from the start point on
>>>>the left being a full-amplitude, zero-db-line 16 bit sample.
>>>>
>>>>Now.... really, all I have to do to get a "null" is to have the
>>>>amplitude match at each "dot" on the waveform, yes? This, of
>>>>course, is a very simplistic graphic example, so bear with
>>>>me... but if I have each "dot" matching in amplitude &
>>>>therefore can get a null, what about the bits & content thereof
>>>>in between the extremes between the maxes & zero-line
>>>>crossings? Are you saying that there can be no variables in
>>>>sound between those sections that would still result in a null?
>>>>What about all the "I"'s that represent bits in between the
>>>>maxes & the minimums?
>>>>
>>>> .
>>>> . I .
>>>> . I I I . What about the stuff in here?
>>>> . I I I I I . .....or in here????
>>>>. I I I I I I I .
>>>>-------------------------------------
>>>> . I I I I I I I .
>>>> . I I I I I . Again, what about this region?
>>>> . I I I . ... or this region?
>>>> . I .
>>>> .
>>>>
>>>>Neil
>>>
>>
>
|
|
|
Re: (No subject) [message #77323 is a reply to message #77317] |
Fri, 22 December 2006 10:21 |
Neil
Messages: 1645 Registered: April 2006
|
Senior Member |
|
|
"TCB" <nobody@ishere.com> wrote:
>Lastly, and this is just a quirk for me, I find it odd that musicians and
>audio engineers are so disinterested in taking seriously expert opinion.
>This is rampant in the audiophile world where off the record the engineers
>themselves will tell you they're not sure the $3k speaker cables they used
>to hook up their new speaker line makes any difference. But with Neil, for
>example, his mixes are 20 times better than mine for that kind of music.
>If he gave me advice and opinion I would take it very seriously. But for
>some reason when people like me and Dedric, who have developed extensive
>knowledge into how computers work, are very often brushed off very quickly.
>Dedric isn't even a jerk about, while I'm a jerk about it only sometimes,
>so I find that reaction to be, well, odd. But like I said, whatever gets
>ya through the day, I'm not looking for converts and nobody is paying me
>to post here.
First of all, thanks for the compliment, and secondly, I hope I
haven't come across as one of those who brushes off the facts
stated by those more knowledgeable than I am about the
technical aspects of all this stuff - I guess I'm part of the
"use your ears, but also pay attention to the data" crowd.
I'm the first to acknowledge that I don't understand some
of the more technical elements of the digital world, and it
appears that my interpretations of how certain things work in
the digital realm are/were flawed... so, despite the fact that
you're not paid to post here, I thank you & Dedric & others for
getting into the detail you have... some of it's over my head,
admittedly, but you've explained it clearly enough that
I "get it" a little bit better.
Neil
Re: (No subject)... What's up under the hood? [message #77324 is a reply to message #77313]
Fri, 22 December 2006 10:24
LaMont
Messages: 828 Registered: October 2005
Senior Member
Dedric, good post...
However, I have PT M-Powered with an M-Audio 410 interface for my laptop,
and it has that same sound (no EQ, zero fader) that HD does. I know they
use the same 48-bit fixed mixer. I load up the same file in Nuendo (no EQ,
zero fader)... result: a different sonic character.
PT has a touch of top end... Nuendo, a nice smooth (flat) sound. And I'm just
talking about a stereo wav file nulled with no eq..nothing..zilch..nada..
Now, there are devices (keyboards, drum machines) on the market today that
ship with a master buss compressor and EQ switched on, with the top end notched up.
Why? Because it gives their product a competitive advantage over the competition.
Ex: Yamaha's Motif ES, Akai's MPC 1000 and 2500, Roland's Fantom.
So why wouldn't a DAW manufacturer code in a little extra (ooommf) to make their
DAW sound better? Especially given the "I hate digital summing" crowd. And
if I'm a DAW manufacturer, what would give my product a sonic edge over the
competition?
We live in the "louder is better" audio world these days, so a DAW that can
catch my attention "sonically" will probably get the sale. That's what
happened to me back in 1997 when I heard Paris. I was floored!!! To this
day, nothing has floored me like that "Road House Blues" demo I heard
on Paris.
Was it the hardware? Was it the software? I remember talking with Edmund
at the 2000 winter NAMM, and he told me that he & Steve set out to reproduce
the sonics of a big-buck analog board, EQs and all. And summing was a big,
big issue for them, because they (ID) thought that nobody had gotten summing
right. And by right, they meant it behaved like a console with a wide lane
for all of those tracks..
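For what it's worth, a hidden master-buss "bump" like the one built into those keyboards would be easy to catch with the null test discussed elsewhere in this thread. Here is a rough sketch in plain Python (hypothetical filter and numbers, no real DAW involved) showing that even a small top-end boost refuses to null, while a unity-gain pass nulls exactly:

```python
import math
import random

def treble_boost(x, g=0.05):
    """Crude hypothetical 'secret' top-end lift: mix a little of the
    first difference (a high-pass) back onto the signal."""
    return [s + g * (s - p) for s, p in zip(x, [0.0] + x[:-1])]

def peak_db(residual):
    """Peak of the null residual in dBFS (-inf if it nulls perfectly)."""
    peak = max(abs(s) for s in residual)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

random.seed(0)
track = [random.uniform(-1, 1) for _ in range(48000)]  # 1 s of noise

# Unity-gain pass vs. the original: cancels completely
unity_residual = [a - b for a, b in zip(track, track)]
print(peak_db(unity_residual))          # -inf

# 'Secret EQ' pass vs. the original: a clearly measurable residue
eq_residual = [a - b for a, b in zip(treble_boost(track), track)]
print(peak_db(eq_residual) > -60)       # True: nowhere near a null
```

The particular filter doesn't matter; the point is that any deliberate "ooommf" on the buss shows up the moment you flip polarity and sum.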
"Dedric Terry" <dedric@echomg.com> wrote:
>"LaMont" <jjdpro@ameritech.net> wrote in message news:458be8d5$1@linux...
>>
>> Okay...
>> I guess what I'm saying is this:
>>
>> -Is it possible that diferent DAW manufactuers "code" their app
>> differently
>> for sound results.
>
>Of course it is *possible* to do this, but only if the DAW has a specific
>sound shaping purpose
>beyond normal summing/mixing. Users talk about wanting developers to add
a
>"Neve sound" or "API sound" option to summing engines,
>but that's really impractical given the amount of dsp required to make a
>decent emulation (with convolution, dynamic EQ functions,
>etc). For sake of not eating up all cpu processing, that could likely only
>surface as is a built in EQ, which
>no one wants universally in summing, and anyone can add at will already.
>
>So it hasn't happened yet and isn't likely to as it detours from the basic
>tenant of audio recording - recreate what comes in as
>accurately as possible.
>
>What Digi did in recoding their summing engine was try to recover some
>of the damage done by the 24-bit buss in Mix systems. Motorola 56k dsps
are
>24-bit fixed point chips and I think
>the new generation (321?) still is, but they use double words now for
>48-bits). And though plugins could process at 48-bit by
>doubling up and using upper and lower 24-bit words for 48-bit outputs, the
>buss
>between chips was 24-bits, so they had to dither to 24-bits after every
>plugin. The mixer (if I recall correctly) also
>had a 24-bit buss, so what Digi did is to add a dither stage to the mixer
to
>prevent this
>constant truncation of data. 24-bits isn't enough to cover summing for
more
>than a few tracks without
>losing information in the 16-bit world, and in the 24-bit world some
>information will be lost, at least at the lowest levels.
>
>Adding a dither stage (though I think they did more than that - perhaps
>implement a 48-bit double word stage as well),
>simply smoothed over the truncation that was happening, but it didn't solve
>the problem, so with HD
>they went to a double-word path - throughout I believe, including the path
>between chips. I believe the chips
>are still 24-bit, but by doubling up the processing (yes at a cost of twice
>the overhead), they get a 48-bit engine.
>This not only provided better headroom, but greater resolution. Higher
bit
>depths subdivide the amplitude with greater resolution, and that's
>really where we get the definition of dynamic range - by lowering the signal
>to quantization noise ratio.
>
>With DAWs that use 32-bit floating point math all the way through, the only
>reason for altering the summing
>is by error, and that's an error that would actually be hard to make and
get
>past a very basic alpha stage of testing.
>There is a small difference in fixed point math and floating point math,
or
>at least a theoretical difference in how it affects audio
>in certain cases, but not necessarily in the result for calculating gain
in
>either for the same audio file. Where any differences might show up is
>complicated, and I believe only appear at levels below 24-bit (or in
>headroom with tracks pushed beyond 0dBFS), or when/if
>there areany differences in where each amplitude level is quantized.
>
>Obviously there can be differences if the DAW has to use varying bit depths
>throughout a single summing path to accomodate hardware
>as well as software summing, since there may be truncation or rounding along
>the way, but that impacts the lowest bit
>level, and hence - spacial reproduction, reverb tails perhaps, and "depth",
>not the levels most music so the differences are most
>often more subtle than not. But most modern DAWs have eliminated those
>"rough edges" in the math by increasing the bit depth to accomodate normal
>summing required for mixing audio.
>
>So with Lynn's unity gain summing test (A files on the CD I believe), DAWs
>were never asked to sum beyond 24-bits,
>at least not on the upper end of the dynamic range, so everything that could
>represent 24-bits accurately would cancel. The only ones
>that didn't were ones that had a different bit depth and/or gain structure
>whether hybrid or native
>(e.g. Paris' subtracting 20dB from tracks and adding it to the buss). In
>this case, PTHD cancelled (when I tested it) with
>Nuendo, Samplitude, Logic, etc because the impact of the 48-bit fixed vs.
>32-bit float wasn't a factor.
>
>When trying other tests, even when adding and subtracting gain, Nuendo,
>Sequoia and Sonar cancel - both audibly and
>visually at inaudible levels, which only proves that one isn't making an
>error when calculating basic gain. Since a dB is well defined,
>and the math to add gain is simple, they shouldn't. The fact that they
all
>use 32-bit float all the way through eliminates a difference
>in data structure as well, and this just verifies that. There was a time
>that supposedly Logic (v3, v4?) was partly 24-bit, or so the rumor went,
>but it's 32-bit float all the way through now just as Sonar, Nuendo/Cubase,
>Samplitude/Sequoia, DP, Audition (I presume at least).
>I don't know what Acid or Live use. Saw promotes a fixed point engine,
but
>I don't know if it is still 24-bit, or now 48 bit.
>That was an intentional choice by the developer, but he's the only one I
>know of that stuck with 24-bit for summing
>intentionally, esp. after the Digi Mix system mixer incident.
>
>Long answer, but to sum up, it is certainly physically *possible* for a
>developer to code something differently intentionally, but not
>in reality likely since it would be breaking some basic fixed point or
>floating point math rules. Where the differences really
>showed up in the past is with PT Mix systems where the limitation was really
>significant - e.g. 24 bit with truncation at several stages.
>
>That really isn't such an issue anymore. Given the differences in workflow,
>missing something in workflow or layout differences
>is easy enough to do (e.g. Sonar doesn't have group and busses the way
>Nuendo does, as it's outputs are actually driver outputs,
>not software busses, so in Sonar, busses are actually outputs, and sub
>busses are actually busses in Nuendo. There are no,
>or at least I haven't found the equivalent of a Nuendo group in Sonar -
that
>affects the results of some tests (though not basic
>summing) if not taken into account, but when taken into account, they work
>exactly the same way).
>
>So at least when talking about apps with 32-bit float all the way through,
>it's safe to say (since it has been proven) that summing isn't different
>unless
>there is an error somewhere, or variation in how the user duplicates the
>same mix in two different apps.
>
>Imho, that's actually a very good thing - approaching a more consistent
>basis for recording and mixing from which users can make all
>of the decisions as to how the final product will sound and not be required
>to decide when purchasing a pricey console, and have to
>focus their business on clients who want "that sound". I believe we are
>actually closer to the pure definition of recording now than
>we once were.
>
>Regards,
>Dedric
>
>
>>
>> I the answer is yes, then,the real task is to discover or rather un-cover
>> what's say: Motu's vision of summing, versus Digidesign, versus Steinberg
>> and so on..
>>
>> What's under the hood. To me and others,when Digi re-coded their summing
>> engine, it was obvious that Pro Tools has an obvious top end (8k-10k)
>> bump.
>> Where as Steinberg's summing is very neutral.
>>
>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>Hi Neil,
>>>
>>>Jamie is right. And you aren't wacked out - you are thinking this through
>>
>>>in a reasonable manner, but coming to the wrong
>>>conclusion - easy to do given how confusing digital audio can be. Each
>> word
>>>represents an amplitude
>>>point on a single curve that is changing over time, and can vary with
a
>>
>>>speed up to the Nyquist frequency (as Jamie described).
>>>The complex harmonic content we hear is actually the frequency modulation
>> of
>>>a single waveform,
>>>that over a small amount of time creates the sound we translate - we don't
>>
>>>really hear a single sample at a time,
>>>but thousands of samples at a time (1 sample alone could at most represent
>> a
>>>single positive or negative peak
>>>of a 22,050Hz waveform).
>>>
>>>If one bit doesn't cancel, esp. if it's a higher order bit than number
24,
>>
>>>you may hear, and will see that easily,
>>>and the higher the bit in the dynamic range (higher order) the more
>>>audible
>>
>>>the difference.
>>>Since each bit is 6dB of dynamic range, you can extrapolate how "loud"
>>>that
>>
>>>bit's impact will be
>>>if there is a variation.
>>>
>>>Now, obviously if we are talking about 1 sample in a 44.1k rate song,
then
>>
>>>it simply be a
>>>click (only audible if it's a high enough order bit) instead of an obvious
>>
>>>musical difference, but that should never
>>>happen in a phase cancellation test between identical files higher than
>> bit
>>>24, unless there are clock sync problems,
>>>driver issues, or the DAW is an early alpha version. :-)
>>>
>>>By definition of what DAWs do during playback and record, every audio
>>>stream
>>
>>>has the same point in time (judged by the timeline)
>>>played back sample accurately, one word at a time, at whatever sample
>>>rate
>>
>>>we are using. A phase cancellation test uses that
>>>fact to compare two audio files word for word (and hence bit for bit since
>>
>>>each bit of a 24-bit word would
>>>be at the same bit slot in each 24-bit word). Assuming they are aligned
>> to
>>>the same start point, sample
>>>accurately, and both are the same set of sample words at each sample
>>>point,
>>
>>>bit for bit, and one is phase inverted,
>>>they will cancel through all 24 bits. For two files to cancel completely
>>
>>>for the duration of the file, each and every bit in each word
>>>must be the exact opposite of that same bit position in a word at the
same
>>
>>>sample point. This is why zooming in on an FFT
>>>of the full difference file is valuable as it can show any differences
in
>>
>>>the lower order bits that wouldn't be audible. So even if
>>>there is no audible difference, the visual followup will show if the two
>>
>>>files truly cancel even a levels below hearing, or
>>>outside of a frequency change that we will perceive.
>>>
>>>When they don't cancel, usually there will be way more than 1 bit
>>>difference - it's usually one or more bits in the words for
>>>thousands of samples. From a musical standpoint this is usually in a
>>>frequency range (low freq, or high freq most often) - that will
>>>show up as the difference between them, and that usually happens due to
>> some
>>>form of processing difference between the files,
>>>such as EQ, compression, frequency dependant gain changes, etc. That is
>> what
>>>I believe you are thinking through, but when
>>>talking about straight summing with no gain change (or known equal gain
>>
>>>changes), we are only looking at linear, one for one
>>>comparisons between the two files' frequency representations.
>>>
>>>Regards,
>>>Dedric
>>>
>>>> Neil wrote:
>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>> The tests I did were completely blank down to -200 dB (far below the
>>
>>>>>> last
>>>>>
>>>>>> bit). It's safe to say there is no difference, even in
>>>>>> quantization noise, which by technical rights, is considered below
the
>>
>>>>>> level
>>>>>
>>>>>> of "cancellation" in such tests.
>>>>>
>>>>> I'm not necessarily talking about just the first bit or the
>>>>> last bit, but also everything in between... what happens on bit
>>>>> #12, for example? Everything on bit #12 should be audible, but
>>>>> in an a/b test what if thre are differences in what bits #8
>>>>> through #12 sound like, but the amplutide is stll the same on
>>>>> both files at that point, you'll get a null, right? Extrapolate
>>>>> that out somewhat & let's say there are differences in bits #8
>>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>>> etc through 43,972... Now this is breaking things down well
>>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>>> dn't know of any way we could extract JUST that information
>>>>> from each file & play it back for an a/b test; but would not
>>>>> that be enough to have to "null-able" files that do actually
>>>>> sound somewhat different?
>>>>>
>>>>> I guess what I'm saying is that since each sample in a musical
>>>>> track or full song file doesn't represent a pure, simple set of
>>>>> content like a sample of a sine wave would - there's a whole
>>>>> world of harmonic structure in each sample of a song file, and
>>>>> I think (although I'll admit - I can't "prove") that there is
>>>>> plenty of room for some variables between the first bit & the
>>>>> last bit while still allowing for a null test to be successful.
>>>>>
>>>>> No? Am I wacked out of my mind?
>>>>>
>>>>> Neil
>>>>>
>>>
>>
>
>
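A toy model of the truncation problem Dedric describes above: sum a few dozen tracks, re-quantizing back to 24 bits after every gain stage (as on the old 24-bit Mix-era buss), and compare against the same sum kept at full precision the whole way. This is a deliberately crude sketch, not Digi's actual mixer math; it uses quiet, all-positive test signals so the truncation error only accumulates in one direction:

```python
import random

BITS = 24
SCALE = 1 << (BITS - 1)   # 24-bit signed full scale

def trunc24(x):
    """Re-quantize to the 24-bit grid by truncation (no dither)."""
    return int(x * SCALE) / SCALE

random.seed(1)
# 64 quiet 'tracks', each already quantized to 24 bits
tracks = [trunc24(random.uniform(0.0, 0.01)) for _ in range(64)]
fader = 0.7   # any non-power-of-two gain pushes data below the 24th bit

# Wide path: apply gain, keep full precision, sum at the end
wide = sum(t * fader for t in tracks)

# Narrow path: truncate back to 24 bits after every gain stage
narrow = sum(trunc24(t * fader) for t in tracks)

error_lsbs = abs(wide - narrow) * SCALE
print(error_lsbs)   # accumulated error, typically dozens of LSBs
```

The error lives in the lowest bits, which matches Dedric's point: it chews at low-level detail (tails, depth) rather than the obvious level of the music, and a wider accumulator (48-bit fixed or 32-bit float) avoids it entirely.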
Re: (No subject) [message #77325 is a reply to message #77322]
Fri, 22 December 2006 09:48
erlilo
Messages: 405 Registered: June 2005
Senior Member
.....It seems you have all forgotten to talk about the difference made by listening
on different monitors, with different "room problems", which is what sums the
results to our ears in the last instance.....
Erling
"LaMont" <jjdpro@ameritech.net> skrev i melding news:458c0fc6$1@linux...
>
> Thad,
> I think your points are valid. However I think the reason most Recording
> engineers don't like to talk Audio science, is because of the
> "unexplainable"
> anomolies that occurr with sound. Matters not if it's digital or analog,
> but rather how does it sound..
>
> We need factions like AES who discuss such theorectical and new ideas and
> advancements in audio reproduction. However, once the science down, then
> comes the art of it all.. Music is still the reason for whatwe are
> discussing.
> Music is emotional, yet is a science as well.
>
> For your camp to continue to de-value the human side (use your ears) of
> the
> equation is not right as well.
>
> I think both sides are right, but the science campers cannot speak to a
> guy
> who's main tool is his "ears" and not a scope.
>
>
> "TCB" <nobody@ishere.com> wrote:
>>
>>Actually, I wasn't referring specifically to you, I hear similar things
> all
>>the time all over the place. The first time I went through this on this
> forum
>>a couple of years ago was the Great CD Burning Speed Debate. In that one
>>Derek and I came up with about a gazillion ways to show that you could
>>rip-burn-rip-burn
>>over and over again at all kinds of different speeds and wind up with
>>exactly
>>the same data or audio CD. And I mean the same as in I slurped the whole
>>audio file *as a string* into perl and checked the samples. Having done
> that,
>>and thereby proven beyond a shadow of a doubt, I was told, roughly, that
>>clearly I couldn't hear well enough for these esoteric discussions. 'Use
>>your ears, dude.'
>>
>>So, it is possible to write a DAW with a filter on the master bus? Yes,
> of
>>course it is. Why anyone would want to do such a thing is beyond me since
>>there are conventions that are pretty much constant throughout the digital
>>audio world about how signals should be mixed together. So if DAW X is a
>>little more present in the second to the top octave (I think you mentioned
>>one being so) I would call that either a bug or mistaken perception. If
> I
>>could do the export file, flip polarity trick and the files didn't null
> I'd
>>say, 'Interesting, let's be sure my test is good. Is there an EQ on a
>>track
>>in one mix and not the other? Is there a group track that is doubling the
>>guitars in one mix and not the other?' If, on the other hand, the tracks
>>did null I'd say, 'Hmmmmmm, maybe I'm hearing a difference where there
>>isn't
>>one.'
>>
>>Lastly, and this is just a quirk for me, I find it odd that musicians and
>>audio engineers are so disinterested in taking seriously expert opinion.
>>This is rampant in the audiophile world where off the record the engineers
>>themselves will tell you they're not sure the $3k speaker cables they used
>>to hook up their new speaker line makes any difference. But with Neil, for
>>example, his mixes are 20 times better than mine for that kind of music.
>>If he gave me advice and opinion I would take it very seriously. But for
>>some reason when people like me and Dedric, who have developed extensive
>>knowledge into how computers work, are very often brushed off very
>>quickly.
>>Dedric isn't even a jerk about, while I'm a jerk about it only sometimes,
>>so I find that reaction to be, well, odd. But like I said, whatever gets
>>ya through the day, I'm not looking for converts and nobody is paying me
>>to post here.
>>
>>TCB
>>
>>"LaMont" <jjdpro@ameritech.net> wrote:
>>>
>>>Thad, I assume that you ar ereferring to me (using your ears).
>>>
>>>Look, I think we are talking about two differnt things here:
>>>
>>>1) Digital data
>>>
>>>2) Software (DAWS) coding
>>>
>>>You and Dedric have been concentrating on the laws of Digital audio.
>>>That's
>>>fine. But, I'm talking about the Software that we use to decode our
>>>digital
>>>audio.
>>>
>>>Like my previous post states, are we saying that DAW software can't be
> written
>>>for certain sonic results?
>>>
>>>
>>>"TCB" <nobody@ishere.com> wrote:
>>>>
>>>>Neil,
>>>>
>>>>You're using an analog waveform that is leading you to think incorrectly
>>>>about sampling. This is (very roughly) how it would look if you're
>>>>working
>>>>with 16 bit samples.
>>>>
>>>>0101010101010101
>>>>0101010111011101
>>>>0101110111010101
>>>>0101111111010100
>>>>0101110101111101
>>>>0111011101111101
>>>>0111110101110101
>>>>0100010111000100
>>>>0100011101010101
>>>>0001011100010101
>>>>0000010111111100
>>>>0001000001010111
>>>>0100000111110101
>>>>0111011101010000
>>>>0101011101000000
>>>>0101011111000101
>>>>0101010101010101
>>>>
>>>>The easiest way to think of how the sampler works is that it looks at
> the
>>>>incoming voltage to the converter and asks 'Is this in the top or bottom
>>>>half of the possible amplitudes I can measure.' If it's in the top half
>>>it
>>>>writes a 1, if it's in the bottom half, it writes a zero. The next bit
>>asks,
>>>>'Now that I know which half of my measurable voltage I'm looking at, is
>>>the
>>>>voltage in the top half of that half or the bottom half?' That's bit
>>>>number
>>>>two. Then it's on to, 'Now that I know what quarter it's it, is it in
> the
>>>>top or bottom half of that quarter?' And so on sixteen time giving it
> a
>>>resolution
>>>>of 2 to the sixteenth power.
>>>>
>>>>In other words, asking if the bits under the sample would sound is like
>>>asking
>>>>how the road would drive if it were 30 feet underground.
>>>>
>>>>Now then, to get back to the original argument, people like me (and I
> think
>>>>Dedric but I'll let him speak for himself) get a little hacked off when
>>>someone
>>>>says, 'you have to use your ears' when it's possible using various
>>>>computer
>>>>tools to check exactly how many of those samples match in two given
>>>>files.
>>>>The nulling trick is just a very easy way to get a quick read on one
>>>>aspect,
>>>>which is to answer the question 'do these two files match?' But there
> are
>>>>others and I've used them. And the sameness between properly written (by
>>>>which I mean lacking in serious bugs) audio applications is startling
> and
>>>>their differences so minor that other errors (analog cables, dust on the
>>>>speaker cone, humidity and temperature in the room) are far more likely
>>>to
>>>>cause a difference.
>>>>
>>>>Personally I think this all stems from romanticism about music and the
>>purity
>>>>of art. I have yet to hear someone tell me they need financial
>>>>calculations
>>>>down to 25 decimal points. They need them done to (at most) five decimal
>>>>points because the smallest commonly used financial divisor is the basis
>>>>point, or one one hundredth of a penny. So internally you calculate to
>>five
>>>>decimal places and round up or down from there and get on with your
>>>>life.
>>>>As geeky as finance guys can get, nobody ever says, 'You know, Thad,
>>>>that
>>>>last basis point just isn't really punchy enough for this deal. LBO guys
>>>>need really punchy returns, so can you run that calculation out a few
> more
>>>>bits to get a punchier basis point?' Scientists are also extremely
>>>>careful
>>>>to keep 'false precision' out of their calculations, so if one
>>>>instrument
>>>>will measure to four decimal points and the others will measure to 12
> they
>>>>understand that everything the higher resolution instruments measure
>>>>beyond
>>>>four accurate decimal points is worthless. They usually won't even
>>>>record
>>>>the data to be sure they don't claim greater precision than they have,
>>because
>>>>that's considered a horribly embarrassing junior high school mistake.
> But
>>>>musicians and audio engineers think that just because the data is sound
>>>data
>>>>somehow it enters a nebulous zone where that last one hundredth of a
>>>>penny
>>>>can be punchier. Hey, if it gets you through the day, that's fine by me,
>>>>but there are things about digital audio that can be proven true or
>>>>false
>>>>using the data. For things that can't be proven true or false with the
>>data
>>>>itself there is ABY testing, which is a controlled way to use the most
>>precise
>>>>audio measuring instruments available (our ears, at least until bats
>>>>will
>>>>wear headphones) to see if things sound different. When it's not in the
>>>data,
>>>>and it's not in the ABY, I say it doesn't exist.
>>>>
>>>>TCB
>>>>
>>>>"Neil" <IUOIU@OIU.com> wrote:
>>>>>
>>>>>Dedric - first of all, great explanation - esp. your 2nd
>>>>>paragraph. Next, let's take a look at something in the form of
>>>>>the best "graph" I can do in this NG's format... let's assume
>>>>>that each dot in the simple graph below is a sample point on a
>>>>>segment of a waveform, and let's futher assume that each "I"
>>>>>below represents four bits (I don't want to make it too
>>>>>vertically large, for ease of reading) - so we're dealing with
>>>>>a 16-bit wav file, with the 5th "dot" from the start point on
>>>>>the left being a full-amplitude, zero-db-line 16 bit sample.
>>>>>
>>>>>Now.... really, all I have to do to get a "null" is to have the
>>>>>amplitude match at each "dot" on the waveform, yes? This, of
>>>>>course, is a very simplistic graphic example, so bear with
>>>>>me... but if I have each "dot" matching in amplitude &
>>>>>therefore can get a null, what about the bits & content thereof
>>>>>in between the extremes between the maxes & zero-line
>>>>>crossings? Are you saying that there can be no variables in
>>>>>sound between those sections that would still result in a null?
>>>>>What about all the "I"'s that represent bits in between the
>>>>>maxes & the minimums?
>>>>>
>>>>> .
>>>>> . I .
>>>>> . I I I . What about the stuff in here?
>>>>> . I I I I I . .....or in here????
>>>>>. I I I I I I I .
>>>>>-------------------------------------
>>>>> . I I I I I I I .
>>>>> . I I I I I . Again, what about this region?
>>>>> . I I I . ... or this region?
>>>>> . I .
>>>>> .
>>>>>
>>>>>Neil
>>>>
>>>
>>
>
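TCB's half-by-half description above is a successive-approximation converter in miniature. A small, illustrative Python rendering of exactly that loop (one "top half or bottom half?" question per bit) looks like this:

```python
def sar_quantize(voltage, bits=16, lo=-1.0, hi=1.0):
    """Successive approximation: each bit answers 'is the voltage in the
    top or bottom half of the remaining range?', narrowing the range
    each time. Sixteen passes give 2**16 possible codes."""
    code = 0
    for _ in range(bits):
        mid = (lo + hi) / 2.0
        code <<= 1
        if voltage >= mid:   # top half -> write a 1, keep the upper range
            code |= 1
            lo = mid
        else:                # bottom half -> write a 0, keep the lower range
            hi = mid
    return code

print(sar_quantize(-1.0))      # 0      (bottom of the range)
print(sar_quantize(0.0))       # 32768  (dead center)
print(sar_quantize(0.99999))   # 65535  (top step)
```

Which also illustrates TCB's road analogy: there are no "bits under the sample" to listen to separately; the bits are just the running answers to this halving game.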
Re: (No subject) [message #77326 is a reply to message #77325]
Fri, 22 December 2006 10:46
Jamie K
Messages: 1115 Registered: July 2006
Senior Member
True, every part of the signal chain is important. But I don't think
anyone here is assuming otherwise.
For testing purposes, the monitors and room can be eliminated as a
variable by using the same monitors and room when auditioning the output
of several DAWs. And clamp your head into the exact same position. :^)
The idea is to design any test so that all variables except the DAWs are
eliminated. Then, if there is a difference, it can only be because the
DAWs handle audio files differently. And if there is no difference, they
handle audio files identically.
Cheers,
-Jamie
www.JamieKrutz.com
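In code, the all-variables-eliminated version of that test looks like this: run the same samples through two gain paths that should be mathematically identical, flip the polarity of one, and sum. A hedged sketch, with plain Python standing in for the two DAWs (the -6 dB/+6 dB path uses power-of-two scaling, which is exact in floating point, so the paths must match bit for bit):

```python
import random

random.seed(42)
samples = [random.uniform(-1.0, 1.0) for _ in range(10_000)]

# 'DAW A': unity gain.
# 'DAW B': roughly -6 dB (a factor of 0.5) then +6 dB (a factor of 2.0).
daw_a = samples
daw_b = [(s * 0.5) * 2.0 for s in samples]

# Null test: invert one path and sum.
residual = [a + (-b) for a, b in zip(daw_a, daw_b)]
print(max(abs(r) for r in residual))   # 0.0 -> a perfect null
```

If this doesn't come out zero, the difference is in the test setup (or an outright bug), not in some inherent "sound" of the math, which is exactly the point Dedric and TCB have been making.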
erlilo wrote:
> ....It seems you all have forgot to talk about the difference with listening
> from different monitors, with different "room problems", that is summing the
> results to our ears as a last instance.....
>
> Erling
>
>
> "LaMont" <jjdpro@ameritech.net> skrev i melding news:458c0fc6$1@linux...
>> Thad,
>> I think your points are valid. However I think the reason most Recording
>> engineers don't like to talk Audio science, is because of the
>> "unexplainable"
>> anomolies that occurr with sound. Matters not if it's digital or analog,
>> but rather how does it sound..
>>
>> We need factions like AES who discuss such theorectical and new ideas and
>> advancements in audio reproduction. However, once the science down, then
>> comes the art of it all.. Music is still the reason for whatwe are
>> discussing.
>> Music is emotional, yet is a science as well.
>>
>> For your camp to continue to de-value the human side (use your ears) of
>> the
>> equation is not right as well.
>>
>> I think both sides are right, but the science campers cannot speak to a
>> guy
>> who's main tool is his "ears" and not a scope.
>>
>>
>> "TCB" <nobody@ishere.com> wrote:
>>> Actually, I wasn't referring specifically to you, I hear similar things
>> all
>>> the time all over the place. The first time I went through this on this
>> forum
>>> a couple of years ago was the Great CD Burning Speed Debate. In that one
>>> Derek and I came up with about a gazillion ways to show that you could
>>> rip-burn-rip-burn
>>> over and over again at all kinds of different speeds and wind up with
>>> exactly
>>> the same data or audio CD. And I mean the same as in I slurped the whole
>>> audio file *as a string* into perl and checked the samples. Having done
>> that,
>>> and thereby proven beyond a shadow of a doubt, I was told, roughly, that
>>> clearly I couldn't hear well enough for these esoteric discussions. 'Use
>>> your ears, dude.'
>>>
>>> So, it is possible to write a DAW with a filter on the master bus? Yes,
>> of
>>> course it is. Why anyone would want to do such a thing is beyond me since
>>> there are conventions that are pretty much constant throughout the digital
>>> audio world about how signals should be mixed together. So if DAW X is a
>>> little more present in the second to the top octave (I think you mentioned
>>> one being so) I would call that either a bug or mistaken perception. If
>> I
>>> could do the export file, flip polarity trick and the files didn't null
>> I'd
>>> say, 'Interesting, let's be sure my test is good. Is there an EQ on a
>>> track
>>> in one mix and not the other? Is there a group track that is doubling the
>>> guitars in one mix and not the other?' If, on the other hand, the tracks
>>> did null I'd say, 'Hmmmmmm, maybe I'm hearing a difference where there
>>> isn't
>>> one.'
>>>
>>> Lastly, and this is just a quirk for me, I find it odd that musicians and
>>> audio engineers are so disinterested in taking seriously expert opinion.
>>> This is rampant in the audiophile world where off the record the engineers
>>> themselves will tell you they're not sure the $3k speaker cables they used
>>> to hook up their new speaker line makes any difference. But with Neil, for
>>> example, his mixes are 20 times better than mine for that kind of music.
>>> If he gave me advice and opinion I would take it very seriously. But for
>>> some reason when people like me and Dedric, who have developed extensive
>>> knowledge into how computers work, are very often brushed off very
>>> quickly.
>>> Dedric isn't even a jerk about, while I'm a jerk about it only sometimes,
>>> so I find that reaction to be, well, odd. But like I said, whatever gets
>>> ya through the day, I'm not looking for converts and nobody is paying me
>>> to post here.
>>>
>>> TCB
>>>
>>> "LaMont" <jjdpro@ameritech.net> wrote:
>>>> Thad, I assume that you ar ereferring to me (using your ears).
>>>>
>>>> Look, I think we are talking about two differnt things here:
>>>>
>>>> 1) Digital data
>>>>
>>>> 2) Software (DAWS) coding
>>>>
>>>> You and Dedric have been concentrating on the laws of Digital audio.
>>>> That's
>>>> fine. But, I'm talking about the Software that we use to decode our
>>>> digital
>>>> audio.
>>>>
>>>> Like my previous post states, are we saying that DAW software can't be
>> written
>>>> for certain sonic results?
>>>>
>>>>
>>>> "TCB" <nobody@ishere.com> wrote:
>>>>> Neil,
>>>>>
>>>>> You're using an analog waveform that is leading you to think incorrectly
>>>>> about sampling. This is (very roughly) how it would look if you're
>>>>> working
>>>>> with 16 bit samples.
>>>>>
>>>>> 0101010101010101
>>>>> 0101010111011101
>>>>> 0101110111010101
>>>>> 0101111111010100
>>>>> 0101110101111101
>>>>> 0111011101111101
>>>>> 0111110101110101
>>>>> 0100010111000100
>>>>> 0100011101010101
>>>>> 0001011100010101
>>>>> 0000010111111100
>>>>> 0001000001010111
>>>>> 0100000111110101
>>>>> 0111011101010000
>>>>> 0101011101000000
>>>>> 0101011111000101
>>>>> 0101010101010101
>>>>>
>>>>> The easiest way to think of how the sampler works is that it looks at
>> the
>>>>> incoming voltage to the converter and asks 'Is this in the top or bottom
>>>>> half of the possible amplitudes I can measure.' If it's in the top half
>>>> it
>>>>> writes a 1, if it's in the bottom half, it writes a zero. The next bit
>>> asks,
>>>>> 'Now that I know which half of my measurable voltage I'm looking at, is
>>>> the
>>>>> voltage in the top half of that half or the bottom half?' That's bit
>>>>> number
>>>>> two. Then it's on to, 'Now that I know what quarter it's it, is it in
>> the
>>>>> top or bottom half of that quarter?' And so on sixteen time giving it
>> a
>>>> resolution
>>>>> of 2 to the sixteenth power.
>>>>>
>>>>> In other words, asking if the bits under the sample would sound is like
>>>> asking
>>>>> how the road would drive if it were 30 feet underground.
>>>>>
>>>>> Now then, to get back to the original argument, people like me (and I
>> think
>>>>> Dedric but I'll let him speak for himself) get a little hacked off when
>>>> someone
>>>>> says, 'you have to use your ears' when it's possible using various
>>>>> computer
>>>>> tools to check exactly how many of those samples match in two given
>>>>> files.
>>>>> The nulling trick is just a very easy way to get a quick read on one
>>>>> aspect,
>>>>> which is to answer the question 'do these two files match?' But there
>> are
>>>>> others and I've used them. And the sameness between properly written (by
>>>>> which I mean lacking in serious bugs) audio applications is startling
>> and
>>>>> their differences so minor that other errors (analog cables, dust on the
>>>>> speaker cone, humidity and temperature in the room) are far more likely
>>>> to
>>>>> cause a difference.
>>>>>
>>>>> Personally I think this all stems from romanticism about music and the
>>> purity
>>>>> of art. I have yet to hear someone tell me they need financial
>>>>> calculations
>>>>> down to 25 decimal points. They need them done to (at most) five decimal
>>>>> points because the smallest commonly used financial divisor is the basis
>>>>> point, or one one hundredth of a penny. So internally you calculate to
>>> five
>>>>> decimal places and round up or down from there and get on with your
>>>>> life.
>>>>> As geeky as finance guys can get, nobody ever says, 'You know, Thad,
>>>>> that
>>>>> last basis point just isn't really punchy enough for this deal. LBO guys
>>>>> need really punchy returns, so can you run that calculation out a few
>> more
>>>>> bits to get a punchier basis point?' Scientists are also extremely
>>>>> careful
>>>>> to keep 'false precision' out of their calculations, so if one
>>>>> instrument
>>>>> will measure to four decimal points and the others will measure to 12
>> they
>>>>> understand that everything the higher resolution instruments measure
>>>>> beyond
>>>>> four accurate decimal points is worthless. They usually won't even
>>>>> record
>>>>> the data to be sure they don't claim greater precision than they have,
>>> because
>>>>> that's considered a horribly embarrassing junior high school mistake.
>> But
>>>>> musicians and audio engineers think that just because the data is sound
>>>> data
>>>>> somehow it enters a nebulous zone where that last one hundredth of a
>>>>> penny
>>>>> can be punchier. Hey, if it gets you through the day, that's fine by me,
>>>>> but there are things about digital audio that can be proven true or
>>>>> false
>>>>> using the data. For things that can't be proven true or false with the
>>> data
>>>>> itself there is ABY testing, which is a controlled way to use the most
>>> precise
>>>>> audio measuring instruments available (our ears, at least until bats
>>>>> will
>>>>> wear headphones) to see if things sound different. When it's not in the
>>>> data,
>>>>> and it's not in the ABY, I say it doesn't exist.
>>>>>
>>>>> TCB
>>>>>
>>>>> "Neil" <IUOIU@OIU.com> wrote:
>>>>>> Dedric - first of all, great explanation - esp. your 2nd
>>>>>> paragraph. Next, let's take a look at something in the form of
>>>>>> the best "graph" I can do in this NG's format... let's assume
>>>>>> that each dot in the simple graph below is a sample point on a
>>>>>> segment of a waveform, and let's futher assume that each "I"
>>>>>> below represents four bits (I don't want to make it too
>>>>>> vertically large, for ease of reading) - so we're dealing with
>>>>>> a 16-bit wav file, with the 5th "dot" from the start point on
>>>>>> the left being a full-amplitude, zero-db-line 16 bit sample.
>>>>>>
>>>>>> Now.... really, all I have to do to get a "null" is to have the
>>>>>> amplitude match at each "dot" on the waveform, yes? This, of
>>>>>> course, is a very simplistic graphic example, so bear with
>>>>>> me... but if I have each "dot" matching in amplitude &
>>>>>> therefore can get a null, what about the bits & content thereof
>>>>>> in between the extremes between the maxes & zero-line
>>>>>> crossings? Are you saying that there can be no variables in
>>>>>> sound between those sections that would still result in a null?
>>>>>> What about all the "I"'s that represent bits in between the
>>>>>> maxes & the minimums?
>>>>>>
>>>>>> .
>>>>>> . I .
>>>>>> . I I I . What about the stuff in here?
>>>>>> . I I I I I . .....or in here????
>>>>>> . I I I I I I I .
>>>>>> -------------------------------------
>>>>>> . I I I I I I I .
>>>>>> . I I I I I . Again, what about this region?
>>>>>> . I I I . ... or this region?
>>>>>> . I .
>>>>>> .
>>>>>>
>>>>>> Neil
>
>
|
|
|
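[Editorial aside: TCB's bit-by-bit converter description quoted above is essentially successive approximation. A minimal, hypothetical sketch of that idea, assuming an input voltage normalized to the range [0, 1) — not any real converter's code:]

```python
def successive_approximation(voltage, bits=16):
    """Quantize a normalized voltage in [0, 1) one bit at a time.

    Each step asks: is the voltage in the top or bottom half of the
    range still under consideration? Top half writes a 1, bottom half
    writes a 0, exactly as the quoted post describes.
    """
    lo, hi = 0.0, 1.0
    out = []
    for _ in range(bits):
        mid = (lo + hi) / 2
        if voltage >= mid:
            out.append('1')
            lo = mid          # keep narrowing within the top half
        else:
            out.append('0')
            hi = mid          # keep narrowing within the bottom half
    return ''.join(out)

# Sixteen questions give 2**16 = 65536 distinguishable levels.
print(successive_approximation(0.5))   # 1000000000000000
```

Each extra bit halves the remaining uncertainty, which is where the "2 to the sixteenth power" resolution in the quoted post comes from.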
Re: (No subject) [message #77327 is a reply to message #77322] |
Fri, 22 December 2006 12:07 |
TCB
Messages: 1261 Registered: July 2007
|
Senior Member |
|
|
First of all, I don't represent a 'camp.' Second, music is emotional for us,
not the computer. The computer doesn't care if it's calculating reverb tails
or running SQL queries. So there _are_ some things about digital audio that
we can say are true or false absolutely, or to a certain predictable degree
of error. To argue against those things with the 'use your ears' argument
is as useful as arguing about gravity, and whether you believe in gravity
or not you still ain't gonna fall up when you jump off the park bench. For
the rest, I've never said here or anywhere that ears shouldn't be used, but
only that claims should be backed up by repeatable, statistically significant
results in careful tests. So 'everybody who can hear will hear this' doesn't
cut it for me. 'We did this test under these conditions and these were the
results' does.
TCB
"LaMont" <jjdpro@ameritech.net> wrote:
>
>Thad,
>I think your points are valid. However I think the reason most Recording
>engineers don't like to talk Audio science, is because of the "unexplainable"
>anomolies that occurr with sound. Matters not if it's digital or analog,
>but rather how does it sound..
>
>We need factions like AES who discuss such theorectical and new ideas and
>advancements in audio reproduction. However, once the science down, then
>comes the art of it all.. Music is still the reason for whatwe are discussing.
>Music is emotional, yet is a science as well.
>
>For your camp to continue to de-value the human side (use your ears) of
the
>equation is not right as well.
>
>I think both sides are right, but the science campers cannot speak to a
guy
>who's main tool is his "ears" and not a scope.
>
>
>"TCB" <nobody@ishere.com> wrote:
>>
>>Actually, I wasn't referring specifically to you, I hear similar things
>all
>>the time all over the place. The first time I went through this on this
>forum
>>a couple of years ago was the Great CD Burning Speed Debate. In that one
>>Derek and I came up with about a gazillion ways to show that you could
rip-burn-rip-burn
>>over and over again at all kinds of different speeds and wind up with exactly
>>the same data or audio CD. And I mean the same as in I slurped the whole
>>audio file *as a string* into perl and checked the samples. Having done
>that,
>>and thereby proven beyond a shadow of a doubt, I was told, roughly, that
>>clearly I couldn't hear well enough for these esoteric discussions. 'Use
>>your ears, dude.'
>>
>>So, it is possible to write a DAW with a filter on the master bus? Yes,
>of
>>course it is. Why anyone would want to do such a thing is beyond me since
>>there are conventions that are pretty much constant throughout the digital
>>audio world about how signals should be mixed together. So if DAW X is
a
>>little more present in the second to the top octave (I think you mentioned
>>one being so) I would call that either a bug or mistaken perception. If
>I
>>could do the export file, flip polarity trick and the files didn't null
>I'd
>>say, 'Interesting, let's be sure my test is good. Is there an EQ on a track
>>in one mix and not the other? Is there a group track that is doubling the
>>guitars in one mix and not the other?' If, on the other hand, the tracks
>>did null I'd say, 'Hmmmmmm, maybe I'm hearing a difference where there
isn't
>>one.'
>>
>>Lastly, and this is just a quirk for me, I find it odd that musicians and
>>audio engineers are so disinterested in taking seriously expert opinion.
>>This is rampant in the audiophile world where off the record the engineers
>>themselves will tell you they're not sure the $3k speaker cables they used
>>to hook up their new speaker line makes any difference. But with Neil,
for
>>example, his mixes are 20 times better than mine for that kind of music.
>>If he gave me advice and opinion I would take it very seriously. But for
>>some reason when people like me and Dedric, who have developed extensive
>>knowledge into how computers work, are very often brushed off very quickly.
>>Dedric isn't even a jerk about, while I'm a jerk about it only sometimes,
>>so I find that reaction to be, well, odd. But like I said, whatever gets
>>ya through the day, I'm not looking for converts and nobody is paying me
>>to post here.
>>
>>TCB
>>
>>"LaMont" <jjdpro@ameritech.net> wrote:
>>>
>>>Thad, I assume that you ar ereferring to me (using your ears).
>>>
>>>Look, I think we are talking about two differnt things here:
>>>
>>>1) Digital data
>>>
>>>2) Software (DAWS) coding
>>>
>>>You and Dedric have been concentrating on the laws of Digital audio.
That's
>>>fine. But, I'm talking about the Software that we use to decode our digital
>>>audio.
>>>
>>>Like my previous post states, are we saying that DAW software can't be
>written
>>>for certain sonic results?
>>>
>>>
>>>"TCB" <nobody@ishere.com> wrote:
>>>>
>>>>Neil,
>>>>
>>>>You're using an analog waveform that is leading you to think incorrectly
>>>>about sampling. This is (very roughly) how it would look if you're working
>>>>with 16 bit samples.
>>>>
>>>>0101010101010101
>>>>0101010111011101
>>>>0101110111010101
>>>>0101111111010100
>>>>0101110101111101
>>>>0111011101111101
>>>>0111110101110101
>>>>0100010111000100
>>>>0100011101010101
>>>>0001011100010101
>>>>0000010111111100
>>>>0001000001010111
>>>>0100000111110101
>>>>0111011101010000
>>>>0101011101000000
>>>>0101011111000101
>>>>0101010101010101
>>>>
>>>>The easiest way to think of how the sampler works is that it looks at
>the
>>>>incoming voltage to the converter and asks 'Is this in the top or bottom
>>>>half of the possible amplitudes I can measure.' If it's in the top half
>>>it
>>>>writes a 1, if it's in the bottom half, it writes a zero. The next bit
>>asks,
>>>>'Now that I know which half of my measurable voltage I'm looking at,
is
>>>the
>>>>voltage in the top half of that half or the bottom half?' That's bit
number
>>>>two. Then it's on to, 'Now that I know what quarter it's it, is it in
>the
>>>>top or bottom half of that quarter?' And so on sixteen time giving it
>a
>>>resolution
>>>>of 2 to the sixteenth power.
>>>>
>>>>In other words, asking if the bits under the sample would sound is like
>>>asking
>>>>how the road would drive if it were 30 feet underground.
>>>>
>>>>Now then, to get back to the original argument, people like me (and I
>think
>>>>Dedric but I'll let him speak for himself) get a little hacked off when
>>>someone
>>>>says, 'you have to use your ears' when it's possible using various computer
>>>>tools to check exactly how many of those samples match in two given files.
>>>>The nulling trick is just a very easy way to get a quick read on one
aspect,
>>>>which is to answer the question 'do these two files match?' But there
>are
>>>>others and I've used them. And the sameness between properly written
(by
>>>>which I mean lacking in serious bugs) audio applications is startling
>and
>>>>their differences so minor that other errors (analog cables, dust on
the
>>>>speaker cone, humidity and temperature in the room) are far more likely
>>>to
>>>>cause a difference.
>>>>
>>>>Personally I think this all stems from romanticism about music and the
>>purity
>>>>of art. I have yet to hear someone tell me they need financial calculations
>>>>down to 25 decimal points. They need them done to (at most) five decimal
>>>>points because the smallest commonly used financial divisor is the basis
>>>>point, or one one hundredth of a penny. So internally you calculate to
>>five
>>>>decimal places and round up or down from there and get on with your life.
>>>>As geeky as finance guys can get, nobody ever says, 'You know, Thad,
that
>>>>last basis point just isn't really punchy enough for this deal. LBO guys
>>>>need really punchy returns, so can you run that calculation out a few
>more
>>>>bits to get a punchier basis point?' Scientists are also extremely careful
>>>>to keep 'false precision' out of their calculations, so if one instrument
>>>>will measure to four decimal points and the others will measure to 12
>they
>>>>understand that everything the higher resolution instruments measure
beyond
>>>>four accurate decimal points is worthless. They usually won't even record
>>>>the data to be sure they don't claim greater precision than they have,
>>because
>>>>that's considered a horribly embarrassing junior high school mistake.
>But
>>>>musicians and audio engineers think that just because the data is sound
>>>data
>>>>somehow it enters a nebulous zone where that last one hundredth of a
penny
>>>>can be punchier. Hey, if it gets you through the day, that's fine by
me,
>>>>but there are things about digital audio that can be proven true or false
>>>>using the data. For things that can't be proven true or false with the
>>data
>>>>itself there is ABY testing, which is a controlled way to use the most
>>precise
>>>>audio measuring instruments available (our ears, at least until bats
will
>>>>wear headphones) to see if things sound different. When it's not in the
>>>data,
>>>>and it's not in the ABY, I say it doesn't exist.
>>>>
>>>>TCB
>>>>
>>>>"Neil" <IUOIU@OIU.com> wrote:
>>>>>
>>>>>Dedric - first of all, great explanation - esp. your 2nd
>>>>>paragraph. Next, let's take a look at something in the form of
>>>>>the best "graph" I can do in this NG's format... let's assume
>>>>>that each dot in the simple graph below is a sample point on a
>>>>>segment of a waveform, and let's futher assume that each "I"
>>>>>below represents four bits (I don't want to make it too
>>>>>vertically large, for ease of reading) - so we're dealing with
>>>>>a 16-bit wav file, with the 5th "dot" from the start point on
>>>>>the left being a full-amplitude, zero-db-line 16 bit sample.
>>>>>
>>>>>Now.... really, all I have to do to get a "null" is to have the
>>>>>amplitude match at each "dot" on the waveform, yes? This, of
>>>>>course, is a very simplistic graphic example, so bear with
>>>>>me... but if I have each "dot" matching in amplitude &
>>>>>therefore can get a null, what about the bits & content thereof
>>>>>in between the extremes between the maxes & zero-line
>>>>>crossings? Are you saying that there can be no variables in
>>>>>sound between those sections that would still result in a null?
>>>>>What about all the "I"'s that represent bits in between the
>>>>>maxes & the minimums?
>>>>>
>>>>> .
>>>>> . I .
>>>>> . I I I . What about the stuff in here?
>>>>> . I I I I I . .....or in here????
>>>>>. I I I I I I I .
>>>>>-------------------------------------
>>>>> . I I I I I I I .
>>>>> . I I I I I . Again, what about this region?
>>>>> . I I I . ... or this region?
>>>>> . I .
>>>>> .
>>>>>
>>>>>Neil
>>>>
>>>
>>
>
|
|
|
Re: (No subject) [message #77328 is a reply to message #77311] |
Fri, 22 December 2006 12:24 |
chuck duffy
Messages: 453 Registered: July 2005
|
Senior Member |
|
|
"TCB" <nobody@ishere.com> wrote:
"As geeky as finance guys can get, nobody ever says, 'You know, Thad, that
last basis point just isn't really punchy enough for this deal. LBO guys
need really punchy returns, so can you run that calculation out a few more
bits to get a punchier basis point?'"
Don't be so sure. A long, long time ago I wrote some "summing" code for
a general ledger. One of the outputs of the system was an income statement.
A long discussion ensued, amongst some very bright people, about where to
do the rounding. There were many camps.
Were we to round the individual transactions within a GL number, and sum
these?
Were we to sum the individual transactions unrounded, then round the total
of the GL number?
Were we to sum the unrounded GL numbers associated with a specific income
statement line, then round that total?
People are funny.
Chuck
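[Editorial aside: Chuck's three candidate rounding placements really can produce three different totals. A hypothetical illustration with made-up figures (not his actual ledger), using Python's decimal module:]

```python
from decimal import Decimal, ROUND_HALF_UP

def r(x):
    # round to whole currency units, half away from zero
    return x.quantize(Decimal('1'), rounding=ROUND_HALF_UP)

# two GL accounts, each holding fractional transactions (invented data)
gl = {
    '4000': [Decimal('0.7'), Decimal('0.7')],
    '4010': [Decimal('0.7'), Decimal('0.7')],
}

# 1) round each transaction, then sum everything
a = sum(r(t) for acct in gl.values() for t in acct)
# 2) sum each account unrounded, round the account total, then sum
b = sum(r(sum(acct)) for acct in gl.values())
# 3) sum everything unrounded, round once at the statement line
c = r(sum(t for acct in gl.values() for t in acct))

print(a, b, c)   # 4 2 3 — three defensible policies, three totals
```

No wonder the discussion amongst the bright people was long: all three placements are internally consistent, yet they disagree.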
|
|
|
Re: (No subject) [message #77331 is a reply to message #77328] |
Fri, 22 December 2006 13:18 |
TCB
Messages: 1261 Registered: July 2007
|
Senior Member |
|
|
But were the totals warm and punchy? If so, then you did the summing right.
Seriously, though, it can matter when this kind of rounding takes place.
A lot of times it's getting statements from custodial banks to match what's
internal, and they might round differently than the logic of the internal
database. And if you're running, say, $19 billion plus, those rounding errors
can add up to more than the cost of a smoothie or two. Still, in contrast
to audio people the finance types are interested in managing inevitable imprecision
instead of finding precision where there really is none. At least usually
they are . . .
TCB
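[Editorial aside: TCB's point that rounding placement adds up at scale is easy to make concrete with an invented book of accruals — per-item rounding can discard value that rounding the grand total once would keep:]

```python
from decimal import Decimal

cent = Decimal('0.01')

# a hypothetical book: half a cent accrued 100,000 times (made-up numbers)
accruals = [Decimal('0.005')] * 100_000

# Decimal's default rounding is banker's rounding (half to even), so
# each 0.005 rounds down to 0.00 when rounded per item...
rounded_each = sum(a.quantize(cent) for a in accruals)
# ...while rounding once at the end keeps every half cent.
rounded_once = sum(accruals).quantize(cent)

print(rounded_each, rounded_once)   # 0.00 500.00
```

A $500 swing from rounding placement alone; on a $19 billion book the same effect easily covers a smoothie or two.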
"chuck duffy" <c@c.com> wrote:
>
>"TCB" <nobody@ishere.com> wrote:
>
>"As geeky as finance guys can get, nobody ever says, 'You know, Thad, that
>last basis point just isn't really punchy enough for this deal. LBO guys
>need really punchy returns, so can you run that calculation out a few more
>bits to get a punchier basis point?'"
>
>Don't be so sure. A long, long time ago I wrote some "summing" code for
>a general ledger. One of the outputs of the system was an income statement.
>
>
>A long discussion ensued, amongst some very bright people, about where to
>do the rounding. There were many camps.
>
>Were we to round the individual transactions within a GL number, and sum
>these?
>
>Were we to sum the individual transactions unrounded, then round the total
>of the GL number?
>
>Were we to sum the unrounded GL numbers associated with a specific income
>statement line, then round that total?
>
>People are funny.
>
>Chuck
>
>
|
|
|
Re: (No subject)...What's up under the hood? [message #77333 is a reply to message #77324] |
Fri, 22 December 2006 13:57 |
Dedric Terry
Messages: 788 Registered: June 2007
|
Senior Member |
|
|
Lamont - what is the output chain you are using for each app when comparing
the file in Nuendo
vs ProTools? On the same PC, I presume (and is this PT HD or M-Powered)?
Since these can't use the same output driver, you would have to depend on
the D/A being
the same, but clocking will be different unless you have a master clock, and
both interfaces
are locking with the same accuracy. This was one of the issues that came up
at Lynn Fuston's
D/A converter shootout - when do you lock to external clock and incur the
resulting jitter,
and when do you trust the internal clock - and if you do lock externally,
how good is the PLL
in the slave device? These issues can cause audible changes in the top end
that have nothing to do
with the software itself. If you say that PTHD, through the same converter
output as Nuendo (via RME? Lynx?), using the same master clock, sounds
different playing a single audio file, then I take your word
for it. I can't tell you why that is happening - only that an audible
difference really shouldn't happen due
to the software alone - not with a single audio file, esp. since I've heard
and seen PTHD audio cancel with
native DAWs. Just passing a single 16 or 24 bit track down the buss to the
output driver should
be, and usually is, completely transparent, bit for bit.
The same audio file played through the same converters should only sound
different if something in
the chain is different - be it clocking, gain or some degree of unintended,
errant dsp processing. Every DAW should
pass a single audio file without altering a single bit. That's a basic level
of accuracy we should always
expect of any DAW. If that accuracy isn't there, you can be sure a heavy
mix will be altered in ways you
didn't intend, even though you would end up mixing with that factor in place
(e.g. you still mix for what
you want to hear regardless of what the platform does to each audio track or
channel).
In fact you should be able to send a stereo audio track out SPDIF or
lightpipe to another DAW, record it,
bring the recorded file back in, line them up to the first bit, and have
them cancel on an inverted phase
test. I did this with Nuendo and Cubase 4 on separate machines just to be
sure my master clocking and
slave sync was accurate - it worked perfectly.
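[Editorial aside: the inverted-phase null test Dedric describes is simple to sketch: flip the polarity of one render and sum it against the other, sample by sample. Identical files cancel to exactly zero. A minimal sketch with made-up integer sample data, not tied to any particular DAW:]

```python
def null_test(a, b):
    """Sum track a against polarity-inverted track b, sample by sample.

    A residual of all zeros means the two renders are bit-identical.
    """
    if len(a) != len(b):
        raise ValueError("align the files to the first sample before testing")
    return [x + (-y) for x, y in zip(a, b)]

render_1 = [0, 1203, -4096, 32767, -32768]   # invented 16-bit samples
render_2 = list(render_1)                    # a bit-identical second render

residual = null_test(render_1, render_2)
print(all(s == 0 for s in residual))   # True -> the renders cancel
```

Any gain offset, clock slip, or stray processing in one chain shows up immediately as a nonzero residual.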
Also be sure there isn't a variation in gain, even by 0.1 dB, between the
two. There shouldn't be,
and I wouldn't expect there to be one. Also, could PT be set for a different
pan law? That shouldn't make a
difference even when comparing two mono panned files to their stereo
interleaved equivalent, but for the sake
of completeness it's worth checking as well. A variation in the output
chain, be it drivers, audio card,
or converters, would be the most likely culprit here.
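[Editorial aside: even the 0.1 dB mismatch Dedric mentions is a measurable amplitude ratio, which is why it would keep a null test from cancelling. The conversion is the standard 20·log10 relation:]

```python
import math

def db_to_ratio(db):
    # decibels to linear amplitude ratio
    return 10 ** (db / 20)

def ratio_to_db(ratio):
    # linear amplitude ratio to decibels
    return 20 * math.log10(ratio)

# a 0.1 dB gain mismatch is roughly a 1.16% amplitude difference
print(round(db_to_ratio(0.1), 4))   # 1.0116
# unity gain is exactly 0 dB
print(ratio_to_db(1.0))             # 0.0
```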
The reason DAW manufacturers wouldn't add any sonic "character"
intentionally is that the
ultimate goal from day one with recording has been to accurately reproduce
what we hear.
We developed a musical penchant for sonic character because the hardware
just wasn't accurate,
and what it did often sent us down new creative paths - even if by force -
and we decided we
preferred it that way.
Your point about what goes into the feature presets to sell synths is right
for sure, but synths are about
character and getting that "perfect piano" or crystal clear bell pad, or fat
punchy bass without spending
a mint on development, adding 50G onboard sample libraries, or costing $15k,
so what they
lack in actual synthesis capabilities, they make up with EQ and effects on
the output. That's been the case
for years, at least since synths first had onboard effects. But even with
modern synths such as the Fantom,
Tritons, etc, which are great synths all around, of course the coolest,
widest and biggest patches
will make the biggest impression - so in come the EQs, limiters, comps,
reverbs, chorus, etc. The best
way to find out if a synth is really good is to bypass all effects and see
what happens. Most are pretty
good these days, but about half the time, there are presets that fall
completely flat in fx bypass.
DAWs aren't designed to put a sonic fingerprint on a sound the way synths
are - they are designed
to *not* add anything - to pass through what we create as users, with no
alteration (or as little as possible)
beyond what we add with intentional processing (EQ, comps, etc). Developers
would find no pride
in hearing that their DAW sounds any different from whatever is being
played back in it,
and the concept is contrary to what AES and IEEE proceedings on the issue
propose in general
digital audio discussions, white papers, etc.
What ID ended up doing with Paris (at least from what I gather per Chuck's
findings - so correct me if I'm missing part of the equation Chuck),
was to drop the track gain by 20 dB or so, then add it back at the master buss
to create the effect of headroom (probably
because the master buss is really summing on the card, and they have more
headroom there than on the tracks
where native plugins might be used). I don't know if Paris passed 32-bit
float files to the EDS card, but sort of
doubt it. I think Chuck has clarified this at one point, but don't recall
the answer.
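[Editorial aside: the headroom trick Dedric attributes to Paris — drop each track about 20 dB, make it up at the master buss — is essentially lossless in 32-bit float, because a float carries a scaled mantissa, whereas the same round trip through a 24-bit fixed-point quantizer truncates low-level detail. A hypothetical sketch, not Paris's actual signal path:]

```python
import struct

def as_float32(x):
    # round-trip a value through a 32-bit float, the precision of a float DAW path
    return struct.unpack('f', struct.pack('f', x))[0]

gain_down = 10 ** (-20 / 20)    # -20 dB = multiply by 0.1
gain_up = 10 ** (20 / 20)       # +20 dB makeup at the master buss

sample = as_float32(0.123456)
dropped = as_float32(sample * gain_down)
restored = as_float32(dropped * gain_up)

# float survives the trip to within rounding of the last mantissa bit
print(abs(restored - sample) < 1e-7)   # True

# the same trip through a 24-bit fixed-point stage truncates low bits
fixed = int(sample * gain_down * (2 ** 23))      # quantize at the lower level
refixed = (fixed / (2 ** 23)) * gain_up
print(refixed == sample)                         # False: small error remains
```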
Also what Paris did is use a greater bit depth on the hardware than ProTools
did - at the time PT was just
bringing Mix+ systems to market, or they had been out for a year or two (if I
have my timeline right) - they
were 24-bit fixed all the way through. Logic and Cubase were native DAWs,
but native was still too slow
to compete with hardware hybrids. Paris trumped them all by running 32-bit
float natively (not new really, but
better than sticking to 24-bit) and 56 or so bits in hardware instead of
going to Motorola DSPs at 24.
The onboard effects were also a step up from anything out there, so the demo
did sound good.
I don't recall which, but one of the demos, imho, wasn't so good (some
sloppy production and
vocals in spots, IIRC), so I only listened to it once. ;-)
Coupled with the gain drop and buss makeup, this all gave it a "headroom" no
one else had. With very nice
onboard effects, Paris jumped ahead of anything else out there easily, and
still respectably holds its own today
in that department.
Most demos I hear (when I listen to them) vary in quality, usually not so
great in some area. But if a demo does
sound great, it at least says that the product is capable of
that level of performance, and it can
only help improve a prospective buyer's impression of it.
Regards,
Dedric
"LaMont " <jjdpro@ameritech.net> wrote in message news:458c14c0$1@linux...
>
> Dedric good post..
>
> However, I have PT-M-Powered/M-audio 410 interface for my laptop and it
> has
> that same sound (no eq, zero fader) that HD does. I know their use the
> same
> 48 bit fix mixer. I load up the same file in Nuendo (no eq, zero
> fader)..results.
> different sonic character.
>
> PT having a top end touch..Nuendo, nice smooth(flat) sound. And I'm just
> taking about a stereo wav file nulled with no eq..nothing ..zilch..nada..
>
> Now, there are devices (keyboards, dum machines) on the market today that
> have a Master Buss Compressor and EQ set to on with the top end notched
> up.
> Why? because it gives their product an competitive advantageover the
> competition..
> Ex: Yahama's Motif ES, Akai's MPC 1000, 2500, Roland's Fantom.
>
> So, why would'nt a DAW manufactuer code in an extra (ooommf) to make their
> DAW sound better. Especially, given the "I hate Digtal Summing" crowd?
> And,
> If I'm a DAW manufactuer, what would give my product a sonic edge over the
> competition?
>
> We live in the "louder is better" audio world these days, so a DAW that
> can
> catch my attention 'sonically" will probaly will get the sell. That's what
> happend to me back in 1997 when I heard Paris. I was floored!!! Still to
> this day, nothing has floored me like that "Road House Blues Demo" I heard
> on Paris.
>
> Was it the hardware ? was it the software. I remember talking with Edmund
> at the 2000 winter Namm, and told me that he & Steve set out to reproduce
> the sonics of big buck analog board (eq's) and all.. And, summing was a
> big
> big issue for them because they (ID) thought that nobody has gotten
> it(summing)
> right. And by right, they meant, behaved like a console with a wide lane
> for all of those tracks..
>
>
>
>
> "Dedric Terry" <dedric@echomg.com> wrote:
>>"LaMont" <jjdpro@ameritech.net> wrote in message news:458be8d5$1@linux...
>>>
>>> Okay...
>>> I guess what I'm saying is this:
>>>
>>> -Is it possible that diferent DAW manufactuers "code" their app
>>> differently
>>> for sound results.
>>
>>Of course it is *possible* to do this, but only if the DAW has a specific
>
>>sound shaping purpose
>>beyond normal summing/mixing. Users talk about wanting developers to add
> a
>>"Neve sound" or "API sound" option to summing engines,
>>but that's really impractical given the amount of dsp required to make a
>
>>decent emulation (with convolution, dynamic EQ functions,
>>etc). For sake of not eating up all cpu processing, that could likely
>>only
>
>>surface as is a built in EQ, which
>>no one wants universally in summing, and anyone can add at will already.
>>
>>So it hasn't happened yet and isn't likely to as it detours from the basic
>
>>tenant of audio recording - recreate what comes in as
>>accurately as possible.
>>
>>What Digi did in recoding their summing engine was try to recover some
>>of the damage done by the 24-bit buss in Mix systems. Motorola 56k dsps
> are
>>24-bit fixed point chips and I think
>>the new generation (321?) still is, but they use double words now for
>>48-bits). And though plugins could process at 48-bit by
>>doubling up and using upper and lower 24-bit words for 48-bit outputs, the
>
>>buss
>>between chips was 24-bits, so they had to dither to 24-bits after every
>
>>plugin. The mixer (if I recall correctly) also
>>had a 24-bit buss, so what Digi did is to add a dither stage to the mixer
> to
>>prevent this
>>constant truncation of data. 24-bits isn't enough to cover summing for
> more
>>than a few tracks without
>>losing information in the 16-bit world, and in the 24-bit world some
>>information will be lost, at least at the lowest levels.
>>
>>Adding a dither stage (though I think they did more than that - perhaps
>
>>implement a 48-bit double word stage as well),
>>simply smoothed over the truncation that was happening, but it didn't
>>solve
>
>>the problem, so with HD
>>they went to a double-word path - throughout I believe, including the path
>
>>between chips. I believe the chips
>>are still 24-bit, but by doubling up the processing (yes at a cost of
>>twice
>
>>the overhead), they get a 48-bit engine.
>>This not only provided better headroom, but greater resolution. Higher
> bit
>>depths subdivide the amplitude with greater resolution, and that's
>>really where we get the definition of dynamic range - by lowering the
>>signal
>
>>to quantization noise ratio.
>>
>>With DAWs that use 32-bit floating point math all the way through, the
>>only
>
>>reason for altering the summing
>>is by error, and that's an error that would actually be hard to make and
> get
>>past a very basic alpha stage of testing.
>>There is a small difference in fixed point math and floating point math,
> or
>>at least a theoretical difference in how it affects audio
>>in certain cases, but not necessarily in the result for calculating gain
> in
>>either for the same audio file. Where any differences might show up is
>
>>complicated, and I believe only appear at levels below 24-bit (or in
>>headroom with tracks pushed beyond 0dBFS), or when/if
>>there areany differences in where each amplitude level is quantized.
>>
>>Obviously there can be differences if the DAW has to use varying bit
>>depths
>
>>throughout a single summing path to accomodate hardware
>>as well as software summing, since there may be truncation or rounding
>>along
>
>>the way, but that impacts the lowest bit
>>level, and hence - spacial reproduction, reverb tails perhaps, and
>>"depth",
>
>>not the levels most music so the differences are most
>>often more subtle than not. But most modern DAWs have eliminated those
>
>>"rough edges" in the math by increasing the bit depth to accomodate normal
>
>>summing required for mixing audio.
>>
>>So with Lynn's unity gain summing test (A files on the CD I believe), DAWs
>
>>were never asked to sum beyond 24-bits,
>>at least not on the upper end of the dynamic range, so everything that
>>could
>
>>represent 24-bits accurately would cancel. The only ones
>>that didn't were ones that had a different bit depth and/or gain structure
>
>>whether hybrid or native
>>(e.g. Paris' subtracting 20dB from tracks and adding it to the buss). In
>
>>this case, PTHD cancelled (when I tested it) with
>>Nuendo, Samplitude, Logic, etc because the impact of the 48-bit fixed vs.
>
>>32-bit float wasn't a factor.
>>
>>When trying other tests, even when adding and subtracting gain, Nuendo,
>
>>Sequoia and Sonar cancel - both audibly and
>>visually at inaudible levels, which only proves that one isn't making an
>
>>error when calculating basic gain. Since a dB is well defined,
>>and the math to add gain is simple, they shouldn't. The fact that they
> all
>>use 32-bit float all the way through eliminates a difference
>>in data structure as well, and this just verifies that. There was a time
>
>>that supposedly Logic (v3, v4?) was partly 24-bit, or so the rumor went,
>>but it's 32-bit float all the way through now just as Sonar,
>>Nuendo/Cubase,
>
>>Samplitude/Sequoia, DP, Audition (I presume at least).
>>I don't know what Acid or Live use. Saw promotes a fixed point engine,
> but
>>I don't know if it is still 24-bit, or now 48 bit.
>>That was an intentional choice by the developer, but he's the only one I
>
>>know of that stuck with 24-bit for summing
>>intentionally, esp. after the Digi Mix system mixer incident.
>>
>>Long answer, but to sum up, it is certainly physically *possible* for a
>
>>developer to code something differently intentionally, but not
>>in reality likely since it would be breaking some basic fixed point or
>>floating point math rules. Where the differences really
>>showed up in the past is with PT Mix systems where the limitation was
>>really
>
>>significant - e.g. 24 bit with truncation at several stages.
>>
>>That really isn't such an issue anymore. Given the differences in
>>workflow,
>
>>missing something in workflow or layout differences
>>is easy enough to do (e.g. Sonar doesn't have group and busses the way
>>Nuendo does, as its outputs are actually driver outputs,
>>not software busses, so in Sonar, busses are actually outputs, and sub
>>busses are actually busses in Nuendo. There are no,
>>or at least I haven't found the equivalent of a Nuendo group in Sonar -
> that
>>affects the results of some tests (though not basic
>>summing) if not taken into account, but when taken into account, they work
>
>>exactly the same way).
>>
>>So at least when talking about apps with 32-bit float all the way through,
>
>>it's safe to say (since it has been proven) that summing isn't different
>
>>unless
>>there is an error somewhere, or variation in how the user duplicates the
>
>>same mix in two different apps.
>>
>>Imho, that's actually a very good thing - approaching a more consistent
>
>>basis for recording and mixing from which users can make all
>>of the decisions as to how the final product will sound and not be
>>required
>
>>to decide when purchasing a pricey console, and have to
>>focus their business on clients who want "that sound". I believe we are
>
>>actually closer to the pure definition of recording now than
>>we once were.
>>
>>Regards,
>>Dedric
>>
>>
>>>
>>> If the answer is yes, then the real task is to discover or rather
>>> un-cover
>>> what's say: Motu's vision of summing, versus Digidesign, versus
>>> Steinberg
>>> and so on..
>>>
>>> What's under the hood. To me and others, when Digi re-coded their summing
>>> engine, it was obvious that Pro Tools has an obvious top end (8k-10k)
>
>>> bump.
>>> Where as Steinberg's summing is very neutral.
>>>
>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>Hi Neil,
>>>>
>>>>Jamie is right. And you aren't wacked out - you are thinking this
>>>>through
>>>
>>>>in a reasonable manner, but coming to the wrong
>>>>conclusion - easy to do given how confusing digital audio can be. Each
>>> word
>>>>represents an amplitude
>>>>point on a single curve that is changing over time, and can vary with
> a
>>>
>>>>speed up to the Nyquist frequency (as Jamie described).
>>>>The complex harmonic content we hear is actually the frequency
>>>>modulation
>>> of
>>>>a single waveform,
>>>>that over a small amount of time creates the sound we translate - we
>>>>don't
>>>
>>>>really hear a single sample at a time,
>>>>but thousands of samples at a time (1 sample alone could at most
>>>>represent
>>> a
>>>>single positive or negative peak
>>>>of a 22,050Hz waveform).
>>>>
>>>>If one bit doesn't cancel, esp. if it's a higher order bit than number
> 24,
>>>
>>>>you may hear, and will see that easily,
>>>>and the higher the bit in the dynamic range (higher order) the more
>>>>audible
>>>
>>>>the difference.
>>>>Since each bit is 6dB of dynamic range, you can extrapolate how "loud"
>
>>>>that
>>>
>>>>bit's impact will be
>>>>if there is a variation.
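That "6 dB per bit" rule of thumb quoted above is easy to check (an illustrative sketch of my own, not from the post): each bit is a factor of two in amplitude, i.e. 20*log10(2), or about 6.02 dB.

```python
import math

def bit_level_dbfs(n_bits_down: int) -> float:
    """Approximate level (dBFS) of a signal confined to the n-th bit
    below full scale: each bit is one factor of 2, ~6.02 dB."""
    return 20 * math.log10(2.0 ** -n_bits_down)

for n in (1, 12, 24):
    print(f"bit {n}: {bit_level_dbfs(n):.1f} dBFS")
# a mismatch at bit 12 sits near -72 dBFS; at bit 24, near -144 dBFS
```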
>>>>
>>>>Now, obviously if we are talking about 1 sample in a 44.1k rate song,
> then
>>>
>>>>it would simply be a
>>>>click (only audible if it's a high enough order bit) instead of an
>>>>obvious
>>>
>>>>musical difference, but that should never
>>>>happen in a phase cancellation test between identical files higher than
>>> bit
>>>>24, unless there are clock sync problems,
>>>>driver issues, or the DAW is an early alpha version. :-)
>>>>
>>>>By definition of what DAWs do during playback and record, every audio
>
>>>>stream
>>>
>>>>has the same point in time (judged by the timeline)
>>>>played back sample accurately, one word at a time, at whatever sample
>
>>>>rate
>>>
>>>>we are using. A phase cancellation test uses that
>>>>fact to compare two audio files word for word (and hence bit for bit
>>>>since
>>>
>>>>each bit of a 24-bit word would
>>>>be at the same bit slot in each 24-bit word). Assuming they are aligned
>>> to
>>>>the same start point, sample
>>>>accurately, and both are the same set of sample words at each sample
>>>>point,
>>>
>>>>bit for bit, and one is phase inverted,
>>>>they will cancel through all 24 bits. For two files to cancel
>>>>completely
>>>
>>>>for the duration of the file, each and every bit in each word
>>>>must be the exact opposite of that same bit position in a word at the
> same
>>>
>>>>sample point. This is why zooming in on an FFT
>>>>of the full difference file is valuable as it can show any differences
> in
>>>
>>>>the lower order bits that wouldn't be audible. So even if
>>>>there is no audible difference, the visual followup will show if the two
>>>
>>>>files truly cancel even at levels below hearing, or
>>>>outside of a frequency change that we will perceive.
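The null test described above can be sketched in a few lines (a toy model with Python lists standing in for audio files; a real test would use the DAWs' rendered output):

```python
import math

def null_test_peak_db(a, b):
    """Sum a with phase-inverted b, sample by sample, and return the
    peak residual in dBFS (full scale = 1.0); -inf = complete cancel."""
    peak = max(abs(x - y) for x, y in zip(a, b))
    return 20 * math.log10(peak) if peak else float("-inf")

tone = [0.5 * math.sin(2 * math.pi * 1000 * n / 44100) for n in range(512)]
print(null_test_peak_db(tone, tone))       # identical files: -inf

altered = list(tone)
altered[100] += 2 ** -24                   # nudge one sample by ~1 LSB of 24 bits
print(round(null_test_peak_db(tone, altered)))  # residual near -144 dBFS
```

This is exactly why a single altered low-order bit is invisible to the ear but obvious in an FFT of the difference file.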
>>>>
>>>>When they don't cancel, usually there will be way more than 1 bit
>>>>difference - it's usually one or more bits in the words for
>>>>thousands of samples. From a musical standpoint this is usually in a
>>>>frequency range (low freq, or high freq most often) - that will
>>>>show up as the difference between them, and that usually happens due to
>>> some
>>>>form of processing difference between the files,
>>>>such as EQ, compression, frequency dependent gain changes, etc. That is
>>> what
>>>>I believe you are thinking through, but when
>>>>talking about straight summing with no gain change (or known equal gain
>>>
>>>>changes), we are only looking at linear, one for one
>>>>comparisons between the two files' frequency representations.
>>>>
>>>>Regards,
>>>>Dedric
>>>>
>>>>> Neil wrote:
>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>> The tests I did were completely blank down to -200 dB (far below the
>>>
>>>>>>> last
>>>>>>
>>>>>>> bit). It's safe to say there is no difference, even in
>>>>>>> quantization noise, which by technical rights, is considered below
> the
>>>
>>>>>>> level
>>>>>>
>>>>>>> of "cancellation" in such tests.
>>>>>>
>>>>>> I'm not necessarily talking about just the first bit or the
>>>>>> last bit, but also everything in between... what happens on bit
>>>>>> #12, for example? Everything on bit #12 should be audible, but
>>>>>> in an a/b test what if there are differences in what bits #8
>>>>>> through #12 sound like, but the amplitude is still the same on
>>>>>> both files at that point, you'll get a null, right? Extrapolate
>>>>>> that out somewhat & let's say there are differences in bits #8
>>>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>>>> etc through 43,972... Now this is breaking things down well
>>>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>>>> don't know of any way we could extract JUST that information
>>>>>> from each file & play it back for an a/b test; but would not
>>>>>> that be enough to have two "null-able" files that do actually
>>>>>> sound somewhat different?
>>>>>>
>>>>>> I guess what I'm saying is that since each sample in a musical
>>>>>> track or full song file doesn't represent a pure, simple set of
>>>>>> content like a sample of a sine wave would - there's a whole
>>>>>> world of harmonic structure in each sample of a song file, and
>>>>>> I think (although I'll admit - I can't "prove") that there is
>>>>>> plenty of room for some variables between the first bit & the
>>>>>> last bit while still allowing for a null test to be successful.
>>>>>>
>>>>>> No? Am I wacked out of my mind?
>>>>>>
>>>>>> Neil
>>>>>>
>>>>
>>>
>>
>>
>
Re: (No subject)...What's up under the hood? [message #77337 is a reply to message #77333] |
Fri, 22 December 2006 18:14 |
LaMont
Messages: 828 Registered: October 2005
Senior Member
Dedric, my test is simple:
Using the same audio interface, with the same stereo file, nulled to zero: no
EQ, no FX, master fader at zero.
Nuendo vs. Pro Tools M-Powered (native) yields the sonic difference I have
referenced before. The sound coming from PT-M has a nice top end, whereas
Nuendo has a flatter sound quality.
Same audio interface (M-Audio 410), using Mackies & Blue Sky pro monitors.
Same test at the big room: PT HD, Nuendo, and Logic Audio (Mac G5 dual) using
the 192 interface.
Same results, but with Logic Audio adding its own sound (broad, thick).
Something's going on.
Chuck's post about how Paris handles audio is a theory. Only Edmund can truly
give us the goods on what's really what.
I disagree that manufacturers don't set out to put a sonic print on their products.
I think they do.
I have been fortunate to work on some digital mixers, and I can tell you that
each one has its own sound. The Sony DMX-100 was modeled after the SSL 4000G
(like its big brother). And you know what? That board (DMX-100) sounds very
warm, and its EQ tries to behave and sound just like an SSL. Unlike the Yamaha
DM2000 (version 1.x), which has a very clean, neutral sound. However, some
complained that it was too vanilla, so Yamaha added a version 2.0 with
vintage-type EQs and modeled analog input-gain saturation FX, to give the user
a choice between clean and neutral versus sonic character.
So, if digital consoles can be given a sonic character, why not a software
mixer?
The truth is, some folks want a neutral mixer and others want a sonic
footprint imparted, and either can be coded in the digital realm.
The same applies to the DAW manufacturers. They too have a vision of how they
want their product to sound.
I love reading the posts on Gearslutz from plugin developers, with their
interpretations and opinions about what makes their Neve 1073 EQ better and
what goes into making their version sound the way it does. Each developer has
a different vision of what the Neve 1073 should sound like. And yet they all
sound good, just slightly different.
You stated that you use Vegas. Well, as you know, Vegas has a very generic
sound, just plain and simple. But I bet you can tell the difference on your
system when you play that same file in Nuendo (no FX, no EQ, nulled to zero)..
???
"Dedric Terry" <dedric@echomg.com> wrote:
>Lamont - what is the output chain you are using for each app when comparing
>the file in Nuendo
>vs ProTools? On the same PC, I presume (and is this PT HD or M-Powered?)?
>Since these can't use the same output driver, you would have to depend on
>the D/A being
>the same, but clocking will be different unless you have a master clock,
and
>both interfaces
>are locking with the same accuracy. This was one of the issues that came
up
>at Lynn Fuston's
>D/A converter shootout - when do you lock to external clock and incur the
>resulting jitter,
>and when do you trust the internal clock - and if you do lock externally,
>how good is the PLL
>in the slave device? These issues can cause audible changes in the top
end
>that have nothing to do
>with the software itself. If you say that PTHD through the same converter
>output as Nuendo (via? RME?
>Lynx?) using the same master clock, sounds different playing a single audio
>file, then I take your word
>for it. I can't tell you why that is happening - only that an audible
>difference really shouldn't happen due
>to the software alone - not with a single audio file, esp. since I've heard
>and seen PTHD audio cancel with
>native DAWs. Just passing a single 16 or 24 bit track down the buss to
the
>output driver should
>be, and usually is, completely transparent, bit for bit.
>
>The same audio file played through the same converters should only sound
>different if something in
>the chain is different - be it clocking, gain or some degree of unintended,
>errant dsp processing. Every DAW should
>pass a single audio file without altering a single bit. That's a basic level
>of accuracy we should always
>expect of any DAW. If that accuracy isn't there, you can be sure a heavy
>mix will be altered in ways you
>didn't intend, even though you would end up mixing with that factor in place
>(e.g. you still mix for what
>you want to hear regardless of what the platform does to each audio track
or
>channel).
>
>In fact you should be able to send a stereo audio track out SPDIF or
>lightpipe to another DAW, record it
>bring the recorded file back in, line them up to the first bit, and have
>them cancel on an inverted phase
>test. I did this with Nuendo and Cubase 4 on separate machines just to
be
>sure my master clocking and
>slave sync was accurate - it worked perfectly.
>
>Also be sure there isn't a variation in the gain even by 0.1 dB between
the
>two. There shouldn't
>and I wouldn't expect there to be one. Also could PT be set for a different
>pan law? Shouldn't make a
>difference even if comparing two mono panned files to their stereo
>interleaved equivalent, but for sake
>of completeness it's worth checking as well. A variation in the output
>chain, be it drivers, audio card,
>or converters would be the most likely culprit here.
>
>The reason DAW manufacturers wouldn't add any sonic "character"
>intentionally is that the
>ultimate goal from day one with recording has been to accurately reproduce
>what we hear.
>We developed a musical penchant for sonic character because the hardware
>just wasn't accurate,
>and what it did often sent us down new creative paths - even if by force
-
>and we decided it was
>preferred that way.
>
>Your point about what goes into the feature presets to sell synths is right
>for sure, but synths are about
>character and getting that "perfect piano" or crystal clear bell pad, or
fat
>punchy bass without spending
>a mint on development, adding 50G onboard sample libraries, or costing $15k,
>so what they
>lack in actual synthesis capabilities, they make up with EQ and effects
on
>the output. That's been the case
>for years, at least since we had effects on synths at least. But even with
>modern synths such as the Fantom,
>Tritons, etc, which are great synths all around, of course the coolest,
>widest and biggest patches
>will make the biggest impression - so in come the EQs, limiters, comps,
>reverbs, chorus, etc. The best
>way to find out if a synth is really good is to bypass all effects and see
>what happens. Most are pretty
>good these days, but about half the time, there are presets that fall
>completely flat in fx bypass.
>
>DAWs aren't designed to put a sonic fingerprint on a sound the way synths
>are - they are designed
>to *not* add anything - to pass through what we create as users, with no
>alteration (or as little as possible)
>beyond what we add with intentional processing (EQ, comps, etc). Developers
>would find no pride
>in hearing that their DAW sounds anything different than whatever is being
>played back in it,
>and the concept is contrary to what AES and IEEE proceedings on the issue
>propose in general
>digital audio discussions, white papers, etc.
>
>What ID ended up doing with Paris (at least from what I gather per Chuck's
>findings - so correct me if I'm missing part of the equation Chuck),
>is drop the track gain by 20dB or so, then added it back at the master buss
>to create the effect of headroom (probably
>because the master buss is really summing on the card, and they have more
>headroom there than on the tracks
>where native plugins might be used). I don't know if Paris passed 32-bit
>float files to the EDS card, but sort of
>doubt it. I think Chuck has clarified this at one point, but don't recall
>the answer.
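Whatever the exact mechanism in Paris, the trim-then-makeup idea described here is easy to model (hypothetical numbers, purely illustrative): in floating-point math, dropping each track 20 dB and restoring it on the buss is essentially lossless, so the scheme buys headroom without audibly costing precision.

```python
def db_to_lin(db: float) -> float:
    """Convert a dB gain to a linear multiplier."""
    return 10 ** (db / 20)

tracks = [[0.30, -0.25, 0.11], [0.02, 0.40, -0.30]]  # toy track data
trim = db_to_lin(-20)    # drop each track ~20 dB before summing...
makeup = db_to_lin(20)   # ...and make it up on the master buss

bussed = [sum(t[n] * trim for t in tracks) * makeup for n in range(3)]
direct = [sum(t[n] for t in tracks) for n in range(3)]

# In float math the trim/makeup round trip is essentially lossless;
# on a fixed-point path it would trade low-order bits for headroom.
print(max(abs(a - b) for a, b in zip(bussed, direct)))
```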
>
>Also what Paris did is use a greater bit depth on the hardware than ProTools
>did - at the time PT was just
>bring Mix+ systems to market, or they had been out for a year or two (if
I
>have my timeline right) - they
>were 24-bit fixed all the way through. Logic and Cubase were native DAWs,
>but native was still too slow
>to compete with hardware hybrids. Paris trumped them all by running 32-bit
>float natively (not new really, but
>better than sticking to 24-bit) and 56 or so bits in hardware instead of
>going to Motorola DSPs at 24.
>The onboard effects were also a step up from anything out there, so the
demo
>did sound good.
>I don't recall which, but one of the demos, imho, wasn't so good (some
>sloppy production and
>vocals in spots, IIRC), so I only listened to it once. ;-)
>
>Coupled with the gain drop and buss makeup, this all gave it a "headroom"
no
>one else had. With very nice
>onboard effects, Paris jumped ahead of anything else out there easily, and
>still respectably holds its own today
>in that department.
>
>Most demos I hear (when I listen to them) vary in quality, usually not so
>great in some area. But if a demo does
>sound great, then it at least says that the product is capable of at least
>that level of performance, and it can
>only help improve a prospective buyer's impression of it.
>
>Regards,
>Dedric
>
>"LaMont " <jjdpro@ameritech.net> wrote in message news:458c14c0$1@linux...
>>
>> Dedric good post..
>>
>> However, I have PT-M-Powered/M-audio 410 interface for my laptop and it
>> has
>> that same sound (no eq, zero fader) that HD does. I know they use the
>> same
>> 48 bit fix mixer. I load up the same file in Nuendo (no eq, zero
>> fader)..results.
>> different sonic character.
>>
>> PT having a top end touch..Nuendo, nice smooth(flat) sound. And I'm just
>> talking about a stereo wav file nulled with no eq..nothing..zilch..nada..
>>
>> Now, there are devices (keyboards, drum machines) on the market today that
>> have a Master Buss Compressor and EQ set to on with the top end notched
>> up.
>> Why? Because it gives their product a competitive advantage over the
>> competition..
>> Ex: Yamaha's Motif ES, Akai's MPC 1000, 2500, Roland's Fantom.
>>
>> So, why wouldn't a DAW manufacturer code in an extra (oomph) to make their
>> DAW sound better? Especially given the "I hate digital summing" crowd?
>> And,
>> if I'm a DAW manufacturer, what would give my product a sonic edge over
the
>> competition?
>>
>> We live in the "louder is better" audio world these days, so a DAW that
>> can
>> catch my attention sonically will probably get the sale. That's
what
>> happened to me back in 1997 when I heard Paris. I was floored!!! Still
to
>> this day, nothing has floored me like that "Road House Blues Demo" I heard
>> on Paris.
>>
>> Was it the hardware? Was it the software? I remember talking with Edmund
>> at the 2000 Winter NAMM, and he told me that he & Steve set out to reproduce
>> the sonics of a big-buck analog board, EQs and all. And summing was a big,
>> big issue for them because they (ID) thought that nobody had gotten summing
>> right. And by right, they meant it behaved like a console with a wide lane
>> for all of those tracks..
>>
>>
>>
>>
>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>"LaMont" <jjdpro@ameritech.net> wrote in message news:458be8d5$1@linux...
>>>>
>>>> Okay...
>>>> I guess what I'm saying is this:
>>>>
>>>> -Is it possible that different DAW manufacturers "code" their app
>>>> differently
>>>> for sound results.
>>>
>>>Of course it is *possible* to do this, but only if the DAW has a specific
>>
>>>sound shaping purpose
>>>beyond normal summing/mixing. Users talk about wanting developers to
add
>> a
>>>"Neve sound" or "API sound" option to summing engines,
>>>but that's really impractical given the amount of dsp required to make
a
>>
>>>decent emulation (with convolution, dynamic EQ functions,
>>>etc). For sake of not eating up all cpu processing, that could likely
>>>only
>>
>>>surface as a built-in EQ, which
>>>no one wants universally in summing, and anyone can add at will already.
>>>
>>>So it hasn't happened yet and isn't likely to as it detours from the basic
>>
>>>tenet of audio recording - recreate what comes in as
>>>accurately as possible.
>>>
>>>What Digi did in recoding their summing engine was try to recover some
>>>of the damage done by the 24-bit buss in Mix systems. Motorola 56k dsps
>> are
>>>24-bit fixed point chips and I think
>>>the new generation (321?) still is, but they use double words now for
>>>48-bits.
>>>doubling up and using upper and lower 24-bit words for 48-bit outputs,
the
>>
>>>buss
>>>between chips was 24-bits, so they had to dither to 24-bits after every
>>
>>>plugin. The mixer (if I recall correctly) also
>>>had a 24-bit buss, so what Digi did is to add a dither stage to the mixer
>> to
>>>prevent this
>>>constant truncation of data. 24-bits isn't enough to cover summing for
>> more
>>>than a few tracks without
>>>losing information in the 16-bit world, and in the 24-bit world some
>>>information will be lost, at least at the lowest levels.
>>>
>>>Adding a dither stage (though I think they did more than that - perhaps
>>
>>>implement a 48-bit double word stage as well),
>>>simply smoothed over the truncation that was happening, but it didn't
>>>solve
>>
>>>the problem, so with HD
>>>they went to a double-word path - throughout I believe, including the
path
>>
>>>between chips. I believe the chips
>>>are still 24-bit, but by doubling up the processing (yes at a cost of
>>>twice
>>
>>>the overhead), they get a 48-bit engine.
>>>This not only provided better headroom, but greater resolution. Higher
>> bit
>>>depths subdivide the amplitude with greater resolution, and that's
>>>really where we get the definition of dynamic range - by lowering the
>>>signal
>>
>>>to quantization noise ratio.
>>>
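The bit-depth/dynamic-range relationship mentioned above is the standard textbook figure: roughly 6.02 dB per bit, plus 1.76 dB for a full-scale sine. A quick sketch (my own illustration):

```python
def quantization_snr_db(bits: int) -> float:
    """Textbook signal-to-quantization-noise ratio of an N-bit
    quantizer for a full-scale sine: ~6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24, 48):
    print(f"{bits}-bit: ~{quantization_snr_db(bits):.1f} dB")
```

This is why moving from a 24-bit to a 48-bit (double-word) mix buss lowers the quantization noise floor far below anything the tracks themselves carry.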
>>>With DAWs that use 32-bit floating point math all the way through, the
>>>only
>>
>>>reason for altering the summing
>>>is by error, and that's an error that would actually be hard to make and
>> get
>>>past a very basic alpha stage of testing.
>>>There is a small difference in fixed point math and floating point math,
>> or
>>>at least a theoretical difference in how it affects audio
>>>in certain cases, but not necessarily in the result for calculating gain
>> in
>>>either for the same audio file. Where any differences might show up is
>>
>>>complicated, and I believe only appear at levels below 24-bit (or in
>>>headroom with tracks pushed beyond 0dBFS), or when/if
>>>there areany differences in where each amplitude level is quantized.
>>>
>>>Obviously there can be differences if the DAW has to use varying bit
>>>depths
>>
>>>throughout a single summing path to accomodate hardware
>>>as well as software summing, since there may be truncation or rounding
>>>along
>>
>>>the way, but that impacts the lowest bit
>>>level, and hence - spacial reproduction, reverb tails perhaps, and
>>>"depth",
>>
>>>not the levels most music so the differences are most
>>>often more subtle than not. But most modern DAWs have eliminated those
>>
>>>"rough edges" in the math by increasing the bit depth to accomodate normal
>>
>>>summing required for mixing audio.
>>>
>>>So with Lynn's unity gain summing test (A files on the CD I believe),
DAWs
>>
>>>were never asked to sum beyond 24-bits,
>>>at least not on the upper end of the dynamic range, so everything that
>>>could
>>
>>>represent 24-bits accurately would cancel. The only ones
>>>that didn't were ones that had a different bit depth and/or gain structure
>>
>>>whether hybrid or native
>>>(e.g. Paris' subtracting 20dB from tracks and adding it to the buss).
In
>>
>>>this case, PTHD cancelled (when I tested it) with
>>>Nuendo, Samplitude, Logic, etc because the impact of the 48-bit fixed
vs.
>>
>>>32-bit float wasn't a factor.
>>>
>>>When trying other tests, even when adding and subtracting gain, Nuendo,
>>
>>>Sequoia and Sonar cancel - both audibly and
>>>visually at inaudible levels, which only proves that one isn't making
an
>>
>>>error when calculating basic gain. Since a dB is well defined,
>>>and the math to add gain is simple, they shouldn't. The fact that they
>> all
>>>use 32-bit float all the way through eliminates a difference
>>>in data structure as well, and this just verifies that. There was a time
>>
>>>that supposedly Logic (v3, v4?) was partly 24-bit, or so the rumor went,
>>>but it's 32-bit float all the way through now just as Sonar,
>>>Nuendo/Cubase,
>>
>>>Samplitude/Sequoia, DP, Audition (I presume at least).
>>>I don't know what Acid or Live use. Saw promotes a fixed point engine,
>> but
>>>I don't know if it is still 24-bit, or now 48 bit.
>>>That was an intentional choice by the developer, but he's the only one
I
>>
>>>know of that stuck with 24-bit for summing
>>>intentionally, esp. after the Digi Mix system mixer incident.
>>>
>>>Long answer, but to sum up, it is certainly physically *possible* for
a
>>
>>>developer to code something differently intentionally, but not
>>>in reality likely since it would be breaking some basic fixed point or
>>>floating point math rules. Where the differences really
>>>showed up in the past is with PT Mix systems where the limitation was
>>>really
>>
>>>significant - e.g. 24 bit with truncation at several stages.
>>>
>>>That really isn't such an issue anymore. Given the differences in
>>>workflow,
>>
>>>missing something in workflow or layout differences
>>>is easy enough to do (e.g. Sonar doesn't have group and busses the way
>>>Nuendo does, as it's outputs are actually driver outputs,
>>>not software busses, so in Sonar, busses are actually outputs, and sub
>>>busses are actually busses in Nuendo. There are no,
>>>or at least I haven't found the equivalent of a Nuendo group in Sonar
-
>> that
>>>affects the results of some tests (though not basic
>>>summing) if not taken into account, but when taken into account, they
work
>>
>>>exactly the same way).
>>>
>>>So at least when talking about apps with 32-bit float all the way through,
>>
>>>it's safe to say (since it has been proven) that summing isn't different
>>
>>>unless
>>>there is an error somewhere, or variation in how the user duplicates the
>>
>>>same mix in two different apps.
>>>
>>>Imho, that's actually a very good thing - approaching a more consistent
>>
>>>basis for recording and mixing from which users can make all
>>>of the decisions as to how the final product will sound and not be
>>>required
>>
>>>to decide when purchasing a pricey console, and have to
>>>focus their business on clients who want "that sound". I believe we are
>>
>>>actually closer to the pure definition of recording now than
>>>we once were.
>>>
>>>Regards,
>>>Dedric
>>>
>>>
>>>>
>>>> I the answer is yes, then,the real task is to discover or rather
>>>> un-cover
>>>> what's say: Motu's vision of summing, versus Digidesign, versus
>>>> Steinberg
>>>> and so on..
>>>>
>>>> What's under the hood. To me and others,when Digi re-coded their summing
>>>> engine, it was obvious that Pro Tools has an obvious top end (8k-10k)
>>
>>>> bump.
>>>> Where as Steinberg's summing is very neutral.
>>>>
>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>Hi Neil,
>>>>>
>>>>>Jamie is right. And you aren't wacked out - you are thinking this
>>>>>through
>>>>
>>>>>in a reasonable manner, but coming to the wrong
>>>>>conclusion - easy to do given how confusing digital audio can be. Each
>>>> word
>>>>>represents an amplitude
>>>>>point on a single curve that is changing over time, and can vary with
>> a
>>>>
>>>>>speed up to the Nyquist frequency (as Jamie described).
>>>>>The complex harmonic content we hear is actually the frequency
>>>>>modulation
>>>> of
>>>>>a single waveform,
>>>>>that over a small amount of time creates the sound we translate - we
>>>>>don't
>>>>
>>>>>really hear a single sample at a time,
>>>>>but thousands of samples at a time (1 sample alone could at most
>>>>>represent
>>>> a
>>>>>single positive or negative peak
>>>>>of a 22,050Hz waveform).
>>>>>
>>>>>If one bit doesn't cancel, esp. if it's a higher order bit than number
>> 24,
>>>>
>>>>>you may hear, and will see that easily,
>>>>>and the higher the bit in the dynamic range (higher order) the more
>>>>>audible
>>>>
>>>>>the difference.
>>>>>Since each bit is 6dB of dynamic range, you can extrapolate how "loud"
>>
>>>>>that
>>>>
>>>>>bit's impact will be
>>>>>if there is a variation.
>>>>>
>>>>>Now, obviously if we are talking about 1 sample in a 44.1k rate song,
>> then
>>>>
>>>>>it simply be a
>>>>>click (only audible if it's a high enough order bit) instead of an
>>>>>obvious
>>>>
>>>>>musical difference, but that should never
>>>>>happen in a phase cancellation test between identical files higher than
>>>> bit
>>>>>24, unless there are clock sync problems,
>>>>>driver issues, or the DAW is an early alpha version. :-)
>>>>>
>>>>>By definition of what DAWs do during playback and record, every audio
>>
>>>>>stream
>>>>
>>>>>has the same point in time (judged by the timeline)
>>>>>played back sample accurately, one word at a time, at whatever sample
>>
>>>>>rate
>>>>
>>>>>we are using. A phase cancellation test uses that
>>>>>fact to compare two audio files word for word (and hence bit for bit
>>>>>since
>>>>
>>>>>each bit of a 24-bit word would
>>>>>be at the same bit slot in each 24-bit word). Assuming they are aligned
>>>> to
>>>>>the same start point, sample
>>>>>accurately, and both are the same set of sample words at each sample
>>>>>point,
>>>>
>>>>>bit for bit, and one is phase inverted,
>>>>>they will cancel through all 24 bits. For two files to cancel
>>>>>completely
>>>>
>>>>>for the duration of the file, each and every bit in each word
>>>>>must be the exact opposite of that same bit position in a word at the
>> same
>>>>
>>>>>sample point. This is why zooming in on an FFT
>>>>>of the full difference file is valuable as it can show any differences
>> in
>>>>
>>>>>the lower order bits that wouldn't be audible. So even if
>>>>>there is no audible difference, the visual followup will show if the
two
>>>>
>>>>>files truly cancel even a levels below hearing, or
>>>>>outside of a frequency change that we will perceive.
>>>>>
>>>>>When they don't cancel, usually there will be way more than 1 bit
>>>>>difference - it's usually one or more bits in the words for
>>>>>thousands of samples. From a musical standpoint this is usually in
a
>>>>>frequency range (low freq, or high freq most often) - that will
>>>>>show up as the difference between them, and that usually happens due
to
>>>> some
>>>>>form of processing difference between the files,
>>>>>such as EQ, compression, frequency dependant gain changes, etc. That
is
>>>> what
>>>>>I believe you are thinking through, but when
>>>>>talking about straight summing with no gain change (or known equal gain
>>>>
>>>>>changes), we are only looking at linear, one for one
>>>>>comparisons between the two files' frequency representations.
>>>>>
>>>>>Regards,
>>>>>Dedric
>>>>>
>>>>>> Neil wrote:
>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>> The tests I did were completely blank down to -200 dB (far below
the
>>>>
>>>>>>>> last
>>>>>>>
>>>>>>>> bit). It's safe to say there is no difference, even in
>>>>>>>> quantization noise, which by technical rights, is considered below
>> the
>>>>
>>>>>>>> level
>>>>>>>
>>>>>>>> of "cancellation" in such tests.
>>>>>>>
>>>>>>> I'm not necessarily talking about just the first bit or the
>>>>>>> last bit, but also everything in between... what happens on bit
>>>>>>> #12, for example? Everything on bit #12 should be audible, but
>>>>>>> in an a/b test what if there are differences in what bits #8
>>>>>>> through #12 sound like, but the amplitude is still the same on
>>>>>>> both files at that point, you'll get a null, right? Extrapolate
>>>>>>> that out somewhat & let's say there are differences in bits #8
>>>>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>>>>> etc through 43,972... Now this is breaking things down well
>>>>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>>>>> don't know of any way we could extract JUST that information
>>>>>>> from each file & play it back for an a/b test; but would not
>>>>>>> that be enough to have two "null-able" files that do actually
>>>>>>> sound somewhat different?
>>>>>>>
>>>>>>> I guess what I'm saying is that since each sample in a musical
>>>>>>> track or full song file doesn't represent a pure, simple set of
>>>>>>> content like a sample of a sine wave would - there's a whole
>>>>>>> world of harmonic structure in each sample of a song file, and
>>>>>>> I think (although I'll admit - I can't "prove") that there is
>>>>>>> plenty of room for some variables between the first bit & the
>>>>>>> last bit while still allowing for a null test to be successful.
>>>>>>>
>>>>>>> No? Am I wacked out of my mind?
>>>>>>>
>>>>>>> Neil
>>>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
>
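The FFT-of-the-difference-file check Dedric describes above can be sketched roughly as follows. This is a minimal illustration, not anyone's actual test procedure; the function name, the 44.1k default, and the assumption that both files are already sample-aligned integer PCM arrays are all mine:

```python
import numpy as np

def difference_spectrum(a, b, sr=44100):
    # Subtract one file from the other (equivalent to summing with one
    # polarity-inverted) and take the FFT of the residue. This exposes
    # differences in the low-order bits that a listening test would miss.
    n = min(len(a), len(b))
    diff = a[:n].astype(np.float64) - b[:n].astype(np.float64)
    spectrum = np.abs(np.fft.rfft(diff)) / n   # magnitude per bin
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)     # bin center frequencies in Hz
    return freqs, spectrum
```

For bit-identical files the spectrum is all zeros; any processing difference (EQ, compression, gain) shows up as energy concentrated in the affected frequency range, exactly as described in the quoted post.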
Re: (No subject)...What's up under the hood? [message #77340 is a reply to message #77337]
Fri, 22 December 2006 18:25 |
Dedric Terry
Messages: 788 Registered: June 2007
Senior Member
I can't tell you why you hear ProTools differently than Nuendo using a
single file.
There isn't any voodoo in the software, or hidden character enhancing dsp.
I'll see if
I can round up an M-Powered system to compare with next month.
For reference, every time I open Sequoia I think I might hear a broader, cleaner,
and almost flat (spectrum, not depth) sound, but I don't - it's the same as
Nuendo, fwiw.
Also I don't think what I was referring to was a theory from Chuck - I
believe that was what he
discovered in the code.
Digital mixers all have different preamps and converters. Unless you are
bypassing every
EQ and converter and going digital in and out to the same converter when
comparing, it would be hard
to say the mix engine itself sounds different than another mixer, but taken
as a whole, then
certainly they may very well sound different. In addition, hardware digital
mixers may use a variety of different paths between the I/O, channel
processing, and summing,
though most are pretty much software mixers on a single chip or set of dsps
similar to ProTools,
with I/O and a hardware surface attached.
I know it may be hard to separate the mix engine as software in either a
native DAW
or a digital mixer, from the hardware that translates the audio to something
we hear,
but that's what is required when comparing summing. The hardware can
significantly change
what we hear, so comparing digital mixers really isn't of as much interest
as comparing native
DAWs in that respect - unless you are looking to buy one of course.
Even though I know you think manufacturers are trying to add something to
give them an edge, I am 100%
sure that isn't the case - rather they are trying to add or change as little
as possible in order to give
them the edge. Their end of digital audio isn't about recreating the past,
but improving upon it.
As we've discussed and agreed before, the obsession with recreating
"vintage" technology is as much
fad as it is a valuable creative asset. There is no reason we shouldn't
have far superior hardware and software EQs and comps
than 20, 30 or 40 years ago. No reason at all, other than market demand,
but the majority of software, and new
hardware gear on the market has a vintage marketing tagline with it.
Companies will sell any bill of
goods if customers will buy it.
There's nothing unique about the summing in Nuendo, Cubase, Sequoia/Samp,
or Sonar, and it's pretty safe to include Logic and DP in that list as well.
One of the reasons I test
these things is to be sure my DAW isn't doing something wrong, or something
I don't know about.
Vegas - I use it for video conversions and have never done any critical
listening tests with it. What I have heard
briefly didn't sound any different. It certainly looks plain vanilla
though. What you are describing is exactly
what I would say about the GUIs of each of those apps, not that it means
anything. Just interesting.
That's one reason I listen eyes closed and double check with phase
cancellation tests and FFTs - I am
influenced creatively by the GUI to some degree. I actually like Cubase 4's
GUI better than Nuendo 3.2,
though there are only slight visual differences (some workflow differences
are a definite improvement for me though).
ProTools' GUI always made me want to write one dimensional soundtracks in
mono for public utilities, accounting offices
or the IRS while reading my discrete systems analysis textbook - it was also
grey. ;-)
Regards,
Dedric
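For anyone who wants to try the kind of inverted-phase null test discussed throughout this thread, here is a minimal sketch. It assumes two 16-bit PCM WAV files exported from each DAW at the same sample rate and start point; the file-reading helper and the "~6 dB per bit" residue readout are my own framing, not a procedure from the thread:

```python
import wave
import numpy as np

def read_wav(path):
    # Read a 16-bit PCM WAV file into an int32 array (int32 so the
    # subtraction below cannot overflow).
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    return np.frombuffer(frames, dtype=np.int16).astype(np.int32)

def null_test(a, b):
    # Subtracting one stream from the other is the same as summing with
    # one side polarity-inverted: identical files cancel to all zeros.
    n = min(len(a), len(b))
    diff = a[:n] - b[:n]
    peak = np.max(np.abs(diff))
    if peak == 0:
        return "full null (bit-for-bit identical)"
    # Express the residue relative to 16-bit full scale; each bit of
    # word length is worth about 6 dB of dynamic range.
    return f"residue peak {20 * np.log10(peak / 32768.0):.1f} dBFS"
```

Usage would be something like `null_test(read_wav("nuendo.wav"), read_wav("protools.wav"))`. A non-zero residue only tells you the files differ somewhere; the FFT follow-up Dedric describes is what tells you where.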
"LaMont" <jjdpro@ameritech.net> wrote in message news:458c82fd$1@linux...
>
> Dedric, my test is simple..
> Using the same audio interface, with the same stereo file..nulled to zero..
> No eq, no fx. Master fader on zero..
>
> Nuendo and Pro Tools M-Powered (native)... yield a sonic difference that I have
> referenced before.. The sound coming from PT-M has a nice top end, whereas
> Nuendo has a nice, flatter sound quality.
> Same audio interface, M-Audio 410.. Using Mackies & Blue Sky pro monitors..
>
> Same test at the big room..PT-HD & Nuendo & Logic Audio (Mac G5 Dual), using the
> 192 interface.
> Same results..but adding Logic Audio's sound.. (broad, thick)
>
> Somethings going on.
>
> Chuck's post about how Paris handles audio is a theory..Only Edmund can truly
> give us the goods on what's really what..
>
> I disagree that manufacturers don't set out to put a sonic print on their
> products.
> I think they do.
>
> I have been fortunate to work on some digital mixers and I can tell you
> that
> each one has its own sound. The Sony DMX-100 was modeled after the SSL 4000G
> (like its big brother). And you know what? That board (DMX-100) sounds very warm
> and its EQ tries to behave and sound just like an SSL.. Unlike the Yamaha
> DM2000 (version 1.x), which has a very clean, neutral sound..However, some
> complained that it was too vanilla, and thus Yamaha added a version 2.0 which
> added vintage-type EQs and modeled analog input gain saturation fx to give
> the user a choice between clean and neutral vs. sonic character.
>
> So, if digital consoles can be given a sonic character, why not a software
> mixer?
> The truth is, there are some folks who want a neutral mixer and then there
> are others who want a sonic footprint imparted, and these can be coded in
> the digital realm.
> The same applies to the manufacturers. They too have their vision of how they
> think and want their product to sound.
>
> I love reading on gearslutz the posts from Plugin developers and their
> interpretations
> and opinions about what makes their Neve 1073 Eq better and what goes into
> making their version sound like it does.. Each Developer has a different
> vision as to what the Neve 1073 should sound like. And yet they all sound
> good , but slightly different.
>
> You stated that you use Vegas. Well as you know, Vegas has a very generic
> sound..Just plain and simple. But I bet you can tell the difference on
> your system when you play that same file in Nuendo (no fx, no eq,
> nulled to zero)..
> ???
>
>
> "Dedric Terry" <dedric@echomg.com> wrote:
>>Lamont - what is the output chain you are using for each app when
>>comparing
>
>>the file in Nuendo
>>vs ProTools? On the same PC, I presume (and is this PT HD or M-Powered?)?
>>Since these can't use the same output driver, you would have to depend on
>
>>the D/A being
>>the same, but clocking will be different unless you have a master clock,
> and
>>both interfaces
>>are locking with the same accuracy. This was one of the issues that came
> up
>>at Lynn Fuston's
>>D/A converter shootout - when do you lock to external clock and incur the
>
>>resulting jitter,
>>and when do you trust the internal clock - and if you do lock externally,
>
>>how good is the PLL
>>in the slave device? These issues can cause audible changes in the top
> end
>>that have nothing to do
>>with the software itself. If you say that PTHD through the same converter
>
>>output as Nuendo (via? RME?
>>Lynx?) using the same master clock, sounds different playing a single
>>audio
>
>>file, then I take your word
>>for it. I can't tell you why that is happening - only that an audible
>>difference really shouldn't happen due
>>to the software alone - not with a single audio file, esp. since I've
>>heard
>
>>and seen PTHD audio cancel with
>>native DAWs. Just passing a single 16 or 24 bit track down the buss to
> the
>>output driver should
>>be, and usually is, completely transparent, bit for bit.
>>
>>The same audio file played through the same converters should only sound
>
>>different if something in
>>the chain is different - be it clocking, gain or some degree of
>>unintended,
>
>>errant dsp processing. Every DAW should
>>pass a single audio file without altering a single bit. That's a basic
>>level
>
>>of accuracy we should always
>>expect of any DAW. If that accuracy isn't there, you can be sure a heavy
>
>>mix will be altered in ways you
>>didn't intend, even though you would end up mixing with that factor in
>>place
>
>>(e.g. you still mix for what
>>you want to hear regardless of what the platform does to each audio track
> or
>>channel).
>>
>>In fact you should be able to send a stereo audio track out SPDIF or
>>lightpipe to another DAW, record it
>>bring the recorded file back in, line them up to the first bit, and have
>
>>them cancel on an inverted phase
>>test. I did this with Nuendo and Cubase 4 on separate machines just to
> be
>>sure my master clocking and
>>slave sync was accurate - it worked perfectly.
>>
>>Also be sure there isn't a variation in the gain even by 0.1 dB between
> the
>>two. There shouldn't
>>and I wouldn't expect there to be one. Also could PT be set for a
>>different
>
>>pan law? Shouldn't make a
>>difference even if comparing two mono panned files to their stereo
>>interleaved equivalent, but for sake
>>of completeness it's worth checking as well. A variation in the output
>
>>chain, be it drivers, audio card,
>>or converters would be the most likely culprit here.
>>
>>The reason DAW manufacturers wouldn't add any sonic "character"
>>intentionally is that the
>>ultimate goal from day one with recording has been to accurately reproduce
>
>>what we hear.
>>We developed a musical penchant for sonic character because the hardware
>
>>just wasn't accurate,
>>and what it did often sent us down new creative paths - even if by force
> -
>>and we decided it was
>>preferred that way.
>>
>>Your point about what goes into the feature presets to sell synths is
>>right
>
>>for sure, but synths are about
>>character and getting that "perfect piano" or crystal clear bell pad, or
> fat
>>punchy bass without spending
>>a mint on development, adding 50G onboard sample libraries, or costing
>>$15k,
>
>>so what they
>>lack in actual synthesis capabilities, they make up with EQ and effects
> on
>>the output. That's been the case
>>for years, at least since we had effects on synths at least. But even
>>with
>
>>modern synths such as the Fantom,
>>Tritons, etc, which are great synths all around, of course the coolest,
>
>>widest and biggest patches
>>will make the biggest impression - so in come the EQs, limiters, comps,
>
>>reverbs, chorus, etc. The best
>>way to find out if a synth is really good is to bypass all effects and see
>
>>what happens. Most are pretty
>>good these days, but about half the time, there are presets that fall
>>completely flat in fx bypass.
>>
>>DAWs aren't designed to put a sonic fingerprint on a sound the way synths
>
>>are - they are designed
>>to *not* add anything - to pass through what we create as users, with no
>
>>alteration (or as little as possible)
>>beyond what we add with intentional processing (EQ, comps, etc).
>>Developers
>
>>would find no pride
>>in hearing that their DAW sounds anything different than whatever is being
>
>>played back in it,
>>and the concept is contrary to what AES and IEEE proceedings on the issue
>
>>propose in general
>>digital audio discussions, white papers, etc.
>>
>>What ID ended up doing with Paris (at least from what I gather per Chuck's
>
>>findings - so correct me if I'm missing part of the equation Chuck),
>>is drop the track gain by 20dB or so, then add it back at the master
>>buss
>
>>to create the effect of headroom (probably
>>because the master buss is really summing on the card, and they have more
>
>>headroom there than on the tracks
>>where native plugins might be used). I don't know if Paris passed 32-bit
>
>>float files to the EDS card, but sort of
>>doubt it. I think Chuck has clarified this at one point, but don't recall
>
>>the answer.
>>
>>Also what Paris did is use a greater bit depth on the hardware than
>>ProTools
>
>>did - at the time PT was just
>>bringing Mix+ systems to market, or they had been out for a year or two (if
> I
>>have my timeline right) - they
>>were 24-bit fixed all the way through. Logic and Cubase were native DAWs,
>
>>but native was still too slow
>>to compete with hardware hybrids. Paris trumped them all by running
>>32-bit
>
>>float natively (not new really, but
>>better than sticking to 24-bit) and 56 or so bits in hardware instead of
>
>>going to Motorola DSPs at 24.
>>The onboard effects were also a step up from anything out there, so the
> demo
>>did sound good.
>>I don't recall which, but one of the demos, imho, wasn't so good (some
>>sloppy production and
>>vocals in spots, IIRC), so I only listened to it once. ;-)
>>
>>Coupled with the gain drop and buss makeup, this all gave it a "headroom"
> no
>>one else had. With very nice
>>onboard effects, Paris jumped ahead of anything else out there easily, and
>
>>still respectably holds its own today
>>in that department.
>>
>>Most demos I hear (when I listen to them) vary in quality, usually not so
>
>>great in some area. But if a demo does
>>sound great, then it at least says that the product is capable of at
>>least
>
>>that level of performance, and it can
>>only help improve a prospective buyer's impression of it.
>>
>>Regards,
>>Dedric
>>
>>"LaMont " <jjdpro@ameritech.net> wrote in message news:458c14c0$1@linux...
>>>
>>> Dedric good post..
>>>
>>> However, I have PT-M-Powered/M-audio 410 interface for my laptop and it
>
>>> has
>>> that same sound (no eq, zero fader) that HD does. I know they use the
>
>>> same
>>> 48 bit fix mixer. I load up the same file in Nuendo (no eq, zero
>>> fader)..results.
>>> different sonic character.
>>>
>>> PT having a top end touch..Nuendo, nice smooth(flat) sound. And I'm just
>>> taking about a stereo wav file nulled with no eq..nothing
>>> ..zilch..nada..
>>>
>>> Now, there are devices (keyboards, drum machines) on the market today
>>> that
>>> have a Master Buss Compressor and EQ set to on with the top end notched
>
>>> up.
>>> Why? Because it gives their product a competitive advantage over the
>>> competition..
>>> Ex: Yahama's Motif ES, Akai's MPC 1000, 2500, Roland's Fantom.
>>>
>>> So, why wouldn't a DAW manufacturer code in an extra (ooommf) to make
>>> their
>>> DAW sound better. Especially, given the "I hate Digital Summing" crowd?
>
>>> And,
>>> If I'm a DAW manufacturer, what would give my product a sonic edge over
> the
>>> competition?
>>>
>>> We live in the "louder is better" audio world these days, so a DAW that
>
>>> can
>>> catch my attention "sonically" will probably get the sale. That's
> what
>>> happened to me back in 1997 when I heard Paris. I was floored!!! Still
> to
>>> this day, nothing has floored me like that "Road House Blues Demo" I
>>> heard
>>> on Paris.
>>>
>>> Was it the hardware ? was it the software. I remember talking with
>>> Edmund
>>> at the 2000 winter Namm, and told me that he & Steve set out to
>>> reproduce
>>> the sonics of big buck analog board (eq's) and all.. And, summing was
> a
>>> big
>>> big issue for them because they (ID) thought that nobody has gotten
>>> it(summing)
>>> right. And by right, they meant, behaved like a console with a wide lane
>>> for all of those tracks..
>>>
>>>
>>>
>>>
>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>"LaMont" <jjdpro@ameritech.net> wrote in message
>>>>news:458be8d5$1@linux...
>>>>>
>>>>> Okay...
>>>>> I guess what I'm saying is this:
>>>>>
>>>>> -Is it possible that diferent DAW manufactuers "code" their app
>>>>> differently
>>>>> for sound results.
>>>>
>>>>Of course it is *possible* to do this, but only if the DAW has a
>>>>specific
>>>
>>>>sound shaping purpose
>>>>beyond normal summing/mixing. Users talk about wanting developers to
> add
>>> a
>>>>"Neve sound" or "API sound" option to summing engines,
>>>>but that's really impractical given the amount of dsp required to make
> a
>>>
>>>>decent emulation (with convolution, dynamic EQ functions,
>>>>etc). For sake of not eating up all cpu processing, that could likely
>
>>>>only
>>>
>>>>surface as is a built in EQ, which
>>>>no one wants universally in summing, and anyone can add at will already.
>>>>
>>>>So it hasn't happened yet and isn't likely to as it detours from the
>>>>basic
>>>
>>>>tenet of audio recording - recreate what comes in as
>>>>accurately as possible.
>>>>
>>>>What Digi did in recoding their summing engine was try to recover some
>>>>of the damage done by the 24-bit buss in Mix systems. Motorola 56k dsps
>>> are
>>>>24-bit fixed point chips and I think
>>>>the new generation (321?) still is, but they use double words now for
>>>>48-bits). And though plugins could process at 48-bit by
>>>>doubling up and using upper and lower 24-bit words for 48-bit outputs,
> the
>>>
>>>>buss
>>>>between chips was 24-bits, so they had to dither to 24-bits after every
>>>
>>>>plugin. The mixer (if I recall correctly) also
>>>>had a 24-bit buss, so what Digi did is to add a dither stage to the
>>>>mixer
>>> to
>>>>prevent this
>>>>constant truncation of data. 24-bits isn't enough to cover summing for
>>> more
>>>>than a few tracks without
>>>>losing information in the 16-bit world, and in the 24-bit world some
>>>>information will be lost, at least at the lowest levels.
>>>>
>>>>Adding a dither stage (though I think they did more than that - perhaps
>>>
>>>>implement a 48-bit double word stage as well),
>>>>simply smoothed over the truncation that was happening, but it didn't
>
>>>>solve
>>>
>>>>the problem, so with HD
>>>>they went to a double-word path - throughout I believe, including the
> path
>>>
>>>>between chips. I believe the chips
>>>>are still 24-bit, but by doubling up the processing (yes at a cost of
>
>>>>twice
>>>
>>>>the overhead), they get a 48-bit engine.
>>>>This not only provided better headroom, but greater resolution. Higher
>>> bit
>>>>depths subdivide the amplitude with greater resolution, and that's
>>>>really where we get the definition of dynamic range - by lowering the
>
>>>>signal
>>>
>>>>to quantization noise ratio.
>>>>
>>>>With DAWs that use 32-bit floating point math all the way through, the
>
>>>>only
>>>
>>>>reason for altering the summing
>>>>is by error, and that's an error that would actually be hard to make and
>>> get
>>>>past a very basic alpha stage of testing.
>>>>There is a small difference in fixed point math and floating point math,
>>> or
>>>>at least a theoretical difference in how it affects audio
>>>>in certain cases, but not necessarily in the result for calculating gain
>>> in
>>>>either for the same audio file. Where any differences might show up is
>>>
>>>>complicated, and I believe only appear at levels below 24-bit (or in
>>>>headroom with tracks pushed beyond 0dBFS), or when/if
>>>>there are any differences in where each amplitude level is quantized.
>>>>
>>>>Obviously there can be differences if the DAW has to use varying bit
>>>>depths
>>>
>>>>throughout a single summing path to accommodate hardware
>>>>as well as software summing, since there may be truncation or rounding
>
>>>>along
>>>
>>>>the way, but that impacts the lowest bit
>>>>level, and hence - spatial reproduction, reverb tails perhaps, and
>>>>"depth",
>>>
>>>>not the levels where most music sits, so the differences are most
>>>>often more subtle than not. But most modern DAWs have eliminated those
>>>
>>>>"rough edges" in the math by increasing the bit depth to accommodate
>>>>normal
>>>
>>>>summing required for mixing audio.
>>>>
>>>>So with Lynn's unity gain summing test (A files on the CD I believe),
> DAWs
>>>
>>>>were never asked to sum beyond 24-bits,
>>>>at least not on the upper end of the dynamic range, so everything that
>
>>>>could
>>>
>>>>represent 24-bits accurately would cancel. The only ones
>>>>that didn't were ones that had a different bit depth and/or gain
>>>>structure
>>>
>>>>whether hybrid or native
>>>>(e.g. Paris' subtracting 20dB from tracks and adding it to the buss).
> In
>>>
>>>>this case, PTHD cancelled (when I tested it) with
>>>>Nuendo, Samplitude, Logic, etc because the impact of the 48-bit fixed
> vs.
>>>
>>>>32-bit float wasn't a factor.
>>>>
>>>>When trying other tests, even when adding and subtracting gain, Nuendo,
>>>
>>>>Sequoia and Sonar cancel - both audibly and
>>>>visually at inaudible levels, which only proves that one isn't making
> an
>>>
>>>>error when calculating basic gain. Since a dB is well defined,
>>>>and the math to add gain is simple, they shouldn't. The fact that they
>>> all
>>>>use 32-bit float all the way through eliminates a difference
>>>>in data structure as well, and this just verifies that. There was a
>>>>time
>>>
>>>>that supposedly Logic (v3, v4?) was partly 24-bit, or so the rumor went,
>>>>but it's 32-bit float all the way through now just as Sonar,
>>>>Nuendo/Cubase,
>>>
>>>>Samplitude/Sequoia, DP, Audition (I presume at least).
>>>>I don't know what Acid or Live use. SAW promotes a fixed point engine,
>>> but
>>>>I don't know if it is still 24-bit, or now 48 bit.
>>>>That was an intentional choice by the developer, but he's the only one
> I
>>>
>>>>know of that stuck with 24-bit for summing
>>>>intentionally, esp. after the Digi Mix system mixer incident.
>>>>
>>>>Long answer, but to sum up, it is certainly physically *possible* for
> a
>>>
>>>>developer to code something differently intentionally, but not
>>>>in reality likely since it would be breaking some basic fixed point or
>>>>floating point math rules. Where the differences really
>>>>showed up in the past is with PT Mix systems where the limitation was
>
>>>>really
>>>
>>>>significant - e.g. 24 bit with truncation at several stages.
>>>>
>>>>That really isn't such an issue anymore. Given the differences in
>>>>workflow,
>>>
>>>>missing something in workflow or layout differences
>>>>is easy enough to do (e.g. Sonar doesn't have group and busses the way
>>>>Nuendo does, as its outputs are actually driver outputs,
>>>>not software busses, so in Sonar, busses are actually outputs, and sub
>>>>busses are actually busses in Nuendo. There are no,
>>>>or at least I haven't found the equivalent of a Nuendo group in Sonar
> -
>>> that
>>>>affects the results of some tests (though not basic
>>>>summing) if not taken into account, but when taken into account, they
> work
>>>
>>>>exactly the same way).
>>>>
>>>>So at least when talking about apps with 32-bit float all the way
>>>>through,
>>>
>>>>it's safe to say (since it has been proven) that summing isn't different
>>>
>>>>unless
>>>>there is an error somewhere, or variation in how the user duplicates the
>>>
>>>>same mix in two different apps.
>>>>
>>>>Imho, that's actually a very good thing - approaching a more consistent
>>>
>>>>basis for recording and mixing from which users can make all
>>>>of the decisions as to how the final product will sound and not be
>>>>required
>>>
>>>>to decide when purchasing a pricey console, and have to
>>>>focus their business on clients who want "that sound". I believe we are
>>>
>>>>actually closer to the pure definition of recording now than
>>>>we once were.
>>>>
>>>>Regards,
>>>>Dedric
>>>>
>>>>
>>>>>
>>>>> I the answer is yes, then,the real task is to discover or rather
>>>>> un-cover
>>>>> what's say: Motu's vision of summing, versus Digidesign, versus
>>>>> Steinberg
>>>>> and so on..
>>>>>
>>>>> What's under the hood. To me and others,when Digi re-coded their
>>>>> summing
>>>>> engine, it was obvious that Pro Tools has an obvious top end (8k-10k)
>>>
>>>>> bump.
>>>>> Where as Steinberg's summing is very neutral.
>>>>>
>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>Hi Neil,
>>>>>>
>>>>>>Jamie is right. And you aren't wacked out - you are thinking this
>>>>>>through
>>>>>
>>>>>>in a reasonable manner, but coming to the wrong
>>>>>>conclusion - easy to do given how confusing digital audio can be.
>>>>>>Each
>>>>> word
>>>>>>represents an amplitude
>>>>>>point on a single curve that is changing over time, and can vary with
>>> a
>>>>>
>>>>>>speed up to the Nyquist frequency (as Jamie described).
>>>>>>The complex harmonic content we hear is actually the frequency
>>>>>>modulation
>>>>> of
>>>>>>a single waveform,
>>>>>>that over a small amount of time creates the sound we translate - we
>
>>>>>>don't
>>>>>
>>>>>>really hear a single sample at a time,
>>>>>>but thousands of samples at a time (1 sample alone could at most
>>>>>>represent
>>>>> a
>>>>>>single positive or negative peak
>>>>>>of a 22,050Hz waveform).
>>>>>>
>>>>>>If one bit doesn't cancel, esp. if it's a higher order bit than number
>>> 24,
>>>>>
>>>>>>you may hear, and will see that easily,
>>>>>>and the higher the bit in the dynamic range (higher order) the more
>>>>>>audible
>>>>>
>>>>>>the difference.
>>>>>>Since each bit is 6dB of dynamic range, you can extrapolate how "loud"
>>>
>>>>>>that
>>>>>
>>>>>>bit's impact will be
>>>>>>if there is a variation.
>>>>>>
>>>>>>Now, obviously if we are talking about 1 sample in a 44.1k rate song,
>>> then
>>>>>
>>>>>>it would simply be a
>>>>>>click (only audible if it's a high enough order bit) instead of an
>>>>>>obvious
>>>>>
>>>>>>musical difference, but that should never
>>>>>>happen in a phase cancellation test between identical files higher
>>>>>>than
>>>>> bit
>>>>>>24, unless there are clock sync problems,
>>>>>>driver issues, or the DAW is an early alpha version. :-)
>>>>>>
>>>>>>By definition of what DAWs do during playback and record, every audio
>>>
>>>>>>stream
>>>>>
>>>>>>has the same point in time (judged by the timeline)
>>>>>>played back sample accurately, one word at a time, at whatever sample
>>>
>>>>>>rate
>>>>>
>>>>>>we are using. A phase cancellation test uses that
>>>>>>fact to compare two audio files word for word (and hence bit for bit
>
>>>>>>since
>>>>>
>>>>>>each bit of a 24-bit word would
>>>>>>be at the same bit slot in each 24-bit word). Assuming they are
>>>>>>aligned
>>>>> to
>>>>>>the same start point, sample
>>>>>>accurately, and both are the same set of sample words at each sample
>>>>>>point,
>>>>>
>>>>>>bit for bit, and one is phase inverted,
>>>>>>they will cancel through all 24 bits. For two files to cancel
>>>>>>completely
>>>>>
>>>>>>for the duration of the file, each and every bit in each word
>>>>>>must be the exact opposite of that same bit position in a word at the
>>> same
>>>>>
>>>>>>sample point. This is why zooming in on an FFT
>>>>>>of the full difference file is valuable as it can show any differences
>>> in
>>>>>
>>>>>>the lower order bits that wouldn't be audible. So even if
>>>>>>there is no audible difference, the visual followup will show if the
> two
>>>>>
>>>>>>files truly cancel even at levels below hearing, or
>>>>>>outside of a frequency change that we will perceive.
>>>>>>
>>>>>>When they don't cancel, usually there will be way more than 1 bit
>>>>>>difference - it's usually one or more bits in the words for
>>>>>>thousands of samples. From a musical standpoint this is usually in
> a
>>>>>>frequency range (low freq, or high freq most often) - that will
>>>>>>show up as the difference between them, and that usually happens due
> to
>>>>> some
>>>>>>form of processing difference between the files,
>>>>>>such as EQ, compression, frequency dependent gain changes, etc. That
> is
>>>>> what
>>>>>>I believe you are thinking through, but when
>>>>>>talking about straight summing with no gain change (or known equal
>>>>>>gain
>>>>>
>>>>>>changes), we are only looking at linear, one for one
>>>>>>comparisons between the two files' frequency representations.
>>>>>>
>>>>>>Regards,
>>>>>>Dedric
>>>>>>
>>>>>>> Neil wrote:
>>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>> The tests I did were completely blank down to -200 dB (far below
> the
>>>>>
>>>>>>>>> last
>>>>>>>>
>>>>>>>>> bit). It's safe to say there is no difference, even in
>>>>>>>>> quantization noise, which by technical rights, is considered below
>>> the
>>>>>
>>>>>>>>> level
>>>>>>>>
>>>>>>>>> of "cancellation" in such tests.
>>>>>>>>
>>>>>>>> I'm not necessarily talking about just the first bit or the
>>>>>>>> last bit, but also everything in between... what happens on bit
>>>>>>>> #12, for example? Everything on bit #12 should be audible, but
>>>>>>>> in an a/b test what if there are differences in what bits #8
>>>>>>>> through #12 sound like, but the amplitude is still the same on
>>>>>>>> both files at that point, you'll get a null, right? Extrapolate
>>>>>>>> that out somewhat & let's say there are differences in bits #8
>>>>>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>>>>>> etc through 43,972... Now this is breaking things down well
>>>>>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>>>>>> don't know of any way we could extract JUST that information
>>>>>>>> from each file & play it back for an a/b test; but would not
>>>>>>>> that be enough to have two "null-able" files that do actually
>>>>>>>> sound somewhat different?
>>>>>>>>
>>>>>>>> I guess what I'm saying is that since each sample in a musical
>>>>>>>> track or full song file doesn't represent a pure, simple set of
>>>>>>>> content like a sample of a sine wave would - there's a whole
>>>>>>>> world of harmonic structure in each sample of a song file, and
>>>>>>>> I think (although I'll admit - I can't "prove") that there is
>>>>>>>> plenty of room for some variables between the first bit & the
>>>>>>>> last bit while still allowing for a null test to be successful.
>>>>>>>>
>>>>>>>> No? Am I wacked out of my mind?
>>>>>>>>
>>>>>>>> Neil
>>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>>
>
Re: (No subject)...What's up under the hood? [message #77343 is a reply to message #77340]
Fri, 22 December 2006 23:39 |
LaMont
Messages: 828 Registered: October 2005
Senior Member
Dedric, check out this post from our dear friend Fredo, Nuendo moderator,
explaining how Steinberg's audio engine works. Note the trade-offs..Meaning,
Steinberg's way of coding a 32-bit float audio engine is different from, say,
Magix Samplitude's:
Fredo
Administrative Moderator
Joined: 29 Dec 2004
Posts: 4213
Location: Belgium
Posted: Fri Dec 08, 2006 2:33 pm Post subject:
--------------------------------------------------
I think I see where the problem is.
In my scenarios I don't have any track that goes over 0dBFS, but I have
always lowered one channel to compensate for another.
So, I never went over the 0dBFS limit.
Here's the explanation:
As soon as you go over 0dB, technically you are entering the domain of distortion.
In a 32bit FP mixer, that is not the case since there is unlimited headroom.
Now follow me step by step please - read this slowly and make sure you
understand -
At the end of each "stage", there is an adder (a big calculator) which adds
all the numbers from the individual tracks that are routed to this "adder".
The numbers are kept in the 80-bit registers and then brought back to 32bit
float.
This process of bringing back the numbers from 80-bit (and more) to 32bit
is kept to an absolute minimum.
This adding/bringing back to 32bit is done at 3 places: After a plugin slot
(VST-specs for all plugin manufacturers) - Group Tracks and Master Tracks.
Now, as soon as you boost the volume above 0dB, you get more than 32bits.
Stay below 0dB and you will stay below 32 bits.
When the adders dump their results, the numbers are brought back from any
number of bits (say 60bit) to 32 bit float.
These numbers are simply truncated which results in distortion; that's the
noise/residue you find way down low.
There is an algorithm that protects us from additive errors - so these errors
can never come into the audible range.
So, as soon as you go over 0dB, you will see these kind of artifacts.
It is debatable if this needs to be dithered or not. The problem -still is-
that it is very difficult to dither in a floating-point environment.
The fact remains that the error shouldn't be bigger than 2 to 3 LSBs.
Is this a problem?
In real-world applications: NO.
In scientific -unrealistic- tests (forcing the error): YES.
The alternative is having a Fixed point mixer, where you already would be
in trouble as soon as you boost one channel over 0dBfs. (or merge two files
that are @ 0dB)
Also, this problem will be pretty much gone as soon as we switch to the 64
bit engine.
For the record, the test where Jake hears "music" as residue must be flawed.
You should hear noise/distortion from square waves.
HTH
Fredo
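For illustration only (this is not Steinberg's actual code, and all names here are mine), the wide-accumulator-then-truncate behavior Fredo describes can be sketched in Python: a 64-bit float stands in for the 80-bit registers, and a round-trip through a 4-byte float stands in for storing the adder's result back as 32-bit float.

```python
import random
import struct

def to_float32(x):
    # Store to 32-bit float and read it back, as writing the adder's
    # result out to a 32-bit stage would do.
    return struct.unpack('<f', struct.pack('<f', x))[0]

random.seed(0)
# 16 tracks of random sample data, each well below 0dBFS
tracks = [[random.uniform(-0.5, 0.5) for _ in range(1000)] for _ in range(16)]

wide = [sum(frame) for frame in zip(*tracks)]   # wide accumulator (float64)
narrow = [to_float32(s) for s in wide]          # brought back to 32-bit float

# The rounding error is a few parts in 2^24 of the sample value -- on the
# order of the float32 LSB, far below audibility, consistent with the
# "2 to 3 LSBs" claim above.
worst = max(abs(w - n) for w, n in zip(wide, narrow))
peak = max(abs(w) for w in wide)
assert 0 < worst < peak * 2**-23
```

The same sketch also shows why staying below 0dBFS matters less in float than in fixed point: the error scales with the magnitude of the value, not with a hard full-scale ceiling.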
"Dedric Terry" <dedric@echomg.com> wrote:
>I can't tell you why you hear ProTools differently than Nuendo using a
>single file.
>There isn't any voodoo in the software, or hidden character enhancing dsp.
>I'll see if
>I can round up an M-Powered system to compare with next month.
>
>For reference, every time I open Sequoia I think I might hear a broader,
>clean,
>and almost flat (spectrum, not depth) sound, but I don't - it's the same
as
>Nuendo, fwiw.
>Also I don't think what I was referring to was a theory from Chuck - I
>believe that was what he
>discovered in the code.
>
>Digital mixers all have different preamps and converters. Unless you are
>bypassing every
>EQ and converter and going digital in and out to the same converter when
>comparing, it would be hard
>to say the mix engine itself sounds different than another mixer, but taken
>as a whole, then
>certainly they may very well sound different. In addition, hardware digital
>mixers may use a variety of different paths between the I/O, channel
>processing, and summing,
>though most are pretty much software mixers on a single chip or set of dsps
>similar to ProTools,
>with I/O and a hardware surface attached.
>
>I know it may be hard to separate the mix engine as software in either a
>native DAW
>or a digital mixer, from the hardware that translates the audio to something
>we hear,
>but that's what is required when comparing summing. The hardware can
>significantly change
>what we hear, so comparing digital mixers really isn't of as much interest
>as comparing native
>DAWs in that respect - unless you are looking to buy one of course.
>
>Even though I know you think manufacturers are trying to add something to
>give them an edge, I am 100%
>sure that isn't the case - rather they are trying to add or change as little
>as possible in order to give
>them the edge. Their end of digital audio isn't about recreating the past,
>but improving upon it.
>As we've discussed and agreed before, the obsession with recreating
>"vintage" technology is as much
>fad as it is a valuable creative asset. There is no reason we shouldn't
>have far superior hardware and software EQs and comps
>than 20, 30 or 40 years ago. No reason at all, other than market demand,
>but the majority of software, and new
>hardware gear on the market has a vintage marketing tagline with it.
>Companies will sell any bill of
>goods if customers will buy it.
>
>There's nothing unique about the summing in Nuendo, Cubase, Sequoia/Samp,
>or Sonar, and it's pretty safe to include Logic and DP in that list as well.
>One of the reasons I test
>these things is to be sure my DAW isn't doing something wrong, or something
>I don't know about.
>
>Vegas - I use it for video conversions and have never done any critical
>listening tests with it. What I have heard
>briefly didn't sound any different. It certainly looks plain vanilla
>though. What you are describing is exactly
>what I would say about the GUIs of each of those apps, not that it means
>anything. Just interesting.
>
>That's one reason I listen eyes closed and double check with phase
>cancellation tests and FFTs - I am
>influenced creatively by the GUI to some degree. I actually like Cubase
4's
>GUI better than Nuendo 3.2,
>though there are only slight visual differences (some workflow differences
>are a definite improvement for me though).
>
>ProTools' GUI always made me want to write one dimensional soundtracks in
>mono for public utilities, accounting offices
>or the IRS while reading my discrete systems analysis textbook - it was
also
>grey. ;-)
>
>Regards,
>Dedric
>
>"LaMont" <jjdpro@ameritech.net> wrote in message news:458c82fd$1@linux...
>>
>> Dedric, my simple test is simple..
>> Using the same audio interface, with the same stereo file, nulled to
>> zero..No EQ, no FX. Master fader on zero..
>>
>> Nuendo vs. Pro Tools M-Powered (native)... yields a sonic difference that
>> I have referenced before.. The sound coming from PT-M has a nice top end,
>> whereas Nuendo has a nice, flatter sound quality.
>> Same audio interface. M-audio 410..Using Mackies & Blue-Sky pro monitors..
>>
>> Same test at the big room..PT-HD & Nuendo, Logic Audio (Mac G5 dual), using
>> the
>> 192 interface.
>> Same results..but adding Logic Audio's sound (broad, thick).
>>
>> Somethings going on.
>>
>> Chuck's post about how Paris handles audio is a theory..Only Edmund can
>> truly
>> give us the goods on what's really what..
>>
>> I disagree that manufacturers don't set out to put a sonic print on their
>> products.
>> I think they do.
>>
>> I have been fortunate to work on some digital mixers and I can tell you
>> that
>> each one has its own sound. The Sony DMX-100 was modeled after the SSL
>> 4000G (like its big brother). And you know what? That board (DMX-100)
>> sounds very warm, and its EQ tries to behave and sound just like an
>> SSL.. Unlike the Yamaha DM2000 (version 1.x), which has a very clean,
>> neutral sound..However, some complained that it was too vanilla, and thus
>> Yamaha added a version 2.0 which
>> added vintage-type EQs and modeled analog input gain saturation FX, to give
>> the user a choice between clean and neutral vs. sonic character.
>>
>> So, if digital consoles can be given a sonic character, why not a software
>> mixer?
>> The truth is, there are some folks who want a neutral mixer and then there
>> are others who want a sonic footprint imparted, and these can be coded
>> in
>> the digital realm.
>> The same applies to the manufacturers. They too have their vision of what
>> they
>> think and want their product to sound like.
>>
>> I love reading on gearslutz the posts from Plugin developers and their
>> interpretations
>> and opinions about what makes their Neve 1073 Eq better and what goes
into
>> making their version sound like it does.. Each Developer has a different
>> vision as to what the Neve 1073 should sound like. And yet they all sound
>> good , but slightly different.
>>
>> You stated that you use Vegas. Well, as you know, Vegas has a very generic
>> sound..Just plain and simple. But I bet you can tell the difference on
>> your system when you play that same file in Nuendo (no FX, no EQ,
>> nulled to zero)..
>> ???
>>
>>
>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>Lamont - what is the output chain you are using for each app when
>>>comparing
>>
>>>the file in Nuendo
>>>vs ProTools? On the same PC, I presume (and is this PT HD or M-Powered?)?
>>>Since these can't use the same output driver, you would have to depend
on
>>
>>>the D/A being
>>>the same, but clocking will be different unless you have a master clock,
>> and
>>>both interfaces
>>>are locking with the same accuracy. This was one of the issues that came
>> up
>>>at Lynn Fuston's
>>>D/A converter shootout - when do you lock to external clock and incur
the
>>
>>>resulting jitter,
>>>and when do you trust the internal clock - and if you do lock externally,
>>
>>>how good is the PLL
>>>in the slave device? These issues can cause audible changes in the top
>> end
>>>that have nothing to do
>>>with the software itself. If you say that PTHD through the same converter
>>
>>>output as Nuendo (via? RME?
>>>Lynx?) using the same master clock, sounds different playing a single
>>>audio
>>
>>>file, then I take your word
>>>for it. I can't tell you why that is happening - only that an audible
>>>difference really shouldn't happen due
>>>to the software alone - not with a single audio file, esp. since I've
>>>heard
>>
>>>and seen PTHD audio cancel with
>>>native DAWs. Just passing a single 16 or 24 bit track down the buss
to
>> the
>>>output driver should
>>>be, and usually is, completely transparent, bit for bit.
>>>
>>>The same audio file played through the same converters should only sound
>>
>>>different if something in
>>>the chain is different - be it clocking, gain or some degree of
>>>unintended,
>>
>>>errant dsp processing. Every DAW should
>>>pass a single audio file without altering a single bit. That's a basic
>>>level
>>
>>>of accuracy we should always
>>>expect of any DAW. If that accuracy isn't there, you can be sure a heavy
>>
>>>mix will be altered in ways you
>>>didn't intend, even though you would end up mixing with that factor in
>>>place
>>
>>>(e.g. you still mix for what
>>>you want to hear regardless of what the platform does to each audio track
>> or
>>>channel).
>>>
>>>In fact you should be able to send a stereo audio track out SPDIF or
>>>lightpipe to another DAW, record it,
>>>bring the recorded file back in, line them up to the first bit, and have
>>
>>>them cancel on an inverted-phase
>>>test. I did this with Nuendo and Cubase 4 on separate machines just to
>> be
>>>sure my master clocking and
>>>slave sync was accurate - it worked perfectly.
>>>
>>>Also be sure there isn't a variation in the gain even by 0.1 dB between
>> the
>>>two. There shouldn't be,
>>>and I wouldn't expect there to be one. Also could PT be set for a
>>>different
>>
>>>pan law? Shouldn't make a
>>>difference even if comparing two mono panned files to their stereo
>>>interleaved equivalent, but for sake
>>>of completeness it's worth checking as well. A variation in the output
>>
>>>chain, be it drivers, audio card
>>>or converters would be the most likely culprit here.
>>>
>>>The reason DAW manufacturers wouldn't add any sonic "character"
>>>intentionally is that the
>>>ultimate goal from day one with recording has been to accurately reproduce
>>
>>>what we hear.
>>>We developed a musical penchant for sonic character because the hardware
>>
>>>just wasn't accurate,
>>>and what it did often sent us down new creative paths - even if by force
>> -
>>>and we decided it was
>>>preferred that way.
>>>
>>>Your point about what goes into the feature presets to sell synths is
>>>right
>>
>>>for sure, but synths are about
>>>character and getting that "perfect piano" or crystal clear bell pad,
or
>> fat
>>>punchy bass without spending
>>>a mint on development, adding 50G onboard sample libraries, or costing
>>>$15k,
>>
>>>so what they
>>>lack in actual synthesis capabilities, they make up with EQ and effects
>> on
>>>the output. That's been the case
>>>for years, at least since synths first had onboard effects. But even
>>>with
>>
>>>modern synths such as the Fantom,
>>>Tritons, etc, which are great synths all around, of course the coolest,
>>
>>>widest and biggest patches
>>>will make the biggest impression - so in come the EQs, limiters, comps,
>>
>>>reverbs, chorus, etc. The best
>>>way to find out if a synth is really good is to bypass all effects and
see
>>
>>>what happens. Most are pretty
>>>good these days, but about half the time, there are presets that fall
>>>completely flat in fx bypass.
>>>
>>>DAWs aren't designed to put a sonic fingerprint on a sound the way synths
>>
>>>are - they are designed
>>>to *not* add anything - to pass through what we create as users, with
no
>>
>>>alteration (or as little as possible)
>>>beyond what we add with intentional processing (EQ, comps, etc).
>>>Developers
>>
>>>would find no pride
>>>in hearing that their DAW sounds anything different than whatever is being
>>
>>>played back in it,
>>>and the concept is contrary to what AES and IEEE proceedings on the issue
>>
>>>propose in general
>>>digital audio discussions, white papers, etc.
>>>
>>>What ID ended up doing with Paris (at least from what I gather per Chuck's
>>
>>>findings - so correct me if I'm missing part of the equation Chuck),
>>>is to drop the track gain by 20dB or so, then add it back at the master
>>>buss
>>
>>>to create the effect of headroom (probably
>>>because the master buss is really summing on the card, and they have more
>>
>>>headroom there than on the tracks
>>>where native plugins might be used). I don't know if Paris passed 32-bit
>>
>>>float files to the EDS card, but sort of
>>>doubt it. I think Chuck has clarified this at one point, but don't recall
>>
>>>the answer.
>>>
>>>Also what Paris did is use a greater bit depth on the hardware than
>>>ProTools
>>
>>>did - at the time PT was just
>>>bringing Mix+ systems to market, or they had been out for a year or two (if
>> I
>>>have my timeline right) - they
>>>were 24-bit fixed all the way through. Logic and Cubase were native DAWs,
>>
>>>but native was still too slow
>>>to compete with hardware hybrids. Paris trumped them all by running
>>>32-bit
>>
>>>float natively (not new really, but
>>>better than sticking to 24-bit) and 56 or so bits in hardware instead
of
>>
>>>going to Motorola DSPs at 24.
>>>The onboard effects were also a step up from anything out there, so the
>> demo
>>>did sound good.
>>>I don't recall which, but one of the demos, imho, wasn't so good (some
>>>sloppy production and
>>>vocals in spots, IIRC), so I only listened to it once. ;-)
>>>
>>>Coupled with the gain drop and buss makeup, this all gave it a "headroom"
>> no
>>>one else had. With very nice
>>>onboard effects, Paris jumped ahead of anything else out there easily,
and
>>
>>>still respectably holds its own today
>>>in that department.
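For what it's worth, the Paris gain staging Dedric describes above (pad each track down roughly 20 dB, then restore the gain at the master buss) can be sketched as below. This is a hypothetical reconstruction from the thread, not ID's actual code; the function name and the exact pad value are assumptions.

```python
def paris_style_sum(tracks, pad_db=20.0):
    # Pad each track down before summing, then apply makeup gain at the
    # master buss, so the intermediate sum stays well below full scale.
    pad = 10 ** (-pad_db / 20.0)               # -20 dB -> 0.1 linear
    bus = [sum(s * pad for s in frame) for frame in zip(*tracks)]
    return [s / pad for s in bus]

# Two hot tracks at 0.9 of full scale: the padded buss peaks at only 0.18,
# yet the result matches a straight sum (in exact arithmetic the two are
# mathematically identical -- the trick only buys internal headroom).
tracks = [[0.9, -0.9, 0.9], [0.9, -0.9, 0.9]]
mix = paris_style_sum(tracks)
straight = [sum(frame) for frame in zip(*tracks)]
assert all(abs(m - s) < 1e-12 for m, s in zip(mix, straight))
```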
>>>
>>>Most demos I hear (when I listen to them) vary in quality, usually not
so
>>
>>>great in some area. But if a demo does
>>>sound great, then it at least says that the product is capable of at
>>>least
>>
>>>that level of performance, and it can
>>>only help improve a prospective buyer's impression of it.
>>>
>>>Regards,
>>>Dedric
>>>
>>>"LaMont " <jjdpro@ameritech.net> wrote in message news:458c14c0$1@linux...
>>>>
>>>> Dedric good post..
>>>>
>>>> However, I have PT-M-Powered/M-audio 410 interface for my laptop and
it
>>
>>>> has
>>>> that same sound (no EQ, zero fader) that HD does. I know they use the
>>
>>>> same
>>>> 48-bit fixed mixer. I load up the same file in Nuendo (no EQ, zero
>>>> fader)..results.
>>>> different sonic character.
>>>>
>>>> PT having a top end touch..Nuendo, nice smooth(flat) sound. And I'm
just
>>>> talking about a stereo wav file nulled with no EQ..nothing
>>>> ..zilch..nada..
>>>>
>>>> Now, there are devices (keyboards, drum machines) on the market today
>>>> that
>>>> have a master buss compressor and EQ set to on, with the top end notched
>>>> up.
>>>> Why? Because it gives their product a competitive advantage over the
>>>> competition..
>>>> Ex: Yamaha's Motif ES, Akai's MPC 1000, 2500, Roland's Fantom.
>>>>
>>>> So, why wouldn't a DAW manufacturer code in an extra (ooommf) to make
>>>> their
>>>> DAW sound better? Especially given the "I hate Digital Summing" crowd?
>>
>>>> And,
>>>> If I'm a DAW manufacturer, what would give my product a sonic edge over
>> the
>>>> competition?
>>>>
>>>> We live in the "louder is better" audio world these days, so a DAW that
>>
>>>> can
>>>> catch my attention "sonically" will probably get the sale. That's
>> what
>>>> happened to me back in 1997 when I heard Paris. I was floored!!! Still
>> to
>>>> this day, nothing has floored me like that "Road House Blues Demo" I
>>>> heard
>>>> on Paris.
>>>>
>>>> Was it the hardware? Was it the software? I remember talking with
>>>> Edmund
>>>> at the 2000 winter NAMM, and he told me that he & Steve set out to
>>>> reproduce
>>>> the sonics of a big-buck analog board, EQs and all.. And summing was
>> a
>>>> big
>>>> big issue for them because they (ID) thought that nobody has gotten
>>>> it(summing)
>>>> right. And by right, they meant, behaved like a console with a wide
lane
>>>> for all of those tracks..
>>>>
>>>>
>>>>
>>>>
>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>"LaMont" <jjdpro@ameritech.net> wrote in message
>>>>>news:458be8d5$1@linux...
>>>>>>
>>>>>> Okay...
>>>>>> I guess what I'm saying is this:
>>>>>>
>>>>>> -Is it possible that different DAW manufacturers "code" their app
>>>>>> differently
>>>>>> for sound results.
>>>>>
>>>>>Of course it is *possible* to do this, but only if the DAW has a
>>>>>specific
>>>>
>>>>>sound shaping purpose
>>>>>beyond normal summing/mixing. Users talk about wanting developers to
>> add
>>>> a
>>>>>"Neve sound" or "API sound" option to summing engines,
>>>>>but that's really impractical given the amount of dsp required to make
>> a
>>>>
>>>>>decent emulation (with convolution, dynamic EQ functions,
>>>>>etc). For sake of not eating up all cpu processing, that could likely
>>
>>>>>only
>>>>
>>>>>surface as a built-in EQ, which
>>>>>no one wants universally in summing, and anyone can add at will already.
>>>>>
>>>>>So it hasn't happened yet and isn't likely to as it detours from the
>>>>>basic
>>>>
>>>>>tenet of audio recording - recreate what comes in as
>>>>>accurately as possible.
>>>>>
>>>>>What Digi did in recoding their summing engine was try to recover some
>>>>>of the damage done by the 24-bit buss in Mix systems. Motorola 56k dsps
>>>> are
>>>>>24-bit fixed point chips and I think
>>>>>the new generation (321?) still is, but they use double words now for
>>>>>48-bits). And though plugins could process at 48-bit by
>>>>>doubling up and using upper and lower 24-bit words for 48-bit outputs,
>> the
>>>>
>>>>>buss
>>>>>between chips was 24-bits, so they had to dither to 24-bits after every
>>>>
>>>>>plugin. The mixer (if I recall correctly) also
>>>>>had a 24-bit buss, so what Digi did is to add a dither stage to the
>>>>>mixer
>>>> to
>>>>>prevent this
>>>>>constant truncation of data. 24-bits isn't enough to cover summing
for
>>>> more
>>>>>than a few tracks without
>>>>>losing information in the 16-bit world, and in the 24-bit world some
>>>>>information will be lost, at least at the lowest levels.
>>>>>
>>>>>Adding a dither stage (though I think they did more than that - perhaps
>>>>
>>>>>implement a 48-bit double word stage as well),
>>>>>simply smoothed over the truncation that was happening, but it didn't
>>
>>>>>solve
>>>>
>>>>>the problem, so with HD
>>>>>they went to a double-word path - throughout I believe, including the
>> path
>>>>
>>>>>between chips. I believe the chips
>>>>>are still 24-bit, but by doubling up the processing (yes at a cost of
>>
>>>>>twice
>>>>
>>>>>the overhead), they get a 48-bit engine.
>>>>>This not only provided better headroom, but greater resolution. Higher
>>>> bit
>>>>>depths subdivide the amplitude with greater resolution, and that's
>>>>>really where we get the definition of dynamic range - by lowering the
>>
>>>>>signal
>>>>
>>>>>to quantization noise ratio.
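As a quick sanity check of the bit-depth arithmetic in this paragraph: each bit of a fixed-point path is worth about 6.02 dB of dynamic range (the ratio between full scale and one quantization step). This is a rough sketch; real-world figures also depend on dither and noise shaping.

```python
import math

def dynamic_range_db(bits):
    # Ratio between full scale and one quantization step, in dB
    return 20 * math.log10(2 ** bits)

assert round(dynamic_range_db(16)) == 96    # 16-bit: ~96 dB
assert round(dynamic_range_db(24)) == 144   # 24-bit: ~144 dB
assert round(dynamic_range_db(48)) == 289   # 48-bit double word: ~289 dB
```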
>>>>>
>>>>>With DAWs that use 32-bit floating point math all the way through, the
>>
>>>>>only
>>>>
>>>>>reason for altering the summing
>>>>>is by error, and that's an error that would actually be hard to make
and
>>>> get
>>>>>past a very basic alpha stage of testing.
>>>>>There is a small difference in fixed point math and floating point math,
>>>> or
>>>>>at least a theoretical difference in how it affects audio
>>>>>in certain cases, but not necessarily in the result for calculating
gain
>>>> in
>>>>>either for the same audio file. Where any differences might show up
is
>>>>
>>>>>complicated, and I believe only appear at levels below 24-bit (or in
>>>>>headroom with tracks pushed beyond 0dBFS), or when/if
>>>>>there are any differences in where each amplitude level is quantized.
>>>>>
>>>>>Obviously there can be differences if the DAW has to use varying bit
>>>>>depths
>>>>
>>>>>throughout a single summing path to accommodate hardware
>>>>>as well as software summing, since there may be truncation or rounding
>>
>>>>>along
>>>>
>>>>>the way, but that impacts the lowest bit
>>>>>level, and hence - spatial reproduction, reverb tails perhaps, and
>>>>>"depth",
>>>>
>>>>>not the levels where most music lives, so the differences are most
>>>>>often more subtle than not. But most modern DAWs have eliminated those
>>>>
>>>>>"rough edges" in the math by increasing the bit depth to accommodate
>>>>>normal
>>>>
>>>>>summing required for mixing audio.
>>>>>
>>>>>So with Lynn's unity gain summing test (A files on the CD I believe),
>> DAWs
>>>>
>>>>>were never asked to sum beyond 24-bits,
>>>>>at least not on the upper end of the dynamic range, so everything that
>>
>>>>>could
>>>>
>>>>>represent 24-bits accurately would cancel. The only ones
>>>>>that didn't were ones that had a different bit depth and/or gain
>>>>>structure
>>>>
>>>>>whether hybrid or native
>>>>>(e.g. Paris' subtracting 20dB from tracks and adding it to the buss).
>> In
>>>>
>>>>>this case, PTHD cancelled (when I tested it) with
>>>>>Nuendo, Samplitude, Logic, etc because the impact of the 48-bit fixed
>> vs.
>>>>
>>>>>32-bit float wasn't a factor.
>>>>>
>>>>>When trying other tests, even when adding and subtracting gain, Nuendo,
>>>>
>>>>>Sequoia and Sonar cancel - both audibly and
>>>>>visually at inaudible levels, which only proves that one isn't making
>> an
>>>>
>>>>>error when calculating basic gain. Since a dB is well defined,
>>>>>and the math to add gain is simple, they shouldn't. The fact that they
>>>> all
>>>>>use 32-bit float all the way through eliminates a difference
>>>>>in data structure as well, and this just verifies that. There was a
>>>>>time
>>>>
>>>>>that supposedly Logic (v3, v4?) was partly 24-bit, or so the rumor went,
>>>>>but it's 32-bit float all the way through now just as Sonar,
>>>>>Nuendo/Cubase,
>>>>
>>>>>Samplitude/Sequoia, DP, Audition (I presume at least).
>>>>>I don't know what Acid or Live use. SAW promotes a fixed-point engine,
>>>> but
>>>>>I don't know if it is still 24-bit, or now 48 bit.
>>>>>That was an intentional choice by the developer, but he's the only one
>> I
>>>>
>>>>>know of that stuck with 24-bit for summing
>>>>>intentionally, esp. after the Digi Mix system mixer incident.
>>>>>
>>>>>Long answer, but to sum up, it is certainly physically *possible* for
>> a
>>>>
>>>>>developer to code something differently intentionally, but not
>>>>>in reality likely since it would be breaking some basic fixed point
or
>>>>>floating point math rules. Where the differences really
>>>>>showed up in the past is with PT Mix systems where the limitation was
>>
>>>>>really
>>>>
>>>>>significant - e.g. 24 bit with truncation at several stages.
>>>>>
>>>>>That really isn't such an issue anymore. Given the differences in
>>>>>workflow,
>>>>
>>>>>missing something in workflow or layout differences
>>>>>is easy enough to do (e.g. Sonar doesn't have group and busses the way
>>>>>Nuendo does, as its outputs are actually driver outputs,
>>>>>not software busses, so in Sonar, busses are actually outputs, and sub
>>>>>busses are actually busses in Nuendo. There are no,
>>>>>or at least I haven't found the equivalent of a Nuendo group in Sonar
>> -
>>>> that
>>>>>affects the results of some tests (though not basic
>>>>>summing) if not taken into account, but when taken into account, they
>> work
>>>>
>>>>>exactly the same way).
>>>>>
>>>>>So at least when talking about apps with 32-bit float all the way
>>>>>through,
>>>>
>>>>>it's safe to say (since it has been proven) that summing isn't different
>>>>
>>>>>unless
>>>>>there is an error somewhere, or variation in how the user duplicates
the
>>>>
>>>>>same mix in two different apps.
>>>>>
>>>>>Imho, that's actually a very good thing - approaching a more consistent
>>>>
>>>>>basis for recording and mixing from which users can make all
>>>>>of the decisions as to how the final product will sound and not be
>>>>>required
>>>>
>>>>>to decide when purchasing a pricey console, and have to
>>>>>focus their business on clients who want "that sound". I believe we
are
>>>>
>>>>>actually closer to the pure definition of recording now than
>>>>>we once were.
>>>>>
>>>>>Regards,
>>>>>Dedric
>>>>>
>>>>>
>>>>>>
>>>>>> If the answer is yes, then the real task is to discover, or rather
>>>>>> un-cover
>>>>>> what's say: Motu's vision of summing, versus Digidesign, versus
>>>>>> Steinberg
>>>>>> and so on..
>>>>>>
>>>>>> What's under the hood. To me and others,when Digi re-coded their
>>>>>> summing
>>>>>> engine, it was obvious that Pro Tools has an obvious top end (8k-10k)
>>>>
>>>>>> bump.
>>>>>> Where as Steinberg's summing is very neutral.
>>>>>>
>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>Hi Neil,
>>>>>>>
>>>>>>>Jamie is right. And you aren't wacked out - you are thinking this
>>>>>>>through
>>>>>>
>>>>>>>in a reasonable manner, but coming to the wrong
>>>>>>>conclusion - easy to do given how confusing digital audio can be.
>>>>>>>Each
>>>>>> word
>>>>>>>represents an amplitude
>>>>>>>point on a single curve that is changing over time, and can vary with
>>>> a
>>>>>>
>>>>>>>speed up to the Nyquist frequency (as Jamie described).
>>>>>>>The complex harmonic content we hear is actually the frequency
>>>>>>>modulation
>>>>>> of
>>>>>>>a single waveform,
>>>>>>>that over a small amount of time creates the sound we translate -
we
>>
>>>>>>>don't
>>>>>>
>>>>>>>really hear a single sample at a time,
>>>>>>>but thousands of samples at a time (1 sample alone could at most
>>>>>>>represent
>>>>>> a
>>>>>>>single positive or negative peak
>>>>>>>of a 22,050Hz waveform).
>>>>>>>
>>>>>>>If one bit doesn't cancel, esp. if it's a higher order bit than number
>>>> 24,
>>>>>>
>>>>>>>you may hear, and will see that easily,
>>>>>>>and the higher the bit in the dynamic range (higher order) the more
>>>>>>>audible
>>>>>>
>>>>>>>the difference.
>>>>>>>Since each bit is 6dB of dynamic range, you can extrapolate how "loud"
>>>>
>>>>>>>that
>>>>>>
>>>>>>>bit's impact will be
>>>>>>>if there is a variation.
>>>>>>>
>>>>>>>Now, obviously if we are talking about 1 sample in a 44.1k rate song,
>>>> then
>>>>>>
>>>>>>>it would simply be a
>>>>>>>click (only audible if it's a high enough order bit) instead of an
>>>>>>>obvious
>>>>>>
>>>>>>>musical difference, but that should never
>>>>>>>happen in a phase cancellation test between identical files higher
>>>>>>>than
>>>>>> bit
>>>>>>>24, unless there are clock sync problems,
>>>>>>>driver issues, or the DAW is an early alpha version. :-)
>>>>>>>
>>>>>>>By definition of what DAWs do during playback and record, every audio
>>>>
>>>>>>>stream
>>>>>>
>>>>>>>has the same point in time (judged by the timeline)
>>>>>>>played back sample accurately, one word at a time, at whatever sample
>>>>
>>>>>>>rate
>>>>>>
>>>>>>>we are using. A phase cancellation test uses that
>>>>>>>fact to compare two audio files word for word (and hence bit for bit
>>
>>>>>>>since
>>>>>>
>>>>>>>each bit of a 24-bit word would
>>>>>>>be at the same bit slot in each 24-bit word). Assuming they are
>>>>>>>aligned
>>>>>> to
>>>>>>>the same start point, sample
>>>>>>>accurately, and both are the same set of sample words at each sample
>>>>>>>point,
>>>>>>
>>>>>>>bit for bit, and one is phase inverted,
>>>>>>>they will cancel through all 24 bits. For two files to cancel
>>>>>>>completely
>>>>>>
>>>>>>>for the duration of the file, each and every bit in each word
>>>>>>>must be the exact opposite of that same bit position in a word at
the
>>>> same
>>>>>>
>>>>>>>sample point. This is why zooming in on an FFT
>>>>>>>of the full difference file is valuable as it can show any differences
>>>> in
>>>>>>
>>>>>>>the lower order bits that wouldn't be audible. So even if
>>>>>>>there is no audible difference, the visual followup will show if the
>> two
>>>>>>
>>>>>>>files truly cancel even a levels below hearing, or
>>>>>>>outside of a frequency change that we will perceive.
>>>>>>>
>>>>>>>When they don't cancel, usually there will be way more than 1 bit
>>>>>>>difference - it's usually one or more bits in the words for
>>>>>>>thousands of samples. From a musical standpoint this is usually in
>> a
>>>>>>>frequency range (low freq, or high freq most often) - that will
>>>>>>>show up as the difference between them, and that usually happens due
>> to
>>>>>> some
>>>>>>>form of processing difference between the files,
>>>>>>>such as EQ, compression, frequency dependant gain changes, etc. That
>> is
>>>>>> what
>>>>>>>I believe you are thinking through, but when
>>>>>>>talking about straight summing with no gain change (or known equal
>>>>>>>gain
>>>>>>
>>>>>>>changes), we are only looking at linear, one for one
>>>>>>>comparisons between the two files' frequency representations.
>>>>>>>
>>>>>>>Regards,
>>>>>>>Dedric
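The inverted-phase null test Dedric walks through above can be sketched as below. The in-memory sample lists and the -144 dB floor are stand-ins (a real test would compare sample-aligned renders from the two DAWs, word for word).

```python
import math

def null_test(a, b, floor_db=-144.0):
    # Polarity-invert b, sum with a, and measure the residual peak in dBFS.
    peak = max(abs(x - y) for x, y in zip(a, b))
    if peak == 0.0:
        return True, float("-inf")          # bit-for-bit cancellation
    level_db = 20 * math.log10(peak)
    return level_db <= floor_db, level_db

n = 48000
x = [math.sin(2 * math.pi * 440 * i / n) for i in range(n)]   # 1 s of 440 Hz
ok, level = null_test(x, list(x))
assert ok and level == float("-inf")        # identical renders cancel fully
```

A render with even a 0.01 dB gain mismatch leaves a residual tens of dB above the floor, which is why Dedric also checks for small gain and pan-law differences before blaming the mix engine.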
>>>>>>>
>>>>>>>> Neil wrote:
>>>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>>> The tests I did were completely blank down to -200 dB (far below
>> the
>>>>>>
>>>>>>>>>> last
>>>>>>>>>
>>>>>>>>>> bit). It's safe to say there is no difference, even in
>>>>>>>>>> quantization noise, which by technical rights, is considered below
>>>> the
>>>>>>
>>>>>>>>>> level
>>>>>>>>>
>>>>>>>>>> of "cancellation" in such tests.
>>>>>>>>>
>>>>>>>>> I'm not necessarily talking about just the first bit or the
>>>>>>>>> last bit, but also everything in between... what happens on bit
>>>>>>>>> #12, for example? Everything on bit #12 should be audible, but
>>>>>>>>> in an a/b test what if there are differences in what bits #8
>>>>>>>>> through #12 sound like, but the amplitude is still the same on
>>>>>>>>> both files at that point, you'll get a null, right? Extrapolate
>>>>>>>>> that out somewhat & let's say there are differences in bits #8
>>>>>>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>>>>>>> etc through 43,972... Now this is breaking things down well
>>>>>>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>>>>>>> don't know of any way we could extract JUST that information
>>>>>>>>> from each file & play it back for an a/b test); but would not
>>>>>>>>> that be enough to have two "null-able" files that do actually
>>>>>>>>> sound somewhat different?
>>>>>>>>>
>>>>>>>>> I guess what I'm saying is that since each sample in a musical
>>>>>>>>> track or full song file doesn't represent a pure, simple set of
>>>>>>>>> content like a sample of a sine wave would - there's a whole
>>>>>>>>> world of harmonic structure in each sample of a song file, and
>>>>>>>>> I think (although I'll admit - I can't "prove") that there is
>>>>>>>>> plenty of room for some variables between the first bit & the
>>>>>>>>> last bit while still allowing for a null test to be successful.
>>>>>>>>>
>>>>>>>>> No? Am I wacked out of my mind?
>>>>>>>>>
>>>>>>>>> Neil
>>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
>
|
|
|
|
Re: (No subject)...What's up under the hood? [message #77352 is a reply to message #77340]
Sat, 23 December 2006 07:58
chuck duffy
Messages: 453 Registered: July 2005
Senior Member
Hi LaMont,

I've posted this several times in the past, but here's the scoop: Edmund
did not write the summing code. It's deep within the DSP code running on
the ESP2 chips, and it was written by some very talented guys at Ensoniq.
I really dig everything that Edmund and Stephen did, but the summing just
isn't part of it.

The stuff I posted is not really a theory. The PARIS mix engine source
code is freely available for download. Anyone with a little time,
patience, and the ESP2 patent can see clearly what is going on. It's only
a couple hundred lines of code.

Chuck
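For readers who haven't dug into that source: a hardware summing loop of the kind Chuck describes typically multiplies each 24-bit sample by a fixed-point gain, accumulates in a register much wider than the samples, and scales back down once at the output. Here is a minimal sketch of that shape in Python - the word widths are illustrative assumptions, not taken from the ESP2 patent or the PARIS source:

```python
# Illustrative fixed-point summing loop: 24-bit integer samples, gains in
# 8.24 fixed point, and a wide accumulator (Python ints are arbitrary
# precision, standing in for a wide DSP register). The word widths are
# assumptions for illustration, not the ESP2's actual layout.

FRAC = 24                 # fractional bits of the gain coefficients
UNITY = 1 << FRAC         # gain of 1.0 in 8.24 fixed point

def mix(tracks, gains):
    """Sum fixed-point tracks sample by sample with a wide accumulator."""
    out = []
    for frame in zip(*tracks):
        acc = 0                       # wide accumulator: no mid-sum rounding
        for sample, gain in zip(frame, gains):
            acc += sample * gain      # 24-bit sample times 8.24 gain
        out.append(acc >> FRAC)       # scale back down once, at the end
    return out

# Three one-sample "tracks" summed at unity gain
print(mix([[100], [200], [-50]], [UNITY] * 3))  # [250]
```

The point of the wide accumulator is that nothing is rounded mid-sum; precision is only reduced once, at the final shift.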
"Dedric Terry" <dedric@echomg.com> wrote:
>I can't tell you why you hear ProTools differently than Nuendo using a
>single file. There isn't any voodoo in the software, or hidden character
>enhancing dsp. I'll see if I can round up an M-Powered system to compare
>with next month.
>
>For reference, every time I open Sequoia I think I might hear a broader,
>cleaner, and almost flat (spectrum, not depth) sound, but I don't - it's
>the same as Nuendo, fwiw.
>Also, I don't think what I was referring to was a theory from Chuck - I
>believe that was what he discovered in the code.
>
>Digital mixers all have different preamps and converters. Unless you are
>bypassing every EQ and converter and going digital in and out through the
>same converter when comparing, it would be hard to say the mix engine
>itself sounds different than another mixer's; but taken as a whole, then
>certainly they may very well sound different. In addition, hardware
>digital mixers may use a variety of different paths between the I/O,
>channel processing, and summing, though most are pretty much software
>mixers on a single chip or set of dsps, similar to ProTools, with I/O
>and a hardware surface attached.
>
>I know it may be hard to separate the mix engine as software, in either
>a native DAW or a digital mixer, from the hardware that translates the
>audio to something we hear, but that's what is required when comparing
>summing. The hardware can significantly change what we hear, so
>comparing digital mixers really isn't of as much interest as comparing
>native DAWs in that respect - unless you are looking to buy one, of
>course.
>
>Even though I know you think manufacturers are trying to add something
>to give them an edge, I am 100% sure that isn't the case - rather, they
>are trying to add or change as little as possible in order to give them
>the edge. Their end of digital audio isn't about recreating the past,
>but improving upon it. As we've discussed and agreed before, the
>obsession with recreating "vintage" technology is as much fad as it is a
>valuable creative asset. There is no reason we shouldn't have far
>superior hardware and software EQs and comps than 20, 30 or 40 years
>ago. No reason at all, other than market demand, but the majority of
>software, and new hardware gear, on the market has a vintage marketing
>tagline with it. Companies will sell any bill of goods if customers
>will buy it.
>
>There's nothing unique about the summing in Nuendo, Cubase,
>Sequoia/Samp, or Sonar, and it's pretty safe to include Logic and DP in
>that list as well. One of the reasons I test these things is to be sure
>my DAW isn't doing something wrong, or something I don't know about.
>
>Vegas - I use it for video conversions and have never done any critical
>listening tests with it. What I have heard briefly didn't sound any
>different. It certainly looks plain vanilla, though. What you are
>describing is exactly what I would say about the GUIs of each of those
>apps, not that it means anything. Just interesting.
>
>That's one reason I listen eyes closed and double check with phase
>cancellation tests and FFTs - I am influenced creatively by the GUI to
>some degree. I actually like Cubase 4's GUI better than Nuendo 3.2's,
>though there are only slight visual differences (some workflow
>differences are a definite improvement for me, though).
>
>ProTools' GUI always made me want to write one-dimensional soundtracks
>in mono for public utilities, accounting offices, or the IRS, while
>reading my discrete systems analysis textbook - it was also grey. ;-)
>
>Regards,
>Dedric
>
>"LaMont" <jjdpro@ameritech.net> wrote in message news:458c82fd$1@linux...
>>
>> Dedric, my simple test is this:
>> using the same audio interface, with the same stereo file, nulled to
>> zero. No eq, no fx. Master fader on zero.
>>
>> Nuendo vs. Pro Tools M-Powered (native) yields a sonic difference that
>> I have referenced before. The sound coming from PT-M has a nice top
>> end, whereas Nuendo has a nice, flatter sound quality.
>> Same audio interface, M-Audio 410, using Mackies & Blue Sky pro monitors.
>>
>> Same test at the big room: PT-HD & Nuendo & Logic Audio (Mac G5 dual),
>> using the 192 interface.
>> Same results, but adding Logic Audio's own sound (broad, thick).
>>
>> Something's going on.
>>
>> Chuck's post about how Paris handles audio is a theory. Only Edmund
>> can truly give us the goods on what's really what.
>>
>> I disagree that manufacturers don't set out to put a sonic print on
>> their products. I think they do.
>>
>> I have been fortunate to work on some digital mixers and I can tell
>> you that each one has its own sound. The Sony DMX-100 was modeled
>> after the SSL 4000G (like its big brother). And you know what? That
>> board (DMX-100) sounds very warm, and its EQ tries to behave and sound
>> just like an SSL. Unlike the Yamaha DM2000 (version 1.x), which has a
>> very clean, neutral sound. However, some complained that it was too
>> vanilla, and thus Yamaha added a version 2.0 which added vintage-type
>> EQs and modeled analog input gain saturation fx, to give the user a
>> choice between clean and neutral vs. sonic character.
>>
>> So, if digital consoles can be given a sonic character, why not a
>> software mixer? The truth is, there are some folks who want a neutral
>> mixer, and then there are others who want a sonic footprint imparted -
>> and these can be coded in the digital realm.
>> The same applies to the manufacturers. They too have their vision of
>> what they think and want their product to sound like.
>>
>> I love reading on Gearslutz the posts from plugin developers, and
>> their interpretations and opinions about what makes their Neve 1073 EQ
>> better and what goes into making their version sound like it does.
>> Each developer has a different vision as to what the Neve 1073 should
>> sound like. And yet they all sound good, but slightly different.
>>
>> You stated that you use Vegas. Well, as you know, Vegas has a very
>> generic sound - just plain and simple. But I bet you can tell the
>> difference on your system when you play that same file in Nuendo (no
>> fx, no eq, nulled to zero)...???
>>
>>
>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>Lamont - what is the output chain you are using for each app when
>>>comparing the file in Nuendo vs ProTools? On the same PC, I presume
>>>(and is this PT HD or M-Powered?)?
>>>Since these can't use the same output driver, you would have to depend
>>>on the D/A being the same, but clocking will be different unless you
>>>have a master clock, and both interfaces are locking with the same
>>>accuracy. This was one of the issues that came up at Lynn Fuston's
>>>D/A converter shootout - when do you lock to external clock and incur
>>>the resulting jitter, and when do you trust the internal clock - and
>>>if you do lock externally, how good is the PLL in the slave device?
>>>These issues can cause audible changes in the top end that have
>>>nothing to do with the software itself. If you say that PTHD through
>>>the same converter output as Nuendo (via? RME? Lynx?) using the same
>>>master clock, sounds different playing a single audio file, then I
>>>take your word for it. I can't tell you why that is happening - only
>>>that an audible difference really shouldn't happen due to the software
>>>alone - not with a single audio file, esp. since I've heard and seen
>>>PTHD audio cancel with native DAWs. Just passing a single 16 or 24
>>>bit track down the buss to the output driver should be, and usually
>>>is, completely transparent, bit for bit.
>>>
>>>The same audio file played through the same converters should only
>>>sound different if something in the chain is different - be it
>>>clocking, gain, or some degree of unintended, errant dsp processing.
>>>Every DAW should pass a single audio file without altering a single
>>>bit. That's a basic level of accuracy we should always expect of any
>>>DAW. If that accuracy isn't there, you can be sure a heavy mix will
>>>be altered in ways you didn't intend, even though you would end up
>>>mixing with that factor in place (e.g. you still mix for what you want
>>>to hear regardless of what the platform does to each audio track or
>>>channel).
>>>
>>>In fact you should be able to send a stereo audio track out SPDIF or
>>>lightpipe to another DAW, record it, bring the recorded file back in,
>>>line them up to the first bit, and have them cancel on an inverted
>>>phase test. I did this with Nuendo and Cubase 4 on separate machines
>>>just to be sure my master clocking and slave sync was accurate - it
>>>worked perfectly.
>>>
>>>Also be sure there isn't a variation in the gain, even by 0.1 dB,
>>>between the two. There shouldn't be, and I wouldn't expect there to
>>>be one. Also, could PT be set for a different pan law? It shouldn't
>>>make a difference even if comparing two mono panned files to their
>>>stereo interleaved equivalent, but for the sake of completeness it's
>>>worth checking as well. A variation in the output chain, be it
>>>drivers, audio card, or converters, would be the most likely culprit
>>>here.
>>>
>>>The reason DAW manufacturers wouldn't add any sonic "character"
>>>intentionally is that the ultimate goal from day one with recording
>>>has been to accurately reproduce what we hear. We developed a musical
>>>penchant for sonic character because the hardware just wasn't
>>>accurate, and what it did often sent us down new creative paths - even
>>>if by force - and we decided it was preferred that way.
>>>
>>>Your point about what goes into the feature presets to sell synths is
>>>right for sure, but synths are about character and getting that
>>>"perfect piano" or crystal clear bell pad, or fat punchy bass, without
>>>spending a mint on development, adding 50G onboard sample libraries,
>>>or costing $15k - so what they lack in actual synthesis capabilities,
>>>they make up with EQ and effects on the output. That's been the case
>>>for years, at least since we've had effects on synths. But even with
>>>modern synths such as the Fantom, Tritons, etc, which are great synths
>>>all around, of course the coolest, widest and biggest patches will
>>>make the biggest impression - so in come the EQs, limiters, comps,
>>>reverbs, chorus, etc. The best way to find out if a synth is really
>>>good is to bypass all effects and see what happens. Most are pretty
>>>good these days, but about half the time, there are presets that fall
>>>completely flat in fx bypass.
>>>
>>>DAWs aren't designed to put a sonic fingerprint on a sound the way
>>>synths are - they are designed to *not* add anything - to pass through
>>>what we create as users, with no alteration (or as little as possible)
>>>beyond what we add with intentional processing (EQ, comps, etc).
>>>Developers would find no pride in hearing that their DAW sounds
>>>anything different than whatever is being played back in it, and the
>>>concept is contrary to what AES and IEEE proceedings on the issue
>>>propose in general digital audio discussions, white papers, etc.
>>>
>>>What ID ended up doing with Paris (at least from what I gather per
>>>Chuck's findings - so correct me if I'm missing part of the equation,
>>>Chuck) is drop the track gain by 20dB or so, then add it back at the
>>>master buss to create the effect of headroom (probably because the
>>>master buss is really summing on the card, and they have more headroom
>>>there than on the tracks where native plugins might be used). I don't
>>>know if Paris passed 32-bit float files to the EDS card, but I sort of
>>>doubt it. I think Chuck has clarified this at one point, but I don't
>>>recall the answer.
>>>
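The staging described above can be sketched numerically. This is an editorial illustration with the ~20 dB figure taken from the post, not PARIS code: attenuate each track before the sum, then apply makeup gain at the master buss.

```python
# Editorial sketch of the staging described above: drop each track ~20 dB
# before the sum, make it up at the master buss. The 20 dB figure comes
# from the post; this is not PARIS code.

def db_to_lin(db):
    """Convert decibels to a linear gain factor."""
    return 10 ** (db / 20)

DROP_DB = 20.0

def paris_style_sum(tracks):
    drop = db_to_lin(-DROP_DB)     # per-track attenuation before summing
    makeup = db_to_lin(DROP_DB)    # master buss makeup gain
    summed = [sum(s * drop for s in frame) for frame in zip(*tracks)]
    return [s * makeup for s in summed]

# Net gain is unity, but the summing node sees ~10x smaller peaks,
# buying headroom at the point where the tracks are actually added.
out = paris_style_sum([[0.4], [0.5], [-0.2]])
print(round(out[0], 6))  # 0.7
```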
>>>Also what Paris did is use a greater bit depth in the hardware than
>>>ProTools did - at the time PT was just bringing Mix+ systems to
>>>market, or they had been out for a year or two (if I have my timeline
>>>right) - they were 24-bit fixed all the way through. Logic and Cubase
>>>were native DAWs, but native was still too slow to compete with
>>>hardware hybrids. Paris trumped them all by running 32-bit float
>>>natively (not new really, but better than sticking to 24-bit) and 56
>>>or so bits in hardware instead of going to Motorola DSPs at 24.
>>>The onboard effects were also a step up from anything out there, so
>>>the demo did sound good. I don't recall which, but one of the demos,
>>>imho, wasn't so good (some sloppy production and vocals in spots,
>>>IIRC), so I only listened to it once. ;-)
>>>
>>>Coupled with the gain drop and buss makeup, this all gave it a
>>>"headroom" no one else had. With very nice onboard effects, Paris
>>>jumped ahead of anything else out there easily, and still respectably
>>>holds its own today in that department.
>>>
>>>Most demos I hear (when I listen to them) vary in quality, usually
>>>not so great in some area. But if a demo does sound great, then it at
>>>least says that the product is capable of that level of performance,
>>>and it can only help improve a prospective buyer's impression of it.
>>>
>>>Regards,
>>>Dedric
>>>
>>>"LaMont " <jjdpro@ameritech.net> wrote in message news:458c14c0$1@linux...
>>>>
>>>> Dedric, good post.
>>>>
>>>> However, I have PT M-Powered with an M-Audio 410 interface for my
>>>> laptop, and it has that same sound (no eq, zero fader) that HD does.
>>>> I know they use the same 48-bit fixed mixer. I load up the same file
>>>> in Nuendo (no eq, zero fader)... results: different sonic character.
>>>>
>>>> PT having a top end touch; Nuendo, a nice smooth (flat) sound. And
>>>> I'm just talking about a stereo wav file nulled with no
>>>> eq... nothing... zilch... nada...
>>>>
>>>> Now, there are devices (keyboards, drum machines) on the market
>>>> today that have a master buss compressor and EQ set to on, with the
>>>> top end notched up. Why? Because it gives their product a
>>>> competitive advantage over the competition.
>>>> Ex: Yamaha's Motif ES, Akai's MPC 1000, 2500, Roland's Fantom.
>>>>
>>>> So, why wouldn't a DAW manufacturer code in an extra (ooommf) to
>>>> make their DAW sound better? Especially given the "I hate digital
>>>> summing" crowd. And if I'm a DAW manufacturer, what would give my
>>>> product a sonic edge over the competition?
>>>>
>>>> We live in the "louder is better" audio world these days, so a DAW
>>>> that can catch my attention sonically will probably get the sale.
>>>> That's what happened to me back in 1997 when I heard Paris. I was
>>>> floored!!! Still to this day, nothing has floored me like that
>>>> "Road House Blues Demo" I heard on Paris.
>>>>
>>>> Was it the hardware? Was it the software? I remember talking with
>>>> Edmund at the 2000 winter NAMM, and he told me that he & Steve set
>>>> out to reproduce the sonics of a big-buck analog board, EQs and all.
>>>> And summing was a big, big issue for them because they (ID) thought
>>>> that nobody had gotten it (summing) right. And by right, they meant
>>>> it behaved like a console with a wide lane for all of those tracks.
>>>>
>>>>
>>>>
>>>>
>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>"LaMont" <jjdpro@ameritech.net> wrote in message
>>>>>news:458be8d5$1@linux...
>>>>>>
>>>>>> Okay...
>>>>>> I guess what I'm saying is this:
>>>>>>
>>>>>> - Is it possible that different DAW manufacturers "code" their
>>>>>> apps differently for sound results?
>>>>>
>>>>>Of course it is *possible* to do this, but only if the DAW has a
>>>>>specific sound shaping purpose beyond normal summing/mixing. Users
>>>>>talk about wanting developers to add a "Neve sound" or "API sound"
>>>>>option to summing engines, but that's really impractical given the
>>>>>amount of dsp required to make a decent emulation (with convolution,
>>>>>dynamic EQ functions, etc). For the sake of not eating up all cpu
>>>>>processing, that would likely only surface as a built-in EQ, which
>>>>>no one wants universally in summing, and anyone can add at will
>>>>>already.
>>>>>
>>>>>So it hasn't happened yet and isn't likely to, as it detours from
>>>>>the basic tenet of audio recording - recreate what comes in as
>>>>>accurately as possible.
>>>>>
>>>>>What Digi did in recoding their summing engine was try to recover
>>>>>some of the damage done by the 24-bit buss in Mix systems. Motorola
>>>>>56k dsps are 24-bit fixed point chips, and I think the new
>>>>>generation (321?) still is, but they use double words now for 48
>>>>>bits. And though plugins could process at 48-bit by doubling up and
>>>>>using upper and lower 24-bit words for 48-bit outputs, the buss
>>>>>between chips was 24 bits, so they had to dither to 24 bits after
>>>>>every plugin. The mixer (if I recall correctly) also had a 24-bit
>>>>>buss, so what Digi did is add a dither stage to the mixer to prevent
>>>>>this constant truncation of data. 24 bits isn't enough to cover
>>>>>summing for more than a few tracks without losing information in the
>>>>>16-bit world, and in the 24-bit world some information will be lost,
>>>>>at least at the lowest levels.
>>>>>
>>>>>Adding a dither stage (though I think they did more than that -
>>>>>perhaps implement a 48-bit double word stage as well) simply
>>>>>smoothed over the truncation that was happening, but it didn't solve
>>>>>the problem, so with HD they went to a double-word path - throughout,
>>>>>I believe, including the path between chips. I believe the chips
>>>>>are still 24-bit, but by doubling up the processing (yes, at a cost
>>>>>of twice the overhead), they get a 48-bit engine. This not only
>>>>>provided better headroom, but greater resolution. Higher bit depths
>>>>>subdivide the amplitude with greater resolution, and that's really
>>>>>where we get the definition of dynamic range - by lowering the
>>>>>signal to quantization noise ratio.
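The resolution point above follows from the usual rule of thumb that each bit of word length adds about 6.02 dB of dynamic range - a quick check:

```python
import math

# Rule of thumb referenced in the thread: each bit of word length adds
# about 6.02 dB of dynamic range (20*log10(2) per bit).

def dynamic_range_db(bits):
    """Approximate dynamic range of an N-bit fixed-point word, in dB."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24, 48):
    print(bits, round(dynamic_range_db(bits), 1))
# 16 -> 96.3, 24 -> 144.5, 48 -> 289.0
```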
>>>>>
>>>>>With DAWs that use 32-bit floating point math all the way through,
>>>>>the only reason for altering the summing is by error, and that's an
>>>>>error that would actually be hard to make and get past a very basic
>>>>>alpha stage of testing. There is a small difference between fixed
>>>>>point math and floating point math, or at least a theoretical
>>>>>difference in how it affects audio in certain cases, but not
>>>>>necessarily in the result of calculating gain in either for the same
>>>>>audio file. Where any differences might show up is complicated, and
>>>>>I believe they only appear at levels below 24-bit (or in headroom
>>>>>with tracks pushed beyond 0dBFS), or when/if there are any
>>>>>differences in where each amplitude level is quantized.
>>>>>
>>>>>Obviously there can be differences if the DAW has to use varying
>>>>>bit depths throughout a single summing path to accommodate hardware
>>>>>as well as software summing, since there may be truncation or
>>>>>rounding along the way, but that impacts the lowest bit level, and
>>>>>hence - spatial reproduction, reverb tails perhaps, and "depth" -
>>>>>not the levels of most music, so the differences are more often
>>>>>subtle than not. But most modern DAWs have eliminated those "rough
>>>>>edges" in the math by increasing the bit depth to accommodate the
>>>>>normal summing required for mixing audio.
>>>>>
>>>>>So with Lynn's unity gain summing test (the A files on the CD, I
>>>>>believe), DAWs were never asked to sum beyond 24 bits, at least not
>>>>>on the upper end of the dynamic range, so everything that could
>>>>>represent 24 bits accurately would cancel. The only ones that
>>>>>didn't were ones that had a different bit depth and/or gain
>>>>>structure, whether hybrid or native (e.g. Paris' subtracting 20dB
>>>>>from tracks and adding it to the buss). In this case, PTHD
>>>>>cancelled (when I tested it) with Nuendo, Samplitude, Logic, etc,
>>>>>because the impact of 48-bit fixed vs. 32-bit float wasn't a factor.
>>>>>
>>>>>When trying other tests, even when adding and subtracting gain,
>>>>>Nuendo, Sequoia and Sonar cancel - both audibly and visually at
>>>>>inaudible levels - which only proves that none of them is making an
>>>>>error when calculating basic gain. Since a dB is well defined, and
>>>>>the math to add gain is simple, they shouldn't. The fact that they
>>>>>all use 32-bit float all the way through eliminates a difference in
>>>>>data structure as well, and this just verifies that. There was a
>>>>>time that supposedly Logic (v3, v4?) was partly 24-bit, or so the
>>>>>rumor went, but it's 32-bit float all the way through now, just as
>>>>>Sonar, Nuendo/Cubase, Samplitude/Sequoia, DP, and Audition are (I
>>>>>presume, at least). I don't know what Acid or Live use. SAW
>>>>>promotes a fixed point engine, but I don't know if it is still
>>>>>24-bit, or now 48-bit. That was an intentional choice by the
>>>>>developer, but he's the only one I know of that stuck with 24-bit
>>>>>for summing intentionally, esp. after the Digi Mix system mixer
>>>>>incident.
>>>>>
>>>>>Long answer, but to sum up: it is certainly physically *possible*
>>>>>for a developer to code something differently intentionally, but
>>>>>not likely in reality, since it would be breaking some basic fixed
>>>>>point or floating point math rules. Where the differences really
>>>>>showed up in the past is with PT Mix systems, where the limitation
>>>>>was really significant - e.g. 24-bit with truncation at several
>>>>>stages.
>>>>>
>>>>>That really isn't such an issue anymore. Given the differences in
>>>>>workflow, missing something in workflow or layout differences is
>>>>>easy enough to do (e.g. Sonar doesn't have groups and busses the way
>>>>>Nuendo does, as its outputs are actually driver outputs, not
>>>>>software busses; so busses in Sonar are actually outputs, and sub
>>>>>busses in Sonar are actually what Nuendo calls busses. There is no
>>>>>equivalent of a Nuendo group in Sonar, or at least I haven't found
>>>>>one - that affects the results of some tests (though not basic
>>>>>summing) if not taken into account, but when taken into account,
>>>>>they work exactly the same way).
>>>>>
>>>>>So, at least when talking about apps with 32-bit float all the way
>>>>>through, it's safe to say (since it has been proven) that summing
>>>>>isn't different unless there is an error somewhere, or variation in
>>>>>how the user duplicates the same mix in two different apps.
>>>>>
>>>>>Imho, that's actually a very good thing - approaching a more
>>>>>consistent basis for recording and mixing from which users can make
>>>>>all of the decisions as to how the final product will sound, and not
>>>>>be required to decide when purchasing a pricey console, and have to
>>>>>focus their business on clients who want "that sound". I believe we
>>>>>are actually closer to the pure definition of recording now than we
>>>>>once were.
>>>>>
>>>>>Regards,
>>>>>Dedric
>>>>>
>>>>>
>>>>>>
>>>>>> If the answer is yes, then the real task is to discover, or
>>>>>> rather uncover, what is, say, Motu's vision of summing, versus
>>>>>> Digidesign's, versus Steinberg's, and so on.
>>>>>>
>>>>>> What's under the hood? To me and others, when Digi re-coded their
>>>>>> summing engine, it was obvious that Pro Tools has an obvious top
>>>>>> end (8k-10k) bump, whereas Steinberg's summing is very neutral.
>>>>>>
>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>Hi Neil,
>>>>>>>
>>>>>>>Jamie is right. And you aren't wacked out - you are thinking
>>>>>>>this through in a reasonable manner, but coming to the wrong
>>>>>>>conclusion - easy to do given how confusing digital audio can be.
>>>>>>>Each word represents an amplitude point on a single curve that is
>>>>>>>changing over time, and can vary with a speed up to the Nyquist
>>>>>>>frequency (as Jamie described). The complex harmonic content we
>>>>>>>hear is actually the frequency modulation of a single waveform
>>>>>>>that, over a small amount of time, creates the sound we translate
>>>>>>>- we don't really hear a single sample at a time, but thousands of
>>>>>>>samples at a time (1 sample alone could at most represent a single
>>>>>>>positive or negative peak of a 22,050Hz waveform).
>>>>>>>
>>>>>>>If one bit doesn't cancel, esp. if it's a higher order bit than
>>>>>>>number 24, you may hear it, and will see it easily - and the
>>>>>>>higher the bit in the dynamic range (higher order), the more
>>>>>>>audible the difference. Since each bit is 6dB of dynamic range,
>>>>>>>you can extrapolate how "loud" that bit's impact will be if there
>>>>>>>is a variation.
>>>>>>>
>>>>>>>Now, obviously if we are talking about 1 sample in a 44.1k rate
>>>>>>>song, then it would simply be a click (only audible if it's a high
>>>>>>>enough order bit) instead of an obvious musical difference, but
>>>>>>>that should never happen in a phase cancellation test between
>>>>>>>identical files higher than bit 24, unless there are clock sync
>>>>>>>problems, driver issues, or the DAW is an early alpha version. :-)
>>>>>>>
>>>>>>>By definition of what DAWs do during playback and record, every
>>>>>>>audio stream has the same point in time (judged by the timeline)
>>>>>>>played back sample accurately, one word at a time, at whatever
>>>>>>>sample rate we are using. A phase cancellation test uses that
>>>>>>>fact to compare two audio files word for word (and hence bit for
>>>>>>>bit, since each bit of a 24-bit word would be at the same bit slot
>>>>>>>in each 24-bit word). Assuming they are aligned to the same start
>>>>>>>point, sample accurately, and both are the same set of sample
>>>>>>>words at each sample point, bit for bit, and one is phase
>>>>>>>inverted, they will cancel through all 24 bits. For two files to
>>>>>>>cancel completely for the duration of the file, each and every bit
>>>>>>>in each word must be the exact opposite of that same bit position
>>>>>>>in a word at the same sample point. This is why zooming in on an
>>>>>>>FFT of the full difference file is valuable, as it can show any
>>>>>>>differences in the lower order bits that wouldn't be audible. So
>>>>>>>even if there is no audible difference, the visual followup will
>>>>>>>show whether the two files truly cancel even at levels below
>>>>>>>hearing, or outside of a frequency change that we will perceive.
>>>>>>>
>>>>>>>When they don't cancel, usually there will be way more than 1 bit
>>>>>>>of difference - it's usually one or more bits in the words for
>>>>>>>thousands of samples. From a musical standpoint this is usually
>>>>>>>in a frequency range (low freq, or high freq most often) that will
>>>>>>>show up as the difference between them, and that usually happens
>>>>>>>due to some form of processing difference between the files, such
>>>>>>>as EQ, compression, frequency dependent gain changes, etc. That
>>>>>>>is what I believe you are thinking through, but when talking about
>>>>>>>straight summing with no gain change (or known equal gain
>>>>>>>changes), we are only looking at linear, one-for-one comparisons
>>>>>>>between the two files' frequency representations.
>>>>>>>
>>>>>>>Regards,
>>>>>>>Dedric
>>>>>>>
>>>>>>>> Neil wrote:
>>>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>>> The tests I did were completely blank down to -200 dB (far
>>>>>>>>>> below the last bit). It's safe to say there is no difference,
>>>>>>>>>> even in quantization noise, which by technical rights is
>>>>>>>>>> considered below the level of "cancellation" in such tests.
>>>>>>>>>
>>>>>>>>> I'm not necessarily talking about just the first bit or the
>>>>>>>>> last bit, but also everything in between... what happens on bit
>>>>>>>>> #12, for example? Everything on bit #12 should be audible, but
>>>>>>>>> in an a/b test, what if there are differences in what bits #8
>>>>>>>>> through #12 sound like, but the amplitude is still the same on
>>>>>>>>> both files at that point - you'll get a null, right? Extrapolate
>>>>>>>>> that out somewhat & let's say there are differences in bits #8
>>>>>>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>>>>>>> etc through 43,972... Now this is breaking things down well
>>>>>>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>>>>>>> don't know of any way we could extract JUST that information
>>>>>>>>> from each file & play it back for an a/b test); but would not
>>>>>>>>> that be enough to have two "null-able" files that do actually
>>>>>>>>> sound somewhat different?
>>>>>>>>>
>>>>>>>>> I guess what I'm saying is that since each sample in a musical
>>>>>>>>> track or full song file doesn't represent a pure, simple set of
>>>>>>>>> content like a sample of a sine wave would - there's a whole
>>>>>>>>> world of harmonic structure in each sample of a song file, and
>>>>>>>>> I think (although I'll admit - I can't "prove") that there is
>>>>>>>>> plenty of room for some variables between the first bit & the
>>>>>>>>> last bit while still allowing for a null test to be successful.
>>>>>>>>>
>>>>>>>>> No? Am I wacked out of my mind?
>>>>>>>>>
>>>>>>>>> Neil
>>>>>>>>>
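Neil's question and Dedric's answer boil down to this: a null test compares the two streams sample by sample, so bit-identical files cancel exactly, and any processing difference, at any bit position, leaves a nonzero residue across thousands of samples. A minimal editorial sketch in pure Python, simulating 32-bit float samples:

```python
import math
import struct

def f32(x):
    """Round-trip through 32-bit float to mimic single-precision samples."""
    return struct.unpack('f', struct.pack('f', x))[0]

sr = 44100
track_a = [f32(0.5 * math.sin(2 * math.pi * 1000 * n / sr)) for n in range(sr)]
track_b = list(track_a)  # a bit-identical copy

# Invert one copy and sum: bit-identical streams cancel to exactly zero
null_peak = max(abs(a - b) for a, b in zip(track_a, track_b))
print(null_peak)  # 0.0 - a perfect null, "blank down to -200 dB" and below

# Any processing difference breaks the null - here a 0.1 dB gain change,
# which touches low-order bits across thousands of samples at once
gain = 10 ** (0.1 / 20)
track_c = [f32(s * gain) for s in track_a]
residue_peak = max(abs(a - c) for a, c in zip(track_a, track_c))
print(residue_peak > 0)  # True - the difference file is no longer silent
```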
Re: (No subject)...What's up under the hood? [message #77353 is a reply to message #77343]
Sat, 23 December 2006 08:06
chuck duffy
Messages: 453 Registered: July 2005
Senior Member
Fredo's post was really good. At some point, for performance reasons,
developers may choose to implement accumulators in CPU registers. This is
most likely what Fredo calls an adder, I think.

It's possible that developers of other products don't do this, but the
tradeoff would be increased CPU and FPU utilization.

Fredo said there are three points where truncation occurs. In PARIS, at
least to my understanding, there are many more than that.

You're not supposed to be able to hear this truncation distortion, but who
the hell knows - and that's not math :-)

Chuck
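Chuck's point about accumulator width can be illustrated with a toy sum (a sketch, not any DAW's actual engine): rounding the running total to 32-bit float after every add accumulates small errors that a wide accumulator, rounded only once at the end, avoids.

```python
import random
import struct

def f32(x):
    """Quantize a Python float to IEEE 754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

random.seed(1)
tracks = [f32(random.uniform(-0.1, 0.1)) for _ in range(1000)]

# Narrow path: the running sum is rounded to 32-bit float after every add
narrow = 0.0
for s in tracks:
    narrow = f32(narrow + s)

# Wide path: accumulate in double precision (standing in for the wide
# registers Fredo describes), round to 32-bit float only once at the end
wide = f32(sum(tracks))

# The two typically differ by accumulated low-order rounding - inaudible
# by design, but measurable; this is where truncation residue lives
print(abs(narrow - wide))
```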
"LaMOnt" <jjdpro@ameritech.net> wrote:
>
>Dedric, check out this post from our dear friend Fredo, a Nuendo
>moderator, explaining how Steinberg's audio engine works. Note the
>trade-offs - meaning, Steinberg's way of coding a 32-bit float audio
>engine is different from, say, Magix Samplitude's:
>
>Fredo
>Administrative Moderator
>
>
>Joined: 29 Dec 2004
>Posts: 4213
>Location: Belgium
> Posted: Fri Dec 08, 2006 2:33 pm Post subject:
>
> --------------------------------------------------------------------------------
>
>I think I see where the problem is.
>In my scenarios I don't have any track that goes over 0dBFS, but I have
>always lowered one channel to compensate with another.
>So, I never went over the 0dBFS limit.
>
>Here's the explanation:
>
>As soon as you go over 0dB, technically you are entering the domain of distortion.
>
>In a 32bit FP mixer, that is not the case since there is unlimited headroom.
>
>
>Now follow me step by step please - read this slowly and make sure you understand -
>
>At the end of each "stage", there is an adder (a big calculator) which adds
>all the numbers from the individual tracks that are routed to this "adder".
>
>The numbers are kept in the 80-bit registers and then brought back to 32bit
>float.
>This process of bringing back the numbers from 80-bit (and more) to 32bit
>is kept to an absolute minimum.
>This adding/bringing back to 32bit is done at 3 places: After a plugin slot
>(VST-specs for all plugin manufacturers) - Group Tracks and Master Tracks.
>
>
>Now, as soon as you boost the volume above 0dB, you get more than 32bits.
>Stay below 0dB and you will stay below 32 bits.
>When the adders dump their results, the numbers are brought back from any
>number of bits (say 60bit) to 32 bit float.
>These numbers are simply truncated which results in distortion; that's the
>noise/residue you find way down low.
>There is an algorithm that protects us from additive errors - so these errors
>can never come into the audible range.
>So, as soon as you go over 0dB, you will see these kind of artifacts.
>
>It is debatable if this needs to be dithered or not. The problem -still is-
>that it is very difficult to dither in a floating point environment.
>Fact remains that the error shouldn't be bigger than 2 to 3 LSBs.
>
>Is this a problem?
>In real world applications: NO.
>In scientific -unrealistic- tests (forcing the error): YES.
>
>The alternative is having a Fixed point mixer, where you already would be
>in trouble as soon as you boost one channel over 0dBfs. (or merge two files
>that are @ 0dB)
>Also, this problem will be pretty much gone as soon as we switch to the 64
>bit engine.
>
>
>For the record, the test where Jake hears "music" as residue must be flawed.
>You should hear noise/distortion from square waves.
>
>HTH
>
>Fredo
>
>
>
>
>
>"Dedric Terry" <dedric@echomg.com> wrote:
>>I can't tell you why you hear ProTools differently than Nuendo using a
>>single file.
>>There isn't any voodoo in the software, or hidden character enhancing dsp.
>>I'll see if
>>I can round up an M-Powered system to compare with next month.
>>
>>For reference, every time I open Sequoia I think I might hear a broader,
>>clean,
>>and almost flat (spectrum, not depth) sound, but I don't - it's the same as
>>Nuendo, fwiw.
>>Also I don't think what I was referring to was a theory from Chuck - I
>>believe that was what he
>>discovered in the code.
>>
>>Digital mixers all have different preamps and converters. Unless you are
>>bypassing every
>>EQ and converter and going digital in and out to the same converter when
>>comparing, it would be hard
>>to say the mix engine itself sounds different than another mixer, but taken
>>as a whole, then
>>certainly they may very well sound different. In addition, hardware digital
>>mixers may use a variety of different paths between the I/O, channel
>>processing, and summing,
>>though most are pretty much software mixers on a single chip or set of dsps
>>similar to ProTools,
>>with I/O and a hardware surface attached.
>>
>>I know it may be hard to separate the mix engine as software in either a
>>native DAW
>>or a digital mixer, from the hardware that translates the audio to something
>>we hear,
>>but that's what is required when comparing summing. The hardware can
>>significantly change
>>what we hear, so comparing digital mixers really isn't of as much interest
>>as comparing native
>>DAWs in that respect - unless you are looking to buy one of course.
>>
>>Even though I know you think manufacturers are trying to add something to
>>give them an edge, I am 100%
>>sure that isn't the case - rather they are trying to add or change as little
>>as possible in order to give
>>them the edge. Their end of digital audio isn't about recreating the past,
>>but improving upon it.
>>As we've discussed and agreed before, the obsession with recreating
>>"vintage" technology is as much
>>fad as it is a valuable creative asset. There is no reason we shouldn't
>>have far superior hardware and software EQs and comps
>>than 20, 30 or 40 years ago. No reason at all, other than market demand,
>>but the majority of software, and new
>>hardware gear on the market has a vintage marketing tagline with it.
>>Companies will sell any bill of
>>goods if customers will buy it.
>>
>>There's nothing unique about the summing in Nuendo, Cubase, Sequoia/Samp,
>>or Sonar, and it's pretty safe to include Logic and DP in that list as well.
>>One of the reasons I test
>>these things is to be sure my DAW isn't doing something wrong, or something
>>I don't know about.
>>
>>Vegas - I use it for video conversions and have never done any critical
>>listening tests with it. What I have heard
>>briefly didn't sound any different. It certainly looks plain vanilla
>>though. What you are describing is exactly
>>what I would say about the GUIs of each of those apps, not that it means
>>anything. Just interesting.
>>
>>That's one reason I listen eyes closed and double check with phase
>>cancellation tests and FFTs - I am
>>influenced creatively by the GUI to some degree. I actually like Cubase 4's
>>GUI better than Nuendo 3.2,
>>though there are only slight visual differences (some workflow differences
>>are a definite improvement for me though).
>>
>>ProTools' GUI always made me want to write one dimensional soundtracks in
>>mono for public utilities, accounting offices
>>or the IRS while reading my discrete systems analysis textbook - it was also
>>grey. ;-)
>>
>>Regards,
>>Dedric
>>
>>"LaMont" <jjdpro@ameritech.net> wrote in message news:458c82fd$1@linux...
>>>
>>> Dedric, my simple test is simple..
>>> Using the same audio interface, with the same stereo file..null-ed to zero..No
>>> eq, no fx. Master fader on zero..
>>>
>>> Nuendo, Pro-Tools M-Powered (native)... yields a sonic difference that I
>>> have
>>> referenced before.. The sound coming from PT-M has a nice top end, whereas
>>> Nuendo has a nice flatter sound quality.
>>> Same audio interface. M-audio 410..Using Mackies & Blue-Sky pro monitors..
>>>
>>> Same test at the big room..PT-HD & Nuendo, Logic Audio (Mac G5 Dual), using the
>>> 192 interface.
>>> Same results..But adding Logic Audio's sound..(Broad, thick)
>>>
>>> Somethings going on.
>>>
>>> Chuck's post about how Paris handles audio is a theory..Only Edmund can truly
>>> give us the goods on what's really what..
>>>
>>> I disagree that manufacturers don't set out to put a sonic print on their
>>> products.
>>> I think they do.
>>>
>>> I have been fortunate to work on some digital mixers and I can tell you that
>>> each one has its own sound. The Sony DMX-100 was modeled after the SSL 4000G
>>> (like its big brother). And you know what? That board (DMX-100) sounds very warm
>>> and its EQ tries to behave and sound just like an SSL.. Unlike the Yamaha
>>> DM2000 (version 1.x) which has a very clean, neutral sound..However, some
>>> complained that it was too vanilla, and thus Yamaha added a version 2.0 which
>>> added vintage-type EQs and modeled analog input gain saturation fx to give
>>> the user a choice between clean and neutral vs sonic character.
>>>
>>> So, if digital consoles can be given a sonic character, why not a software
>>> mixer?
>>> The truth is, there are some folks who want a neutral mixer and then there
>>> are others who want a sonic footprint imparted, and these can be coded in
>>> the digital realm.
>>> The same applies to the manufacturers. They too have their vision of what they
>>> think and want their product to sound like.
>>>
>>> I love reading on Gearslutz the posts from plugin developers and their
>>> interpretations
>>> and opinions about what makes their Neve 1073 EQ better and what goes into
>>> making their version sound like it does.. Each developer has a different
>>> vision as to what the Neve 1073 should sound like. And yet they all sound
>>> good, but slightly different.
>>>
>>> You stated that you use Vegas. Well as you know, Vegas has a very generic
>>> sound..Just plain and simple. But, I bet you can tell the difference on
>>> your system when you play that same file in Nuendo (no fx, no eq,
>>> nulled to zero)..
>>> ???
>>>
>>>
>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>Lamont - what is the output chain you are using for each app when
>>>>comparing
>>>
>>>>the file in Nuendo
>>>>vs ProTools? On the same PC, I presume (and is this PT HD or M-Powered?)?
>>>>Since these can't use the same output driver, you would have to depend on
>>>>the D/A being
>>>>the same, but clocking will be different unless you have a master clock,
>>> and
>>>>both interfaces
>>>>are locking with the same accuracy. This was one of the issues that came up
>>>>at Lynn Fuston's
>>>>D/A converter shootout - when do you lock to external clock and incur the
>>>>resulting jitter,
>>>>and when do you trust the internal clock - and if you do lock externally,
>>>
>>>>how good is the PLL
>>>>in the slave device? These issues can cause audible changes in the top
>>> end
>>>>that have nothing to do
>>>>with the software itself. If you say that PTHD through the same converter
>>>
>>>>output as Nuendo (via? RME?
>>>>Lynx?) using the same master clock, sounds different playing a single
>
>>>>audio
>>>
>>>>file, then I take your word
>>>>for it. I can't tell you why that is happening - only that an audible
>>>>difference really shouldn't happen due
>>>>to the software alone - not with a single audio file, esp. since I've
>
>>>>heard
>>>
>>>>and seen PTHD audio cancel with
>>>>native DAWs. Just passing a single 16 or 24 bit track down the buss
>to
>>> the
>>>>output driver should
>>>>be, and usually is, completely transparent, bit for bit.
>>>>
>>>>The same audio file played through the same converters should only sound
>>>
>>>>different if something in
>>>>the chain is different - be it clocking, gain or some degree of
>>>>unintended,
>>>
>>>>errant dsp processing. Every DAW should
>>>>pass a single audio file without altering a single bit. That's a basic
>
>>>>level
>>>
>>>>of accuracy we should always
>>>>expect of any DAW. If that accuracy isn't there, you can be sure a heavy
>>>
>>>>mix will be altered in ways you
>>>>didn't intend, even though you would end up mixing with that factor in
>
>>>>place
>>>
>>>>(e.g. you still mix for what
>>>>you want to hear regardless of what the platform does to each audio track
>>> or
>>>>channel).
>>>>
>>>>In fact you should be able to send a stereo audio track out SPDIF or
>>>>lightpipe to another DAW, record it
>>>>bring the recorded file back in, line them up to the first bit, and have
>>>
>>>>them cancel on and inverted phase
>>>>test. I did this with Nuendo and Cubase 4 on separate machines just
to
>>> be
>>>>sure my master clocking and
>>>>slave sync was accurate - it worked perfectly.
>>>>
>>>>Also be sure there isn't a variation in the gain even by 0.1 dB between
>>> the
>>>>two. There shouldn't
>>>>and I wouldn't expect there to be one. Also could PT be set for a
>>>>different
>>>
>>>>pan law? Shouldn't make a
>>>>difference even if comparing two mono panned files to their stereo
>>>>interleaved equivalent, but for sake
>>>>of completeness it's worth checking as well. A variation in the output
>>>
>>>>chain, be it drivers, audio card
>>>>card, or converters would be the most likely culprit here.
>>>>
>>>>The reason DAW manufacturers wouldn't add any sonic "character"
>>>>intentionally is that the
>>>>ultimate goal from day one with recording has been to accurately reproduce
>>>
>>>>what we hear.
>>>>We developed a musical penchant for sonic character because the hardware
>>>
>>>>just wasn't accurate,
>>>>and what it did often sent us down new creative paths - even if by force
>>> -
>>>>and we decided it was
>>>>preferred that way.
>>>>
>>>>Your point about what goes into the feature presets to sell synths is
>
>>>>right
>>>
>>>>for sure, but synths are about
>>>>character and getting that "perfect piano" or crystal clear bell pad,
>or
>>> fat
>>>>punchy bass without spending
>>>>a mint on development, adding 50G onboard sample libraries, or costing
>
>>>>$15k,
>>>
>>>>so what they
>>>>lack in actual synthesis capabilities, they make up with EQ and effects
>>> on
>>>>the output. That's been the case
>>>>for years, at least since we had effects on synths at least. But even
>
>>>>with
>>>
>>>>modern synths such as the Fantom,
>>>>Tritons, etc, which are great synths all around, of course the coolest,
>>>
>>>>widest and biggest patches
>>>>will make the biggest impression - so in come the EQs, limiters, comps,
>>>
>>>>reverbs, chorus, etc. The best
>>>>way to find out if a synth is really good is to bypass all effects and
>see
>>>
>>>>what happens. Most are pretty
>>>>good these days, but about half the time, there are presets that fall
>>>>completely flat in fx bypass.
>>>>
>>>>DAWs aren't designed to put a sonic fingerprint on a sound the way synths
>>>
>>>>are - they are designed
>>>>to *not* add anything - to pass through what we create as users, with
>no
>>>
>>>>alteration (or as little as possible)
>>>>beyond what we add with intentional processing (EQ, comps, etc).
>>>>Developers
>>>
>>>>would find no pride
>>>>in hearing that their DAW sounds anything different than whatever is
being
>>>
>>>>played back in it,
>>>>and the concept is contrary to what AES and IEEE proceedings on the issue
>>>
>>>>propose in general
>>>>digital audio discussions, white papers, etc.
>>>>
>>>>What ID ended up doing with Paris (at least from what I gather per Chuck's
>>>
>>>>findings - so correct me if I'm missing part of the equation Chuck),
>>>>is drop the track gain by 20dB or so, then add it back at the master buss
>>>>to create the effect of headroom (probably
>>>>because the master buss is really summing on the card, and they have
more
>>>
>>>>headroom there than on the tracks
>>>>where native plugins might be used). I don't know if Paris passed 32-bit
>>>
>>>>float files to the EDS card, but sort of
>>>>doubt it. I think Chuck has clarified this at one point, but don't recall
>>>
>>>>the answer.
>>>>
>>>>Also what Paris did is use a greater bit depth on the hardware than
>>>>ProTools
>>>
>>>>did - at the time PT was just
>>>>bring Mix+ systems to market, or they had been out for a year or two
(if
>>> I
>>>>have my timeline right) - they
>>>>were 24-bit fixed all the way through. Logic and Cubase were native
DAWs,
>>>
>>>>but native was still too slow
>>>>to compete with hardware hybrids. Paris trumped them all by running
>>>>32-bit
>>>
>>>>float natively (not new really, but
>>>>better than sticking to 24-bit) and 56 or so bits in hardware instead
>of
>>>
>>>>going to Motorola DSPs at 24.
>>>>The onboard effects were also a step up from anything out there, so the
>>> demo
>>>>did sound good.
>>>>I don't recall which, but one of the demos, imho, wasn't so good (some
>>>>sloppy production and
>>>>vocals in spots, IIRC), so I only listened to it once. ;-)
>>>>
>>>>Coupled with the gain drop and buss makeup, this all gave it a "headroom"
>>> no
>>>>one else had. With very nice
>>>>onboard effects, Paris jumped ahead of anything else out there easily,
>and
>>>
>>>>still respectably holds its own today
>>>>in that department.
>>>>
>>>>Most demos I hear (when I listen to them) vary in quality, usually not
>so
>>>
>>>>great in some area. But if a demo does
>>>>sound great, then it at least says that the product is capable of at
>
>>>>least
>>>
>>>>that level of performance, and it can
>>>>only help improve a prospective buyer's impression of it.
>>>>
>>>>Regards,
>>>>Dedric
>>>>
>>>>"LaMont " <jjdpro@ameritech.net> wrote in message news:458c14c0$1@linux...
>>>>>
>>>>> Dedric good post..
>>>>>
>>>>> However, I have a PT M-Powered/M-Audio 410 interface for my laptop and it has
>>>>> that same sound (no eq, zero fader) that HD does. I know they use the same
>>>>> 48 bit fixed mixer. I load up the same file in Nuendo (no eq, zero
>>>>> fader)..results.
>>>>> different sonic character.
>>>>>
>>>>> PT having a top end touch..Nuendo, nice smooth (flat) sound. And I'm just
>>>>> talking about a stereo wav file nulled with no eq..nothing
>>>>> ..zilch..nada..
>>>>>
>>>>> Now, there are devices (keyboards, drum machines) on the market today that
>>>>> have a Master Buss Compressor and EQ set to on with the top end notched up.
>>>>> Why? Because it gives their product a competitive advantage over the
>>>>> competition..
>>>>> Ex: Yahama's Motif ES, Akai's MPC 1000, 2500, Roland's Fantom.
>>>>>
>>>>> So, why wouldn't a DAW manufacturer code in an extra (ooommf) to make their
>>>>> DAW sound better? Especially, given the "I hate Digital Summing" crowd? And,
>>>>> if I'm a DAW manufacturer, what would give my product a sonic edge over the
>>>>> competition?
>>>>>
>>>>> We live in the "louder is better" audio world these days, so a DAW that can
>>>>> catch my attention "sonically" will probably get the sale. That's what
>>>>> happened to me back in 1997 when I heard Paris. I was floored!!! Still to
>>>>> this day, nothing has floored me like that "Road House Blues Demo" I heard
>>>>> on Paris.
>>>>>
>>>>> Was it the hardware? Was it the software? I remember talking with
>>>>> Edmund
>>>>> at the 2000 winter NAMM, and he told me that he & Steve set out to
>>>>> reproduce
>>>>> the sonics of a big buck analog board (eq's) and all.. And, summing was a big
>>>>> big issue for them because they (ID) thought that nobody had gotten
>>>>> it (summing)
>>>>> right. And by right, they meant, behaved like a console with a wide lane
>>>>> for all of those tracks..
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>"LaMont" <jjdpro@ameritech.net> wrote in message
>>>>>>news:458be8d5$1@linux...
>>>>>>>
>>>>>>> Okay...
>>>>>>> I guess what I'm saying is this:
>>>>>>>
>>>>>>> -Is it possible that diferent DAW manufactuers "code" their app
>>>>>>> differently
>>>>>>> for sound results.
>>>>>>
>>>>>>Of course it is *possible* to do this, but only if the DAW has a
>>>>>>specific
>>>>>
>>>>>>sound shaping purpose
>>>>>>beyond normal summing/mixing. Users talk about wanting developers
to
>>> add
>>>>> a
>>>>>>"Neve sound" or "API sound" option to summing engines,
>>>>>>but that's really impractical given the amount of dsp required to make
>>> a
>>>>>
>>>>>>decent emulation (with convolution, dynamic EQ functions,
>>>>>>etc). For sake of not eating up all cpu processing, that could likely
>>>
>>>>>>only
>>>>>
>>>>>>surface as is a built in EQ, which
>>>>>>no one wants universally in summing, and anyone can add at will already.
>>>>>>
>>>>>>So it hasn't happened yet and isn't likely to as it detours from the basic
>>>>>>tenet of audio recording - recreate what comes in as
>>>>>>accurately as possible.
>>>>>>
>>>>>>What Digi did in recoding their summing engine was try to recover some
>>>>>>of the damage done by the 24-bit buss in Mix systems. Motorola 56k
dsps
>>>>> are
>>>>>>24-bit fixed point chips and I think
>>>>>>the new generation (321?) still is, but they use double words now for
>>>>>>48-bits). And though plugins could process at 48-bit by
>>>>>>doubling up and using upper and lower 24-bit words for 48-bit outputs,
>>> the
>>>>>
>>>>>>buss
>>>>>>between chips was 24-bits, so they had to dither to 24-bits after every
>>>>>
>>>>>>plugin. The mixer (if I recall correctly) also
>>>>>>had a 24-bit buss, so what Digi did is to add a dither stage to the
>
>>>>>>mixer
>>>>> to
>>>>>>prevent this
>>>>>>constant truncation of data. 24-bits isn't enough to cover summing
>for
>>>>> more
>>>>>>than a few tracks without
>>>>>>losing information in the 16-bit world, and in the 24-bit world some
>>>>>>information will be lost, at least at the lowest levels.
>>>>>>
>>>>>>Adding a dither stage (though I think they did more than that - perhaps
>>>>>
>>>>>>implement a 48-bit double word stage as well),
>>>>>>simply smoothed over the truncation that was happening, but it didn't
>>>
>>>>>>solve
>>>>>
>>>>>>the problem, so with HD
>>>>>>they went to a double-word path - throughout I believe, including the
>>> path
>>>>>
>>>>>>between chips. I believe the chips
>>>>>>are still 24-bit, but by doubling up the processing (yes at a cost
of
>>>
>>>>>>twice
>>>>>
>>>>>>the overhead), they get a 48-bit engine.
>>>>>>This not only provided better headroom, but greater resolution. Higher
>>>>> bit
>>>>>>depths subdivide the amplitude with greater resolution, and that's
>>>>>>really where we get the definition of dynamic range - by lowering the
>>>
>>>>>>signal
>>>>>
>>>>>>to quantization noise ratio.
>>>>>>
>>>>>>With DAWs that use 32-bit floating point math all the way through,
the
>>>
>>>>>>only
>>>>>
>>>>>>reason for altering the summing
>>>>>>is by error, and that's an error that would actually be hard to make
>and
>>>>> get
>>>>>>past a very basic alpha stage of testing.
>>>>>>There is a small difference in fixed point math and floating point
math,
>>>>> or
>>>>>>at least a theoretical difference in how it affects audio
>>>>>>in certain cases, but not necessarily in the result for calculating
>gain
>>>>> in
>>>>>>either for the same audio file. Where any differences might show up is
>>>>>>complicated, and I believe only appear at levels below 24-bit (or in
>>>>>>headroom with tracks pushed beyond 0dBFS), or when/if
>>>>>>there are any differences in where each amplitude level is quantized.
>>>>>>
>>>>>>Obviously there can be differences if the DAW has to use varying bit
>>>>>>depths
>>>>>>throughout a single summing path to accommodate hardware
>>>>>>as well as software summing, since there may be truncation or rounding along
>>>>>>the way, but that impacts the lowest bit
>>>>>>level, and hence - spatial reproduction, reverb tails perhaps, and
>>>>>>"depth",
>>>>>>not the levels of most music, so the differences are most
>>>>>>often more subtle than not. But most modern DAWs have eliminated those
>>>>>>"rough edges" in the math by increasing the bit depth to accommodate normal
>>>>>>summing required for mixing audio.
>>>>>>
>>>>>>So with Lynn's unity gain summing test (A files on the CD I believe),
>>> DAWs
>>>>>
>>>>>>were never asked to sum beyond 24-bits,
>>>>>>at least not on the upper end of the dynamic range, so everything that
>>>
>>>>>>could
>>>>>
>>>>>>represent 24-bits accurately would cancel. The only ones
>>>>>>that didn't were ones that had a different bit depth and/or gain
>>>>>>structure
>>>>>
>>>>>>whether hybrid or native
>>>>>>(e.g. Paris' subtracting 20dB from tracks and adding it to the buss).
>>> In
>>>>>
>>>>>>this case, PTHD cancelled (when I tested it) with
>>>>>>Nuendo, Samplitude, Logic, etc because the impact of the 48-bit fixed
>>> vs.
>>>>>
>>>>>>32-bit float wasn't a factor.
>>>>>>
>>>>>>When trying other tests, even when adding and subtracting gain, Nuendo,
>>>>>
>>>>>>Sequoia and Sonar cancel - both audibly and
>>>>>>visually at inaudible levels, which only proves that one isn't making
>>> an
>>>>>
>>>>>>error when calculating basic gain. Since a dB is well defined,
>>>>>>and the math to add gain is simple, they shouldn't. The fact that
they
>>>>> all
>>>>>>use 32-bit float all the way through eliminates a difference
>>>>>>in data structure as well, and this just verifies that. There was
a
>
>>>>>>time
>>>>>
>>>>>>that supposedly Logic (v3, v4?) was partly 24-bit, or so the rumor
went,
>>>>>>but it's 32-bit float all the way through now just as Sonar,
>>>>>>Nuendo/Cubase,
>>>>>
>>>>>>Samplitude/Sequoia, DP, Audition (I presume at least).
>>>>>>I don't know what Acid or Live use. Saw promotes a fixed point engine,
>>>>> but
>>>>>>I don't know if it is still 24-bit, or now 48 bit.
>>>>>>That was an intentional choice by the developer, but he's the only
one
>>> I
>>>>>
>>>>>>know of that stuck with 24-bit for summing
>>>>>>intentionally, esp. after the Digi Mix system mixer incident.
>>>>>>
>>>>>>Long answer, but to sum up, it is certainly physically *possible* for
>>> a
>>>>>
>>>>>>developer to code something differently intentionally, but not
>>>>>>in reality likely since it would be breaking some basic fixed point
>or
>>>>>>floating point math rules. Where the differences really
>>>>>>showed up in the past is with PT Mix systems where the limitation was
>>>
>>>>>>really
>>>>>
>>>>>>significant - e.g. 24 bit with truncation at several stages.
>>>>>>
>>>>>>That really isn't such an issue anymore. Given the differences in
>>>>>>workflow,
>>>>>
>>>>>>missing something in workflow or layout differences
>>>>>>is easy enough to do (e.g. Sonar doesn't have groups and busses the way
>>>>>>Nuendo does, as its outputs are actually driver outputs,
>>>>>>not software busses, so in Sonar, busses are actually outputs, and
sub
>>>>>>busses are actually busses in Nuendo. There are no,
>>>>>>or at least I haven't found the equivalent of a Nuendo group in Sonar
>>> -
>>>>> that
>>>>>>affects the results of some tests (though not basic
>>>>>>summing) if not taken into account, but when taken into account, they
>>> work
>>>>>
>>>>>>exactly the same way).
>>>>>>
>>>>>>So at least when talking about apps with 32-bit float all the way
>>>>>>through,
>>>>>
>>>>>>it's safe to say (since it has been proven) that summing isn't different
>>>>>
>>>>>>unless
>>>>>>there is an error somewhere, or variation in how the user duplicates
>the
>>>>>
>>>>>>same mix in two different apps.
>>>>>>
>>>>>>Imho, that's actually a very good thing - approaching a more consistent
>>>>>
>>>>>>basis for recording and mixing from which users can make all
>>>>>>of the decisions as to how the final product will sound and not be
>>>>>>required
>>>>>
>>>>>>to decide when purchasing a pricey console, and have to
>>>>>>focus their business on clients who want "that sound". I believe we
>are
>>>>>
>>>>>>actually closer to the pure definition of recording now than
>>>>>>we once were.
>>>>>>
>>>>>>Regards,
>>>>>>Dedric
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> I the answer is yes, then,the real task is to discover or rather
>>>>>>> un-cover
>>>>>>> what's say: Motu's vision of summing, versus Digidesign, versus
>>>>>>> Steinberg
>>>>>>> and so on..
>>>>>>>
>>>>>>> What's under the hood. To me and others,when Digi re-coded their
>>>>>>> summing
>>>>>>> engine, it was obvious that Pro Tools has an obvious top end (8k-10k)
>>>>>
>>>>>>> bump.
>>>>>>> Where as Steinberg's summing is very neutral.
>>>>>>>
>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>Hi Neil,
>>>>>>>>
>>>>>>>>Jamie is right. And you aren't wacked out - you are thinking this
>>>>>>>>through
>>>>>>>
>>>>>>>>in a reasonable manner, but coming to the wrong
>>>>>>>>conclusion - easy to do given how confusing digital audio can be.
>
>>>>>>>>Each
>>>>>>> word
>>>>>>>>represents an amplitude
>>>>>>>>point on a single curve that is changing over time, and can vary
with
>>>>> a
>>>>>>>
>>>>>>>>speed up to the Nyquist frequency (as Jamie described).
>>>>>>>>The complex harmonic content we hear is actually the frequency
>>>>>>>>modulation
>>>>>>> of
>>>>>>>>a single waveform,
>>>>>>>>that over a small amount of time creates the sound we translate -
>we
>>>
>>>>>>>>don't
>>>>>>>
>>>>>>>>really hear a single sample at a time,
>>>>>>>>but thousands of samples at a time (1 sample alone could at most
>>>>>>>>represent
>>>>>>> a
>>>>>>>>single positive or negative peak
>>>>>>>>of a 22,050Hz waveform).
>>>>>>>>
>>>>>>>>If one bit doesn't cancel, esp. if it's a higher order bit than number
>>>>> 24,
>>>>>>>
>>>>>>>>you may hear, and will see that easily,
>>>>>>>>and the higher the bit in the dynamic range (higher order) the more
>>>>>>>>audible
>>>>>>>
>>>>>>>>the difference.
>>>>>>>>Since each bit is 6dB of dynamic range, you can extrapolate how "loud"
>>>>>
>>>>>>>>that
>>>>>>>
>>>>>>>>bit's impact will be
>>>>>>>>if there is a variation.
>>>>>>>>
>>>>>>>>Now, obviously if we are talking about 1 sample in a 44.1k rate song,
>>>>> then
>>>>>>>
>>>>>>>>it will simply be a
>>>>>>>>click (only audible if it's a high enough order bit) instead of an
>>>>>>>>obvious
>>>>>>>
>>>>>>>>musical difference, but that should never
>>>>>>>>happen in a phase cancellation test between identical files higher
>
>>>>>>>>than
>>>>>>> bit
>>>>>>>>24, unless there are clock sync problems,
>>>>>>>>driver issues, or the DAW is an early alpha version. :-)
>>>>>>>>
>>>>>>>>By definition of what DAWs do during playback and record, every audio
>>>>>
>>>>>>>>stream
>>>>>>>
>>>>>>>>has the same point in time (judged by the timeline)
>>>>>>>>played back sample accurately, one word at a time, at whatever sample
>>>>>
>>>>>>>>rate
>>>>>>>
>>>>>>>>we are using. A phase cancellation test uses that
>>>>>>>>fact to compare two audio files word for word (and hence bit for
bit
>>>
>>>>>>>>since
>>>>>>>
>>>>>>>>each bit of a 24-bit word would
>>>>>>>>be at the same bit slot in each 24-bit word). Assuming they are
>>>>>>>>aligned
>>>>>>> to
>>>>>>>>the same start point, sample
>>>>>>>>accurately, and both are the same set of sample words at each sample
>>>>>>>>point,
>>>>>>>
>>>>>>>>bit for bit, and one is phase inverted,
>>>>>>>>they will cancel through all 24 bits. For two files to cancel
>>>>>>>>completely
>>>>>>>
>>>>>>>>for the duration of the file, each and every bit in each word
>>>>>>>>must be the exact opposite of that same bit position in a word at
>the
>>>>> same
>>>>>>>
>>>>>>>>sample point. This is why zooming in on an FFT
>>>>>>>>of the full difference file is valuable as it can show any differences
>>>>> in
>>>>>>>
>>>>>>>>the lower order bits that wouldn't be audible. So even if
>>>>>>>>there is no audible difference, the visual followup will show if
the
>>> two
>>>>>>>
>>>>>>>>files truly cancel even a levels below hearing, or
>>>>>>>>outside of a frequency change that we will perceive.
>>>>>>>>
>>>>>>>>When they don't cancel, usually there will be way more than 1 bit
>>>>>>>>difference - it's usually one or more bits in the words for
>>>>>>>>thousands of samples. From a musical standpoint this is usually
in
>>> a
>>>>>>>>frequency range (low freq, or high freq most often) - that will
>>>>>>>>show up as the difference between them, and that usually happens
due
>>> to
>>>>>>> some
>>>>>>>>form of processing difference between the files,
>>>>>>>>such as EQ, compression, frequency dependant gain changes, etc. That
>>> is
>>>>>>> what
>>>>>>>>I believe you are thinking through, but when
>>>>>>>>talking about straight summing with no gain change (or known equal
>
>>>>>>>>gain
>>>>>>>
>>>>>>>>changes), we are only looking at linear, one for one
>>>>>>>>comparisons between the two files' frequency representations.
>>>>>>>>
>>>>>>>>Regards,
>>>>>>>>Dedric
>>>>>>>>
>>>>>>>>> Neil wrote:
>>>>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>>>> The tests I did were completely blank down to -200 dB (far below
>>> the
>>>>>>>
>>>>>>>>>>> last
>>>>>>>>>>
>>>>>>>>>>> bit). It's safe to say there is no difference, even in
>>>>>>>>>>> quantization noise, which by technical rights, is considered
below
>>>>> the
>>>>>>>
>>>>>>>>>>> level
>>>>>>>>>>
>>>>>>>>>>> of "cancellation" in such tests.
>>>>>>>>>>
>>>>>>>>>> I'm not necessarily talking about just the first bit or the
>>>>>>>>>> last bit, but also everything in between... what happens on bit
>>>>>>>>>> #12, for example? Everything on bit #12 should be audible, but
>>>>>>>>>> in an a/b test what if there are differences in what bits #8
>>>>>>>>>> through #12 sound like, but the amplitude is still the same on
>>>>>>>>>> both files at that point, you'll get a null, right? Extrapolate
>>>>>>>>>> that out somewhat & let's say there are differences in bits #8
>>>>>>>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>>>>>>>> etc through 43,972... Now this is breaking things down well
>>>>>>>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>>>>>>>> don't know of any way we could extract JUST that information
>>>>>>>>>> from each file & play it back for an a/b test); but would not
>>>>>>>>>> that be enough to have two "null-able" files that do actually
>>>>>>>>>> sound somewhat different?
>>>>>>>>>>
>>>>>>>>>> I guess what I'm saying is that since each sample in a musical
>>>>>>>>>> track or full song file doesn't represent a pure, simple set of
>>>>>>>>>> content like a sample of a sine wave would - there's a whole
>>>>>>>>>> world of harmonic structure in each sample of a song file, and
>>>>>>>>>> I think (although I'll admit - I can't "prove") that there is
>>>>>>>>>> plenty of room for some variables between the first bit & the
>>>>>>>>>> last bit while still allowing for a null test to be successful.
>>>>>>>>>>
>>>>>>>>>> No? Am I wacked out of my mind?
>>>>>>>>>>
>>>>>>>>>> Neil
>>>>>>>>>>
Re: (No subject)...What's up inder the hood? [message #77360 is a reply to message #77343] |
Sat, 23 December 2006 09:39 |
Dedric Terry
Messages: 788 Registered: June 2007
I was part of that thread (kdm) and did those tests - I actually took them
a step further than Jake or Fredo. As you can see, I incorrectly thought
there was something in the group summing process, but it was just my boneheaded
interpretation of output data (mainly from using a small sample section for
the FFT rather than the full file). :-((
What Fredo is talking about is what happens to the "over" data when you go
over 0dBFS; the references to truncation apply in that case, which isn't
normal for mixing. This is the same decision every native DAW developer
has to make.
We were actually discussing what happens when you sum to a group vs. summing
to the main bus, without overs. I did my test with all files summing to
-20dB, so there was no chance of pushing the upper limits of 32-bit float's
truncation back down to 24-bits. And I actually simplified it by using two
copies of the same file (just as Fredo did), one phase inverted, both sample
aligned. They cancelled to below 24 bits just as expected, and just as they
should. The variations below 24 bits that I saw (and thought were above
24-bits at one point) are correlation of lower frequencies when gain and
equivalent reduction are introduced (which is what Chuck stated that Paris
does up front on every track). That really doesn't impact the audio itself
since data below -136dB is quantization noise for 24-bit audio.
Sonar, Nuendo, Cubase 4 and Sequoia all behaved exactly the same way in this
test - which tells me they are handling the LSBs the same way. When data
is summed to groups, there will be quantization noise below -136dB. This
is completely normal for any native DAW and they all are subject to it.
As you might read in the thread my conclusion was that we proved digital
audio theory exists - e.g. no uncharted territory, no digital audio frontiers,
no bugs in Nuendo. Yeeha. But that's what I get for second-guessing talented
developers. ;-)
Fwiw, to take it a step further, Samplitude/Sequoia and Nuendo handle overs,
or "into the red," identically. I checked that too a while back after the
reports of extra headroom, etc in Samplitude. Believe me, I've tried hard
to find where any differences might appear, not just noticeable differences,
but any differences at the lowest levels, but it seems the major native DAW
players are making the same decisions when it comes to truncation, etc, and
there really aren't that many to make. In my tests, dither really wasn't
an issue (I turned it off in all DAWs I tested just to test with pure truncation).
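The cancellation test described above can be sketched in a few lines (a rough illustration using Python's `struct` module to round to IEEE-754 32-bit float; `f32` is an illustrative helper, not any DAW's actual code):

```python
import struct

def f32(x):
    # round a Python float (64-bit) to the nearest IEEE-754 32-bit float,
    # the per-sample precision a 32-bit float mix engine carries
    return struct.unpack('f', struct.pack('f', x))[0]

# a 24-bit sample value scaled to [-1, 1)
sample = 8123456 / (1 << 23)

# straight polarity-inverted null: cancels exactly, bit for bit
assert f32(sample) + f32(-sample) == 0.0

# -20 dB drop then makeup gain (the kind of up-front gain staging Chuck
# describes for Paris): each stage rounds to the float32 mantissa, so any
# residue sits below the 24-bit LSB, i.e. far below audibility
g = f32(10 ** (-20 / 20))
processed = f32(f32(sample * g) / g)
assert abs(processed - sample) < 2 ** -23
```

The two asserts mirror the two findings in the post: an inverted copy nulls completely, and gain-plus-makeup leaves residue only down at quantization-noise levels.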
Regards,
Dedric
"LaMOnt" <jjdpro@ameritech.net> wrote:
>
>Dedric, check out this post from our dear friend Fredo, Nuendo moderator,
>explaining how Steinberg's audio engine works. Note the trade-offs. Meaning,
>Steinberg's way of coding a 32-bit float audio engine is different from,
>say, Magix Samplitude's:
>
>Fredo
>Administrative Moderator
>
>
>Joined: 29 Dec 2004
>Posts: 4213
>Location: Belgium
> Posted: Fri Dec 08, 2006 2:33 pm Post subject:
>
>
>I think I see where the problem is.
>In my scenarios I don't have any track that goes over 0dBFS, but I have
>always lowered one channel to compensate with another.
>So, I never went over the 0dBFS limit.
>
>Here's the explanation:
>
>As soon as you go over 0dB, technically you are entering the domain of distortion.
>
>In a 32bit FP mixer, that is not the case since there is unlimited headroom.
>
>
>Now follow me step by step please - read this slowly and make sure you
>understand -
>
>At the end of each "stage", there is an adder (a big calculator) which adds
>all the numbers from the individual tracks that are routed to this "adder".
>
>The numbers are kept in the 80-bit registers and then brought back to 32-bit
>float.
>This process of bringing back the numbers from 80-bit (and more) to 32-bit
>is kept to an absolute minimum.
>This adding/bringing back to 32-bit is done at 3 places: after a plugin slot
>(VST specs for all plugin manufacturers) - Group Tracks and Master Tracks.
>
>
>Now, as soon as you boost the volume above 0dB, you get more than 32 bits.
>Stay below 0dB and you will stay below 32 bits.
>When the adders dump their results, the numbers are brought back from any
>number of bits (say 60-bit) to 32-bit float.
>These numbers are simply truncated, which results in distortion; that's the
>noise/residue you find way down low.
>There is an algorithm that protects us from additive errors - so these
>errors can never come into the audible range.
>So, as soon as you go over 0dB, you will see these kinds of artifacts.
>
>It is debatable if this needs to be dithered or not. The problem -still
>is- that it is very difficult to dither in a floating point environment.
>Fact remains that the error shouldn't be bigger than 2 to 3 LSBs.
>
>Is this a problem?
>In real world applications: NO.
>In scientific -unrealistic- tests (forcing the error): YES.
>
>The alternative is having a fixed point mixer, where you already would be
>in trouble as soon as you boost one channel over 0dBFS (or merge two files
>that are @ 0dB).
>Also, this problem will be pretty much gone as soon as we switch to the
>64-bit engine.
>
>
>For the record, the test where Jake hears "music" as residue must be flawed.
>You should hear noise/distortion from square waves.
>
>HTH
>
>Fredo
>
>
>
>
>
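Fredo's adder description - accumulate in wide registers, bring the result back to 32-bit float once per stage - can be sketched roughly like this (Python doubles standing in for the 80-bit registers; an illustration of the principle, not Steinberg's code):

```python
import struct

def f32(x):
    # one "bring back to 32-bit" step, as after a plugin slot,
    # group track or master track in Fredo's description
    return struct.unpack('f', struct.pack('f', x))[0]

# sixteen tracks at healthy sub-0dBFS levels
tracks = [f32(0.05 * (i + 1) / 16) for i in range(16)]

# the adder: accumulate in wider precision (f64 here, 80-bit in the post),
# then dump the result back to 32-bit float once
acc = sum(tracks)
mixed = f32(acc)

# a single rounding step costs at most half an ulp - on the order of
# -150 dBFS for this sum, nowhere near the audible range
assert abs(mixed - acc) <= 2 ** -25
```

The point of keeping the wide-to-narrow conversion "to an absolute minimum" is visible here: one rounding step per stage bounds the error, instead of letting truncation accumulate with every addition.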
>"Dedric Terry" <dedric@echomg.com> wrote:
>>I can't tell you why you hear ProTools differently than Nuendo using a
>>single file.
>>There isn't any voodoo in the software, or hidden character enhancing dsp.
>>I'll see if I can round up an M-Powered system to compare with next month.
>>
>>For reference, every time I open Sequoia I think I might hear a broader,
>>clean, and almost flat (spectrum, not depth) sound, but I don't - it's the
>>same as Nuendo, fwiw.
>>Also I don't think what I was referring to was a theory from Chuck - I
>>believe that was what he discovered in the code.
>>
>>Digital mixers all have different preamps and converters. Unless you are
>>bypassing every EQ and converter and going digital in and out to the same
>>converter when comparing, it would be hard to say the mix engine itself
>>sounds different than another mixer, but taken as a whole, then certainly
>>they may very well sound different. In addition, hardware digital mixers
>>may use a variety of different paths between the I/O, channel processing,
>>and summing, though most are pretty much software mixers on a single chip
>>or set of DSPs similar to ProTools, with I/O and a hardware surface attached.
>>
>>I know it may be hard to separate the mix engine as software in either a
>>native DAW or a digital mixer, from the hardware that translates the audio
>>to something we hear, but that's what is required when comparing summing.
>>The hardware can significantly change what we hear, so comparing digital
>>mixers really isn't of as much interest as comparing native DAWs in that
>>respect - unless you are looking to buy one of course.
>>
>>Even though I know you think manufacturers are trying to add something to
>>give them an edge, I am 100% sure that isn't the case - rather they are
>>trying to add or change as little as possible in order to give them the
>>edge. Their end of digital audio isn't about recreating the past, but
>>improving upon it. As we've discussed and agreed before, the obsession
>>with recreating "vintage" technology is as much fad as it is a valuable
>>creative asset. There is no reason we shouldn't have far superior hardware
>>and software EQs and comps than 20, 30 or 40 years ago. No reason at all,
>>other than market demand, but the majority of software, and new hardware
>>gear on the market has a vintage marketing tagline with it. Companies will
>>sell any bill of goods if customers will buy it.
>>
>>There's nothing unique about the summing in Nuendo, Cubase, Sequoia/Samp,
>>or Sonar, and it's pretty safe to include Logic and DP in that list as
>>well. One of the reasons I test these things is to be sure my DAW isn't
>>doing something wrong, or something I don't know about.
>>
>>Vegas - I use it for video conversions and have never done any critical
>>listening tests with it. What I have heard briefly didn't sound any
>>different. It certainly looks plain vanilla though. What you are describing
>>is exactly what I would say about the GUIs of each of those apps, not that
>>it means anything. Just interesting.
>>
>>That's one reason I listen eyes closed and double check with phase
>>cancellation tests and FFTs - I am influenced creatively by the GUI to
>>some degree. I actually like Cubase 4's GUI better than Nuendo 3.2,
>>though there are only slight visual differences (some workflow differences
>>are a definite improvement for me though).
>>
>>ProTools' GUI always made me want to write one dimensional soundtracks in
>>mono for public utilities, accounting offices or the IRS while reading my
>>discrete systems analysis textbook - it was also grey. ;-)
>>
>>Regards,
>>Dedric
>>
>>"LaMont" <jjdpro@ameritech.net> wrote in message news:458c82fd$1@linux...
>>>
>>> Dedric, my simple test is simple..
>>> Using the same audio interface, with the same stereo file..nulled to
>>> zero..no EQ, no fx. Master fader on zero..
>>>
>>> Nuendo, Pro Tools M-Powered (native)... yields a sonic difference that
>>> I have referenced before.. The sound coming from PT-M has a nice top
>>> end, whereas Nuendo has a nice flatter sound quality.
>>> Same audio interface: M-Audio 410..using Mackies & Blue Sky pro monitors..
>>>
>>> Same test at the big room..PT-HD & Nuendo, Logic Audio (Mac G5 dual), using
>>> the 192 interface.
>>> Same results..but adding Logic Audio's sound.. (broad, thick)
>>>
>>> Somethings going on.
>>>
>>> Chuck's post about how Paris handles audio is a theory..only Edmund can
>>> truly give us the goods on what's really what..
>>>
>>> I disagree that manufacturers don't set out to put a sonic print on their
>>> products.
>>> I think they do.
>>>
>>> I have been fortunate to work on some digital mixers and I can tell you
>>> that each one has its own sound. The Sony DMX-100 was modeled after the
>>> SSL 4000G (like its big brother). And you know what? That board (DMX-100)
>>> sounds very warm and its EQ tries to behave and sound just like an SSL..
>>> Unlike the Yamaha DM2000 (version 1.x), which has a very clean, neutral
>>> sound..However, some complained that it was too vanilla, and thus Yamaha
>>> added a version 2.0 which added vintage-type EQs and modeled analog input
>>> gain saturation fx to give the user a choice between clean and neutral
>>> vs. sonic character.
>>>
>>> So, if digital consoles can be given a sonic character, why not a software
>>> mixer?
>>> The truth is, there are some folks who want a neutral mixer and then there
>>> are others who want a sonic footprint imparted, and these can be coded in
>>> the digital realm.
>>> The same applies to the manufacturers. They too have their vision of how
>>> they think and want their product to sound.
>>>
>>> I love reading on Gearslutz the posts from plugin developers and their
>>> interpretations and opinions about what makes their Neve 1073 EQ better
>>> and what goes into making their version sound like it does.. Each
>>> developer has a different vision as to what the Neve 1073 should sound
>>> like. And yet they all sound good, but slightly different.
>>>
>>> You stated that you use Vegas. Well as you know, Vegas has a very generic
>>> sound..just plain and simple. But I bet you can tell the difference on
>>> your system when you play that same file in Nuendo (no fx, no EQ,
>>> nulled to zero)..
>>> ???
>>>
>>>
>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>Lamont - what is the output chain you are using for each app when
>>>>comparing the file in Nuendo vs ProTools? On the same PC, I presume
>>>>(and is this PT HD or M-Powered?)? Since these can't use the same output
>>>>driver, you would have to depend on the D/A being the same, but clocking
>>>>will be different unless you have a master clock, and both interfaces
>>>>are locking with the same accuracy. This was one of the issues that came
>>>>up at Lynn Fuston's D/A converter shootout - when do you lock to external
>>>>clock and incur the resulting jitter, and when do you trust the internal
>>>>clock - and if you do lock externally, how good is the PLL in the slave
>>>>device? These issues can cause audible changes in the top end that have
>>>>nothing to do with the software itself. If you say that PTHD through the
>>>>same converter output as Nuendo (via? RME? Lynx?) using the same master
>>>>clock, sounds different playing a single audio file, then I take your
>>>>word for it. I can't tell you why that is happening - only that an
>>>>audible difference really shouldn't happen due to the software alone -
>>>>not with a single audio file, esp. since I've heard and seen PTHD audio
>>>>cancel with native DAWs. Just passing a single 16 or 24 bit track down
>>>>the buss to the output driver should be, and usually is, completely
>>>>transparent, bit for bit.
>>>>
>>>>The same audio file played through the same converters should only sound
>>>>different if something in the chain is different - be it clocking, gain
>>>>or some degree of unintended, errant dsp processing. Every DAW should
>>>>pass a single audio file without altering a single bit. That's a basic
>>>>level of accuracy we should always expect of any DAW. If that accuracy
>>>>isn't there, you can be sure a heavy mix will be altered in ways you
>>>>didn't intend, even though you would end up mixing with that factor in
>>>>place (e.g. you still mix for what you want to hear regardless of what
>>>>the platform does to each audio track or channel).
>>>>
>>>>In fact you should be able to send a stereo audio track out SPDIF or
>>>>lightpipe to another DAW, record it, bring the recorded file back in,
>>>>line them up to the first bit, and have them cancel on an inverted phase
>>>>test. I did this with Nuendo and Cubase 4 on separate machines just to
>>>>be sure my master clocking and slave sync was accurate - it worked
>>>>perfectly.
>>>>
>>>>Also be sure there isn't a variation in the gain even by 0.1 dB between
>>>>the two. There shouldn't be, and I wouldn't expect there to be one. Also,
>>>>could PT be set for a different pan law? Shouldn't make a difference even
>>>>if comparing two mono panned files to their stereo interleaved equivalent,
>>>>but for sake of completeness it's worth checking as well. A variation in
>>>>the output chain, be it drivers, audio card, or converters would be the
>>>>most likely culprit here.
>>>>
>>>>The reason DAW manufacturers wouldn't add any sonic "character"
>>>>intentionally is that the ultimate goal from day one with recording has
>>>>been to accurately reproduce what we hear. We developed a musical penchant
>>>>for sonic character because the hardware just wasn't accurate, and what
>>>>it did often sent us down new creative paths - even if by force - and we
>>>>decided it was preferred that way.
>>>>
>>>>Your point about what goes into the feature presets to sell synths is
>>>>right for sure, but synths are about character and getting that "perfect
>>>>piano" or crystal clear bell pad, or fat punchy bass without spending a
>>>>mint on development, adding 50G onboard sample libraries, or costing $15k,
>>>>so what they lack in actual synthesis capabilities, they make up with EQ
>>>>and effects on the output. That's been the case for years, at least since
>>>>we've had effects on synths. But even with modern synths such as the
>>>>Fantom, Tritons, etc, which are great synths all around, of course the
>>>>coolest, widest and biggest patches will make the biggest impression -
>>>>so in come the EQs, limiters, comps, reverbs, chorus, etc. The best way
>>>>to find out if a synth is really good is to bypass all effects and see
>>>>what happens. Most are pretty good these days, but about half the time,
>>>>there are presets that fall completely flat in fx bypass.
>>>>
>>>>DAWs aren't designed to put a sonic fingerprint on a sound the way synths
>>>>are - they are designed to *not* add anything - to pass through what we
>>>>create as users, with no alteration (or as little as possible) beyond
>>>>what we add with intentional processing (EQ, comps, etc). Developers
>>>>would find no pride in hearing that their DAW sounds anything different
>>>>than whatever is being played back in it, and the concept is contrary to
>>>>what AES and IEEE proceedings on the issue propose in general digital
>>>>audio discussions, white papers, etc.
>>>>
>>>>What ID ended up doing with Paris (at least from what I gather per Chuck's
>>>>findings - so correct me if I'm missing part of the equation Chuck), is
>>>>drop the track gain by 20dB or so, then add it back at the master buss
>>>>to create the effect of headroom (probably because the master buss is
>>>>really summing on the card, and they have more headroom there than on
>>>>the tracks where native plugins might be used). I don't know if Paris
>>>>passed 32-bit float files to the EDS card, but sort of doubt it. I think
>>>>Chuck has clarified this at one point, but don't recall the answer.
>>>>
>>>>Also what Paris did is use a greater bit depth on the hardware than
>>>>ProTools did - at the time PT was just bringing Mix+ systems to market,
>>>>or they had been out for a year or two (if I have my timeline right) -
>>>>they were 24-bit fixed all the way through. Logic and Cubase were native
>>>>DAWs, but native was still too slow to compete with hardware hybrids.
>>>>Paris trumped them all by running 32-bit float natively (not new really,
>>>>but better than sticking to 24-bit) and 56 or so bits in hardware instead
>>>>of going to Motorola DSPs at 24. The onboard effects were also a step up
>>>>from anything out there, so the demo did sound good. I don't recall which,
>>>>but one of the demos, imho, wasn't so good (some sloppy production and
>>>>vocals in spots, IIRC), so I only listened to it once. ;-)
>>>>
>>>>Coupled with the gain drop and buss makeup, this all gave it a "headroom"
>>>>no one else had. With very nice onboard effects, Paris jumped ahead of
>>>>anything else out there easily, and still respectably holds its own today
>>>>in that department.
>>>>
>>>>Most demos I hear (when I listen to them) vary in quality, usually not
>>>>so great in some area. But if a demo does sound great, then it at least
>>>>says that the product is capable of at least that level of performance,
>>>>and it can only help improve a prospective buyer's impression of it.
>>>>
>>>>Regards,
>>>>Dedric
>>>>
>>>>"LaMont " <jjdpro@ameritech.net> wrote in message news:458c14c0$1@linux...
>>>>>
>>>>> Dedric good post..
>>>>>
>>>>> However, I have PT M-Powered/M-Audio 410 interface for my laptop and
>>>>> it has that same sound (no EQ, zero fader) that HD does. I know they
>>>>> use the same 48-bit fixed mixer. I load up the same file in Nuendo (no
>>>>> EQ, zero fader)..results: different sonic character.
>>>>>
>>>>> PT having a top end touch..Nuendo, nice smooth (flat) sound. And I'm
>>>>> just talking about a stereo wav file nulled with no EQ..nothing
>>>>> ..zilch..nada..
>>>>>
>>>>> Now, there are devices (keyboards, drum machines) on the market today
>>>>> that have a master buss compressor and EQ set to on with the top end
>>>>> notched up.
>>>>> Why? Because it gives their product a competitive advantage over the
>>>>> competition..
>>>>> Ex: Yamaha's Motif ES, Akai's MPC 1000, 2500, Roland's Fantom.
>>>>>
>>>>> So, why wouldn't a DAW manufacturer code in an extra (ooommf) to make
>>>>> their DAW sound better? Especially given the "I hate digital summing"
>>>>> crowd? And, if I'm a DAW manufacturer, what would give my product a
>>>>> sonic edge over the competition?
>>>>>
>>>>> We live in the "louder is better" audio world these days, so a DAW that
>>>>> can catch my attention sonically will probably get the sale. That's
>>>>> what happened to me back in 1997 when I heard Paris. I was floored!!!
>>>>> Still to this day, nothing has floored me like that "Road House Blues"
>>>>> demo I heard on Paris.
>>>>>
>>>>> Was it the hardware? Was it the software? I remember talking with
>>>>> Edmund at the 2000 winter NAMM, and he told me that he & Steve set out
>>>>> to reproduce the sonics of a big buck analog board (EQs) and all.. And
>>>>> summing was a big, big issue for them because they (ID) thought that
>>>>> nobody had gotten it (summing) right. And by right, they meant, behaved
>>>>> like a console with a wide lane for all of those tracks..
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>"LaMont" <jjdpro@ameritech.net> wrote in message
>>>>>>news:458be8d5$1@linux...
>>>>>>>
>>>>>>> Okay...
>>>>>>> I guess what I'm saying is this:
>>>>>>>
>>>>>>> -Is it possible that different DAW manufacturers "code" their app
>>>>>>> differently for sound results?
>>>>>>
>>>>>>Of course it is *possible* to do this, but only if the DAW has a
>>>>>>specific sound shaping purpose beyond normal summing/mixing. Users talk
>>>>>>about wanting developers to add a "Neve sound" or "API sound" option to
>>>>>>summing engines, but that's really impractical given the amount of dsp
>>>>>>required to make a decent emulation (with convolution, dynamic EQ
>>>>>>functions, etc). For sake of not eating up all cpu processing, that
>>>>>>could likely only surface as a built in EQ, which no one wants
>>>>>>universally in summing, and anyone can add at will already.
>>>>>>
>>>>>>So it hasn't happened yet and isn't likely to as it detours from the
>>>>>>basic tenet of audio recording - recreate what comes in as accurately
>>>>>>as possible.
>>>>>>
>>>>>>What Digi did in recoding their summing engine was try to recover some
>>>>>>of the damage done by the 24-bit buss in Mix systems. Motorola 56k dsps
>>>>>>are 24-bit fixed point chips (and I think the new generation (321?)
>>>>>>still is, but they use double words now for 48 bits). And though plugins
>>>>>>could process at 48-bit by doubling up and using upper and lower 24-bit
>>>>>>words for 48-bit outputs, the buss between chips was 24 bits, so they
>>>>>>had to dither to 24 bits after every plugin. The mixer (if I recall
>>>>>>correctly) also had a 24-bit buss, so what Digi did is to add a dither
>>>>>>stage to the mixer to prevent this constant truncation of data. 24 bits
>>>>>>isn't enough to cover summing for more than a few tracks without losing
>>>>>>information in the 16-bit world, and in the 24-bit world some
>>>>>>information will be lost, at least at the lowest levels.
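The headroom arithmetic behind that claim can be sketched in a back-of-the-envelope way (a generic illustration, not Digi's actual design math; `extra_bits_needed` is an illustrative name): summing N full-scale tracks coherently can need up to ceil(log2(N)) extra bits above the word length.

```python
import math

# worst case: N full-scale fixed-point samples summed coherently
# need ceil(log2(N)) extra integer bits to avoid clipping or truncation
def extra_bits_needed(n_tracks):
    return math.ceil(math.log2(n_tracks))

assert extra_bits_needed(2) == 1    # 2 tracks: 25 bits total on a 24-bit buss
assert extra_bits_needed(24) == 5   # a 24-track mix: 29 bits
assert extra_bits_needed(256) == 8  # a large session: 32 bits
```

On a 24-bit buss there are no spare bits at all, so every sum either clips at the top or truncates at the bottom, which is exactly the problem the dither stage was papering over.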
>>>>>>
>>>>>>Adding a dither stage (though I think they did more than that - perhaps
>>>>>>implement a 48-bit double word stage as well) simply smoothed over the
>>>>>>truncation that was happening, but it didn't solve the problem, so with
>>>>>>HD they went to a double-word path - throughout, I believe, including
>>>>>>the path between chips. I believe the chips are still 24-bit, but by
>>>>>>doubling up the processing (yes, at a cost of twice the overhead), they
>>>>>>get a 48-bit engine. This not only provided better headroom, but greater
>>>>>>resolution. Higher bit depths subdivide the amplitude with greater
>>>>>>resolution, and that's really where we get the definition of dynamic
>>>>>>range - by lowering the signal to quantization noise ratio.
>>>>>>
>>>>>>With DAWs that use 32-bit floating point math all the way through, the
>>>>>>only reason for altering the summing is by error, and that's an error
>>>>>>that would actually be hard to make and get past a very basic alpha
>>>>>>stage of testing. There is a small difference in fixed point math and
>>>>>>floating point math, or at least a theoretical difference in how it
>>>>>>affects audio in certain cases, but not necessarily in the result for
>>>>>>calculating gain in either for the same audio file. Where any
>>>>>>differences might show up is complicated, and I believe they only appear
>>>>>>at levels below 24-bit (or in headroom with tracks pushed beyond 0dBFS),
>>>>>>or when/if there are any differences in where each amplitude level is
>>>>>>quantized.
>>>>>>
>>>>>>Obviously there can be differences if the DAW has to use varying bit
>>>>>>depths throughout a single summing path to accommodate hardware as well
>>>>>>as software summing, since there may be truncation or rounding along
>>>>>>the way, but that impacts the lowest bit level, and hence - spatial
>>>>>>reproduction, reverb tails perhaps, and "depth" - not the levels where
>>>>>>most music sits, so the differences are most often more subtle than not.
>>>>>>But most modern DAWs have eliminated those "rough edges" in the math by
>>>>>>increasing the bit depth to accommodate normal summing required for
>>>>>>mixing audio.
>>>>>>
>>>>>>So with Lynn's unity gain summing test (A files on the CD I believe),
>>>>>>DAWs were never asked to sum beyond 24 bits, at least not on the upper
>>>>>>end of the dynamic range, so everything that could represent 24 bits
>>>>>>accurately would cancel. The only ones that didn't were ones that had a
>>>>>>different bit depth and/or gain structure, whether hybrid or native
>>>>>>(e.g. Paris' subtracting 20dB from tracks and adding it to the buss).
>>>>>>In this case, PTHD cancelled (when I tested it) with Nuendo, Samplitude,
>>>>>>Logic, etc because the impact of the 48-bit fixed vs. 32-bit float
>>>>>>wasn't a factor.
>>>>>>
>>>>>>When trying other tests, even when adding and subtracting gain, Nuendo,
>>>>>>Sequoia and Sonar cancel - both audibly and visually at inaudible
>>>>>>levels, which only proves that one isn't making an error when
>>>>>>calculating basic gain. Since a dB is well defined, and the math to add
>>>>>>gain is simple, they shouldn't. The fact that they all use 32-bit float
>>>>>>all the way through eliminates a difference in data structure as well,
>>>>>>and this just verifies that. There was a time that supposedly Logic
>>>>>>(v3, v4?) was partly 24-bit, or so the rumor went, but it's 32-bit float
>>>>>>all the way through now just as Sonar, Nuendo/Cubase,
>>>>>>Samplitude/Sequoia, DP, Audition (I presume at least).
>>>>>>I don't know what Acid or Live use. Saw promotes a fixed point engine,
>>>>>>but I don't know if it is still 24-bit, or now 48-bit.
>>>>>>That was an intentional choice by the developer, but he's the only one
>>>>>>I know of that stuck with 24-bit for summing intentionally, esp. after
>>>>>>the Digi Mix system mixer incident.
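The "well defined" dB math mentioned above works out as follows (a generic sketch; `db_to_lin` and `lin_to_db` are illustrative names, not any DAW's API):

```python
import math

# gain in dB to a linear amplitude factor and back
def db_to_lin(db):
    return 10 ** (db / 20)

def lin_to_db(lin):
    return 20 * math.log10(lin)

# adding ~6.02 dB very nearly doubles the amplitude
assert abs(db_to_lin(6.0205999) - 2.0) < 1e-6
# gains in dB add, linear factors multiply - the same fader math
# every engine implements, which is why they all cancel
assert abs(db_to_lin(3) * db_to_lin(4) - db_to_lin(7)) < 1e-12
```

Since every engine computes exactly this, identical gain moves in different DAWs produce bit-identical (or sub-LSB-identical) results, which is what the cancellation tests confirm.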
>>>>>>
>>>>>>Long answer, but to sum up, it is certainly physically *possible* for
>>>>>>a developer to code something differently intentionally, but not in
>>>>>>reality likely since it would be breaking some basic fixed point or
>>>>>>floating point math rules. Where the differences really showed up in
>>>>>>the past is with PT Mix systems where the limitation was really
>>>>>>significant - e.g. 24-bit with truncation at several stages.
>>>>>>
>>>>>>That really isn't such an issue anymore. Given the differences in
>>>>>>workflow, missing something in workflow or layout differences is easy
>>>>>>enough to do (e.g. Sonar doesn't have groups and busses the way Nuendo
>>>>>>does, as its outputs are actually driver outputs, not software busses,
>>>>>>so in Sonar, busses are actually outputs, and sub busses are actually
>>>>>>busses in Nuendo. There is no, or at least I haven't found, the
>>>>>>equivalent of a Nuendo group in Sonar - that affects the results of
>>>>>>some tests (though not basic summing) if not taken into account, but
>>>>>>when taken into account, they work exactly the same way).
>>>>>>
>>>>>>So at least when talking about apps with 32-bit float all the way
>>>>>>through, it's safe to say (since it has been proven) that summing isn't
>>>>>>different unless there is an error somewhere, or variation in how the
>>>>>>user duplicates the same mix in two different apps.
>>>>>>
>>>>>>Imho, that's actually a very good thing - approaching a more consistent
>>>>>>basis for recording and mixing from which users can make all of the
>>>>>>decisions as to how the final product will sound and not be required
>>>>>>to decide when purchasing a pricey console, and have to focus their
>>>>>>business on clients who want "that sound". I believe we are actually
>>>>>>closer to the pure definition of recording now than we once were.
>>>>>>
>>>>>>Regards,
>>>>>>Dedric
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> If the answer is yes, then the real task is to discover, or rather
>>>>>>> uncover, what's, say, MOTU's vision of summing, versus Digidesign's,
>>>>>>> versus Steinberg's, and so on..
>>>>>>>
>>>>>>> What's under the hood? To me and others, when Digi re-coded their
>>>>>>> summing engine, it was obvious that Pro Tools had an obvious top end
>>>>>>> (8k-10k) bump, whereas Steinberg's summing is very neutral.
>>>>>>>
>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>Hi Neil,
>>>>>>>>
>>>>>>>>Jamie is right. And you aren't wacked out - you are thinking this
>>>>>>>>through in a reasonable manner, but coming to the wrong conclusion -
>>>>>>>>easy to do given how confusing digital audio can be. Each word
>>>>>>>>represents an amplitude point on a single curve that is changing over
>>>>>>>>time, and can vary with a speed up to the Nyquist frequency (as Jamie
>>>>>>>>described). The complex harmonic content we hear is actually the
>>>>>>>>frequency modulation of a single waveform, that over a small amount
>>>>>>>>of time creates the sound we translate - we don't really hear a
>>>>>>>>single sample at a time, but thousands of samples at a time (1 sample
>>>>>>>>alone could at most represent a single positive or negative peak of
>>>>>>>>a 22,050Hz waveform).
>>>>>>>>
>>>>>>>>If one bit doesn't cancel, esp. if it's a higher order bit than number
>>>>> 24,
>>>>>>>
>>>>>>>>you may hear, and will see that easily,
>>>>>>>>and the higher the bit in the dynamic range (higher order) the more
>>>>>>>>audible
>>>>>>>
>>>>>>>>the difference.
>>>>>>>>Since each bit is 6dB of dynamic range, you can extrapolate how "loud"
>>>>>
>>>>>>>>that
>>>>>>>
>>>>>>>>bit's impact will be
>>>>>>>>if there is a variation.
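The 6 dB-per-bit rule Dedric cites can be checked with a few lines of arithmetic. A rough sketch (the function name and idealized dBFS levels are ours, not from the thread):

```python
import math

def bit_level_db(bit_index, word_bits=24):
    """Approximate level, in dBFS, of a lone set bit in a signed
    fixed-point word. bit_index 0 is the bit just below full scale;
    each step down toward the LSB drops the level by ~6.02 dB."""
    amplitude = 2 ** (word_bits - 1 - bit_index) / 2 ** (word_bits - 1)
    return 20 * math.log10(amplitude)

# Each bit is worth ~6 dB of dynamic range:
assert abs(bit_level_db(1) - bit_level_db(0) + 6.02) < 0.01
# A mismatch down at bit #12 of a 24-bit word sits near -72 dBFS:
assert abs(bit_level_db(12) + 72.25) < 0.01
```

So a variation at a high-order bit is loud and obvious, while one near bit 24 is far below anything audible, which is the extrapolation the post describes.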
>>>>>>>>
>>>>>>>>Now, obviously if we are talking about 1 sample in a 44.1k rate song,
>>>>>>>>then it would simply be a
>>>>>>>>click (only audible if it's a high enough order bit) instead of an
>>>>>>>>obvious
>>>>>>>
>>>>>>>>musical difference, but that should never
>>>>>>>>happen in a phase cancellation test between identical files higher
>
>>>>>>>>than
>>>>>>> bit
>>>>>>>>24, unless there are clock sync problems,
>>>>>>>>driver issues, or the DAW is an early alpha version. :-)
>>>>>>>>
>>>>>>>>By definition of what DAWs do during playback and record, every audio
>>>>>
>>>>>>>>stream
>>>>>>>
>>>>>>>>has the same point in time (judged by the timeline)
>>>>>>>>played back sample accurately, one word at a time, at whatever sample
>>>>>
>>>>>>>>rate
>>>>>>>
>>>>>>>>we are using. A phase cancellation test uses that
>>>>>>>>fact to compare two audio files word for word (and hence bit for
bit
>>>
>>>>>>>>since
>>>>>>>
>>>>>>>>each bit of a 24-bit word would
>>>>>>>>be at the same bit slot in each 24-bit word). Assuming they are
>>>>>>>>aligned
>>>>>>> to
>>>>>>>>the same start point, sample
>>>>>>>>accurately, and both are the same set of sample words at each sample
>>>>>>>>point,
>>>>>>>
>>>>>>>>bit for bit, and one is phase inverted,
>>>>>>>>they will cancel through all 24 bits. For two files to cancel
>>>>>>>>completely
>>>>>>>
>>>>>>>>for the duration of the file, each and every bit in each word
>>>>>>>>must be the exact opposite of that same bit position in a word at
>the
>>>>> same
>>>>>>>
>>>>>>>>sample point. This is why zooming in on an FFT
>>>>>>>>of the full difference file is valuable as it can show any differences
>>>>> in
>>>>>>>
>>>>>>>>the lower order bits that wouldn't be audible. So even if
>>>>>>>>there is no audible difference, the visual followup will show if the
>>>>>>>>two files truly cancel even at levels below hearing, or
>>>>>>>>outside of a frequency change that we will perceive.
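The inverted-phase null test described above can be sketched on bare integer sample words. Illustrative only; real tests run on full files through a DAW, but the arithmetic is the same:

```python
def null_test(a, b):
    """Inverted-phase null test on two equal-length lists of integer
    sample words: sum one file with the polarity-inverted other.
    An all-zero residual means they cancel bit for bit."""
    assert len(a) == len(b)
    return [x - y for x, y in zip(a, b)]

track = [0, 12345, -9876, 8388607, -8388608]   # 24-bit sample words
assert all(s == 0 for s in null_test(track, track))  # identical files: full null

altered = track[:]
altered[1] += 1                                # one LSB difference
residual = null_test(track, altered)
assert residual[1] == -1 and residual[0] == 0  # tiny, likely inaudible residue
```

The residual is exactly the difference signal, which is why an FFT of it shows low-order-bit mismatches that the ear never would.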
>>>>>>>>
>>>>>>>>When they don't cancel, usually there will be way more than 1 bit
>>>>>>>>difference - it's usually one or more bits in the words for
>>>>>>>>thousands of samples. From a musical standpoint this is usually
in
>>> a
>>>>>>>>frequency range (low freq, or high freq most often) - that will
>>>>>>>>show up as the difference between them, and that usually happens
due
>>> to
>>>>>>> some
>>>>>>>>form of processing difference between the files,
>>>>>>>>such as EQ, compression, frequency dependent gain changes, etc. That
>>> is
>>>>>>> what
>>>>>>>>I believe you are thinking through, but when
>>>>>>>>talking about straight summing with no gain change (or known equal
>
>>>>>>>>gain
>>>>>>>
>>>>>>>>changes), we are only looking at linear, one for one
>>>>>>>>comparisons between the two files' frequency representations.
>>>>>>>>
>>>>>>>>Regards,
>>>>>>>>Dedric
>>>>>>>>
>>>>>>>>> Neil wrote:
>>>>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>>>> The tests I did were completely blank down to -200 dB (far below
>>> the
>>>>>>>
>>>>>>>>>>> last
>>>>>>>>>>
>>>>>>>>>>> bit). It's safe to say there is no difference, even in
>>>>>>>>>>> quantization noise, which by technical rights, is considered
below
>>>>> the
>>>>>>>
>>>>>>>>>>> level
>>>>>>>>>>
>>>>>>>>>>> of "cancellation" in such tests.
>>>>>>>>>>
>>>>>>>>>> I'm not necessarily talking about just the first bit or the
>>>>>>>>>> last bit, but also everything in between... what happens on bit
>>>>>>>>>> #12, for example? Everything on bit #12 should be audible, but
>>>>>>>>>> in an a/b test what if there are differences in what bits #8
>>>>>>>>>> through #12 sound like, but the amplitude is still the same on
>>>>>>>>>> both files at that point, you'll get a null, right? Extrapolate
>>>>>>>>>> that out somewhat & let's say there are differences in bits #8
>>>>>>>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>>>>>>>> etc through 43,972... Now this is breaking things down well
>>>>>>>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>>>>>>>> don't know of any way we could extract JUST that information
>>>>>>>>>> from each file & play it back for an a/b test); but would not
>>>>>>>>>> that be enough to have two "null-able" files that do actually
>>>>>>>>>> sound somewhat different?
>>>>>>>>>>
>>>>>>>>>> I guess what I'm saying is that since each sample in a musical
>>>>>>>>>> track or full song file doesn't represent a pure, simple set of
>>>>>>>>>> content like a sample of a sine wave would - there's a whole
>>>>>>>>>> world of harmonic structure in each sample of a song file, and
>>>>>>>>>> I think (although I'll admit - I can't "prove") that there is
>>>>>>>>>> plenty of room for some variables between the first bit & the
>>>>>>>>>> last bit while still allowing for a null test to be successful.
>>>>>>>>>>
>>>>>>>>>> No? Am I wacked out of my mind?
>>>>>>>>>>
>>>>>>>>>> Neil
>>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>>
>
Re: (No subject)...What's up under the hood? [message #77367 is a reply to message #77352] |
Sat, 23 December 2006 13:56 |
LaMont
Messages: 828 Registered: October 2005
Senior Member |
Got it. But I can remember Edmund and Steve saying that summing was a top
priority.
"chuck duffy" <c@c.com> wrote:
>
>Hi Lamont,
>
>I've posted this several times in the past, but here's the scoop. Edmund
>did not write the summing code. It's deep within the DSP code running on
>the ESP2 chips. It was written by some very talented guys at Ensoniq. I
>really dig everything that Edmund and Stephen did, but the summing just
>isn't part of it.
>
>The stuff I posted is not really a theory. The PARIS mix engine source
>code is freely available for download. Anyone with a little time, patience and
>the ESP2 patent can clearly see what is going on. It's only a couple hundred
>lines of code.
>
>Chuck
>
>"Dedric Terry" <dedric@echomg.com> wrote:
>>I can't tell you why you hear ProTools differently than Nuendo using a
>>single file.
>>There isn't any voodoo in the software, or hidden character enhancing dsp.
>
>>I'll see if
>>I can round up an M-Powered system to compare with next month.
>>
>>For reference, every time I open Sequoia I think I might hear a broader,
>>cleaner, and almost flat (spectrum, not depth) sound, but I don't - it's
>>the same as Nuendo, fwiw.
>>Also I don't think what I was referring to was a theory from Chuck - I
>
>>believe that was what he
>>discovered in the code.
>>
>>Digital mixers all have different preamps and converters. Unless you are
>
>>bypassing every
>>EQ and converter and going digital in and out to the same converter when
>
>>comparing, it would be hard
>>to say the mix engine itself sounds different than another mixer, but taken
>
>>as a whole, then
>>certainly they may very well sound different. In addition, hardware digital
>>mixers may use a variety of different paths between the I/O, channel
>>processing, and summing,
>>though most are pretty much software mixers on a single chip or set of
dsps
>
>>similar to ProTools,
>>with I/O and a hardware surface attached.
>>
>>I know it may be hard to separate the mix engine as software in either
a
>
>>native DAW
>>or a digital mixer, from the hardware that translates the audio to something
>
>>we hear,
>>but that's what is required when comparing summing. The hardware can
>>significantly change
>>what we hear, so comparing digital mixers really isn't of as much interest
>
>>as comparing native
>>DAWs in that respect - unless you are looking to buy one of course.
>>
>>Even though I know you think manufacturers are trying to add something
to
>
>>give them an edge, I am 100%
>>sure that isn't the case - rather they are trying to add or change as little
>
>>as possible in order to give
>>them the edge. Their end of digital audio isn't about recreating the past,
>
>>but improving upon it.
>>As we've discussed and agreed before, the obsession with recreating
>>"vintage" technology is as much
>>fad as it is a valuable creative asset. There is no reason we shouldn't
>
>>have far superior hardware and software EQs and comps
>>than 20, 30 or 40 years ago. No reason at all, other than market demand,
>
>>but the majority of software, and new
>>hardware gear on the market has a vintage marketing tagline with it.
>>Companies will sell any bill of
>>goods if customers will buy it.
>>
>>There's nothing unique about the summing in Nuendo, Cubase, Sequoia/Samp,
>>or Sonar, and it's pretty safe to include Logic and DP in that list as
well.
>
>>One of the reasons I test
>>these things is to be sure my DAW isn't doing something wrong, or something
>
>>I don't know about.
>>
>>Vegas - I use it for video conversions and have never done any critical
>
>>listening tests with it. What I have heard
>>briefly didn't sound any different. It certainly looks plain vanilla
>>though. What you are describing is exactly
>>what I would say about the GUIs of each of those apps, not that it means
>
>>anything. Just interesting.
>>
>>That's one reason I listen eyes closed and double check with phase
>>cancellation tests and FFTs - I am
>>influenced creatively by the GUI to some degree. I actually like Cubase
>4's
>>GUI better than Nuendo 3.2,
>>though there are only slight visual differences (some workflow differences
>
>>are a definite improvement for me though).
>>
>>ProTools' GUI always made me want to write one dimensional soundtracks
in
>
>>mono for public utilities, accounting offices
>>or the IRS while reading my discreet systems analysis textbook - it was
>also
>>grey. ;-)
>>
>>Regards,
>>Dedric
>>
>>"LaMont" <jjdpro@ameritech.net> wrote in message news:458c82fd$1@linux...
>>>
>>> Dedric, my test is simple.
>>> Using the same audio interface, with the same stereo file, nulled to
>>> zero. No EQ, no fx. Master fader on zero.
>>>
>>> Nuendo vs. Pro Tools M-Powered (native) yields a sonic difference that I
>>> have referenced before. The sound coming from PT-M has a nice top end,
>>> whereas Nuendo has a nice flatter sound quality.
>>> Same audio interface (M-Audio 410), using Mackies & Blue Sky pro monitors.
>>>
>>> Same test at the big room: PT HD & Nuendo & Logic Audio (Mac G5 dual), using
>>> the 192 interface.
>>> Same results, but adding Logic Audio's sound (broad, thick).
>>>
>>> Something's going on.
>>>
>>> Chuck's post about how Paris handles audio is a theory. Only Edmund can
>>> truly give us the goods on what's really what.
>>>
>>> I disagree that manufacturers don't set out to put a sonic print on their
>>> products. I think they do.
>>>
>>> I have been fortunate to work on some digital mixers and I can tell you
>>> that each one has its own sound. The Sony DMX-100 was modeled after the SSL
>>> 4000G (like its big brother). And you know what? That board (DMX-100) sounds
>>> very warm and its EQ tries to behave and sound just like an SSL. Unlike the
>>> Yamaha DM2000 (version 1.x), which has a very clean, neutral sound. However,
>>> some complained that it was too vanilla, and thus Yamaha added a version 2.0
>>> which added vintage-type EQs and modeled analog input gain saturation fx, to
>>> give the user a choice between clean and neutral vs. sonic character.
>>>
>>> So, if digital consoles can be given a sonic character, why not a software
>>> mixer?
>>> The truth is, there are some folks who want a neutral mixer and then there
>>> are others who want a sonic footprint imparted, and these can be coded in
>>> the digital realm.
>>> The same applies to the manufacturers. They too have their vision of what
>>> they think and want their product to sound like.
>>>
>>> I love reading on Gearslutz the posts from plugin developers and their
>>> interpretations and opinions about what makes their Neve 1073 EQ better
>>> and what goes into making their version sound like it does. Each developer
>>> has a different vision as to what the Neve 1073 should sound like. And yet
>>> they all sound good, but slightly different.
>>>
>>> You stated that you use Vegas. Well, as you know, Vegas has a very generic
>>> sound, just plain and simple. But I bet you can tell the difference on
>>> your system when you play that same file in Nuendo (no fx, no EQ,
>>> nulled to zero)?
>>> ???
>>>
>>>
>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>Lamont - what is the output chain you are using for each app when
>>>>comparing
>>>
>>>>the file in Nuendo
>>>>vs ProTools? On the same PC, I presume (and is this PT HD or M-Powered?)?
>>>>Since these can't use the same output driver, you would have to depend
>on
>>>
>>>>the D/A being
>>>>the same, but clocking will be different unless you have a master clock,
>>> and
>>>>both interfaces
>>>>are locking with the same accuracy. This was one of the issues that
came
>>> up
>>>>at Lynn Fuston's
>>>>D/A converter shootout - when do you lock to external clock and incur
>the
>>>
>>>>resulting jitter,
>>>>and when do you trust the internal clock - and if you do lock externally,
>>>
>>>>how good is the PLL
>>>>in the slave device? These issues can cause audible changes in the top
>>> end
>>>>that have nothing to do
>>>>with the software itself. If you say that PTHD through the same converter
>>>
>>>>output as Nuendo (via? RME?
>>>>Lynx?) using the same master clock, sounds different playing a single
>
>>>>audio
>>>
>>>>file, then I take your word
>>>>for it. I can't tell you why that is happening - only that an audible
>>>>difference really shouldn't happen due
>>>>to the software alone - not with a single audio file, esp. since I've
>
>>>>heard
>>>
>>>>and seen PTHD audio cancel with
>>>>native DAWs. Just passing a single 16 or 24 bit track down the buss
>to
>>> the
>>>>output driver should
>>>>be, and usually is, completely transparent, bit for bit.
>>>>
>>>>The same audio file played through the same converters should only sound
>>>
>>>>different if something in
>>>>the chain is different - be it clocking, gain or some degree of
>>>>unintended,
>>>
>>>>errant dsp processing. Every DAW should
>>>>pass a single audio file without altering a single bit. That's a basic
>
>>>>level
>>>
>>>>of accuracy we should always
>>>>expect of any DAW. If that accuracy isn't there, you can be sure a heavy
>>>
>>>>mix will be altered in ways you
>>>>didn't intend, even though you would end up mixing with that factor in
>
>>>>place
>>>
>>>>(e.g. you still mix for what
>>>>you want to hear regardless of what the platform does to each audio track
>>> or
>>>>channel).
>>>>
>>>>In fact you should be able to send a stereo audio track out SPDIF or
>>>>lightpipe to another DAW, record it
>>>>bring the recorded file back in, line them up to the first bit, and have
>>>
>>>>them cancel on and inverted phase
>>>>test. I did this with Nuendo and Cubase 4 on separate machines just
to
>>> be
>>>>sure my master clocking and
>>>>slave sync was accurate - it worked perfectly.
>>>>
>>>>Also be sure there isn't a variation in the gain even by 0.1 dB between
>>> the
>>>>two. There shouldn't
>>>>and I wouldn't expect there to be one. Also could PT be set for a
>>>>different
>>>
>>>>pan law? Shouldn't make a
>>>>difference even if comparing two mono panned files to their stereo
>>>>interleaved equivalent, but for sake
>>>>of completeness it's worth checking as well. A variation in the output
>>>
>>>>chain, be it drivers, audio card
>>>>card, or converters would be the most likely culprit here.
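The 0.1 dB caveat above is easy to quantify. A small sketch (our numbers, not from the thread) showing that even a 0.1 dB trim mismatch leaves a clearly measurable null-test residue:

```python
import math

def gain_db(samples, db):
    """Apply a broadband gain change of `db` decibels."""
    g = 10 ** (db / 20)
    return [s * g for s in samples]

track = [0.5, -0.25, 0.9]
mismatched = gain_db(track, 0.1)               # one path is 0.1 dB hot
residual = [a - b for a, b in zip(track, mismatched)]

# The residue sits roughly 38-39 dB below the signal - quiet, but trivially
# visible on an FFT, so a 0.1 dB fader offset alone breaks a cancel test.
ratio_db = 20 * math.log10(abs(residual[0]) / abs(track[0]))
assert -39.0 < ratio_db < -38.0
```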
>>>>
>>>>The reason DAW manufacturers wouldn't add any sonic "character"
>>>>intentionally is that the
>>>>ultimate goal from day one with recording has been to accurately reproduce
>>>
>>>>what we hear.
>>>>We developed a musical penchant for sonic character because the hardware
>>>
>>>>just wasn't accurate,
>>>>and what it did often sent us down new creative paths - even if by force
>>> -
>>>>and we decided it was
>>>>preferred that way.
>>>>
>>>>Your point about what goes into the feature presets to sell synths is
>
>>>>right
>>>
>>>>for sure, but synths are about
>>>>character and getting that "perfect piano" or crystal clear bell pad,
>or
>>> fat
>>>>punchy bass without spending
>>>>a mint on development, adding 50G onboard sample libraries, or costing
>
>>>>$15k,
>>>
>>>>so what they
>>>>lack in actual synthesis capabilities, they make up with EQ and effects
>>> on
>>>>the output. That's been the case
>>>>for years, at least since we had effects on synths at least. But even
>
>>>>with
>>>
>>>>modern synths such as the Fantom,
>>>>Tritons, etc, which are great synths all around, of course the coolest,
>>>
>>>>widest and biggest patches
>>>>will make the biggest impression - so in come the EQs, limiters, comps,
>>>
>>>>reverbs, chorus, etc. The best
>>>>way to find out if a synth is really good is to bypass all effects and
>see
>>>
>>>>what happens. Most are pretty
>>>>good these days, but about half the time, there are presets that fall
>>>>completely flat in fx bypass.
>>>>
>>>>DAWs aren't designed to put a sonic fingerprint on a sound the way synths
>>>
>>>>are - they are designed
>>>>to *not* add anything - to pass through what we create as users, with
>no
>>>
>>>>alteration (or as little as possible)
>>>>beyond what we add with intentional processing (EQ, comps, etc).
>>>>Developers
>>>
>>>>would find no pride
>>>>in hearing that their DAW sounds anything different than whatever is
being
>>>
>>>>played back in it,
>>>>and the concept is contrary to what AES and IEEE proceedings on the issue
>>>
>>>>propose in general
>>>>digital audio discussions, white papers, etc.
>>>>
>>>>What ID ended up doing with Paris (at least from what I gather per Chuck's
>>>
>>>>findings - so correct me if I'm missing part of the equation Chuck),
>>>>is drop the track gain by 20dB or so, then add it back at the master
>
>>>>buss
>>>
>>>>to create the effect of headroom (probably
>>>>because the master buss is really summing on the card, and they have
more
>>>
>>>>headroom there than on the tracks
>>>>where native plugins might be used). I don't know if Paris passed 32-bit
>>>
>>>>float files to the EDS card, but sort of
>>>>doubt it. I think Chuck has clarified this at one point, but don't recall
>>>
>>>>the answer.
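The gain structure described here (drop the tracks, make it up at the buss) can be toyed with numerically. This is a sketch of the idea only; it assumes nothing about the actual EDS code, and Python floats are 64-bit, so it just shows why pre-attenuation plus buss makeup is benign in a wide-enough path:

```python
def db_factor(db):
    """Convert decibels to a linear gain factor."""
    return 10 ** (db / 20)

def headroom_path(track, headroom_db=20):
    """Drop the track by headroom_db (freeing up headroom at the top),
    then make the gain back up at the master - the structure described
    above, with the actual buss summing omitted."""
    attenuated = [s * db_factor(-headroom_db) for s in track]
    return [s * db_factor(headroom_db) for s in attenuated]

track = [0.9, -0.33, 0.0001]
out = headroom_path(track)
# In a wide path the round trip is essentially transparent:
assert all(abs(a - b) < 1e-12 for a, b in zip(track, out))
```

In a narrow fixed-point path the same attenuation would push quiet material toward the quantization floor before the makeup gain, which is where the bit-depth discussion below comes in.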
>>>>
>>>>Also what Paris did is use a greater bit depth on the hardware than
>>>>ProTools
>>>
>>>>did - at the time PT was just
>>>>bring Mix+ systems to market, or they had been out for a year or two
(if
>>> I
>>>>have my timeline right) - they
>>>>were 24-bit fixed all the way through. Logic and Cubase were native
DAWs,
>>>
>>>>but native was still too slow
>>>>to compete with hardware hybrids. Paris trumped them all by running
>>>>32-bit
>>>
>>>>float natively (not new really, but
>>>>better than sticking to 24-bit) and 56 or so bits in hardware instead
>of
>>>
>>>>going to Motorola DSPs at 24.
>>>>The onboard effects were also a step up from anything out there, so the
>>> demo
>>>>did sound good.
>>>>I don't recall which, but one of the demos, imho, wasn't so good (some
>>>>sloppy production and
>>>>vocals in spots, IIRC), so I only listened to it once. ;-)
>>>>
>>>>Coupled with the gain drop and buss makeup, this all gave it a "headroom"
>>> no
>>>>one else had. With very nice
>>>>onboard effects, Paris jumped ahead of anything else out there easily,
>and
>>>
>>>>still respectably holds its own today
>>>>in that department.
>>>>
>>>>Most demos I hear (when I listen to them) vary in quality, usually not
>so
>>>
>>>>great in some area. But if a demo does
>>>>sound great, then it at least says that the product is capable of at
>
>>>>least
>>>
>>>>that level of performance, and it can
>>>>only help improve a prospective buyer's impression of it.
>>>>
>>>>Regards,
>>>>Dedric
>>>>
>>>>"LaMont " <jjdpro@ameritech.net> wrote in message news:458c14c0$1@linux...
>>>>>
>>>>> Dedric good post..
>>>>>
>>>>> However, I have a PT M-Powered/M-Audio 410 interface for my laptop and it
>>>>> has that same sound (no EQ, zero fader) that HD does. I know they use the
>>>>> same 48-bit fixed mixer. I load up the same file in Nuendo (no EQ, zero
>>>>> fader)... results: different sonic character.
>>>>>
>>>>> PT has a top-end touch; Nuendo, a nice smooth (flat) sound. And I'm just
>>>>> talking about a stereo wav file nulled with no EQ... nothing...
>>>>> zilch... nada.
>>>>>
>>>>> Now, there are devices (keyboards, drum machines) on the market today
>>>>> that have a master buss compressor and EQ set to on, with the top end
>>>>> notched up.
>>>>> Why? Because it gives their product a competitive advantage over the
>>>>> competition.
>>>>> Ex: Yamaha's Motif ES, Akai's MPC 1000/2500, Roland's Fantom.
>>>>>
>>>>> So, why wouldn't a DAW manufacturer code in an extra (ooommf) to make
>>>>> their DAW sound better? Especially given the "I hate digital summing"
>>>>> crowd? And if I'm a DAW manufacturer, what would give my product a sonic
>>>>> edge over the competition?
>>>>>
>>>>> We live in the "louder is better" audio world these days, so a DAW that
>>>>> can catch my attention sonically will probably get the sale. That's what
>>>>> happened to me back in 1997 when I heard Paris. I was floored!!! Still to
>>>>> this day, nothing has floored me like that "Road House Blues" demo I heard
>>>>> on Paris.
>>>>>
>>>>> Was it the hardware? Was it the software? I remember talking with Edmund
>>>>> at the 2000 winter NAMM, and he told me that he & Steve set out to
>>>>> reproduce the sonics of a big-buck analog board, EQs and all. And summing
>>>>> was a big, big issue for them because they (ID) thought that nobody had
>>>>> gotten it (summing) right. And by right, they meant it behaved like a
>>>>> console with a wide lane for all of those tracks.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>"LaMont" <jjdpro@ameritech.net> wrote in message
>>>>>>news:458be8d5$1@linux...
>>>>>>>
>>>>>>> Okay...
>>>>>>> I guess what I'm saying is this:
>>>>>>>
>>>>>>> -Is it possible that different DAW manufacturers "code" their app
>>>>>>> differently
>>>>>>> for sound results?
>>>>>>
>>>>>>Of course it is *possible* to do this, but only if the DAW has a
>>>>>>specific
>>>>>
>>>>>>sound shaping purpose
>>>>>>beyond normal summing/mixing. Users talk about wanting developers
to
>>> add
>>>>> a
>>>>>>"Neve sound" or "API sound" option to summing engines,
>>>>>>but that's really impractical given the amount of dsp required to make
>>> a
>>>>>
>>>>>>decent emulation (with convolution, dynamic EQ functions,
>>>>>>etc). For sake of not eating up all cpu processing, that could likely
>>>
>>>>>>only
>>>>>
>>>>>>surface as is a built in EQ, which
>>>>>>no one wants universally in summing, and anyone can add at will already.
>>>>>>
>>>>>>So it hasn't happened yet and isn't likely to as it detours from the
>
>>>>>>basic
>>>>>
>>>>>>tenet of audio recording - recreate what comes in as
>>>>>>accurately as possible.
>>>>>>
>>>>>>What Digi did in recoding their summing engine was try to recover some
>>>>>>of the damage done by the 24-bit buss in Mix systems. Motorola 56k
dsps
>>>>> are
>>>>>>24-bit fixed point chips and I think
>>>>>>the new generation (321?) still is, but they use double words now for
>>>>>>48-bits). And though plugins could process at 48-bit by
>>>>>>doubling up and using upper and lower 24-bit words for 48-bit outputs,
>>> the
>>>>>
>>>>>>buss
>>>>>>between chips was 24-bits, so they had to dither to 24-bits after every
>>>>>
>>>>>>plugin. The mixer (if I recall correctly) also
>>>>>>had a 24-bit buss, so what Digi did is to add a dither stage to the
>
>>>>>>mixer
>>>>> to
>>>>>>prevent this
>>>>>>constant truncation of data. 24-bits isn't enough to cover summing
>for
>>>>> more
>>>>>>than a few tracks without
>>>>>>losing information in the 16-bit world, and in the 24-bit world some
>>>>>>information will be lost, at least at the lowest levels.
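A toy model of that truncation loss (our construction, not Digi's actual mixer): sum sixteen very quiet tracks, scaling each by 1/16 to stay in range, once truncating each track to 24 bits before the sum and once summing in a wider path first:

```python
def to_fixed24(x):
    """Truncate a float in [-1, 1) to a signed 24-bit fixed-point count,
    the way a narrow mix buss would hold it."""
    return int(x * (1 << 23))

# 16 very quiet tracks; each sits 8 counts above the 24-bit floor:
tracks = [1 / (1 << 20)] * 16

narrow = sum(to_fixed24(t / 16) for t in tracks)   # truncate, then sum
wide = to_fixed24(sum(t / 16 for t in tracks))     # sum wide, truncate once

assert narrow == 0    # per-track truncation wiped the signal out entirely
assert wide == 8      # the wider path keeps it
```

The numbers are contrived to land exactly on the truncation boundary, but the effect is the one described: repeated 24-bit truncation eats low-level detail that a wider (48-bit or float) path preserves.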
>>>>>>
>>>>>>Adding a dither stage (though I think they did more than that - perhaps
>>>>>
>>>>>>implement a 48-bit double word stage as well),
>>>>>>simply smoothed over the truncation that was happening, but it didn't
>>>
>>>>>>solve
>>>>>
>>>>>>the problem, so with HD
>>>>>>they went to a double-word path - throughout I believe, including the
>>> path
>>>>>
>>>>>>between chips. I believe the chips
>>>>>>are still 24-bit, but by doubling up the processing (yes at a cost
of
>>>
>>>>>>twice
>>>>>
>>>>>>the overhead), they get a 48-bit engine.
>>>>>>This not only provided better headroom, but greater resolution. Higher
>>>>> bit
>>>>>>depths subdivide the amplitude with greater resolution, and that's
>>>>>>really where we get the definition of dynamic range - by lowering the
>>>
>>>>>>signal
>>>>>
>>>>>>to quantization noise ratio.
>>>>>>
>>>>>>With DAWs that use 32-bit floating point math all the way through,
the
>>>
>>>>>>only
>>>>>
>>>>>>reason for altering the summing
>>>>>>is by error, and that's an error that would actually be hard to make
>and
>>>>> get
>>>>>>past a very basic alpha stage of testing.
>>>>>>There is a small difference in fixed point math and floating point
math,
>>>>> or
>>>>>>at least a theoretical difference in how it affects audio
>>>>>>in certain cases, but not necessarily in the result for calculating
>gain
>>>>> in
>>>>>>either for the same audio file. Where any differences might show up
>is
>>>>>
>>>>>>complicated, and I believe only appear at levels below 24-bit (or in
>>>>>>headroom with tracks pushed beyond 0dBFS), or when/if
>>>>>>there are any differences in where each amplitude level is quantized.
>>>>>>
>>>>>>Obviously there can be differences if the DAW has to use varying bit
>>>>>>depths
>>>>>
>>>>>>throughout a single summing path to accommodate hardware
>>>>>>as well as software summing, since there may be truncation or rounding
>>>
>>>>>>along
>>>>>
>>>>>>the way, but that impacts the lowest bit
>>>>>>level, and hence - spatial reproduction, reverb tails perhaps, and
>>>>>>"depth",
>>>>>
>>>>>>not the levels where most music sits, so the differences are most
>>>>>>often more subtle than not. But most modern DAWs have eliminated those
>>>>>
>>>>>>"rough edges" in the math by increasing the bit depth to accommodate
>>>>>>normal
>>>>>
>>>>>>summing required for mixing audio.
>>>>>>
>>>>>>So with Lynn's unity gain summing test (A files on the CD I believe),
>>> DAWs
>>>>>
>>>>>>were never asked to sum beyond 24-bits,
>>>>>>at least not on the upper end of the dynamic range, so everything that
>>>
>>>>>>could
>>>>>
>>>>>>represent 24-bits accurately would cancel. The only ones
>>>>>>that didn't were ones that had a different bit depth and/or gain
>>>>>>structure
>>>>>
>>>>>>whether hybrid or native
>>>>>>(e.g. Paris' subtracting 20dB from tracks and adding it to the buss).
>>> In
>>>>>
>>>>>>this case, PTHD cancelled (when I tested it) with
>>>>>>Nuendo, Samplitude, Logic, etc because the impact of the 48-bit fixed
>>> vs.
>>>>>
>>>>>>32-bit float wasn't a factor.
>>>>>>
>>>>>>When trying other tests, even when adding and subtracting gain, Nuendo,
>>>>>
>>>>>>Sequoia and Sonar cancel - both audibly and
>>>>>>visually at inaudible levels, which only proves that one isn't making
>>> an
>>>>>
>>>>>>error when calculating basic gain. Since a dB is well defined,
>>>>>>and the math to add gain is simple, they shouldn't. The fact that
they
>>>>> all
>>>>>>use 32-bit float all the way through eliminates a difference
>>>>>>in data structure as well, and this just verifies that. There was
a
>
>>>>>>time
>>>>>
>>>>>>that supposedly Logic (v3, v4?) was partly 24-bit, or so the rumor
went,
>>>>>>but it's 32-bit float all the way through now just as Sonar,
>>>>>>Nuendo/Cubase,
>>>>>
>>>>>>Samplitude/Sequoia, DP, Audition (I presume at least).
>>>>>>I don't know what Acid or Live use. Saw promotes a fixed point engine,
>>>>> but
>>>>>>I don't know if it is still 24-bit, or now 48 bit.
>>>>>>That was an intentional choice by the developer, but he's the only
one
>>> I
>>>>>
>>>>>>know of that stuck with 24-bit for summing
>>>>>>intentionally, esp. after the Digi Mix system mixer incident.
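The claim that 32-bit float engines can only differ by error can be illustrated with a toy accumulator that rounds every intermediate value to single precision (a sketch; real engines differ in feature layout and ordering, which is exactly where user-induced variation creeps in):

```python
import struct

def f32(x):
    """Round a Python float to IEEE-754 single precision, the width a
    32-bit float mix engine holds every intermediate value at."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

def sum_bus(tracks):
    """Sum tracks left to right, keeping the accumulator at 32-bit float."""
    acc = 0.0
    for t in tracks:
        acc = f32(acc + t)
    return acc

tracks = [f32(x) for x in (0.125, -0.3, 0.71, 0.0042)]
# Two engines doing the same float32 operations on the same sample words
# agree bit for bit - so audible summing differences point to an error or
# a mismatch in how the mix was duplicated, not to the math itself.
assert sum_bus(tracks) == sum_bus(tracks)
```

(Reordering the additions can flip the lowest bits of the result, but for identical operations in identical order the outcome is fully deterministic.)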
>>>>>>
>>>>>>Long answer, but to sum up, it is certainly physically *possible* for
>>> a
>>>>>
>>>>>>developer to code something differently intentionally, but not
>>>>>>in reality likely since it would be breaking some basic fixed point
>or
>>>>>>floating point math rules. Where the differences really
>>>>>>showed up in the past is with PT Mix systems where the limitation was
>>>
>>>>>>really
>>>>>
>>>>>>significant - e.g. 24 bit with truncation at several stages.
>>>>>>
>>>>>>That really isn't such an issue anymore. Given the differences in
>>>>>>workflow,
>>>>>
>>>>>>missing something in workflow or layout differences
>>>>>>is easy enough to do (e.g. Sonar doesn't have group and busses the
way
>>>>>>Nuendo does, as its outputs are actually driver outputs,
>>>>>>not software busses, so in Sonar, busses are actually outputs, and
sub
>>>>>>busses are actually busses in Nuendo. There are no,
>>>>>>or at least I haven't found the equivalent of a Nuendo group in Sonar
>>> -
>>>>> that
>>>>>>affects the results of some tests (though not basic
>>>>>>summing) if not taken into account, but when taken into account, they
>>> work
>>>>>
>>>>>>exactly the same way).
>>>>>>
>>>>>>So at least when talking about apps with 32-bit float all the way
>>>>>>through,
>>>>>
>>>>>>it's safe to say (since it has been proven) that summing isn't different
>>>>>
>>>>>>unless
>>>>>>there is an error somewhere, or variation in how the user duplicates
>the
>>>>>
>>>>>>same mix in two different apps.
>>>>>>
>>>>>>Imho, that's actually a very good thing - approaching a more consistent
>>>>>
>>>>>>basis for recording and mixing from which users can make all
>>>>>>of the decisions as to how the final product will sound and not be
>>>>>>required
>>>>>
>>>>>>to decide when purchasing a pricey console, and have to
>>>>>>focus their business on clients who want "that sound". I believe we
>are
>>>>>
>>>>>>actually closer to the pure definition of recording now than
>>>>>>we once were.
>>>>>>
>>>>>>Regards,
>>>>>>Dedric
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> If the answer is yes, then the real task is to discover or rather
>>>>>>> un-cover
>>>>>>> what's say: Motu's vision of summing, versus Digidesign, versus
>>>>>>> Steinberg
>>>>>>> and so on..
>>>>>>>
>>>>>>> What's under the hood. To me and others,when Digi re-coded their
>>>>>>> summing
>>>>>>> engine, it was obvious that Pro Tools has an obvious top end (8k-10k)
>>>>>
>>>>>>> bump.
>>>>>>> Where as Steinberg's summing is very neutral.
>>>>>>>
>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>Hi Neil,
>>>>>>>>
>>>>>>>>Jamie is right. And you aren't wacked out - you are thinking this
>>>>>>>>through
>>>>>>>
>>>>>>>>in a reasonable manner, but coming to the wrong
>>>>>>>>conclusion - easy to do given how confusing digital audio can be.
>
>>>>>>>>Each
>>>>>>> word
>>>>>>>>represents an amplitude
>>>>>>>>point on a single curve that is changing over time, and can vary
with
>>>>> a
>>>>>>>
>>>>>>>>speed up to the Nyquist frequency (as Jamie described).
>>>>>>>>The complex harmonic content we hear is actually the frequency
>>>>>>>>modulation
>>>>>>> of
>>>>>>>>a single waveform,
>>>>>>>>that over a small amount of time creates the sound we translate -
>we
>>>
>>>>>>>>don't
>>>>>>>
>>>>>>>>really hear a single sample at a time,
>>>>>>>>but thousands of samples at a time (1 sample alone could at most
>>>>>>>>represent
>>>>>>> a
>>>>>>>>single positive or negative peak
>>>>>>>>of a 22,050Hz waveform).
>>>>>>>>
>>>>>>>>If one bit doesn't cancel, esp. if it's a higher order bit than number
>>>>> 24,
>>>>>>>
>>>>>>>>you may hear, and will see that easily,
>>>>>>>>and the higher the bit in the dynamic range (higher order) the more
>>>>>>>>audible
>>>>>>>
>>>>>>>>the difference.
>>>>>>>>Since each bit is 6dB of dynamic range, you can extrapolate how "loud"
>>>>>
>>>>>>>>that
>>>>>>>
>>>>>>>>bit's impact will be
>>>>>>>>if there is a variation.
>>>>>>>>
>>>>>>>>Now, obviously if we are talking about 1 sample in a 44.1k rate song,
>>>>> then
>>>>>>>
>>>>>>>>it would simply be a
>>>>>>>>click (only audible if it's a high enough order bit) instead of an
>>>>>>>>obvious
>>>>>>>
>>>>>>>>musical difference, but that should never
>>>>>>>>happen in a phase cancellation test between identical files higher
>
>>>>>>>>than
>>>>>>> bit
>>>>>>>>24, unless there are clock sync problems,
>>>>>>>>driver issues, or the DAW is an early alpha version. :-)
>>>>>>>>
>>>>>>>>By definition of what DAWs do during playback and record, every audio
>>>>>
>>>>>>>>stream
>>>>>>>
>>>>>>>>has the same point in time (judged by the timeline)
>>>>>>>>played back sample accurately, one word at a time, at whatever sample
>>>>>
>>>>>>>>rate
>>>>>>>
>>>>>>>>we are using. A phase cancellation test uses that
>>>>>>>>fact to compare two audio files word for word (and hence bit for
bit
>>>
>>>>>>>>since
>>>>>>>
>>>>>>>>each bit of a 24-bit word would
>>>>>>>>be at the same bit slot in each 24-bit word). Assuming they are
>>>>>>>>aligned
>>>>>>> to
>>>>>>>>the same start point, sample
>>>>>>>>accurately, and both are the same set of sample words at each sample
>>>>>>>>point,
>>>>>>>
>>>>>>>>bit for bit, and one is phase inverted,
>>>>>>>>they will cancel through all 24 bits. For two files to cancel
>>>>>>>>completely
>>>>>>>
>>>>>>>>for the duration of the file, each and every bit in each word
>>>>>>>>must be the exact opposite of that same bit position in a word at
>the
>>>>> same
>>>>>>>
>>>>>>>>sample point. This is why zooming in on an FFT
>>>>>>>>of the full difference file is valuable as it can show any differences
>>>>> in
>>>>>>>
>>>>>>>>the lower order bits that wouldn't be audible. So even if
>>>>>>>>there is no audible difference, the visual followup will show if
the
>>> two
>>>>>>>
>>>>>>>>files truly cancel even at levels below hearing, or
>>>>>>>>outside of a frequency change that we will perceive.
>>>>>>>>
>>>>>>>>When they don't cancel, usually there will be way more than 1 bit
>>>>>>>>difference - it's usually one or more bits in the words for
>>>>>>>>thousands of samples. From a musical standpoint this is usually
in
>>> a
>>>>>>>>frequency range (low freq, or high freq most often) - that will
>>>>>>>>show up as the difference between them, and that usually happens
due
>>> to
>>>>>>> some
>>>>>>>>form of processing difference between the files,
>>>>>>>>such as EQ, compression, frequency-dependent gain changes, etc. That
>>> is
>>>>>>> what
>>>>>>>>I believe you are thinking through, but when
>>>>>>>>talking about straight summing with no gain change (or known equal
>
>>>>>>>>gain
>>>>>>>
>>>>>>>>changes), we are only looking at linear, one for one
>>>>>>>>comparisons between the two files' frequency representations.
>>>>>>>>
>>>>>>>>Regards,
>>>>>>>>Dedric
>>>>>>>>
>>>>>>>>> Neil wrote:
>>>>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>>>> The tests I did were completely blank down to -200 dB (far below
>>> the
>>>>>>>
>>>>>>>>>>> last
>>>>>>>>>>
>>>>>>>>>>> bit). It's safe to say there is no difference, even in
>>>>>>>>>>> quantization noise, which by technical rights, is considered
below
>>>>> the
>>>>>>>
>>>>>>>>>>> level
>>>>>>>>>>
>>>>>>>>>>> of "cancellation" in such tests.
>>>>>>>>>>
>>>>>>>>>> I'm not necessarily talking about just the first bit or the
>>>>>>>>>> last bit, but also everything in between... what happens on bit
>>>>>>>>>> #12, for example? Everything on bit #12 should be audible, but
>>>>>>>>>> in an a/b test what if there are differences in what bits #8
>>>>>>>>>> through #12 sound like, but the amplitude is still the same on
>>>>>>>>>> both files at that point, you'll get a null, right? Extrapolate
>>>>>>>>>> that out somewhat & let's say there are differences in bits #8
>>>>>>>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>>>>>>>> etc through 43,972... Now this is breaking things down well
>>>>>>>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>>>>>>>> don't know of any way we could extract JUST that information
>>>>>>>>>> from each file & play it back for an a/b test); but would not
>>>>>>>>>> that be enough to have two "null-able" files that do actually
>>>>>>>>>> sound somewhat different?
>>>>>>>>>>
>>>>>>>>>> I guess what I'm saying is that since each sample in a musical
>>>>>>>>>> track or full song file doesn't represent a pure, simple set of
>>>>>>>>>> content like a sample of a sine wave would - there's a whole
>>>>>>>>>> world of harmonic structure in each sample of a song file, and
>>>>>>>>>> I think (although I'll admit - I can't "prove") that there is
>>>>>>>>>> plenty of room for some variables between the first bit & the
>>>>>>>>>> last bit while still allowing for a null test to be successful.
>>>>>>>>>>
>>>>>>>>>> No? Am I wacked out of my mind?
>>>>>>>>>>
>>>>>>>>>> Neil
>>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>>
>
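The inverted-phase null test debated above can be sketched in a few lines of pure Python (my own illustration, not anyone's DAW code; the generated buffers stand in for two identical 24-bit tracks):

```python
# A minimal null-test sketch: identical files cancel exactly when one is
# phase-inverted; a single flipped bit breaks the null at ~6 dB per bit.
import math

BITS = 24
LSB = 1.0 / (1 << (BITS - 1))        # amplitude of the lowest-order bit

def track(n=4410, freq=1000.0, rate=44100.0):
    """A 1 kHz sine at 44.1 kHz, quantized to 24 bits -- stands in for a mix."""
    q = 1 << (BITS - 1)
    return [round(math.sin(2 * math.pi * freq * t / rate) * (q - 1)) / q
            for t in range(n)]

a = track()
b = [-s for s in a]                   # the phase-inverted copy
peak0 = max(abs(x + y) for x, y in zip(a, b))
print(peak0)                          # 0.0 -- identical files null completely

# Flip bit 12 (of 24) in one sample of b: the null now breaks at that bit's
# level, roughly 6.02 dB per bit below full scale.
b[100] += LSB * (1 << (BITS - 12))
err_db = 20 * math.log10(max(abs(x + y) for x, y in zip(a, b)))
print(round(err_db, 1))               # about -66.2 dB: 11 bits down at ~6 dB/bit
```

Flipping a single mid-order bit surfaces at a predictable level, which is why a clean null through all 24 bits is such a strong equivalence test: any of the "in between" bit differences Neil worries about would show up here.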
|
|
|
Re: (No subject)...What's up under the hood? [message #77369 is a reply to message #77360] |
Sat, 23 December 2006 14:10 |
LaMont
Messages: 828 Registered: October 2005
|
Senior Member |
|
|
But Fredo explains that Steinberg's way of coding a 32-bit audio engine is
different than, say, Cakewalk's, and he explains the trade-offs and decisions
that are made to achieve what a developer thinks is good audio.
And why would I (if I were a DAW developer) want my audio engine to sound
like my competitors'? I would not. This is where the trade-off decisions come
from.
However, it was interesting to read when he stated that 'all will be fixed
(aka: no trade-offs)' when Steinberg goes native 64-bit.
That says to me that they (Steinberg) know that their 32-bit audio engine
is not wide enough to handle loads of audio, with VSTis and plugins, without
introducing or trading off sound quality. Interesting.
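A rough, self-contained sketch of the trade-off in question -- this is my own illustration, not Steinberg's code -- is to round every partial sum back to 32-bit float, as a 32-bit bus must, and compare against accumulating in double precision as a 64-bit engine would:

```python
# Narrow (32-bit) vs. wide (64-bit) accumulation when summing 64 tracks.
import struct
import random

def f32(x):
    """Round a Python double to the nearest IEEE 754 single (32-bit float)."""
    return struct.unpack('f', struct.pack('f', x))[0]

random.seed(1)
tracks = [random.uniform(-0.5, 0.5) for _ in range(64)]  # one sample, 64 tracks

# Narrow engine: every partial sum squeezed back to 32-bit float.
acc32 = 0.0
for s in tracks:
    acc32 = f32(acc32 + f32(s))

# Wide engine: accumulate in double precision, round to 32-bit only once.
acc64 = f32(sum(tracks))

err = abs(acc32 - acc64)
print(err < 1e-4)   # the rounding residue is far below audibility: True
```

The residue is real but sits many bits below the 24-bit floor, consistent with Fredo's claim that the errors "can never come into the audible range"; a 64-bit engine simply makes the repeated-rounding question moot.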
"Dedric Terry" <dterry@keyofd.net> wrote:
>
>I was part of that thread (kdm) and did those tests - I actually took them
>a step further than Jake or Fredo. As you can see I incorrectly thought
>there was something in the group summing process, but it was just my boneheaded
>interpretation of output data (using a small sample section for FFT rather
>than the full file mainly). :-((
>
>What Fredo is talking about is when you go over 0dBFS what happens to the
>"over" data, and the references to truncation are in that case, which isn't
>normal for mixing. This is the same decision every native DAW developer
>has to make.
>
>We were actually discussing what happens when you sum to a group vs. summing
>to the main bus, without overs. I did my test with all files summing to
>-20dB, so there was no chance of pushing the upper limits of 32-bit float's
>truncation back down to 24-bits. And I actually simplified it by using
two
>copies of the same file (just as Fredo did), one phase inverted, both sample
>aligned. They cancelled to below 24 bits just as expected, and just as
they
>should. The variations below 24 bits that I saw (and thought were above
>24-bits at one point) are correlation of lower frequencies when gain and
>equivalent reduction are introduced (which is what Chuck stated that Paris
>does up front on every track). That really doesn't impact the audio itself
>since data below -136dB is quantization noise for 24-bit audio.
>
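A quick back-of-the-envelope check of the levels being discussed (my own arithmetic, not from the thread): each bit is worth about 6.02 dB, so a 24-bit word spans roughly 144 dB, and a figure like -136 dB sits a few LSBs above that theoretical floor.

```python
# Dynamic range per bit, and the theoretical 24-bit quantization floor.
import math

db_per_bit = 20 * math.log10(2)      # ~6.02 dB of dynamic range per bit
floor_24 = -db_per_bit * 24          # theoretical 24-bit noise floor
print(round(db_per_bit, 2))          # 6.02
print(round(floor_24, 1))            # -144.5
```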
>Sonar, Nuendo, Cubase 4 and Sequoia all behaved exactly the same way in
this
>test - which tells me they are handling the LSB's the same way. When data
>is summed to groups, there will be quantization noise below -136dB. This
>is completely normal for any native DAW and they all are subject to it.
>As you might read in the thread my conclusion was that we proved digital
>audio theory exists - e.g. no uncharted territory, no digital audio frontiers,
>no bugs in Nuendo. yeeha. But that's what I get for second guessing talented
>developers. ;-)
>
>Fwiw, to take it a step further, Samplitude/Sequoia and Nuendo handle overs,
>or "into the red" identically. I checked that too a while back after the
>reports of extra headroom, etc in Samplitude. Believe me, I've tried hard
>to find where any differences might appear, not just noticeable differences,
>but any differences at the lowest levels, but it seems the major native
DAW
>players are making the same decisions when it comes to truncation, etc,
and
>there really aren't that many to make. In my tests, dither really wasn't
>an issue (I turned it off in all DAWs I tested just to test with pure truncation).
>
>Regards,
>Dedric
>
>"LaMOnt" <jjdpro@ameritech.net> wrote:
>>
>>Dedric, check out this post from our dear friend Fredo, Nuendo moderator,
>>explaining how Steinberg's audio engine works. Note the trade-offs, meaning
>>Steinberg's way of coding a 32-bit float audio engine is different than, say,
>>Magix Samplitude's:
>>
>>Fredo
>>Administrative Moderator
>>
>>
>>Joined: 29 Dec 2004
>>Posts: 4213
>>Location: Belgium
>> Posted: Fri Dec 08, 2006 2:33 pm Post subject:
>>
>> ------------------------------------------------------------ --------------------
>>
>>I think I see where the problem is.
>>In my scenarios I don't have any track that goes over 0dBFS, but I have
>>always lowered one channel to compensate with another.
>>So, I never went over the 0dBFS limit.
>>
>>Here's the explanation:
>>
>>As soon as you go over 0dB, technically you are entering the domain of
distortion.
>>
>>In a 32bit FP mixer, that is not the case since there is unlimited headroom.
>>
>>
>>Now follow me step by step please - read this slow and make sure you understand
>>-
>>
>>At the end of each "stage", there is an adder (a big calculator) which
adds
>>all the numbers from the individual tracks that are routed to this "adder".
>>
>>The numbers are kept in the 80-bit registers and then brought back to 32bit
>>float.
>>This process of bringing back the numbers from 80-bit (and more) to 32bit
>>is kept to an absolute minimum.
>>This adding/bringing back to 32bit is done at 3 places: After a plugin
slot
>>(VST-specs for all plugin manufacturers) - Group Tracks and Master Tracks.
>>
>>
>>Now, as soon as you boost the volume above 0dB, you get more than 32bits.
>>Stay below 0dB and you will stay below 32 bits.
>>When the adders dump their results, the numbers are brought back from any
>>number of bits (say 60bit) to 32 bit float.
>>These numbers are simply truncated which results in distortion; that's
the
>>noise/residue you find way down low.
>>There is an algorithm that protects us from additive errors - so these
>errors
>>can never come into the audible range.
>>So, as soon as you go over 0dB, you will see these kind of artifacts.
>>
>>It is debatable if this needs to be dithered or not. The problem -still
>is-
>>that it is very difficult to dither in a Floating Point environment.
>>Fact remains that the error shouldn't be bigger than 2 to 3 LSB's.
>>
>>Is this a problem?
>>In real-world applications: NO.
>>In scientific -unrealistic- tests (forcing the error): YES.
>>
>>The alternative is having a Fixed point mixer, where you already would
be
>>in trouble as soon as you boost one channel over 0dBfs. (or merge two files
>>that are @ 0dB)
>>Also, this problem will be pretty much gone as soon as we switch to the
>64
>>bit engine.
>>
>>
>>For the record, the test where Jake hears "music" as residue must be flawed.
>>You should hear noise/distortion from square waves.
>>
>>HTH
>>
>>Fredo
>>
>>
>>
>>
>>
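Fredo's fixed-vs-float point can be sketched directly (my illustration, not Steinberg's code): a 32-bit float bus carries a sample pushed over 0 dBFS intact until you pull the fader back down, while a 24-bit fixed bus has already clipped it.

```python
# Headroom over 0 dBFS: float survives the round trip, fixed point does not.
import struct

def f32(x):
    """Round a Python double to the nearest IEEE 754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

def fixed24(x):
    """Clamp to a 24-bit fixed-point bus: values >= 1.0 simply cannot exist."""
    q = 1 << 23
    return max(-q, min(q - 1, round(x * q))) / q

hot = 2.0                            # a sample boosted 6 dB over full scale
print(f32(hot) * 0.5)                # float bus: halve later, recovered -> 1.0
print(round(fixed24(hot) * 0.5, 3))  # fixed bus: already clipped -> 0.5
```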
>>"Dedric Terry" <dedric@echomg.com> wrote:
>>>I can't tell you why you hear ProTools differently than Nuendo using a
>
>>>single file.
>>>There isn't any voodoo in the software, or hidden character enhancing
dsp.
>>
>>>I'll see if
>>>I can round up an M-Powered system to compare with next month.
>>>
>>>For reference, everytime I open Sequoia I think I might hear a broader,
>>
>>>clean,
>>>and almost flat (spectrum, not depth) sound, but I don't - it's the same
>>as
>>>Nuendo, fwiw.
>>>Also I don't think what I was referring to was a theory from Chuck -
I
>>
>>>believe that was what he
>>>discovered in the code.
>>>
>>>Digital mixers all have different preamps and converters. Unless you
are
>>
>>>bypassing every
>>>EQ and converter and going digital in and out to the same converter when
>>
>>>comparing, it would be hard
>>>to say the mix engine itself sounds different than another mixer, but
taken
>>
>>>as a whole, then
>>>certainly they may very well sound different. In addition, hardware digital
>>>mixers may use a variety of different paths between the I/O, channel
>>>processing, and summing,
>>>though most are pretty much software mixers on a single chip or set of
>dsps
>>
>>>similar to ProTools,
>>>with I/O and a hardware surface attached.
>>>
>>>I know it may be hard to separate the mix engine as software in either
>a
>>
>>>native DAW
>>>or a digital mixer, from the hardware that translates the audio to something
>>
>>>we hear,
>>>but that's what is required when comparing summing. The hardware can
>>>significantly change
>>>what we hear, so comparing digital mixers really isn't of as much interest
>>
>>>as comparing native
>>>DAWs in that respect - unless you are looking to buy one of course.
>>>
>>>Even though I know you think manufacturers are trying to add something
>to
>>
>>>give them an edge, I am 100%
>>>sure that isn't the case - rather they are trying to add or change as
little
>>
>>>as possible in order to give
>>>them the edge. Their end of digital audio isn't about recreating the
past,
>>
>>>but improving upon it.
>>>As we've discussed and agreed before, the obsession with recreating
>>>"vintage" technology is as much
>>>fad as it is a valuable creative asset. There is no reason we shouldn't
>>
>>>have far superior hardware and software EQs and comps
>>>than 20, 30 or 40 years ago. No reason at all, other than market demand,
>>
>>>but the majority of software, and new
>>>hardware gear on the market has a vintage marketing tagline with it.
>>>Companies will sell any bill of
>>>goods if customers will buy it.
>>>
>>>There's nothing unique about the summing in Nuendo, Cubase, Sequoia/Samp,
>>>or Sonar, and it's pretty safe to include Logic and DP in that list as
>well.
>>
>>>One of the reasons I test
>>>these things is to be sure my DAW isn't doing something wrong, or something
>>
>>>I don't know about.
>>>
>>>Vegas - I use it for video conversions and have never done any critical
>>
>>>listening tests with it. What I have heard
>>>briefly didn't sound any different. It certainly looks plain vanilla
>>>though. What you are describing is exactly
>>>what I would say about the GUIs of each of those apps, not that it means
>>
>>>anything. Just interesting.
>>>
>>>That's one reason I listen eyes closed and double check with phase
>>>cancellation tests and FFTs - I am
>>>influenced creatively by the GUI to some degree. I actually like Cubase
>>4's
>>>GUI better than Nuendo 3.2,
>>>though there are only slight visual differences (some workflow differences
>>
>>>are a definite improvement for me though).
>>>
>>>ProTools' GUI always made me want to write one dimensional soundtracks
>in
>>
>>>mono for public utilities, accounting offices
>>>or the IRS while reading my discrete systems analysis textbook - it was
>>also
>>>grey. ;-)
>>>
>>>Regards,
>>>Dedric
>>>
>>>"LaMont" <jjdpro@ameritech.net> wrote in message news:458c82fd$1@linux...
>>>>
>>>> Dedric, my simple test is simple..
>>>> Using the same audio interface, with the same stereo file, nulled to
>>>> zero. No EQ, no fx. Master fader on zero.
>>>>
>>>> Nuendo and Pro Tools M-Powered (native) yield a sonic difference that I
>>>> have referenced before. The sound coming from PT-M has a nice top end,
>>>> whereas Nuendo has a flatter sound quality.
>>>> Same audio interface: M-Audio 410, using Mackies & Blue Sky pro monitors.
>>>>
>>>> Same test at the big room: PT HD & Nuendo & Logic Audio (Mac G5 dual), using
>>>> the 192 interface.
>>>> Same results, but adding Logic Audio's sound (broad, thick).
>>>>
>>>> Something's going on.
>>>>
>>>> Chuck's post about how Paris handles audio is a theory. Only Edmund can
>>>> truly give us the goods on what's really what.
>>>>
>>>> I disagree that manufacturers don't set out to put a sonic print on their
>>>> products.
>>>> I think they do.
>>>>
>>>> I have been fortunate to work on some digital mixers and I can tell
you
>>
>>>> that
>>>> each one has their own sound. The Sony DMX-100 was modeled after the SSL
>>>> 4000G (like its Big Brother). And you know what? That board (DMX-100) sounds
>>>> very warm and its EQ tries to behave and sound just like an SSL. Unlike the
>>>> Yamaha DM2000 (version 1.x), which has a very clean, neutral sound. However,
>>>> some complained that it was too vanilla, and thus Yamaha added a version 2.0
>>>> which added vintage-type EQs and modeled analog input gain saturation FX to
>>>> give the user a choice between clean and neutral vs. sonic character.
>>>>
>>>> So, if digital consoles can be given a sonic character, why not a software
>>>> mixer?
>>>> The truth is, there are some folks who want a neutral mixer and then there
>>>> are others who want a sonic footprint imparted, and these can be coded in
>>>> the digital realm.
>>>> The same applies to the manufacturers. They too have their vision of what
>>>> they think and want their product to sound like.
>>>>
>>>> I love reading on gearslutz the posts from Plugin developers and their
>>
>>>> interpretations
>>>> and opinions about what makes their Neve 1073 Eq better and what goes
>>into
>>>> making their version sound like it does.. Each Developer has a different
>>>> vision as to what the Neve 1073 should sound like. And yet they all
sound
>>>> good , but slightly different.
>>>>
>>>> You stated that you use Vegas. Well, as you know, Vegas has a very generic
>>>> sound, just plain and simple. But I bet you can tell the difference on
>>>> your system when you play that same file in Nuendo (no fx, no EQ,
>>>> nulled to zero)...
>>>> ???
>>>>
>>>>
>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>Lamont - what is the output chain you are using for each app when
>>>>>comparing
>>>>
>>>>>the file in Nuendo
>>>>>vs ProTools? On the same PC, I presume (and is this PT HD or M-Powered?)?
>>>>>Since these can't use the same output driver, you would have to depend
>>on
>>>>
>>>>>the D/A being
>>>>>the same, but clocking will be different unless you have a master clock,
>>>> and
>>>>>both interfaces
>>>>>are locking with the same accuracy. This was one of the issues that
>came
>>>> up
>>>>>at Lynn Fuston's
>>>>>D/A converter shootout - when do you lock to external clock and incur
>>the
>>>>
>>>>>resulting jitter,
>>>>>and when do you trust the internal clock - and if you do lock externally,
>>>>
>>>>>how good is the PLL
>>>>>in the slave device? These issues can cause audible changes in the
top
>>>> end
>>>>>that have nothing to do
>>>>>with the software itself. If you say that PTHD through the same converter
>>>>
>>>>>output as Nuendo (via? RME?
>>>>>Lynx?) using the same master clock, sounds different playing a single
>>
>>>>>audio
>>>>
>>>>>file, then I take your word
>>>>>for it. I can't tell you why that is happening - only that an audible
>>>>>difference really shouldn't happen due
>>>>>to the software alone - not with a single audio file, esp. since I've
>>
>>>>>heard
>>>>
>>>>>and seen PTHD audio cancel with
>>>>>native DAWs. Just passing a single 16 or 24 bit track down the buss
>>to
>>>> the
>>>>>output driver should
>>>>>be, and usually is, completely transparent, bit for bit.
>>>>>
>>>>>The same audio file played through the same converters should only sound
>>>>
>>>>>different if something in
>>>>>the chain is different - be it clocking, gain or some degree of
>>>>>unintended,
>>>>
>>>>>errant dsp processing. Every DAW should
>>>>>pass a single audio file without altering a single bit. That's a basic
>>
>>>>>level
>>>>
>>>>>of accuracy we should always
>>>>>expect of any DAW. If that accuracy isn't there, you can be sure a
heavy
>>>>
>>>>>mix will be altered in ways you
>>>>>didn't intend, even though you would end up mixing with that factor
in
>>
>>>>>place
>>>>
>>>>>(e.g. you still mix for what
>>>>>you want to hear regardless of what the platform does to each audio
track
>>>> or
>>>>>channel).
>>>>>
>>>>>In fact you should be able to send a stereo audio track out SPDIF or
>>>>>lightpipe to another DAW, record it
>>>>>bring the recorded file back in, line them up to the first bit, and
have
>>>>
>>>>>them cancel on and inverted phase
>>>>>test. I did this with Nuendo and Cubase 4 on separate machines just
>to
>>>> be
>>>>>sure my master clocking and
>>>>>slave sync was accurate - it worked perfectly.
>>>>>
>>>>>Also be sure there isn't a variation in the gain even by 0.1 dB between
>>>> the
>>>>>two. There shouldn't
>>>>>and I wouldn't expect there to be one. Also could PT be set for a
>>>>>different
>>>>
>>>>>pan law? Shouldn't make a
>>>>>difference even if comparing two mono panned files to their stereo
>>>>>interleaved equivalent, but for sake
>>>>>of completeness it's worth checking as well. A variation in the output
>>>>
>>>>>chain, be it drivers, audio card
>>>>>card, or converters would be the most likely culprit here.
>>>>>
>>>>>The reason DAW manufacturers wouldn't add any sonic "character"
>>>>>intentionally is that the
>>>>>ultimate goal from day one with recording has been to accurately reproduce
>>>>
>>>>>what we hear.
>>>>>We developed a musical penchant for sonic character because the hardware
>>>>
>>>>>just wasn't accurate,
>>>>>and what it did often sent us down new creative paths - even if by force
>>>> -
>>>>>and we decided it was
>>>>>preferred that way.
>>>>>
>>>>>Your point about what goes into the feature presets to sell synths is
>>
>>>>>right
>>>>
>>>>>for sure, but synths are about
>>>>>character and getting that "perfect piano" or crystal clear bell pad,
>>or
>>>> fat
>>>>>punchy bass without spending
>>>>>a mint on development, adding 50G onboard sample libraries, or costing
>>
>>>>>$15k,
>>>>
>>>>>so what they
>>>>>lack in actual synthesis capabilities, they make up with EQ and effects
>>>> on
>>>>>the output. That's been the case
>>>>>for years, at least since we had effects on synths at least. But even
>>
>>>>>with
>>>>
>>>>>modern synths such as the Fantom,
>>>>>Tritons, etc, which are great synths all around, of course the coolest,
>>>>
>>>>>widest and biggest patches
>>>>>will make the biggest impression - so in come the EQs, limiters, comps,
>>>>
>>>>>reverbs, chorus, etc. The best
>>>>>way to find out if a synth is really good is to bypass all effects and
>>see
>>>>
>>>>>what happens. Most are pretty
>>>>>good these days, but about half the time, there are presets that fall
>>>>>completely flat in fx bypass.
>>>>>
>>>>>DAWs aren't designed to put a sonic fingerprint on a sound the way synths
>>>>
>>>>>are - they are designed
>>>>>to *not* add anything - to pass through what we create as users, with
>>no
>>>>
>>>>>alteration (or as little as possible)
>>>>>beyond what we add with intentional processing (EQ, comps, etc).
>>>>>Developers
>>>>
>>>>>would find no pride
>>>>>in hearing that their DAW sounds anything different than whatever is
>being
>>>>
>>>>>played back in it,
>>>>>and the concept is contrary to what AES and IEEE proceedings on the
issue
>>>>
>>>>>propose in general
>>>>>digital audio discussions, white papers, etc.
>>>>>
>>>>>What ID ended up doing with Paris (at least from what I gather per Chuck's
>>>>
>>>>>findings - so correct me if I'm missing part of the equation Chuck),
>>>>>is drop the track gain by 20dB or so, then added it back at the master
>>
>>>>>buss
>>>>
>>>>>to create the effect of headroom (probably
>>>>>because the master buss is really summing on the card, and they have
>more
>>>>
>>>>>headroom there than on the tracks
>>>>>where native plugins might be used). I don't know if Paris passed 32-bit
>>>>
>>>>>float files to the EDS card, but sort of
>>>>>doubt it. I think Chuck has clarified this at one point, but don't
recall
>>>>
>>>>>the answer.
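The Paris-style gain staging described here (as I understand Chuck's account; this is my own sketch, not ID's code) is nearly lossless in floating point, which is why it reads as extra headroom rather than as audible processing:

```python
# Drop each track ~20 dB, sum on a 32-bit float bus, make up the gain at the
# master -- then compare against a straight double-precision sum.
import struct
import random

def f32(x):
    """Round a Python double to the nearest IEEE 754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

random.seed(7)
tracks = [random.uniform(-1.0, 1.0) for _ in range(16)]
drop = 10 ** (-20 / 20)              # -20 dB -> multiply by 0.1

bus = 0.0
for s in tracks:
    bus = f32(bus + f32(s * drop))   # tracks attenuated before summing
out = f32(bus / drop)                # +20 dB makeup at the master buss

ref = sum(tracks)                    # straight double-precision sum
print(abs(out - ref) < 1e-4)         # round-trip error stays tiny: True
```

The residue of the drop/makeup round trip lands far below the 24-bit floor, consistent with Dedric's point that the variations only appear in the lowest-order bits.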
>>>>>
>>>>>Also what Paris did is use a greater bit depth on the hardware than
>>>>>ProTools
>>>>
>>>>>did - at the time PT was just
>>>>>bring Mix+ systems to market, or they had been out for a year or two
>(if
>>>> I
>>>>>have my timeline right) - they
>>>>>were 24-bit fixed all the way through. Logic and Cubase were native
>DAWs,
>>>>
>>>>>but native was still too slow
>>>>>to compete with hardware hybrids. Paris trumped them all by running
>
>>>>>32-bit
>>>>
>>>>>float natively (not new really, but
>>>>>better than sticking to 24-bit) and 56 or so bits in hardware instead
>>of
>>>>
>>>>>going to Motorola DSPs at 24.
>>>>>The onboard effects were also a step up from anything out there, so
the
>>>> demo
>>>>>did sound good.
>>>>>I don't recall which, but one of the demos, imho, wasn't so good (some
>>>>>sloppy production and
>>>>>vocals in spots, IIRC), so I only listened to it once. ;-)
>>>>>
>>>>>Coupled with the gain drop and buss makeup, this all gave it a "headroom"
>>>> no
>>>>>one else had. With very nice
>>>>>onboard effects, Paris jumped ahead of anything else out there easily,
>>and
>>>>
>>>>>still respectably holds its own today
>>>>>in that department.
>>>>>
>>>>>Most demos I hear (when I listen to them) vary in quality, usually not
>>so
>>>>
>>>>>great in some area. But if a demo does
>>>>>sound great, then it at least says that the product is capable of at
>>
>>>>>least
>>>>
>>>>>that level of performance, and it can
>>>>>only help improve a prospective buyer's impression of it.
>>>>>
>>>>>Regards,
>>>>>Dedric
>>>>>
>>>>>"LaMont " <jjdpro@ameritech.net> wrote in message news:458c14c0$1@linux...
>>>>>>
>>>>>> Dedric good post..
>>>>>>
>>>>>> However, I have a PT M-Powered/M-Audio 410 interface for my laptop and it
>>>>>> has that same sound (no EQ, zero fader) that HD does. I know they use the
>>>>>> same 48-bit fixed mixer. I load up the same file in Nuendo (no EQ, zero
>>>>>> fader)... result: different sonic character.
>>>>>>
>>>>>> PT having a top-end touch; Nuendo, a nice smooth (flat) sound. And I'm just
>>>>>> talking about a stereo wav file nulled with no EQ... nothing...
>>>>>> zilch... nada...
>>>>>>
>>>>>> Now, there are devices (keyboards, drum machines) on the market today that
>>>>>> have a master buss compressor and EQ set to on, with the top end notched up.
>>>>>> Why? Because it gives their product a competitive advantage over the
>>>>>> competition.
>>>>>> Ex: Yamaha's Motif ES, Akai's MPC 1000 and 2500, Roland's Fantom.
>>>>>>
>>>>>> So why wouldn't a DAW manufacturer code in an extra (ooommf) to make their
>>>>>> DAW sound better? Especially given the "I hate digital summing" crowd? And
>>>>>> if I'm a DAW manufacturer, what would give my product a sonic edge over the
>>>>>> competition?
>>>>>>
>>>>>> We live in the "louder is better" audio world these days, so a DAW that
>>>>>> can catch my attention "sonically" will probably get the sale. That's what
>>>>>> happened to me back in 1997 when I heard Paris. I was floored!!! Still to
>>>>>> this day, nothing has floored me like that "Road House Blues" demo I heard
>>>>>> on Paris.
>>>>>>
>>>>>> Was it the hardware? Was it the software? I remember talking with Edmund
>>>>>> at the 2000 Winter NAMM, and he told me that he & Steve set out to reproduce
>>>>>> the sonics of a big-buck analog board, EQs and all. And summing was a big,
>>>>>> big issue for them because they (ID) thought that nobody had gotten it
>>>>>> (summing) right. And by right, they meant: behaved like a console with a
>>>>>> wide lane for all of those tracks.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>"LaMont" <jjdpro@ameritech.net> wrote in message
>>>>>>>news:458be8d5$1@linux...
>>>>>>>>
>>>>>>>> Okay...
>>>>>>>> I guess what I'm saying is this:
>>>>>>>>
>>>>>>>> -Is it possible that different DAW manufacturers "code" their app
>>>>>>>> differently
>>>>>>>> for sound results?
>>>>>>>
>>>>>>>Of course it is *possible* to do this, but only if the DAW has a
>>>>>>>specific
>>>>>>
>>>>>>>sound shaping purpose
>>>>>>>beyond normal summing/mixing. Users talk about wanting developers
>to
>>>> add
>>>>>> a
>>>>>>>"Neve sound" or "API sound" option to summing engines,
>>>>>>>but that's really impractical given the amount of dsp required to
make
>>>> a
>>>>>>
>>>>>>>decent emulation (with convolution, dynamic EQ functions,
>>>>>>>etc). For sake of not eating up all cpu processing, that could likely
>>>>
>>>>>>>only
>>>>>>
>>>>>>>surface as a built-in EQ, which
>>>>>>>no one wants universally in summing, and anyone can add at will already.
>>>>>>>
>>>>>>>So it hasn't happened yet and isn't likely to as it detours from the
>>
>>>>>>>basic
>>>>>>
>>>>>>>tenet of audio recording - recreate what comes in as
>>>>>>>accurately as possible.
>>>>>>>
>>>>>>>What Digi did in recoding their summing engine was try to recover
some
>>>>>>>of the damage done by the 24-bit buss in Mix systems. Motorola 56k
>dsps
>>>>>> are
>>>>>>>24-bit fixed point chips and I think
>>>>>>>the new generation (321?) still is, but they use double words now
for
>>>>>>>48-bits). And though plugins could process at 48-bit by
>>>>>>>doubling up and using upper and lower 24-bit words for 48-bit outputs,
>>>> the
>>>>>>
>>>>>>>buss
>>>>>>>between chips was 24-bits, so they had to dither to 24-bits after
every
>>>>>>
>>>>>>>plugin. The mixer (if I recall correctly) also
>>>>>>>had a 24-bit buss, so what Digi did is to add a dither stage to the
>>
>>>>>>>mixer
>>>>>> to
>>>>>>>prevent this
>>>>>>>constant truncation of data. 24-bits isn't enough to cover summing
>>for
>>>>>> more
>>>>>>>than a few tracks without
>>>>>>>losing information in the 16-bit world, and in the 24-bit world some
>>>>>>>information will be lost, at least at the lowest levels.
>>>>>>>
>>>>>>>Adding a dither stage (though I think they did more than that - perhaps
>>>>>>
>>>>>>>implement a 48-bit double word stage as well),
>>>>>>>simply smoothed over the truncation that was happening, but it didn't
>>>>
>>>>>>>solve
>>>>>>
>>>>>>>the problem, so with HD
>>>>>>>they went to a double-word path - throughout I believe, including
the
>>>> path
>>>>>>
>>>>>>>between chips. I believe the chips
>>>>>>>are still 24-bit, but by doubling up the processing (yes at a cost
>of
>>>>
>>>>>>>twice
>>>>>>
>>>>>>>the overhead), they get a 48-bit engine.
>>>>>>>This not only provided better headroom, but greater resolution. Higher
>>>>>> bit
>>>>>>>depths subdivide the amplitude with greater resolution, and that's
>>>>>>>really where we get the definition of dynamic range - by lowering
the
>>>>
>>>>>>>signal
>>>>>>
>>>>>>>to quantization noise ratio.
>>>>>>>
>>>>>>>With DAWs that use 32-bit floating point math all the way through,
>the
>>>>
>>>>>>>only
>>>>>>
>>>>>>>reason for altering the summing
>>>>>>>is by error, and that's an error that would actually be hard to make
>>and
>>>>>> get
>>>>>>>past a very basic alpha stage of testing.
>>>>>>>There is a small difference in fixed point math and floating point
>math,
>>>>>> or
>>>>>>>at least a theoretical difference in how it affects audio
>>>>>>>in certain cases, but not necessarily in the result for calculating
>>gain
>>>>>> in
>>>>>>>either for the same audio file. Where any differences might show
up
>>is
>>>>>>
>>>>>>>complicated, and I believe only appear at levels below 24-bit (or
in
>>>>>>>headroom with tracks pushed beyond 0dBFS), or when/if
>>>>>>>there are any differences in where each amplitude level is quantized.
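The claim that float gain differences only show up below the 24-bit level can be checked directly. This is a sketch, not any DAW's engine: single-precision rounding is emulated with `struct`, and the ±6 dB figure is an arbitrary choice:

```python
import math
import struct

def f32(x):
    """Round a double to 32-bit float precision, as a float engine would."""
    return struct.unpack('f', struct.pack('f', x))[0]

def gain(sample, db):
    return f32(sample * f32(10 ** (db / 20.0)))

s = 0.5
round_trip = gain(gain(s, 6.0), -6.0)   # +6 dB then -6 dB in float32
residual = abs(round_trip - s)
# residual is on the order of 1e-7 - i.e. around the last bit of a 24-bit
# word for a near-full-scale sample, far below audibility
```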
>>>>>>>
>>>>>>>Obviously there can be differences if the DAW has to use varying bit
>>>>>>>depths
>>>>>>
>>>>>>>throughout a single summing path to accommodate hardware
>>>>>>>as well as software summing, since there may be truncation or rounding
>>>>
>>>>>>>along
>>>>>>
>>>>>>>the way, but that impacts the lowest bit
>>>>>>>level, and hence - spatial reproduction, reverb tails perhaps, and
>>>>>>>"depth",
>>>>>>
>>>>>>>not the levels where most music sits, so the differences are most
>>>>>>>often more subtle than not. But most modern DAWs have eliminated
those
>>>>>>
>>>>>>>"rough edges" in the math by increasing the bit depth to accommodate
>>
>>>>>>>normal
>>>>>>
>>>>>>>summing required for mixing audio.
>>>>>>>
>>>>>>>So with Lynn's unity gain summing test (A files on the CD I believe),
>>>> DAWs
>>>>>>
>>>>>>>were never asked to sum beyond 24-bits,
>>>>>>>at least not on the upper end of the dynamic range, so everything
that
>>>>
>>>>>>>could
>>>>>>
>>>>>>>represent 24-bits accurately would cancel. The only ones
>>>>>>>that didn't were ones that had a different bit depth and/or gain
>>>>>>>structure
>>>>>>
>>>>>>>whether hybrid or native
>>>>>>>(e.g. Paris' subtracting 20dB from tracks and adding it to the buss).
>>>> In
>>>>>>
>>>>>>>this case, PTHD cancelled (when I tested it) with
>>>>>>>Nuendo, Samplitude, Logic, etc because the impact of the 48-bit fixed
>>>> vs.
>>>>>>
>>>>>>>32-bit float wasn't a factor.
>>>>>>>
>>>>>>>When trying other tests, even when adding and subtracting gain, Nuendo,
>>>>>>
>>>>>>>Sequoia and Sonar cancel - both audibly and
>>>>>>>visually at inaudible levels, which only proves that one isn't making
>>>> an
>>>>>>
>>>>>>>error when calculating basic gain. Since a dB is well defined,
>>>>>>>and the math to add gain is simple, they shouldn't. The fact that
>they
>>>>>> all
>>>>>>>use 32-bit float all the way through eliminates a difference
>>>>>>>in data structure as well, and this just verifies that. There was
>a
>>
>>>>>>>time
>>>>>>
>>>>>>>that supposedly Logic (v3, v4?) was partly 24-bit, or so the rumor
>went,
>>>>>>>but it's 32-bit float all the way through now just as Sonar,
>>>>>>>Nuendo/Cubase,
>>>>>>
>>>>>>>Samplitude/Sequoia, DP, Audition (I presume at least).
>>>>>>>I don't know what Acid or Live use. Saw promotes a fixed point engine,
>>>>>> but
>>>>>>>I don't know if it is still 24-bit, or now 48 bit.
>>>>>>>That was an intentional choice by the developer, but he's the only
>one
>>>> I
>>>>>>
>>>>>>>know of that stuck with 24-bit for summing
>>>>>>>intentionally, esp. after the Digi Mix system mixer incident.
>>>>>>>
>>>>>>>Long answer, but to sum up, it is certainly physically *possible*
for
>>>> a
>>>>>>
>>>>>>>developer to code something differently intentionally, but not
>>>>>>>in reality likely since it would be breaking some basic fixed point
>>or
>>>>>>>floating point math rules. Where the differences really
>>>>>>>showed up in the past is with PT Mix systems where the limitation
was
>>>>
>>>>>>>really
>>>>>>
>>>>>>>significant - e.g. 24 bit with truncation at several stages.
>>>>>>>
>>>>>>>That really isn't such an issue anymore. Given the differences in
>>>>>>>workflow,
>>>>>>
>>>>>>>missing something in workflow or layout differences
>>>>>>>is easy enough to do (e.g. Sonar doesn't have group and busses the
>way
>>>>>>>Nuendo does, as it's outputs are actually driver outputs,
>>>>>>>not software busses, so in Sonar, busses are actually outputs, and
>sub
>>>>>>>busses are actually busses in Nuendo. There is no,
>>>>>>>or at least I haven't found the equivalent of a Nuendo group in Sonar
>>>> -
>>>>>> that
>>>>>>>affects the results of some tests (though not basic
>>>>>>>summing) if not taken into account, but when taken into account, they
>>>> work
>>>>>>
>>>>>>>exactly the same way).
>>>>>>>
>>>>>>>So at least when talking about apps with 32-bit float all the way
>>>>>>>through,
>>>>>>
>>>>>>>it's safe to say (since it has been proven) that summing isn't different
>>>>>>
>>>>>>>unless
>>>>>>>there is an error somewhere, or variation in how the user duplicates
>>the
>>>>>>
>>>>>>>same mix in two different apps.
>>>>>>>
>>>>>>>Imho, that's actually a very good thing - approaching a more consistent
>>>>>>
>>>>>>>basis for recording and mixing from which users can make all
>>>>>>>of the decisions as to how the final product will sound and not be
>>>>>>>required
>>>>>>
>>>>>>>to decide when purchasing a pricey console, and have to
>>>>>>>focus their business on clients who want "that sound". I believe
we
>>are
>>>>>>
>>>>>>>actually closer to the pure definition of recording now than
>>>>>>>we once were.
>>>>>>>
>>>>>>>Regards,
>>>>>>>Dedric
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> If the answer is yes, then the real task is to discover or rather
>>>>>>>> un-cover
>>>>>>>> what is, say, Motu's vision of summing, versus Digidesign, versus
>>>>>>>> Steinberg
>>>>>>>> and so on..
>>>>>>>>
>>>>>>>> What's under the hood. To me and others, when Digi re-coded their
>
>>>>>>>> summing
>>>>>>>> engine, it was obvious that Pro Tools has a noticeable top end (8k-10k)
>>>>>>
>>>>>>>> bump.
>>>>>>>> Whereas Steinberg's summing is very neutral.
>>>>>>>>
>>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>>Hi Neil,
>>>>>>>>>
>>>>>>>>>Jamie is right. And you aren't wacked out - you are thinking this
>>>>>>>>>through
>>>>>>>>
>>>>>>>>>in a reasonable manner, but coming to the wrong
>>>>>>>>>conclusion - easy to do given how confusing digital audio can be.
>>
>>>>>>>>>Each
>>>>>>>> word
>>>>>>>>>represents an amplitude
>>>>>>>>>point on a single curve that is changing over time, and can vary
>with
>>>>>> a
>>>>>>>>
>>>>>>>>>speed up to the Nyquist frequency (as Jamie described).
>>>>>>>>>The complex harmonic content we hear is actually the frequency
>>>>>>>>>modulation
>>>>>>>> of
>>>>>>>>>a single waveform,
>>>>>>>>>that over a small amount of time creates the sound we translate
-
>>we
>>>>
>>>>>>>>>don't
>>>>>>>>
>>>>>>>>>really hear a single sample at a time,
>>>>>>>>>but thousands of samples at a time (1 sample alone could at most
>>>>>>>>>represent
>>>>>>>> a
>>>>>>>>>single positive or negative peak
>>>>>>>>>of a 22,050Hz waveform).
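The Nyquist point above can be made concrete with a couple of lines (a minimal sketch; 44.1 kHz and A440 are just example figures):

```python
sample_rate = 44100
nyquist = sample_rate / 2        # 22,050 Hz: one lone sample can at most be
                                 # a single peak of a wave this fast
samples_per_cycle = sample_rate / 440.0   # ~100 samples for one A440 cycle
# Even a tenth of a second of sound is 4,410 samples; that run of words,
# not any single word, carries the harmonic content we actually hear.
```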
>>>>>>>>>
>>>>>>>>>If one bit doesn't cancel, esp. if it's a higher order bit than
number
>>>>>> 24,
>>>>>>>>
>>>>>>>>>you may hear, and will see that easily,
>>>>>>>>>and the higher the bit in the dynamic range (higher order) the more
>>>>>>>>>audible
>>>>>>>>
>>>>>>>>>the difference.
>>>>>>>>>Since each bit is 6dB of dynamic range, you can extrapolate how
"loud"
>>>>>>
>>>>>>>>>that
>>>>>>>>
>>>>>>>>>bit's impact will be
>>>>>>>>>if there is a variation.
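The 6 dB-per-bit rule used here can be worked out directly (a sketch; bit 12 is taken from Neil's question, counting down from the MSB):

```python
import math

db_per_bit = 20 * math.log10(2)          # ~6.02 dB of dynamic range per bit
word_bits = 24
full_range = word_bits * db_per_bit      # ~144.5 dB for a 24-bit word

# A difference confined to bit 12 of a 24-bit word sits roughly here:
level_of_bit_12 = -12 * db_per_bit       # ~ -72 dBFS: plainly audible
```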
>>>>>>>>>
>>>>>>>>>Now, obviously if we are talking about 1 sample in a 44.1k rate
song,
>>>>>> then
>>>>>>>>
>>>>>>>>>it would simply be a
>>>>>>>>>click (only audible if it's a high enough order bit) instead of
an
>>>>>>>>>obvious
>>>>>>>>
>>>>>>>>>musical difference, but that should never
>>>>>>>>>happen in a phase cancellation test between identical files higher
>>
>>>>>>>>>than
>>>>>>>> bit
>>>>>>>>>24, unless there are clock sync problems,
>>>>>>>>>driver issues, or the DAW is an early alpha version. :-)
>>>>>>>>>
>>>>>>>>>By definition of what DAWs do during playback and record, every
audio
>>>>>>
>>>>>>>>>stream
>>>>>>>>
>>>>>>>>>has the same point in time (judged by the timeline)
>>>>>>>>>played back sample accurately, one word at a time, at whatever
sample
>>>>>>
>>>>>>>>>rate
>>>>>>>>
>>>>>>>>>we are using. A phase cancellation test uses that
>>>>>>>>>fact to compare two audio files word for word (and hence bit for
>bit
>>>>
>>>>>>>>>since
>>>>>>>>
>>>>>>>>>each bit of a 24-bit word would
>>>>>>>>>be at the same bit slot in each 24-bit word). Assuming they are
>
>>>>>>>>>aligned
>>>>>>>> to
>>>>>>>>>the same start point, sample
>>>>>>>>>accurately, and both are the same set of sample words at each sample
>>>>>>>>>point,
>>>>>>>>
>>>>>>>>>bit for bit, and one is phase inverted,
>>>>>>>>>they will cancel through all 24 bits. For two files to cancel
>>>>>>>>>completely
>>>>>>>>
>>>>>>>>>for the duration of the file, each and every bit in each word
>>>>>>>>>must be the exact opposite of that same bit position in a word at
>>the
>>>>>> same
>>>>>>>>
>>>>>>>>>sample point. This is why zooming in on an FFT
>>>>>>>>>of the full difference file is valuable as it can show any differences
>>>>>> in
>>>>>>>>
>>>>>>>>>the lower order bits that wouldn't be audible. So even if
>>>>>>>>>there is no audible difference, the visual followup will show if
>the
>>>> two
>>>>>>>>
>>>>>>>>>files truly cancel even at levels below hearing, or
>>>>>>>>>outside of a frequency change that we will perceive.
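The word-for-word cancellation test described above reduces to a few lines (a toy sketch with hand-picked sample values, standing in for real files; the FFT follow-up is omitted):

```python
a = [0.25, -0.5, 0.125, 0.0, 0.75]   # one "file", as amplitude words
b = list(a)                          # a bit-identical second file

inverted = [-s for s in b]           # polarity (phase) inversion
null = [x + y for x, y in zip(a, inverted)]
assert all(r == 0.0 for r in null)   # identical files null to silence

b[2] += 1 / (1 << 23)                # change ~one low-order bit in one word
residual = [x - y for x, y in zip(a, b)]
# even a 1-LSB difference in a single word breaks the null
```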
>>>>>>>>>
>>>>>>>>>When they don't cancel, usually there will be way more than 1 bit
>>>>>>>>>difference - it's usually one or more bits in the words for
>>>>>>>>>thousands of samples. From a musical standpoint this is usually
>in
>>>> a
>>>>>>>>>frequency range (low freq, or high freq most often) - that will
>>>>>>>>>show up as the difference between them, and that usually happens
>due
>>>> to
>>>>>>>> some
>>>>>>>>>form of processing difference between the files,
>>>>>>>>>such as EQ, compression, frequency dependent gain changes, etc.
That
>>>> is
>>>>>>>> what
>>>>>>>>>I believe you are thinking through, but when
>>>>>>>>>talking about straight summing with no gain change (or known equal
>>
>>>>>>>>>gain
>>>>>>>>
>>>>>>>>>changes), we are only looking at linear, one for one
>>>>>>>>>comparisons between the two files' frequency representations.
>>>>>>>>>
>>>>>>>>>Regards,
>>>>>>>>>Dedric
>>>>>>>>>
>>>>>>>>>> Neil wrote:
>>>>>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>>>>> The tests I did were completely blank down to -200 dB (far below
>>>> the
>>>>>>>>
>>>>>>>>>>>> last
>>>>>>>>>>>
>>>>>>>>>>>> bit). It's safe to say there is no difference, even in
>>>>>>>>>>>> quantization noise, which by technical rights, is considered
>below
>>>>>> the
>>>>>>>>
>>>>>>>>>>>> level
>>>>>>>>>>>
>>>>>>>>>>>> of "cancellation" in such tests.
>>>>>>>>>>>
>>>>>>>>>>> I'm not necessarily talking about just the first bit or the
>>>>>>>>>>> last bit, but also everything in between... what happens on bit
>>>>>>>>>>> #12, for example? Everything on bit #12 should be audible, but
>>>>>>>>>>> in an a/b test what if there are differences in what bits #8
>>>>>>>>>>> through #12 sound like, but the amplitude is still the same on
>>>>>>>>>>> both files at that point, you'll get a null, right? Extrapolate
>>>>>>>>>>> that out somewhat & let's say there are differences in bits #8
>>>>>>>>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>>>>>>>>> etc through 43,972... Now this is breaking things down well
>>>>>>>>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>>>>>>>>> don't know of any way we could extract JUST that information
>>>>>>>>>>> from each file & play it back for an a/b test; but would not
>>>>>>>>>>> that be enough to have two "null-able" files that do actually
>>>>>>>>>>> sound somewhat different?
>>>>>>>>>>>
>>>>>>>>>>> I guess what I'm saying is that since each sample in a musical
>>>>>>>>>>> track or full song file doesn't represent a pure, simple set
of
>>>>>>>>>>> content like a sample of a sine wave would - there's a whole
>>>>>>>>>>> world of harmonic structure in each sample of a song file, and
>>>>>>>>>>> I think (although I'll admit - I can't "prove") that there is
>>>>>>>>>>> plenty of room for some variables between the first bit & the
>>>>>>>>>>> last bit while still allowing for a null test to be successful.
>>>>>>>>>>>
>>>>>>>>>>> No? Am I wacked out of my mind?
>>>>>>>>>>>
>>>>>>>>>>> Neil
>>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
Re: (No subject)...What's up under the hood? [message #77370 is a reply to message #77352] |
Sat, 23 December 2006 14:21 |
LaMont
Messages: 828 Registered: October 2005
Hey Chuck, where can we find those guys..They should be coveted by every DAW
company on the planet! :)
"chuck duffy" <c@c.com> wrote:
>
>Hi Lamont,
>
>I've posted this several times in the past, but here's the scoop. Edmund
>did not write the summing code. It's deep within the DSP code running on
>the ESP2 chips. It was written by some very talented guys at Ensoniq. I
>really dig everything that Edmund and Stephen did, but the summing just
isn't
>part of it.
>
>The stuff I posted is not really a theory. The PARIS mix engine source
code
>is freely available for download. Anyone with a little time, patience and
>the ESP2 patent can clearly see what is going on. It's only a couple hundred
>lines of code.
>
>Chuck
>
>"Dedric Terry" <dedric@echomg.com> wrote:
>>I can't tell you why you hear ProTools differently than Nuendo using a
>>single file.
>>There isn't any voodoo in the software, or hidden character enhancing dsp.
>
>>I'll see if
>>I can round up an M-Powered system to compare with next month.
>>
>>For reference, every time I open Sequoia I think I might hear a broader,
>
>>clean,
>>and almost flat (spectrum, not depth) sound, but I don't - it's the same
>as
>>Nuendo, fwiw.
>>Also I don't think what I was referring to was a theory from Chuck - I
>
>>believe that was what he
>>discovered in the code.
>>
>>Digital mixers all have different preamps and converters. Unless you are
>
>>bypassing every
>>EQ and converter and going digital in and out to the same converter when
>
>>comparing, it would be hard
>>to say the mix engine itself sounds different than another mixer, but taken
>
>>as a whole, then
>>certainly they may very well sound different. In addition, hardware digital
>>mixers may use a variety of different paths between the I/O, channel
>>processing, and summing,
>>though most are pretty much software mixers on a single chip or set of
dsps
>
>>similar to ProTools,
>>with I/O and a hardware surface attached.
>>
>>I know it may be hard to separate the mix engine as software in either
a
>
>>native DAW
>>or a digital mixer, from the hardware that translates the audio to something
>
>>we hear,
>>but that's what is required when comparing summing. The hardware can
>>significantly change
>>what we hear, so comparing digital mixers really isn't of as much interest
>
>>as comparing native
>>DAWs in that respect - unless you are looking to buy one of course.
>>
>>Even though I know you think manufacturers are trying to add something
to
>
>>give them an edge, I am 100%
>>sure that isn't the case - rather they are trying to add or change as little
>
>>as possible in order to give
>>them the edge. Their end of digital audio isn't about recreating the past,
>
>>but improving upon it.
>>As we've discussed and agreed before, the obsession with recreating
>>"vintage" technology is as much
>>fad as it is a valuable creative asset. There is no reason we shouldn't
>
>>have far superior hardware and software EQs and comps
>>than 20, 30 or 40 years ago. No reason at all, other than market demand,
>
>>but the majority of software, and new
>>hardware gear on the market has a vintage marketing tagline with it.
>>Companies will sell any bill of
>>goods if customers will buy it.
>>
>>There's nothing unique about the summing in Nuendo, Cubase, Sequoia/Samp,
>>or Sonar, and it's pretty safe to include Logic and DP in that list as
well.
>
>>One of the reasons I test
>>these things is to be sure my DAW isn't doing something wrong, or something
>
>>I don't know about.
>>
>>Vegas - I use it for video conversions and have never done any critical
>
>>listening tests with it. What I have heard
>>briefly didn't sound any different. It certainly looks plain vanilla
>>though. What you are describing is exactly
>>what I would say about the GUIs of each of those apps, not that it means
>
>>anything. Just interesting.
>>
>>That's one reason I listen eyes closed and double check with phase
>>cancellation tests and FFTs - I am
>>influenced creatively by the GUI to some degree. I actually like Cubase
>4's
>>GUI better than Nuendo 3.2,
>>though there are only slight visual differences (some workflow differences
>
>>are a definite improvement for me though).
>>
>>ProTools' GUI always made me want to write one dimensional soundtracks
in
>
>>mono for public utilities, accounting offices
>>or the IRS while reading my discrete systems analysis textbook - it was
>also
>>grey. ;-)
>>
>>Regards,
>>Dedric
>>
>>"LaMont" <jjdpro@ameritech.net> wrote in message news:458c82fd$1@linux...
>>>
>>> Dedric, my simple test is simple..
>>> Using the same audio interface, with the same stereo file..null-ed to
>
>>> zero..No
>>> eq, no fx. Master fader on zero..
>>>
>>> Nuendo, Pro-Tools -Mpowered(native)... yields a sonic difference that
>I
>>> have
>>> referenced before.. The sound coming from PT-M has a nice top end, whereas
>>> Nuendo has a flatter sound quality.
>>> Same audio interface. M-audio 410..Using Mackies & Blue-Sky pro monitors..
>>>
>>> Same test at the big room..PT-HD & Nuendo, Logic Audio (Mac G5 Dual). Using
>
>>> the
>>> 192 interface.
>>> Same results..But adding Logic audio's sound ..(Broad, thick)
>>>
>>> Something's going on.
>>>
>>> Chuck's post about how Paris handles audio is a theory..Only Edmund can
>
>>> truly
>>> give us the goods on what's really what..
>>>
>>> I disagree that manufacturers don't set out to put a sonic print on their
>
>>> products.
>>> I think they do.
>>>
>>> I have been fortunate to work on some digital mixers and I can tell you
>
>>> that
>>> each one has their own sound. The Sony Dmx-100 was modeled after SSL
4000g
>>> (like its Big Brother). And you know what? That board (Dmx-100) sounds very
>warm
>>> and its eq tries to behave and sound just like an SSL.. Unlike the Yamaha
>>> Dm2000(version 1.x) which has a very Clean, neutral sound..However, some
>>> complained that it was too Vanilla and thus, Yamaha added a version 2.0
>
>>> which
>>> added Vintage type Eq's, modeled analog input gain saturation fx to
give
>>> the user a choice between Clean and Neutral vs sonic Character.
>>>
>>> So, if digital consoles can be given a sonic character, why not a software
>>> mixer?
>>> The truth is, there are some folks who want a neutral mixer and then
there
>>> are others who want a sonic footprint imparted, and these can be coded
>in
>>> the digital realm.
>>> The same applies to the manufacturers. They too have their vision on what
>they
>>> think and want their product to sound like.
>>>
>>> I love reading on gearslutz the posts from Plugin developers and their
>
>>> interpretations
>>> and opinions about what makes their Neve 1073 Eq better and what goes
>into
>>> making their version sound like it does.. Each Developer has a different
>>> vision as to what the Neve 1073 should sound like. And yet they all sound
>>> good , but slightly different.
>>>
>>> You stated that you use Vegas. Well as you know, Vegas has a very generic
>>> sound..Just plain and simple. But, I bet you can tell the difference
>on
>>> your system when you play that same file in Nuendo (no fx, no eq,
>>> nulled to zero)..
>>> ???
>>>
>>>
>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>Lamont - what is the output chain you are using for each app when
>>>>comparing
>>>
>>>>the file in Nuendo
>>>>vs ProTools? On the same PC, I presume (and is this PT HD or M-Powered?)?
>>>>Since these can't use the same output driver, you would have to depend
>on
>>>
>>>>the D/A being
>>>>the same, but clocking will be different unless you have a master clock,
>>> and
>>>>both interfaces
>>>>are locking with the same accuracy. This was one of the issues that
came
>>> up
>>>>at Lynn Fuston's
>>>>D/A converter shootout - when do you lock to external clock and incur
>the
>>>
>>>>resulting jitter,
>>>>and when do you trust the internal clock - and if you do lock externally,
>>>
>>>>how good is the PLL
>>>>in the slave device? These issues can cause audible changes in the top
>>> end
>>>>that have nothing to do
>>>>with the software itself. If you say that PTHD through the same converter
>>>
>>>>output as Nuendo (via? RME?
>>>>Lynx?) using the same master clock, sounds different playing a single
>
>>>>audio
>>>
>>>>file, then I take your word
>>>>for it. I can't tell you why that is happening - only that an audible
>>>>difference really shouldn't happen due
>>>>to the software alone - not with a single audio file, esp. since I've
>
>>>>heard
>>>
>>>>and seen PTHD audio cancel with
>>>>native DAWs. Just passing a single 16 or 24 bit track down the buss
>to
>>> the
>>>>output driver should
>>>>be, and usually is, completely transparent, bit for bit.
>>>>
>>>>The same audio file played through the same converters should only sound
>>>
>>>>different if something in
>>>>the chain is different - be it clocking, gain or some degree of
>>>>unintended,
>>>
>>>>errant dsp processing. Every DAW should
>>>>pass a single audio file without altering a single bit. That's a basic
>
>>>>level
>>>
>>>>of accuracy we should always
>>>>expect of any DAW. If that accuracy isn't there, you can be sure a heavy
>>>
>>>>mix will be altered in ways you
>>>>didn't intend, even though you would end up mixing with that factor in
>
>>>>place
>>>
>>>>(e.g. you still mix for what
>>>>you want to hear regardless of what the platform does to each audio track
>>> or
>>>>channel).
>>>>
>>>>In fact you should be able to send a stereo audio track out SPDIF or
>>>>lightpipe to another DAW, record it
>>>>bring the recorded file back in, line them up to the first bit, and have
>>>
>>>>them cancel on an inverted phase
>>>>test. I did this with Nuendo and Cubase 4 on separate machines just
to
>>> be
>>>>sure my master clocking and
>>>>slave sync was accurate - it worked perfectly.
>>>>
>>>>Also be sure there isn't a variation in the gain even by 0.1 dB between
>>> the
>>>>two. There shouldn't
>>>>and I wouldn't expect there to be one. Also could PT be set for a
>>>>different
>>>
>>>>pan law? Shouldn't make a
>>>>difference even if comparing two mono panned files to their stereo
>>>>interleaved equivalent, but for sake
>>>>of completeness it's worth checking as well. A variation in the output
>>>
>>>>chain, be it drivers, audio card,
>>>>or converters would be the most likely culprit here.
>>>>
>>>>The reason DAW manufacturers wouldn't add any sonic "character"
>>>>intentionally is that the
>>>>ultimate goal from day one with recording has been to accurately reproduce
>>>
>>>>what we hear.
>>>>We developed a musical penchant for sonic character because the hardware
>>>
>>>>just wasn't accurate,
>>>>and what it did often sent us down new creative paths - even if by force
>>> -
>>>>and we decided it was
>>>>preferred that way.
>>>>
>>>>Your point about what goes into the feature presets to sell synths is
>
>>>>right
>>>
>>>>for sure, but synths are about
>>>>character and getting that "perfect piano" or crystal clear bell pad,
>or
>>> fat
>>>>punchy bass without spending
>>>>a mint on development, adding 50G onboard sample libraries, or costing
>
>>>>$15k,
>>>
>>>>so what they
>>>>lack in actual synthesis capabilities, they make up with EQ and effects
>>> on
>>>>the output. That's been the case
>>>>for years, at least since we had effects on synths at least. But even
>
>>>>with
>>>
>>>>modern synths such as the Fantom,
>>>>Tritons, etc, which are great synths all around, of course the coolest,
>>>
>>>>widest and biggest patches
>>>>will make the biggest impression - so in come the EQs, limiters, comps,
>>>
>>>>reverbs, chorus, etc. The best
>>>>way to find out if a synth is really good is to bypass all effects and
>see
>>>
>>>>what happens. Most are pretty
>>>>good these days, but about half the time, there are presets that fall
>>>>completely flat in fx bypass.
>>>>
>>>>DAWs aren't designed to put a sonic fingerprint on a sound the way synths
>>>
>>>>are - they are designed
>>>>to *not* add anything - to pass through what we create as users, with
>no
>>>
>>>>alteration (or as little as possible)
>>>>beyond what we add with intentional processing (EQ, comps, etc).
>>>>Developers
>>>
>>>>would find no pride
>>>>in hearing that their DAW sounds anything different than whatever is
being
>>>
>>>>played back in it,
>>>>and the concept is contrary to what AES and IEEE proceedings on the issue
>>>
>>>>propose in general
>>>>digital audio discussions, white papers, etc.
>>>>
>>>>What ID ended up doing with Paris (at least from what I gather per Chuck's
>>>
>>>>findings - so correct me if I'm missing part of the equation Chuck),
>>>>is to drop the track gain by 20dB or so, then add it back at the master
>
>>>>buss
>>>
>>>>to create the effect of headroom (probably
>>>>because the master buss is really summing on the card, and they have
more
>>>
>>>>headroom there than on the tracks
>>>>where native plugins might be used). I don't know if Paris passed 32-bit
>>>
>>>>float files to the EDS card, but sort of
>>>>doubt it. I think Chuck has clarified this at one point, but don't recall
>>>
>>>>the answer.
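The Paris gain structure described here can be sketched in a few lines (illustrative numbers only; the real engine works in fixed point on the EDS card, and the exact figures are as reported above, not verified):

```python
# Pull each track down ~20 dB before the buss, make it up at the master.
drop = 10 ** (-20 / 20)          # -20 dB => x0.1 per track
tracks = [0.9] * 8               # eight hot tracks

raw_sum = sum(tracks)                    # 7.2: would clip a unity fixed buss
bussed = sum(t * drop for t in tracks)   # 0.72: fits with headroom to spare
made_up = bussed / drop                  # +20 dB master makeup restores level
```

The effect is the "headroom" Dedric mentions: the hot intermediate sum lives on the card, where there are more bits to spend, instead of on the track path.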
>>>>
>>>>Also what Paris did is use a greater bit depth on the hardware than
>>>>ProTools
>>>
>>>>did - at the time PT was just
>>>>bringing Mix+ systems to market, or they had been out for a year or two
(if
>>> I
>>>>have my timeline right) - they
>>>>were 24-bit fixed all the way through. Logic and Cubase were native
DAWs,
>>>
>>>>but native was still too slow
>>>>to compete with hardware hybrids. Paris trumped them all by running
>>>>32-bit
>>>
>>>>float natively (not new really, but
>>>>better than sticking to 24-bit) and 56 or so bits in hardware instead
>of
>>>
>>>>going to Motorola DSPs at 24.
>>>>The onboard effects were also a step up from anything out there, so the
>>> demo
>>>>did sound good.
>>>>I don't recall which, but one of the demos, imho, wasn't so good (some
>>>>sloppy production and
>>>>vocals in spots, IIRC), so I only listened to it once. ;-)
>>>>
>>>>Coupled with the gain drop and buss makeup, this all gave it a "headroom"
>>> no
>>>>one else had. With very nice
>>>>onboard effects, Paris jumped ahead of anything else out there easily,
>and
>>>
>>>>still respectably holds its own today
>>>>in that department.
>>>>
>>>>Most demos I hear (when I listen to them) vary in quality, usually not
>so
>>>
>>>>great in some area. But if a demo does
>>>>sound great, then it at least says that the product is capable of at
>
>>>>least
>>>
>>>>that level of performance, and it can
>>>>only help improve a prospective buyer's impression of it.
>>>>
>>>>Regards,
>>>>Dedric
>>>>
>>>>"LaMont " <jjdpro@ameritech.net> wrote in message news:458c14c0$1@linux...
>>>>>
>>>>> Dedric good post..
>>>>>
>>>>> However, I have PT-M-Powered/M-audio 410 interface for my laptop and
>it
>>>
>>>>> has
>>>>> that same sound (no eq, zero fader) that HD does. I know they use
the
>>>
>>>>> same
>>>>> 48 bit fixed mixer. I load up the same file in Nuendo (no eq, zero
>>>>> fader)..results.
>>>>> different sonic character.
>>>>>
>>>>> PT having a top end touch..Nuendo, nice smooth(flat) sound. And I'm
>just
>>>>> talking about a stereo wav file nulled with no eq..nothing
>>>>> ..zilch..nada..
>>>>>
>>>>> Now, there are devices (keyboards, drum machines) on the market today
>
>>>>> that
>>>>> have a Master Buss Compressor and EQ set to on with the top end notched
>>>
>>>>> up.
>>>>> Why? Because it gives their product a competitive advantage over the
>>>>> competition..
>>>>> Ex: Yamaha's Motif ES, Akai's MPC 1000, 2500, Roland's Fantom.
>>>>>
>>>>> So, why wouldn't a DAW manufacturer code in an extra (ooommf) to make
>
>>>>> their
>>>>> DAW sound better. Especially, given the "I hate Digital Summing" crowd?
>>>
>>>>> And,
>>>>> If I'm a DAW manufacturer, what would give my product a sonic edge over
>>> the
>>>>> competition?
>>>>>
>>>>> We live in the "louder is better" audio world these days, so a DAW
that
>>>
>>>>> can
>>>>> catch my attention "sonically" will probably get the sale. That's
>>> what
>>>>> happened to me back in 1997 when I heard Paris. I was floored!!! Still
>>> to
>>>>> this day, nothing has floored me like that "Road House Blues Demo"
I
>
>>>>> heard
>>>>> on Paris.
>>>>>
>>>>> Was it the hardware ? was it the software. I remember talking with
>>>>> Edmund
>>>>> at the 2000 winter Namm, and he told me that he & Steve set out to
>>>>> reproduce
>>>>> the sonics of big buck analog board (eq's) and all.. And, summing was
>>> a
>>>>> big
>>>>> big issue for them because they (ID) thought that nobody has gotten
>>>>> it(summing)
>>>>> right. And by right, they meant, behaved like a console with a wide
>lane
>>>>> for all of those tracks..
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>"LaMont" <jjdpro@ameritech.net> wrote in message
>>>>>>news:458be8d5$1@linux...
>>>>>>>
>>>>>>> Okay...
>>>>>>> I guess what I'm saying is this:
>>>>>>>
>>>>>>> -Is it possible that different DAW manufacturers "code" their app
>>>>>>> differently
>>>>>>> for sound results.
>>>>>>
>>>>>>Of course it is *possible* to do this, but only if the DAW has a
>>>>>>specific
>>>>>
>>>>>>sound shaping purpose
>>>>>>beyond normal summing/mixing. Users talk about wanting developers
to
>>> add
>>>>> a
>>>>>>"Neve sound" or "API sound" option to summing engines,
>>>>>>but that's really impractical given the amount of dsp required to make
>>> a
>>>>>
>>>>>>decent emulation (with convolution, dynamic EQ functions,
>>>>>>etc). For sake of not eating up all cpu processing, that could likely
>>>
>>>>>>only
>>>>>
>>>>>>surface as a built-in EQ, which
>>>>>>no one wants universally in summing, and anyone can add at will already.
>>>>>>
>>>>>>So it hasn't happened yet and isn't likely to as it detours from the
>
>>>>>>basic
>>>>>
>>>>>>tenet of audio recording - recreate what comes in as
>>>>>>accurately as possible.
>>>>>>
>>>>>>What Digi did in recoding their summing engine was try to recover some
>>>>>>of the damage done by the 24-bit buss in Mix systems. Motorola 56k
dsps
>>>>> are
>>>>>>24-bit fixed point chips and I think
>>>>>>the new generation (321?) still is, but they use double words now for
>>>>>>48-bits. And though plugins could process at 48-bit by
>>>>>>doubling up and using upper and lower 24-bit words for 48-bit outputs,
>>> the
>>>>>
>>>>>>buss
>>>>>>between chips was 24-bits, so they had to dither to 24-bits after every
>>>>>
>>>>>>plugin. The mixer (if I recall correctly) also
>>>>>>had a 24-bit buss, so what Digi did is to add a dither stage to the
>
>>>>>>mixer
>>>>> to
>>>>>>prevent this
>>>>>>constant truncation of data. 24-bits isn't enough to cover summing
>for
>>>>> more
>>>>>>than a few tracks without
>>>>>>losing information in the 16-bit world, and in the 24-bit world some
>>>>>>information will be lost, at least at the lowest levels.
>>>>>>
>>>>>>Adding a dither stage (though I think they did more than that -
>>>>>>perhaps implement a 48-bit double word stage as well) simply smoothed
>>>>>>over the truncation that was happening, but it didn't solve the
>>>>>>problem, so with HD they went to a double-word path - throughout, I
>>>>>>believe, including the path between chips. I believe the chips are
>>>>>>still 24-bit, but by doubling up the processing (yes, at a cost of
>>>>>>twice the overhead), they get a 48-bit engine. This not only provided
>>>>>>better headroom, but greater resolution. Higher bit depths subdivide
>>>>>>the amplitude with greater resolution, and that's really where we get
>>>>>>the definition of dynamic range - by lowering the quantization noise
>>>>>>floor relative to the signal.
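The "more bits, more dynamic range" relationship falls straight out of the math: each added bit doubles the number of amplitude steps, and 20*log10(2) is about 6.02 dB. A quick sketch:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range of an N-bit word: 20*log10(2^N) ~ 6.02*N dB."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # 96.3
print(round(dynamic_range_db(24), 1))   # 144.5
print(round(dynamic_range_db(48), 1))   # 289.0
```

So the jump from a 24-bit to a 48-bit engine roughly doubles the theoretical dynamic range available inside the summing path.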
>>>>>>
>>>>>>With DAWs that use 32-bit floating point math all the way through,
>>>>>>the only reason for altering the summing is by error, and that's an
>>>>>>error that would actually be hard to make and get past a very basic
>>>>>>alpha stage of testing. There is a small difference between fixed
>>>>>>point math and floating point math, or at least a theoretical
>>>>>>difference in how it affects audio in certain cases, but not
>>>>>>necessarily in the result of calculating gain in either for the same
>>>>>>audio file. Where any differences might show up is complicated, and I
>>>>>>believe they only appear at levels below 24-bit (or in headroom with
>>>>>>tracks pushed beyond 0dBFS), or when/if there are any differences in
>>>>>>where each amplitude level is quantized.
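One way to see the "headroom beyond 0dBFS" point: a float sum can carry intermediate values above full scale undamaged, while a fixed-point buss has to clip them. A minimal sketch, taking full scale as +/-1.0 (not any particular DAW's internals):

```python
# two "tracks" whose sum peaks above full scale (+/-1.0)
a = [0.8, -0.5]
b = [0.6, -0.9]

float_sum = [x + y for x, y in zip(a, b)]                 # float keeps the overs
fixed_sum = [max(-1.0, min(1.0, s)) for s in float_sum]   # fixed point clips

print([round(s, 6) for s in float_sum])   # [1.4, -1.4]
print(fixed_sum)                          # [1.0, -1.0]

# pulling a master fader down 3 dB after the float sum recovers the overs
gain = 10 ** (-3 / 20)
print([round(s * gain, 3) for s in float_sum])   # [0.991, -0.991]
```

In the float path nothing was lost by the momentary over; in the clipped path the information above full scale is gone for good.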
>>>>>>
>>>>>>Obviously there can be differences if the DAW has to use varying bit
>>>>>>depths throughout a single summing path to accommodate hardware as
>>>>>>well as software summing, since there may be truncation or rounding
>>>>>>along the way. But that impacts the lowest bit level, and hence
>>>>>>spatial reproduction, reverb tails perhaps, and "depth" - not the
>>>>>>levels of most music - so the differences are more often subtle than
>>>>>>not. But most modern DAWs have eliminated those "rough edges" in the
>>>>>>math by increasing the bit depth to accommodate the normal summing
>>>>>>required for mixing audio.
>>>>>>
>>>>>>So with Lynn's unity gain summing test (A files on the CD, I
>>>>>>believe), DAWs were never asked to sum beyond 24 bits, at least not
>>>>>>on the upper end of the dynamic range, so everything that could
>>>>>>represent 24 bits accurately would cancel. The only ones that didn't
>>>>>>were ones that had a different bit depth and/or gain structure,
>>>>>>whether hybrid or native (e.g. Paris' subtracting 20dB from tracks
>>>>>>and adding it to the buss). In this case, PTHD cancelled (when I
>>>>>>tested it) with Nuendo, Samplitude, Logic, etc. because the impact of
>>>>>>48-bit fixed vs. 32-bit float wasn't a factor.
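A sketch of why a gain structure like the Paris example can stop files from cancelling. Assume (purely for illustration - this is not Paris' actual signal path) that the -20 dB stage passes through a 24-bit quantizer before the gain is restored on the buss; the float round trip stays essentially exact, while the fixed-point round trip loses the lowest bits:

```python
import random

def quant24(x: float) -> float:
    """Quantize to 24-bit fixed point (round to nearest step) - a stand-in
    for a fixed-point buss, not any real product's implementation."""
    scale = 1 << 23
    return round(x * scale) / scale

random.seed(0)
track = [random.uniform(-0.5, 0.5) for _ in range(1000)]
gain = 10 ** (-20 / 20)   # -20 dB as a linear factor (0.1)

float_trip = [(s * gain) / gain for s in track]
fixed_trip = [quant24(s * gain) / gain for s in track]

err_float = max(abs(a - b) for a, b in zip(float_trip, track))
err_fixed = max(abs(a - b) for a, b in zip(fixed_trip, track))
print(err_float < 1e-15)   # True - float round trip is essentially exact
print(err_fixed > 1e-7)    # True - the 24-bit stage lost the lowest bits
```

A null test against the original track would pass for the float path and leave a low-level residue for the quantized path.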
>>>>>>
>>>>>>When trying other tests, even when adding and subtracting gain,
>>>>>>Nuendo, Sequoia and Sonar cancel - both audibly and visually, at
>>>>>>inaudible levels - which only proves that no one is making an error
>>>>>>when calculating basic gain. Since a dB is well defined, and the math
>>>>>>to add gain is simple, they shouldn't differ. The fact that they all
>>>>>>use 32-bit float all the way through eliminates a difference in data
>>>>>>structure as well, and this just verifies that. There was a time that
>>>>>>supposedly Logic (v3, v4?) was partly 24-bit, or so the rumor went,
>>>>>>but it's 32-bit float all the way through now, just as Sonar,
>>>>>>Nuendo/Cubase, Samplitude/Sequoia, DP, and Audition are (I presume,
>>>>>>at least). I don't know what Acid or Live use. Saw promotes a fixed
>>>>>>point engine, but I don't know if it is still 24-bit or now 48-bit.
>>>>>>That was an intentional choice by the developer, but he's the only
>>>>>>one I know of that stuck with 24-bit for summing intentionally, esp.
>>>>>>after the Digi Mix system mixer incident.
>>>>>>
>>>>>>Long answer, but to sum up: it is certainly physically *possible*
>>>>>>for a developer to code something differently intentionally, but not
>>>>>>likely in reality, since it would be breaking some basic fixed point
>>>>>>or floating point math rules. Where the differences really showed up
>>>>>>in the past is with PT Mix systems, where the limitation was really
>>>>>>significant - e.g. 24-bit with truncation at several stages.
>>>>>>
>>>>>>That really isn't such an issue anymore. Given the differences in
>>>>>>workflow, missing something in workflow or layout differences is easy
>>>>>>enough to do. E.g. Sonar doesn't have groups and busses the way
>>>>>>Nuendo does, as its outputs are actually driver outputs, not software
>>>>>>busses; so busses in Sonar are actually outputs, and sub busses in
>>>>>>Sonar are actually busses in Nuendo. There is no equivalent of a
>>>>>>Nuendo group in Sonar, or at least I haven't found one. That affects
>>>>>>the results of some tests (though not basic summing) if not taken
>>>>>>into account, but when taken into account, they work exactly the same
>>>>>>way.
>>>>>>
>>>>>>So at least when talking about apps with 32-bit float all the way
>>>>>>through, it's safe to say (since it has been proven) that summing
>>>>>>isn't different unless there is an error somewhere, or variation in
>>>>>>how the user duplicates the same mix in two different apps.
>>>>>>
>>>>>>Imho, that's actually a very good thing - approaching a more
>>>>>>consistent basis for recording and mixing, from which users can make
>>>>>>all of the decisions as to how the final product will sound, and not
>>>>>>be required to decide when purchasing a pricey console and then have
>>>>>>to focus their business on clients who want "that sound". I believe
>>>>>>we are actually closer to the pure definition of recording now than
>>>>>>we once were.
>>>>>>
>>>>>>Regards,
>>>>>>Dedric
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> If the answer is yes, then the real task is to discover, or rather
>>>>>>> uncover, what is, say, Motu's vision of summing versus
>>>>>>> Digidesign's versus Steinberg's, and so on..
>>>>>>>
>>>>>>> What's under the hood. To me and others, when Digi re-coded their
>>>>>>> summing engine, it was obvious that Pro Tools has a top end
>>>>>>> (8k-10k) bump. Whereas Steinberg's summing is very neutral.
>>>>>>>
>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>Hi Neil,
>>>>>>>>
>>>>>>>>Jamie is right. And you aren't wacked out - you are thinking this
>>>>>>>>through in a reasonable manner, but coming to the wrong conclusion
>>>>>>>>- easy to do given how confusing digital audio can be. Each word
>>>>>>>>represents an amplitude point on a single curve that is changing
>>>>>>>>over time, and can vary with a speed up to the Nyquist frequency
>>>>>>>>(as Jamie described). The complex harmonic content we hear is
>>>>>>>>actually the frequency modulation of a single waveform that over a
>>>>>>>>small amount of time creates the sound we translate - we don't
>>>>>>>>really hear a single sample at a time, but thousands of samples at
>>>>>>>>a time (1 sample alone could at most represent a single positive
>>>>>>>>or negative peak of a 22,050Hz waveform).
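The 22,050Hz parenthetical is just the Nyquist limit at 44.1k: a cosine at exactly half the sample rate lands on one positive or negative peak per sample, so a single sample in isolation can't represent anything faster. A quick check:

```python
import math

sr = 44100
nyquist = sr / 2   # 22,050 Hz
# sample a cosine at Nyquist: every sample lands on a full +/- peak
x = [math.cos(2 * math.pi * nyquist * n / sr) for n in range(8)]
print([round(v, 6) for v in x])   # [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
```

That alternating +1/-1 pattern is the fastest waveform the sample rate can carry, which is why harmonic content only emerges over runs of many samples.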
>>>>>>>>
>>>>>>>>If one bit doesn't cancel, esp. if it's a higher order bit than
>>>>>>>>number 24, you may hear it, and will see it easily, and the higher
>>>>>>>>the bit in the dynamic range (higher order), the more audible the
>>>>>>>>difference. Since each bit is 6dB of dynamic range, you can
>>>>>>>>extrapolate how "loud" that bit's impact will be if there is a
>>>>>>>>variation.
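That extrapolation can be written down directly: a difference confined to bit N of a 24-bit word (counting bit 1 as the MSB) sits roughly 6.02*(N-1) dB below full scale. A sketch, with `bit_level_dbfs` as a made-up helper name:

```python
import math

def bit_level_dbfs(bit_number: int) -> float:
    """Approximate level of bit N of a word (bit 1 = MSB):
    each step down the word is ~6.02 dB quieter."""
    return -20 * math.log10(2 ** (bit_number - 1))

for n in (1, 12, 24):
    print(n, round(bit_level_dbfs(n), 1))
```

So a discrepancy at bit #12, for instance, lives around -66 dBFS: quiet, but a long way above the -138 dB neighborhood of bit 24.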
>>>>>>>>
>>>>>>>>Now, obviously if we are talking about 1 sample in a 44.1k rate
>>>>>>>>song, then it would simply be a click (only audible if it's a high
>>>>>>>>enough order bit) instead of an obvious musical difference, but
>>>>>>>>that should never happen in a phase cancellation test between
>>>>>>>>identical files higher than bit 24, unless there are clock sync
>>>>>>>>problems, driver issues, or the DAW is an early alpha version. :-)
>>>>>>>>
>>>>>>>>By definition of what DAWs do during playback and record, every
>>>>>>>>audio stream has the same point in time (judged by the timeline)
>>>>>>>>played back sample accurately, one word at a time, at whatever
>>>>>>>>sample rate we are using. A phase cancellation test uses that fact
>>>>>>>>to compare two audio files word for word (and hence bit for bit,
>>>>>>>>since each bit of a 24-bit word would be at the same bit slot in
>>>>>>>>each 24-bit word). Assuming they are aligned to the same start
>>>>>>>>point, sample accurately, and both are the same set of sample
>>>>>>>>words at each sample point, bit for bit, and one is phase
>>>>>>>>inverted, they will cancel through all 24 bits. For two files to
>>>>>>>>cancel completely for the duration of the file, each and every bit
>>>>>>>>in each word must be the exact opposite of that same bit position
>>>>>>>>in a word at the same sample point. This is why zooming in on an
>>>>>>>>FFT of the full difference file is valuable, as it can show any
>>>>>>>>differences in the lower order bits that wouldn't be audible. So
>>>>>>>>even if there is no audible difference, the visual followup will
>>>>>>>>show if the two files truly cancel even at levels below hearing,
>>>>>>>>or outside of a frequency change that we will perceive.
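The null test itself is mechanical enough to sketch in a few lines. A toy version, using one generated sine in place of two real DAW renders: invert one file, sum, then check both the peak of the residual and its spectrum (here a single DFT bin standing in for the full FFT):

```python
import math

sr = 44100
n = 4096
# the same "render" from two DAWs - bit-identical in this toy case
mix_a = [0.5 * math.sin(2 * math.pi * 440 * t / sr) for t in range(n)]
mix_b = list(mix_a)

# phase-invert one file and sum: a perfect null is pure digital silence
diff = [a - b for a, b in zip(mix_a, mix_b)]
print(max(abs(d) for d in diff))   # 0.0

# a one-bin DFT of the residual at 440 Hz - the "visual" check in miniature
k = round(440 * n / sr)
re = sum(d * math.cos(2 * math.pi * k * t / n) for t, d in enumerate(diff))
im = sum(d * math.sin(2 * math.pi * k * t / n) for t, d in enumerate(diff))
print(math.hypot(re, im))          # 0.0
```

With real renders, any nonzero residual peak or spectral bump localizes exactly where (and how loudly) the two files disagree.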
>>>>>>>>
>>>>>>>>When they don't cancel, usually there will be way more than 1 bit
>>>>>>>>of difference - it's usually one or more bits in the words for
>>>>>>>>thousands of samples. From a musical standpoint this is usually in
>>>>>>>>a frequency range (low freq, or high freq most often) that will
>>>>>>>>show up as the difference between them, and that usually happens
>>>>>>>>due to some form of processing difference between the files, such
>>>>>>>>as EQ, compression, frequency dependent gain changes, etc. That is
>>>>>>>>what I believe you are thinking through, but when talking about
>>>>>>>>straight summing with no gain change (or known equal gain
>>>>>>>>changes), we are only looking at linear, one for one comparisons
>>>>>>>>between the two files' frequency representations.
>>>>>>>>
>>>>>>>>Regards,
>>>>>>>>Dedric
>>>>>>>>
>>>>>>>>> Neil wrote:
>>>>>>>>>> "Dedric Terry" <dedric@echomg.com> wrote:
>>>>>>>>>>> The tests I did were completely blank down to -200 dB (far
>>>>>>>>>>> below the last bit). It's safe to say there is no difference,
>>>>>>>>>>> even in quantization noise, which by technical rights is
>>>>>>>>>>> considered below the level of "cancellation" in such tests.
>>>>>>>>>>
>>>>>>>>>> I'm not necessarily talking about just the first bit or the
>>>>>>>>>> last bit, but also everything in between... what happens on bit
>>>>>>>>>> #12, for example? Everything on bit #12 should be audible, but
>>>>>>>>>> in an a/b test, what if there are differences in what bits #8
>>>>>>>>>> through #12 sound like, but the amplitude is still the same on
>>>>>>>>>> both files at that point - you'll get a null, right? Extrapolate
>>>>>>>>>> that out somewhat & let's say there are differences in bits #8
>>>>>>>>>> through #12 on sample points 3, 17, 1,000, 4,523, 7,560, etc,
>>>>>>>>>> etc through 43,972... Now this is breaking things down well
>>>>>>>>>> beyond what I think can be measured, if I'm not mistaken (I
>>>>>>>>>> don't know of any way we could extract JUST that information
>>>>>>>>>> from each file & play it back for an a/b test); but would not
>>>>>>>>>> that be enough to have two "null-able" files that do actually
>>>>>>>>>> sound somewhat different?
>>>>>>>>>>
>>>>>>>>>> I guess what I'm saying is that since each sample in a musical
>>>>>>>>>> track or full song file doesn't represent a pure, simple set of
>>>>>>>>>> content like a sample of a sine wave would - there's a whole
>>>>>>>>>> world of harmonic structure in each sample of a song file, and
>>>>>>>>>> I think (although I'll admit - I can't "prove") that there is
>>>>>>>>>> plenty of room for some variables between the first bit & the
>>>>>>>>>> last bit while still allowing for a null test to be successful.
>>>>>>>>>>
>>>>>>>>>> No? Am I wacked out of my mind?
>>>>>>>>>>
>>>>>>>>>> Neil
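For what it's worth, the extraction Neil says he doesn't know how to do is possible with a bit mask. A hypothetical sketch (`extract_bits` is an invented name, and real samples would come from decoded 24-bit files): zero everything except bits #8 through #12 of each sample's magnitude, and you could difference or play back just that slice of the word:

```python
def extract_bits(sample: int, lo: int, hi: int) -> int:
    """Keep only bits lo..hi of a signed 24-bit sample's magnitude,
    counting bit 1 as the MSB, and zero the rest."""
    mask = 0
    for b in range(lo, hi + 1):
        mask |= 1 << (24 - b)   # bit b of a 24-bit word, MSB first
    sign = -1 if sample < 0 else 1
    return sign * (abs(sample) & mask)

samples = [0x7FFFFF, 0x123456, -0x0FF000]
print([extract_bits(s, 8, 12) for s in samples])   # [126976, 12288, -126976]
```

As Dedric's reply explains, though, a true null already rules this case out: if bits #8 through #12 differed anywhere, those words would not be identical, and the difference file would show it.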
>>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>>
>