Re: [proaudio] Real-time for audio
- To: proaudio@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [proaudio] Real-time for audio
- From: Gavin Pryke <gavinlee303@xxxxxxxxxxxxxx>
- Date: Mon, 10 May 2010 20:03:01 +0100
On Sunday 09 May 2010 15:55:58 Mark Knecht wrote:
> On Sat, May 8, 2010 at 6:31 PM, Grant <emailgrant@xxxxxxxxx> wrote:
> >>>> With an rt kernel, you can set up both the hardware priorities and the
> >>>> software priorities. That means that tasks accessing some hardware,
> >>>> such as the sound card, will have priority over other tasks, and that
> >>>> tasks executed by a given user or group [the audio group] will have
> >>>> priority over the others. rtirq will do that for you. Just emerge it
> >>>> and add it to the default run level.
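(On Gentoo that boils down to something like the following, if I understand
it; the exact package and service names are my assumption, and the limits
lines assume PAM reads /etc/security/limits.conf:)

    # install rtirq and start it at every boot
    emerge rtirq
    rc-update add rtirq default

    # let members of the audio group run realtime tasks and lock memory
    # (add to /etc/security/limits.conf)
    @audio  -  rtprio   95
    @audio  -  memlock  unlimited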
> >>>
> >>> Is jack necessary for me then? What I want to do is use my USB DAC
> >>> and mpd (music player app) in real time. Maybe jack is necessary if I
> >>> want to know *how* real time it is?
> >>>
> >>>> Be aware that if you don't really need an rt kernel, it is best not to
> >>>> use it, because such a kernel can cause compilation failures with
> >>>> gcc. http://bugs.gentoo.org/show_bug.cgi?id=190462
> >>>> http://bugs.gentoo.org/show_bug.cgi?id=20600
> >>>> For that reason, if you want to use an rt kernel, it is good practice
> >>>> to have another kernel of the same version, vanilla or gentoo
> >>>> sources, and reboot into this non-rt kernel when using emerge.
> >>>
> >>> I noticed this when trying to emerge nvidia-drivers.
> >>>
> >>> Thanks,
> >>> Grant
> >>
> >> Hi Grant,
> >> I thought we had this discussion maybe 6 months ago? Maybe I'm
> >> wrong about that.
> >>
> >> For pure audio playback the real-time kernel and Jack provide
> >> almost NO value. The real-time kernel is able to give more responsive
> >> attention to certain drivers or apps. Jack is one of them and if your
> >> mpd app - whatever that is - is a Jack app then it inherits the
> >> attention. However this only matters when you need to do something in
> >> 'real time'. There is nothing, in my opinion, 'real time' about
> >> reading a stream of pre-recorded data off a disk, buffering it up and
> >> then sending it to a sound card. Why does the sound card need to
> >> receive data in 5 ms vs 25 ms? The only difference the buffer time
> >> causes is to delay the start of playback. Once playback starts, as
> >> long as the buffer never runs dry, there is no difference in the
> >> sound you hear.
> >>
> >> I think that for your application using the real-time kernel and
> >> Jack is adding nothing but complication and that's likely to cause
> >> more problems than it's going to solve.
> >
> > How low a latency are you able to keep stable? My jackd output
> > reports 5.8 ms. Here's the command I'm using now:
> >
> > jackd -R -P85 -c h -d alsa -d hw:1,0 -r44100 -p256 -n3 -S -P
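(For what it's worth, I believe the 5.8 ms figure jackd prints is just one
period at those settings, not the whole buffer:

    256 frames / 44100 Hz      ~=  5.8 ms   (one period, what jackd reports)
    3 periods x 5.8 ms (-n3)   ~= 17.4 ms   (total buffering)

so the number on its own says little about playback quality either way.)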
>
> Again, forcing low latency numbers in Jack will only lead to problems
> in a playback-only system. All Jack is doing is pushing the hardware
> harder for no technical advantage.
>
> Low latency makes a BIG difference when RECORDING audio.
>
> 1) A singer is listening to a recording she will sing with. She wants
> to hear her voice with reverb mixed with the audio.
> 2) The PC starts streaming audio from disk and delivering it to the
> sound card. This takes 1 Jack latency time period.
> 3) The time to sing comes, so she does. Note that what she is hearing
> is 1 time unit behind what the computer is doing.
> 4) Her airborne waveforms are first converted to electricity by the
> mic, digitized by an external ADC and fed to a digital input on a
> sound card. Let's assume this much is instantaneous.
> 5) The computer has to accept this data from the sound card. This
> takes 1 more Jack latency time period. Her digitized voice data is now
> in the computer but is 2 Jack time units behind the data coming off of
> the disk and 1 Jack time unit from when she made the sounds.
> 6) The computer adds reverb. Let's assume this is instantaneous in
> terms of CPU processing. Depending on the reverb unit it may add Jack
> cycles, but let's ignore those as they are not important here.
> 7) The computer sends her voice and the current data back out to her
> headphone mix. The audio going out, consisting of the recorded data
> from disk mixed with her live voice with reverb, is always out of sync
> by 2 Jack time units.
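(To put rough numbers on those Jack time units, taking the 256 frame /
44100 Hz settings from the command above as an example:

    1 Jack period                  = 256 / 44100  ~=  5.8 ms
    monitoring offset (2 periods)                 ~= 11.6 ms
    the same at 64-frame periods                  ~=  2.9 ms

which, if I follow the argument, is why small periods matter when monitoring
a live take, but not for plain playback.)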
>
> If the recording engineer wants to, he can take her voice and nudge
> it forward on disk by 2 Jack time units to be 'exactly aligned', or 1
> Jack time unit to be 'aligned like she heard it'. This is optional. I
> seldom do it, although for drums recorded late in a session I am
> more sensitive to this.
>
> Respectfully, I think you are fooling yourself when you say you 'hear'
> a difference when the audio is delivered from your disk to your sound
> card faster. It's the same audio delayed by 5.6 ms. As long as the
> system is set up correctly, the data is completely unchanged except for
> the latency of getting it delivered. Since you ___cannot___ hear
> 'delay' except by relative measure - and you have ___no___ relative
> measure when doing only playback - it's impossible to hear whether I'm
> running at 1.2 ms, 5.6 ms or 25 ms.
>
> I do not subscribe to the 'my numbers are better than your numbers'
> crowd very much. My latency is stable at 1.2 ms or lower when using the
> HDSP9652 with external DACs, but again, it's immaterial for a playback-
> only system. When I'm purely mixing audio I set latency at 25 ms to
> reduce the chances of an xrun to essentially zero. (Although that's
> not strictly true; there's always a statistical chance of something
> happening.) I hear no difference in the mix running at 1.2 ms or 25 ms.
>
> Cheers,
> Mark
Just an email to say thanks for this concisely written information. I did not
know that the real-time kernel can cause compilation failures.
To be honest I've not had the need to use a real-time kernel yet as my audio
workstation at home is stuck on Windows 98 because of a rather special audio
card that I really don't want to part with. It's a pain to use that OS, but I
can't upgrade past Windows 98 or I will lose the use of the card.
I plan to upgrade that ancient machine so I can dual-boot and use Linux on it
as well, and this information will become very useful at that point. I won't
be able to use the card on Linux because there are no drivers for it at
present, and I doubt there ever will be. At least I will be able to test the
real-time kernel and know how to set it up and what to expect. Thanks again
for the help, and sorry for the rambling.
Best regards
Gavin