Re: CoreAudio driver's convertInputSamples()
- Subject: Re: CoreAudio driver's convertInputSamples()
- From: Doug Wyatt <email@hidden>
- Date: Mon, 21 Mar 2005 09:21:33 -0800
On Mar 21, 2005, at 8:13, Tommy Schell wrote:
Hi,
I have a question about implementing convertInputSamples() in a
CoreAudio driver.
I was running HackTV with my driver selected for audio and my
device providing 24-bit, signed, low-byte-aligned audio data,
and the sound produced was constant fuzz.
I realized that in my convertInputSamples routine I was converting
from 16-bit to 32-bit float instead of 24-bit to 32-bit.
So I tried using some of the 24-bit to 32-bit float conversion
routines from PhantomAudioDriver:
NativeInt32ToFloat32
SwapInt32ToFloat32
and specified the depth as 24.
These attempts produced no sound whatsoever! And I know that
convertInputSamples is being called as it should be.
Any ideas?
I haven't tested them lately, but I suspect that those routines,
despite their intentions, aren't correct for low-aligned cases like
this (any time mBitsPerSample != mBytesPerFrame * 8). It looks like
the code assumes that the high 8 bits are already sign-extended from
the (1 << 23) bit in the case of 24-bit samples. To fix this, you could
add two instructions per sample: shift the ints 8 bits left and then
arithmetically 8 bits right, so that the sign in bit 23 propagates into
the high byte, before storing them into the int->float buffer.
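A rough sketch of that fix in C (illustrative only, not the actual
PhantomAudioDriver code; the routine name, signature, and the assumption
of native-endian 32-bit containers are mine):

#include <stdint.h>

/* Hypothetical helper: convert 24-bit signed, low-byte-aligned samples
 * held in native-endian 32-bit containers to Float32.  The top 8 bits
 * of each container are padding and can't be trusted to carry the sign,
 * so sign-extend from bit 23 first, as described above. */
static void LowAligned24ToFloat32(const int32_t *src, float *dst, int count)
{
    const float kScale = 1.0f / 8388608.0f;   /* 1 / 2^23, full scale for 24 bits */

    for (int i = 0; i < count; i++) {
        /* Shift left 8, then right 8.  The right shift must be arithmetic
         * (it is for signed ints on the compilers in question) so that
         * bit 23 is replicated into bits 24..31. */
        int32_t s = (int32_t)((uint32_t)src[i] << 8) >> 8;
        dst[i] = (float)s * kScale;
    }
}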
Alternatively, you could advance your pointer into the source samples
by one byte, add a trailing 0 byte after the buffer, and treat the
samples as full signed 32-bit ints, but then you'd be loading from
unaligned addresses.
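For completeness, a sketch of that pointer-offset idea, assuming a
big-endian (PowerPC-era) layout where each 4-byte container is
[pad, MSB, mid, LSB]; the function name is made up, and the caller must
guarantee one readable 0 byte past the end of the source buffer:

#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of the byte-offset trick on a big-endian machine:
 * reading 32-bit words starting one byte into the buffer places the 24
 * significant bits in the top of each int, so the data can be scaled as
 * full 32-bit samples.  The loads are unaligned (hence the memcpy), and
 * the low byte of each word is the next sample's padding byte (or the
 * trailing 0), which only adds noise below the 24-bit level. */
static void LowAligned24ToFloat32_Offset(const uint8_t *srcBytes, float *dst, int count)
{
    const float kScale = 1.0f / 2147483648.0f;   /* 1 / 2^31 */
    const uint8_t *p = srcBytes + 1;             /* skip the first padding byte */

    for (int i = 0; i < count; i++) {
        int32_t s;
        memcpy(&s, p, sizeof(s));    /* unaligned, native (big-endian) load */
        dst[i] = (float)s * kScale;
        p += 4;                      /* on to the next 4-byte container */
    }
}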
Doug