
Re: How to capture audio generated by a render callback, iOS


  • Subject: Re: How to capture audio generated by a render callback, iOS
  • From: Paul Davis <email@hidden>
  • Date: Wed, 05 Dec 2012 12:32:40 -0500

It is not possible to write non-interleaved multi-channel files; it is logically nonsense.

If you want to write multiple channels to a single file, the file must be interleaved. This isn't a CAF or Core Audio restriction - you will find it in all APIs on all platforms for this sort of thing.

The only way around it would be an API that let you state up front how many samples the file would contain, so that each channel could be laid down as its own contiguous block. That would produce a file very few other applications could read correctly, and in any case it would not work for streaming.
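Paul's point is what Tim runs into below: to feed an interleaved writer from two separate channel buffers, the samples have to be zipped together frame by frame. A minimal sketch in plain C (the function name and signature are illustrative, not from the thread):

```c
#include <stddef.h>
#include <stdint.h>

/* Interleave two mono 16-bit channel buffers into one stereo
 * buffer laid out [L R L R ...], one L/R pair per frame. */
static void interleave_stereo(const int16_t *left, const int16_t *right,
                              int16_t *out, size_t numFrames)
{
    for (size_t i = 0; i < numFrames; ++i) {
        out[2 * i]     = left[i];   /* channel 0 */
        out[2 * i + 1] = right[i];  /* channel 1 */
    }
}
```

In a render callback one would interleave into a preallocated scratch buffer rather than allocating per call, which is exactly the concern Tim raises further down.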


On Wed, Dec 5, 2012 at 12:11 PM, Tim Kemp <email@hidden> wrote:
Joel, thanks; I already tried that. It doesn't appear to be supported for LPCM CAF files, at least not on iOS. The format flags I have are in fact the only combination it will accept.

On 5 Dec 2012, at 11:56, Joel Reymont wrote:

Add the non-interleaved flag here. 

On my iPhone, can't look it up. 

        _outputASBD.mFormatFlags       = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsPacked;
--
http://www.linkedin.com/in/joelreymont

On Wednesday, December 5, 2012 at 4:41 PM, Tim Kemp wrote:

Ah, thanks. I posted a more recent follow-up a couple of hours ago, after I started coming to the same conclusion.

Currently I'm generating non-interleaved audio (i.e., two buffers), which works fine with the RemoteIO unit. What's the best way for me to convert that to interleaved data for Extended Audio File Services? I don't want to be malloc'ing a new AudioBuffer in my render callback.

Could I instead have my RemoteIO use interleaved data? If I go that route, I'm not sure how I would set up the ASBD any differently.
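For reference, an interleaved 16-bit stereo LPCM description differs from a non-interleaved one mainly in the per-frame arithmetic: without kAudioFormatFlagIsNonInterleaved, mBytesPerFrame covers all channels, so it is channels × (bits / 8). A hedged sketch (the struct here is a trimmed local stand-in for CoreAudio's AudioStreamBasicDescription so it compiles outside Xcode; the helper name is invented):

```c
#include <stdint.h>

/* Local stand-in for CoreAudio's AudioStreamBasicDescription,
 * trimmed to the fields exercised below. */
typedef struct {
    double   mSampleRate;
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
} ASBD;

/* Fill in an interleaved linear-PCM description: one frame
 * holds a sample for every channel. */
static ASBD make_interleaved_lpcm(double rate, uint32_t channels,
                                  uint32_t bits)
{
    ASBD d = {0};
    d.mSampleRate       = rate;
    d.mChannelsPerFrame = channels;
    d.mBitsPerChannel   = bits;
    d.mBytesPerFrame    = channels * (bits / 8); /* all channels per frame */
    d.mFramesPerPacket  = 1;                     /* uncompressed PCM */
    d.mBytesPerPacket   = d.mBytesPerFrame * d.mFramesPerPacket;
    return d;
}
```

For 44.1 kHz 16-bit stereo this yields 4 bytes per frame and per packet, matching the [LRLR...] layout Joel describes above.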

Thanks

On 5 Dec 2012, at 11:32, Joel Reymont wrote:

Stack trace says your buffer list is bogus. 

You need to provide a buffer list with at least one buffer with interleaved samples. 

Your mData would have a layout like [LRLRLR...], where L and R are each an SInt16 and each LR pair constitutes one frame.

The number of frames in your mData would match the numFrames passed to ExtAudioFileWriteAsync.
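Joel's layout can be checked with a little arithmetic: for one interleaved SInt16 stereo buffer, each frame is two samples, so the buffer's byte size must be numFrames × 2 × sizeof(SInt16). A sketch (the struct mirrors CoreAudio's AudioBuffer fields so it compiles anywhere; the checker function is invented for illustration):

```c
#include <stdint.h>

/* Local stand-in for CoreAudio's AudioBuffer. */
typedef struct {
    uint32_t mNumberChannels;
    uint32_t mDataByteSize;
    void    *mData;
} Buffer;

/* Does this buffer describe numFrames frames of interleaved
 * SInt16 stereo, i.e. an [L R L R ...] layout? */
static int holds_stereo_frames(const Buffer *b, uint32_t numFrames)
{
    uint32_t bytesPerFrame = 2 * sizeof(int16_t); /* L + R per frame */
    return b->mNumberChannels == 2
        && b->mData != 0
        && b->mDataByteSize == numFrames * bytesPerFrame;
}
```

A NULL mData or a byte size that doesn't match the frame count is exactly the kind of mismatch that trips the bogus-buffer-list check visible in the disassembly below.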

--
http://www.linkedin.com/in/joelreymont

On Wednesday, December 5, 2012 at 1:53 AM, Tim Kemp wrote:

Thanks Aran. You're doing it more or less the same way I am, and I've also now based mine on the example from Core Audio Public Utilities (CAAudioUnitOutputCapture) as it looks like you did.

I'm utterly at a loss here. I had some problems with my ASBDs which I've now fixed. If I use synchronous calls (for debugging) it still fails with a -50 on the recorder priming call, and still gives -50s on each write. If I use async calls then the priming call goes through fine, but the actual recording calls crash with the EXC_BAD_ACCESS as before.

Here are my ASBDs:

        // Set desired audio output format
        memset(&_outputASBD, 0, sizeof(_outputASBD));
        _outputASBD.mSampleRate        = 44100.0;
        _outputASBD.mFormatID          = kAudioFormatLinearPCM;
        _outputASBD.mFormatFlags       = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsPacked;
        _outputASBD.mBytesPerPacket    = sizeof(AudioUnitSampleType);
        _outputASBD.mBytesPerFrame     = sizeof(AudioUnitSampleType);
        _outputASBD.mFramesPerPacket   = 1;
        _outputASBD.mChannelsPerFrame  = 2;
        _outputASBD.mBitsPerChannel    = 16;
        
        // Set recording format
        memset(&_fileASBD, 0, sizeof(_fileASBD));
        _fileASBD.mSampleRate          = 44100.0;
        _fileASBD.mFormatID            = kAudioFormatLinearPCM;
        _fileASBD.mFormatFlags         = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsPacked;
        _fileASBD.mBytesPerPacket      = sizeof(AudioUnitSampleType);
        _fileASBD.mBytesPerFrame       = sizeof(AudioUnitSampleType);
        _fileASBD.mFramesPerPacket     = 1;
        _fileASBD.mChannelsPerFrame    = 2;
        _fileASBD.mBitsPerChannel      = 16;
        _isRecording = NO;
        _recordedSamples = 0;
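One way to chase a -50 (paramErr) here is to sanity-check the ASBD arithmetic: for packed interleaved PCM, mBytesPerFrame must equal mChannelsPerFrame × mBitsPerChannel / 8, and mBytesPerPacket must equal mBytesPerFrame × mFramesPerPacket. A hedged sketch of such a check (local stand-in struct, invented function name):

```c
#include <stdint.h>

/* Local stand-in for the AudioStreamBasicDescription fields
 * checked below. */
typedef struct {
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
} ASBDFields;

/* For packed, interleaved linear PCM the byte counts must line up;
 * a mismatch is a common source of -50 (paramErr). */
static int lpcm_interleaved_consistent(const ASBDFields *d)
{
    return d->mBytesPerFrame  == d->mChannelsPerFrame * d->mBitsPerChannel / 8
        && d->mBytesPerPacket == d->mBytesPerFrame * d->mFramesPerPacket;
}
```

Note that the values above actually pass this check (sizeof(AudioUnitSampleType) happens to be 4, the same as 2 channels × 16 bits), which points the problem at the buffer list handed to ExtAudioFileWriteAsync rather than the ASBD arithmetic - consistent with Joel's diagnosis earlier in the thread.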

Here I set up the file:

    CFURLRef urlRef = (__bridge CFURLRef) url;
    checkError(ExtAudioFileCreateWithURL(urlRef,
                                         kAudioFileCAFType,
                                         &_fileASBD,
                                         NULL,
                                         kAudioFileFlags_EraseFile,
                                         &_recFileRef),
               "ExtAudioFileCreateWithURL failed: creating file", false);
    checkError(ExtAudioFileWriteAsync(_recFileRef,
                                      0,
                                      NULL),
               "ExtAudioFileWriteAsync failed: initializing write buffers", false);

Here I write to the file in my callback:

    checkError(ExtAudioFileWriteAsync(_recFileRef,
                                      inNumberFrames,
                                      ioData),
               "ExtAudioFileWriteAsync failed", false);

And I get backtraces like this when I try to record:

* thread #8: tid = 0x2503, 0x34213a68 libsystem_c.dylib`memmove$VARIANT$CortexA9 + 168, stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
    frame #0: 0x34213a68 libsystem_c.dylib`memmove$VARIANT$CortexA9 + 168
    frame #1: 0x36743c70 AudioToolbox`AudioRingBuffer::Store(AudioBufferList const*, unsigned long, long long) + 576
    frame #2: 0x367cf10e AudioToolbox`ExtAudioFile::WriteFramesAsync(unsigned long, AudioBufferList const*) + 490
    frame #3: 0x367d4300 AudioToolbox`ExtAudioFileWriteAsync + 172

The line where the debugger stops is the one at 0x367d4300 in the disassembly (the return address in frame #3 above):

0x367d42e2:  cbz    r2, 0x367d42f0            ; ExtAudioFileWriteAsync + 156
0x367d42e4:  movs   r5, #1
0x367d42e6:  str    r5, [sp, #20]
0x367d42e8:  movs   r1, #0
0x367d42ea:  mov    r0, r2
0x367d42ec:  bl     0x366fac00                ; CrashIfClientProvidedBogusAudioBufferList
0x367d42f0:  movs   r5, #2
0x367d42f2:  ldr    r1, [sp, #8]
0x367d42f4:  str    r5, [sp, #20]
0x367d42f6:  mov    r0, r4
0x367d42f8:  movs   r5, #0
0x367d42fa:  ldr    r2, [sp, #12]
0x367d42fc:  bl     0x367cef24                ; ExtAudioFile::WriteFramesAsync(unsigned long, AudioBufferList const*)
0x367d4300:  b      0x367d4320                ; ExtAudioFileWriteAsync + 204
0x367d4302:  mov    r0, r5
0x367d4304:  blx    0x368add00                ; symbol stub for: __cxa_begin_catch
0x367d4308:  ldr.w  r5, [r0, #256]
0x367d430c:  b      0x367d4316                ; ExtAudioFileWriteAsync + 194
0x367d430e:  mov    r0, r5
0x367d4310:  blx    0x368add00                ; symbol stub for: __cxa_begin_catch
0x367d4314:  ldr    r5, [r0]
0x367d4316:  mov.w  r0, #4294967295
0x367d431a:  str    r0, [sp, #20]
0x367d431c:  blx    0x368add20                ; symbol stub for: __cxa_end_catch
0x367d4320:  add    r0, sp, #16
0x367d4322:  blx    0x368adb30                ; symbol stub for: _Unwind_SjLj_Unregister


The only other difference is that I'm now on the device itself.

Quite stuck now; I can usually muddle things out with a bit of Googling around, but this has me stumped.

Thanks

On 4 Dec 2012, at 18:28, Aran Mulholland wrote:

_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:



