Re: linear PCM


  • Subject: Re: linear PCM
  • From: Brian Willoughby <email@hidden>
  • Date: Mon, 14 Jan 2008 02:15:47 -0800

Answers to your first 3 questions depend upon how you have written your code. If you're developing a MusicDevice AudioUnit, then I believe you cannot determine the final output device or its bit depth. There may be a way to query the host application, but there could be other processing between your output and the device that would affect clipping. If not an AU, then you're in control of the output device, or if you use the Default Audio Output Device, then there are ways to determine which is currently selected. Answers to all of these questions about output devices are available in the CoreAudio Mailing List archives, or you can look through the CoreAudio documentation and examples.
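[For reference, here is a minimal editorial sketch (not code from this thread) of the kind of default-output-device query Brian mentions. It uses the AudioObject property API from <CoreAudio/CoreAudio.h>, omits error handling, and does not register the change-notification listeners a real program would need; compile with -framework CoreAudio.]

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

int main(void)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioDeviceID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);

    /* Ask the HAL which device is the current default output. */
    if (AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                   0, NULL, &size, &device) != noErr)
        return 1;

    /* Query its nominal sample rate; the bit depth lives in the stream's
       physical format and is fetched through the same property mechanism. */
    Float64 rate = 0.0;
    addr.mSelector = kAudioDevicePropertyNominalSampleRate;
    addr.mScope    = kAudioObjectPropertyScopeOutput;
    size = sizeof(rate);
    AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, &rate);

    printf("default output device: %u at %.0f Hz\n", (unsigned)device, rate);
    return 0;
}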

As to your final question: Yes, after scaling, you can put the float values directly into the buffers. However, you should note that (2*pi*f*t) does not account for sampling rate. You'll need to modify that parameter if you want frequency f to be accurate when the samples are played back.
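[As an illustration of that point, a small sketch (the fill_sine name and buffer handling are hypothetical, not from the thread) of how the sample rate enters the phase term:]

#include <math.h>
#include <stddef.h>

/* Illustrative sketch: fill one buffer with a sine wave.  The time variable
 * is the sample index divided by the sample rate, so the phase term
 * 2*pi*f*t becomes 2*pi*f*n/sampleRate. */
static void fill_sine(float *buffer, size_t frames, double f,
                      double sampleRate, double A)
{
    size_t n;
    for (n = 0; n < frames; n++) {
        double t = (double)n / sampleRate;   /* seconds */
        buffer[n] = (float)(A * sin(2. * M_PI * f * t));
    }
}

[A real generator would also carry the phase, or the running sample index n, across successive buffers so the tone stays continuous.]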

Brian Willoughby
Sound Consulting


On Jan 13, 2008, at 20:06, Roland Silver wrote:
Some more questions:
* What is the "device" which outputs to the Mac speakers or to the output audio jack?
* What is the bit depth of that device? OR
* How do I query that device for its bit depth?
* After scaling by A to avoid clipping, do I put the scaled (float) values A*v(t) directly into the buffers?
--RS
-----------------------------------------------
On 2008Jan13, at 19:21, Brian Willoughby wrote:
sin() is a mathematical function which just happens to meet the following constraint: -1.0 <= sin(x) <= +1.0
This is very, very close to meeting the limited range of two's-complement fixed point (the format used by digital-to-analog converters, even when the software is working with float).
Other functions have different ranges. The following is only appropriate for sin(); you would need different solutions for other functions.


You should not put the v(t) values directly into the buffers, because the +1.0 values returned by the sin() function will clip the output device. You need to multiply the function by a scaling factor.

v(t) = A*sin(2*pi*f*t)

The scaling value I've given above is probably the safest universal value you could use without testing the specific bit depth of the output device that is currently in use. I suppose you might run into an 8-bit device, or some other device with less than 16 bits of accuracy, in which case you would still clip unless you compute a smaller scaling factor. But in those cases you're getting such horrible signal-to-noise ratios that perhaps harmonic clipping distortion could be the least of your worries.

If you want to write all the code necessary to query the current output device for its bit depth, as well as register for all notifications which might result if the user were to change the bit depth while your program is running, then you could compute the scaling factor as follows:

double A = (pow(2., N - 1) - 1.) / pow(2., N - 1);

... where N is the bit depth, e.g. 16, 20, 24, etc.
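[As a quick sanity check of that formula (an editorial sketch, not part of the original message), for N = 16 it gives A = 32767/32768:]

#include <math.h>
#include <stdio.h>

/* Sketch: clipping-safe scale factor for a given bit depth N. */
static double scale_for_bit_depth(int N)
{
    double full = pow(2., N - 1);     /* 32768. for N == 16 */
    return (full - 1.) / full;        /* largest positive two's-complement value */
}

int main(void)
{
    printf("N = 16: A = %.10f\n", scale_for_bit_depth(16));  /* 0.9999694824 */
    printf("N = 24: A = %.10f\n", scale_for_bit_depth(24));
    return 0;
}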


On Jan 13, 2008, at 17:09, Roland Silver wrote:
Well, let me get specific: Suppose I generate a signal whose instantaneous value is v(t) = sin(2*pi*f*t) (both channels), for t = 0, dt, 2*dt, 3*dt, etc., where dt = 1/44,100 sec and f = 440 Hz. I want to affright the air with that signal by outputting the successive samples to Audio Queue buffers as described in Chapter 3 (Playing Audio) of the Audio Queue Services Programming Guide.


I propose to set the mFormatID field of AudioStreamBasicDescription = kAudioFormatLinearPCM, with appropriate values for the other fields, specifying (say) two channels, 32-bit floating-point data format.

My question is: do I put the successive v(t)s directly into the buffers, or do I need to transform them first, somehow?
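[For concreteness, a sketch of such an AudioStreamBasicDescription, assuming interleaved, packed samples; the field values below are an illustration, not code from the thread:]

#include <CoreAudio/CoreAudioTypes.h>

/* Illustrative sketch: stereo, interleaved, packed 32-bit float linear PCM
 * at 44.1 kHz, as described in the question above. */
static AudioStreamBasicDescription make_float_stereo_asbd(void)
{
    AudioStreamBasicDescription asbd = { 0 };
    asbd.mSampleRate       = 44100.;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    asbd.mChannelsPerFrame = 2;
    asbd.mBitsPerChannel   = 32;
    asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * sizeof(Float32);  /* 8 */
    asbd.mFramesPerPacket  = 1;          /* always 1 for uncompressed PCM */
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame;
    return asbd;
}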


  • Follow-Ups:
    • Re: linear PCM
      • From: Roland Silver <email@hidden>
  • References:
    • linear PCM (From: Roland Silver <email@hidden>)
    • Re: linear PCM (From: Brian Willoughby <email@hidden>)
    • Re: linear PCM (From: Roland Silver <email@hidden>)
