Re: Code needed for reading, processing, writing sound files


  • Subject: Re: Code needed for reading, processing, writing sound files
  • From: tahome izwah <email@hidden>
  • Date: Wed, 1 Dec 2010 07:24:29 +0100

The Dirac 3 example project comes with code that reads, processes and
writes sound files. I've (mis)used it (for purposes other than time
stretching) a couple of times: http://dirac.dspdimension.com
You will want to look at the iOS 4 example project (it can be used on
Mac OS X as well), as it uses the ExtAudioFile API rather than their
own AIFF library.
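
In case it helps, the basic read/process/write pattern with ExtAudioFile
looks roughly like this; a minimal sketch, where the file type, the float
mono client format and the trivial gain stage are only placeholders for
illustration, not the actual Dirac example code:

// Sketch: read a file with ExtAudioFile, apply trivial processing,
// write the result out. Error handling reduced to asserts for brevity.
#include <AudioToolbox/AudioToolbox.h>
#include <assert.h>

static void process_file(CFURLRef inURL, CFURLRef outURL)
{
    ExtAudioFileRef inFile = NULL, outFile = NULL;
    assert(ExtAudioFileOpenURL(inURL, &inFile) == noErr);

    // Ask for packed 32-bit float, mono, 44.1 kHz on the client side,
    // so the processing loop never has to care about the file's format.
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 32;
    fmt.mBytesPerFrame    = 4;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 4;
    assert(ExtAudioFileSetProperty(inFile, kExtAudioFileProperty_ClientDataFormat,
                                   sizeof(fmt), &fmt) == noErr);

    // Create the output as a CAF file holding the same float PCM format.
    assert(ExtAudioFileCreateWithURL(outURL, kAudioFileCAFType, &fmt, NULL,
                                     kAudioFileFlags_EraseFile, &outFile) == noErr);
    assert(ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_ClientDataFormat,
                                   sizeof(fmt), &fmt) == noErr);

    enum { kFrames = 4096 };
    float buffer[kFrames];
    AudioBufferList abl;
    abl.mNumberBuffers = 1;
    abl.mBuffers[0].mNumberChannels = 1;

    for (;;) {
        abl.mBuffers[0].mDataByteSize = sizeof(buffer);
        abl.mBuffers[0].mData = buffer;
        UInt32 frames = kFrames;
        assert(ExtAudioFileRead(inFile, &frames, &abl) == noErr);
        if (frames == 0) break;                     // end of input file

        for (UInt32 i = 0; i < frames; ++i)         // placeholder "processing"
            buffer[i] *= 0.5f;

        abl.mBuffers[0].mDataByteSize = frames * sizeof(float);
        assert(ExtAudioFileWrite(outFile, frames, &abl) == noErr);
    }

    ExtAudioFileDispose(inFile);
    ExtAudioFileDispose(outFile);
}

Link against AudioToolbox.framework; ExtAudioFile is available on both
Mac OS X and iOS, which is why the iOS example project is the one to
copy from.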

HTH
--th

2010/12/1 Pi <email@hidden>:
> Apologies for that giant brain fart.
>
> I should have extracted a clear question from my tangled brain, rather than
> spamming you guys.
> What I think I need to do is create some code that lets me read in a bunch
> of raw sound files (i.e. a complete sound font for, say, a guitar), process
> these files (to construct chords), and output the result as another set of
> files.
> My question: can anyone point me to some code that does something close to
> this task, that would save me from having to do everything from scratch?
>  That's all I should have posted...
> Sam
> PS  probably off topic, but if it is of any interest, this is what I am
> working on:
> http://imagebin.org/125562
> The difficulty I face is how to voice the chords... if I just do {C4 E4 G4}
> for the C major chord, and {G4 B4 D5} for G, etc., it is going to sound
> horrible.
> A pianist simply doesn't move from C to G like that. There is an art to
> voicing, so that each note attempts to move a minimal distance to its new
> resolution.
>  And I can't see any formula for depicting this in a way that is
> key-agnostic.
>  So I am attempting instead to play all Cs, Es and Gs, to create a sound
> texture for 'C major'.
>  If I put all of the respective amplitudes under a bell curve, each major
> or minor chord should have its energy centred around the same point, so the
> effect would be that the texture changes without giving any overt/crude
> impression of moving up/down.
> Does this make some sense now? The task becomes: how to construct 24
> textures?
>
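
As for the bell-curve texture idea in the quoted message: a minimal
sketch of the per-note gains for one such texture, assuming a centre of
MIDI note 60 and a width of 12 semitones (both arbitrary choices), with
the gains normalised so every texture carries the same overall energy:

// Sketch: gains for a "C major texture" built from every C, E and G on
// the keyboard, weighted by a bell curve centred on middle C (MIDI 60).
#include <math.h>
#include <stdio.h>

#define CENTER 60.0   /* MIDI note the energy is centred on (assumption) */
#define SIGMA  12.0   /* width of the bell curve in semitones (assumption) */

int main(void)
{
    const int chord[3] = { 0, 4, 7 };            /* pitch classes of C major */
    double total = 0.0, gain[128] = { 0.0 };

    for (int note = 21; note <= 108; ++note) {   /* piano range A0..C8 */
        for (int i = 0; i < 3; ++i) {
            if (note % 12 == chord[i]) {
                double d = note - CENTER;
                gain[note] = exp(-(d * d) / (2.0 * SIGMA * SIGMA));
                total += gain[note];
            }
        }
    }

    /* Normalise so each of the 24 textures has the same total energy. */
    for (int note = 0; note < 128; ++note) {
        if (gain[note] > 0.0) {
            gain[note] /= total;
            printf("note %3d  gain %.4f\n", note, gain[note]);
        }
    }
    return 0;
}

The other 23 textures would just transpose the pitch-class set (and use
3 instead of 4 for the minor third); mixing the sound font's per-note
samples at these gains and writing the result out with ExtAudioFile, as
in the earlier sketch, would produce the set of output files described.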
References: 
  • Code needed for reading, processing, writing sound files (From: Pi <email@hidden>)
