how do different frequencies playing at the same time get represented?
- Subject: how do different frequencies playing at the same time get represented?
- From: Ben Dougall <email@hidden>
- Date: Fri, 14 Jan 2005 15:11:35 +0000
Hello,
What I want to know about sound on a computer is: how do two continuous tones occurring at the same time get represented? It seems similar to wanting to represent, say, red and yellow in the same pixel of an image (which would actually just end up as orange, not red and yellow at the same time). Colour is in fact split into several elementary colours (like red, green, blue), each with its own value, called channels, and a pixel can be made up of a mixture of those channels; but as I say, those various colours end up as one colour. And that isn't how sound channels work anyway, since each audio channel is a stream of audio (like one of the two streams of stereo), and any number of simultaneous tones can be represented in a single audio channel. So channels in audio are irrelevant to what I'm asking about, I think. How does a mono, single-channel stream of sound manage to represent two tones occurring at the same time without merging the two tones into one?
Just to go on and try to illustrate exactly what I'm asking:
Situation 1:
two different tones/frequencies playing at the same time, one whose tone value is, say, 10, and the other whose tone value is 20.
Situation 2:
a single tone playing whose tone value is 15.
How is the difference between those two situations achieved/represented in digitised audio? How do the tones 10 and 20 playing at the same time not end up the same as a single 15 tone?
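One way to see that the two situations really are stored differently is to take the spectrum of each stream: the 10-plus-20 mix has energy at bins 10 and 20 and none at 15, while the single tone has energy only at bin 15. A sketch using a naive DFT (tone values treated as frequencies in Hz, 1 kHz sample rate chosen purely for illustration):

```python
import math

RATE = 1000
N = 1000   # one second of samples, so DFT bin k corresponds to k Hz

def samples(freqs):
    """Mono stream containing all the given tones at once (summed)."""
    return [sum(math.sin(2 * math.pi * f * n / RATE) for f in freqs)
            for n in range(N)]

def dft_mag(x, k):
    """Magnitude of DFT bin k of signal x (naive direct sum)."""
    re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = -sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return math.hypot(re, im)

mix = samples([10, 20])   # situation 1: two tones at once
single = samples([15])    # situation 2: one tone

for k in (10, 15, 20):
    print(k, round(dft_mag(mix, k), 1), round(dft_mag(single, k), 1))
# → 10 500.0 0.0
# → 15 0.0 500.0
# → 20 500.0 0.0
```

So the sum of a 10 and a 20 tone never turns into a 15 tone; the combined waveform still contains both original frequencies, and a Fourier transform can pull them back apart.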
Any pointers to relevant info appreciated.
Thanks, Ben.
_______________________________________________
Coreaudio-api mailing list (email@hidden)