Thanks for your answer. I've noticed that the bitrate drops to what I calculated after a few minutes, which answers my question. You mentioned that the rate increases after congestion. Is this feedback carried over RTCP, or does the server use some RTSP mechanism? I've noticed that the Real client uses SET_PARAMETER to tell the server about the connection. Is there any such proprietary mechanism in DSS?

Thanks,
Nagarjuna

-----Original Message-----
From: john [mailto:email@hidden]
Sent: Friday, August 02, 2002 4:06 PM
To: Venna, Nagarjuna
Cc: john; 'email@hidden'
Subject: Re: X-QT

Are you using a QuickTime 5 or 6 client to play your movies? QuickTime 5 and 6 clients request over-buffered streams from the server, and this is probably what you are seeing. The streams are initially sent faster than the authored bit rate so that the client can collect a buffer (the default in DSS 4.1 is 25 seconds). After the client has buffered the stream, the stream rate drops back to the authored rate. The buffer allows the client to maintain playback during short periods of congestion. While streaming, the rate will drop during congestion, play at the authored rate once the buffer is full, or increase to refill the client's buffer after periods of congestion.

To see if this is what you are observing, pick a movie of two minutes or more and watch the rate until about a minute has passed. The rate should drop down to your calculated rate.

John

On Friday, August 2, 2002, at 11:51 AM, Venna, Nagarjuna wrote:

> Hello,
>
> I have a question regarding QuickTime files served by Darwin Streaming
> Server. When I send a DESCRIBE command to the server for a simple .mov
> file, I get a response indicating that the audio track is sampled at
> 22050 Hz. When I play the media, the RTP timestamp is incremented by
> 2048 for each audio packet. The RTP encoding for QuickTime Media spec
> (http://developer.apple.com/quicktime/icefloe/dispatch026.html) states
> that the RTP timestamp is incremented according to the current time
> scale, and the first RTP packet states that the current timescale is
> 0x5622 (= 22050). An increment of 2048 would correspond to about 93 ms
> (2048 * 1000 / 22050), but I see packets generated roughly at the rate
> of one every 9-10 ms. Can somebody tell me where I'm going wrong?
>
> Thanks,
> Nagarjuna
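
A back-of-the-envelope check of the numbers in the original question, as a small Python sketch. The constants are the ones quoted in the thread; this is illustrative arithmetic, not DSS code:

    # Sketch of the timestamp arithmetic discussed in the thread.
    # All figures come from the messages above; nothing here is DSS source.

    RTP_TIMESTAMP_INCREMENT = 2048   # per audio packet, from the stream
    TIMESCALE_HZ = 22050             # 0x5622, the audio track's timescale
    OBSERVED_WIRE_INTERVAL_MS = 9.5  # roughly one packet every 9-10 ms

    # Media time carried by each packet: RTP timestamps count samples at
    # the track timescale, independent of how fast packets hit the wire.
    media_ms_per_packet = RTP_TIMESTAMP_INCREMENT * 1000 / TIMESCALE_HZ
    print(f"media time per packet: {media_ms_per_packet:.1f} ms")  # ~92.9 ms

    # Ratio of media time to wall-clock time = how much faster than real
    # time the server is sending (the over-buffering John describes).
    speedup = media_ms_per_packet / OBSERVED_WIRE_INTERVAL_MS
    print(f"effective send speed: {speedup:.1f}x real time")       # ~9.8x

The RTP timestamps advance in media time at the track's timescale regardless of wire pacing, so a roughly tenfold gap between media time per packet (~93 ms) and observed wire spacing (~9.5 ms) is exactly what an over-buffered send phase looks like.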
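Similarly, a hedged sketch of the buffer-fill behavior John describes, assuming the server simply sends at a constant multiple of real time until the client's buffer (25 seconds by default in DSS 4.1, per the thread) is full. Actual server pacing is adaptive, so this only approximates the settling time:

    # Idealized buffer-fill model for the over-buffering behavior
    # described above. Assumes a constant send speed until the target
    # buffer is reached; the real server adapts its rate, so treat this
    # as an approximation.

    BUFFER_TARGET_S = 25.0   # DSS 4.1 default client buffer, per the thread
    SEND_SPEEDUP = 9.8       # media seconds delivered per wall-clock second

    # While the client plays at 1x and the server sends at SEND_SPEEDUP x,
    # the buffer grows by (SEND_SPEEDUP - 1) media-seconds per wall second.
    fill_time_s = BUFFER_TARGET_S / (SEND_SPEEDUP - 1.0)
    print(f"ideal fill time: {fill_time_s:.1f} s of wall clock")  # ~2.8 s

    # Once the buffer is full the server drops back to the authored rate,
    # which is why the measured bitrate settles to the calculated value.

In practice the rate ramps down rather than switching instantly, which matches seeing the drop after a minute or more of playback rather than after a few seconds.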