Re: Nonlinear limits on the number or rate of NSURLConnections
- Subject: Re: Nonlinear limits on the number or rate of NSURLConnections
- From: Jeff Johnson <email@hidden>
- Date: Wed, 26 Aug 2009 09:55:12 -0500
Hi Jerry.
I've noticed the same kind of behavior with my own app Vienna <http://www.vienna-rss.org/vienna2.php> and would be very interested in the answer to your question, though unfortunately I don't have an answer to it myself.
I will try to do more testing of this in the next week.
-Jeff
On Aug 26, 2009, at 2:00 AM, Jerry Krinock wrote:
During 2005 I wrote a bookmark-checking application which iterates
through a user's bookmarks (typically Safari bookmarks), sends a GET
request to each using an asynchronous NSURLConnection, then cancels
and releases the connection after it gets the header. (The reason I
don't send a HEAD request is that some sites don't give the expected
response.)
Anyhow, I spent quite a bit of time testing back then, and found
that I could send 20 or even 50 requests per second on my
residential DSL line, and although this would slow down web browsing
and gaming performance by other apps and/or other computers, it
still worked.
But in the last year or so I've noticed a kind of nonlinear
behavior. Typically, after several hundred requests, sent at 5-20
per second (i.e. 1-3 minutes), all further requests end in a
timeout. It doesn't matter if I set the timeout at 20 or 60
seconds. Like the spigot gets closed -- all packets are dropped.
I've verified in the debugger that -cancel and -dealloc are being
sent to the NSURLConnections.
Now, my app is set to limit the number of outstanding requests to
64, so when these timeouts start happening, the request rate slows
down. Then after a minute or so, suddenly, requests start getting
responses again. But then as my app piles them up, after another
minute or so, the spigot closes again -- all requests time out.
A couple of users have reported this, and at first I thought it was
something that had changed in the Mac OS. I see it on both 10.4 and 10.5,
but I know that Apple often updates the URL Loading System in older
OS dot releases too. But just now I verified that another Mac on
the network will also fail to load web pages during the period when
I'm getting timeouts.
So now it's looking more like ISPs are doing this. (Mine here is
Earthlink with PPPoE.) Two things seem odd...
1. The shutoff seems to be triggered by too many connections, not
by too many bits per second. Sending, say, 5 requests per second
and receiving, say, one 1500-byte packet with each one, is a rather
modest 60 kbit/sec.
2. The shutoff is not just limiting my bit rate, it's a complete
shutdown for a minute or so. Like "You were bad so we're going to
drop all of your packets for a minute or so."
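Since the trigger appears to be the connection rate rather than the bit rate, one mitigation worth testing is pacing new connections to a fixed interval. Below is a minimal, hypothetical pacer sketch in Python (the class name `RatePacer` and the injectable `now`/`sleep` parameters are inventions for this illustration, not anything from the app):

```python
import time

class RatePacer:
    """Space out request starts so at most `rate` begin per second."""
    def __init__(self, rate=5.0):
        self.interval = 1.0 / rate
        self._next = 0.0  # earliest monotonic time the next request may start

    def wait(self, now=None, sleep=time.sleep):
        # `now` and `sleep` are injectable so the pacer is testable
        # without real delays.
        if now is None:
            now = time.monotonic()
        if now < self._next:
            sleep(self._next - now)  # stall until our slot opens
            now = self._next
        self._next = now + self.interval
        return now
```

Calling `pacer.wait()` before opening each connection caps the connection rate regardless of how fast responses come back, which would make it easier to find the rate threshold the ISP (or OS) is enforcing.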
If anyone has experience that could confirm this is indeed the
work of ISPs and not something that the Mac OS is doing, that
would be great.
I established the limit of 64 outstanding NSURLConnections by noting
that if I had no limit, things would crash if there were more than
about 700 outstanding, and allowing more than 64 did not speed up
the process. Does anyone know how many ports an NSURLConnection
should consume? I notice that the marginal increase in the number
of ports I'm reported using in Activity Monitor is about 4 per
NSURLConnection. Does that make sense? Maybe redirects add to it
also?
Yes, I need to write a little test tool and get some data with
different internet connections, but I'd appreciate if someone could
help me understand where I should be "pushing". Or maybe some
resources for further reading!
Sincerely,
Jerry Krinock
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Macnetworkprog mailing list (email@hidden)