I barely remember the name FastCGI. That said, not much has changed in this area in a long, long time.
From:
Michael Kondratov <email@hidden>
Date: Wednesday, September 14, 2016 at 1:41 PM
To: Chuck Hill <email@hidden>
Cc: WebObjects-Dev Mailing List <email@hidden>
Subject: Re: WOWorkerThreadCountMax
Chuck,
Thank you! I will look through the presentation. I’ve turned off connection pooling and the system is far more stable. It appears the WO Apache Adaptor should have it disabled. In one presentation you mentioned a FastCGI adaptor. Is it still an option?
On Sep 14, 2016, at 1:55 PM, Chuck Hill <email@hidden> wrote:
The apps use a worker thread to respond to wotaskd, so if there are no threads there is no response. Bad Things™ ensue.
My wotaskd Internals presentation from WOWODC 2014 might have some points of interest. I barely recall what is in it.
From: Michael Kondratov <email@hidden>
Date: Wednesday, September 14, 2016 at 10:27 AM
To: Chuck Hill <email@hidden>
Cc: WebObjects-Dev Mailing List <email@hidden>
Subject: Re: WOWorkerThreadCountMax
All kinds of strange things start to happen once we hit loads of 100-200 requests per second. If an instance gets overloaded, wotaskd becomes unresponsive and Apache stalls as well. Quickly killing the stalled instance brings wotaskd and Apache back to life. Maybe setting the Apache Adaptor to poll wotaskd at a 10-minute rather than a 10-second interval could fix that?
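(A sketch of that change, assuming a standard mod_WebObjects setup where the second argument to the WebObjectsConfig directive is the interval, in seconds, at which the adaptor re-reads the instance list from wotaskd; the host and module path below are placeholders, and whether a longer interval actually helps here is untested:)

# apache.conf for mod_WebObjects (sketch; host and module path are placeholders)
LoadModule WebObjects_module /path/to/mod_WebObjects.so
WebObjectsAlias /cgi-bin/WebObjects
# Second argument: how often, in seconds, the adaptor re-reads the instance
# list from wotaskd. 600 (10 minutes) instead of the usual 10 seconds.
WebObjectsConfig http://localhost:1085 600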
On Sep 14, 2016, at 1:20 PM, Chuck Hill <email@hidden> wrote:
I am not sure how connection pooling, which happens in the Apache process, and Keep-Alive interact. I thought the former was for the ServerSocket, but I could be very wrong. I don’t know why you are seeing what you are seeing below.
From: Michael Kondratov <email@hidden>
Date: Wednesday, September 14, 2016 at 6:08 AM
To: Chuck Hill <email@hidden>
Cc: WebObjects-Dev Mailing List <email@hidden>
Subject: Re: WOWorkerThreadCountMax
I have noticed that the number of worker threads immediately goes up when we set the connection pool to 1. If it is set to zero, worker threads stop at about 70, but we see 50% system time CPU utilization. Once pooling is enabled, CPU utilization drops, but workers grow to the max setting.
On Sep 13, 2016, at 6:44 PM, Michael Kondratov <email@hidden> wrote:
We are handling around 100 requests per second spread over 5-10 application instances. We do have KeepAlive enabled in Apache. How would I manage that in WO? If the application thread count grows to 300 threads or so, does it mean that at one time we had a backlog of ~250 requests or so?
On Sep 13, 2016, at 6:40 PM, Chuck Hill <email@hidden> wrote:
Ignoring Keep-Alive, you need to manage this setting, the Listen Queue Size, and the number of instances to ensure that your app instances don’t build up a backlog of requests that will take longer to process than your users are willing to wait. Otherwise, your instances are going to be calculating responses that are just going to encounter a broken pipe when attempting to respond to the client. That is useless processing.
256 is way, way too high unless you are processing a lot of very short, quick responses. Relating this to the number of Apache processes is pretty meaningless. Apache is not doing much relative to your app.
Requests with Keep-Alive complicate this significantly, as they tie up a worker thread until the connection is closed.
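(A minimal sketch of where those knobs live, assuming the usual WOApplication accessors; the numbers are placeholders for illustration, not tuning advice:)

// Application.java -- sketch only; values are illustrative, not recommendations
import com.webobjects.appserver.WOApplication;

public class Application extends WOApplication {

    public static void main(String[] argv) {
        WOApplication.main(argv, Application.class);
    }

    public Application() {
        super();
        // Cap worker threads so one instance cannot queue far more requests
        // than it can answer before users give up and disconnect.
        setWorkerThreadCountMin(Integer.valueOf(16));
        setWorkerThreadCountMax(Integer.valueOf(64));
        // Keep the accept backlog (Listen Queue Size) correspondingly modest.
        setListenQueueSize(Integer.valueOf(128));
    }
}

(The same values can also be supplied at launch as -WOWorkerThreadCountMin, -WOWorkerThreadCountMax and -WOListenQueueSize arguments, if I recall the property names correctly.)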
Does it make sense to set the value equal to or greater than the number of active Apache processes? Our server is receiving more traffic than usual and each application is hitting the default limit of 256. I assume it is due to each Apache process trying to maintain a connection to each instance. We typically see Apache grow to 1000 processes.