Re: Hard-limits for kern.maxproc
- Subject: Re: Hard-limits for kern.maxproc
- From: mm w <email@hidden>
- Date: Fri, 30 Jan 2009 13:18:58 -0800
Thanks, Terry, for your answer. Typing on the iPhone, you should think about running a contest; personally I give up after 50 words :)
Anyway, you gave two solutions for the kernel concern:
1- rebuild the kernel with the hard limit set higher
2- write a tool with libkvm
Since the second point looked like a good TP (Travaux Pratiques, i.e. a hands-on lab exercise), I started to play with it this way:
Materials & Methods:
Leopard x86, boot-args="kmem=1", Libkvm-25 (10.4.11)
It works, but not easily: hard_maxproc is declared __private_extern__, so the symbol is hidden from nlist lookups.
#include <fcntl.h>
#include <kvm.h>
#include <nlist.h>
#include <stdio.h>
#include <err.h>

static kvm_t *kd;

/* Look up the visible soft-limit symbol; the list ends with a NULL name. */
static struct nlist nl[] = {
    { "_maxproc" },
    { NULL }
};

int main(int argc, char *argv[])
{
    int maxproc;
    unsigned long offset;

    if (NULL == (kd = kvm_open("/mach_kernel", NULL, NULL, O_RDWR, "kvm_open")))
        return 1;
    if (-1 == kvm_nlist(kd, nl)) {
        warn("kvm_nlist");
        return 1;
    }
    if ((offset = nl[0].n_value) != 0) {
        if (kvm_read(kd, offset, &maxproc, sizeof maxproc) < 0) {
            warn("kvm_read");
            return 1;
        }
        printf("%d\n", maxproc);
    }
    kvm_close(kd);
    return 0;
}
And even after patching maxproc, the kernel will still apply this test:
if (count > hard_maxproc)
count = hard_maxproc;
So the only way I found is to calculate the address of that check inside the syscall and overwrite it:
if (kvm_write(kd, addr + offset,
The result is really ugly and hackish, so the cleanest way is still to rebuild the kernel with a larger hard_maxproc value. Maybe I missed a point here.
Conclusion: even if mail clients are a bit dumb with the IMAP protocol, I think there is still something to be done in the server configuration, maybe by tracking dead processes and closing them.
Cheers!
On Fri, Jan 30, 2009 at 12:59 PM, Terry Lambert <email@hidden> wrote:
> On Jan 30, 2009, at 5:29 AM, Lassi A. Tuura wrote:
>>>
>>> If Mail does this then you *definitely* should file a bug. He stated
>>> here, and in his numerous posts on the other mailing lists, that he had only
>>> a few Mac OS X client machines, so I had assumed he was thinking about some
>>> other client.
>>
>> Mail.app definitely spawns large numbers of TCP connections to the IMAP
>> server (dovecot 1.x) for me. On "Go online" I see one IMAP connection per
>> folder (*). Longer term Mail.app appears to keep 5-15 IMAP connections open
>> at any one time. My use appears to generate, from daily stats, on average
>> one connection every 5-10 seconds or so.
>
>
> Again, not the list...
>
> Speaking purely as someone who has implemented mail clients before, I can
> easily see how certain types of object oriented encapsulation of transport
> sessions within mailbox sessions as instances rather than references could
> result in a mailbox:transport cardinality of 1:1 instead of N:1, as a design
> simplification of the object model used within the mail client.
>
> Speaking purely as a former contributor to the Cyrus IMAP server project and
> former technical lead on several commercial IMAP server products, I think
> you should file a bug.
>
> Can we be done now?
>
> -- Terry
> _______________________________________________
> Do not post admin requests to the list. They will be ignored.
> Darwin-kernel mailing list (email@hidden)
> Help/Unsubscribe/Update your Subscription:
>
> This email sent to email@hidden
>
--
-mmw