Re: Approaches to serve multiple clients
- Subject: Re: Approaches to serve multiple clients
- From: Cameron Kerr <email@hidden>
- Date: Sun, 19 Dec 2004 12:21:02 +1300
On 19/12/2004, at 10:02 AM, Igor Garnov wrote:
> I am going to write a daemon, which is supposed to serve many clients.
What kind of server are you writing?
> They will connect and probably stay connected for a long time (several
> hours). They will exchange data between themselves.
> As I understand, there are two main approaches to organizing the
> workflow:
Actually, there are more, see below...
> 1) I can fork with each connecting client, and serve each client in a
> separate process. Of course, this is easy from the network
> programmer's point of view, but taking into account the problem of
> transferring data from one client to another, it also has some
> disadvantages. Moreover, it frightens me a little, because if there
> are, say, 1000 clients, there are also 1000 processes. This is no
> problem for Apache, because the lifetime of each of its processes is
> very short, but what about my situation? Is it OK to have 1000
> processes at the same time? What if there are 2000? 3000?
There are a significant number of situations where using fork() is a good
thing. Many (?) OSs (such as Linux; I have no idea about Mac OS
X/FreeBSD etc.) implement fork() in a manner that is very efficient (i.e.,
the memory requirements scale at less than O(N)). One such technique is
copy-on-write, which is what Linux uses.
One thing you need to consider is whether the server's children will need
to communicate or share data structures amongst themselves. Since fork()
provides no IPC by itself, you would need to add a separate IPC
mechanism, or use select() or threads instead.
Another (very important, IMHO) thing to consider is that fork() gives you
isolation against DoS attacks: crashing one server-child process doesn't
take down the rest of the connection handlers, as it would with select()
or threads. However, with fork() you may run up against the limit on the
number of processes you are allowed, which opens up a partial DoS of its
own if you don't cap the number of simultaneous connections.
Also on the point of security: select() opens up a risk similar to
cross-site scripting, whereby if there is a vulnerability that lets an
attacker write to a shared data structure, other clients accessing that
data structure could also be affected.
> 2) I can use 'select' and have all clients served in the same process.
> But as far as I know, there is a limit on the number of sockets that
> 'select' can watch.
Yes; I think POSIX guarantees at least 1024, but I could be wrong.
FD_SETSIZE is 1024 on Mac OS X, and 1024 on Linux as well. If this is an
issue, you're probably going to bump into other limits too, most
notably the per-process limit on open files.
> Having written this, I personally think that I should fork a process
> for each 100 clients, and use 'select' in this process to catch
> network events.
This is the best way to start at least.
> I would be really grateful for any ideas on the problem.
I can't give you any more advice until I figure out what exactly it is
that you're designing.
I will say, however, that you should get your hands on a copy of Unix
Network Programming, Volume 1. It has all the information you need
about writing socket-based programs, including design issues.
--
Cameron Kerr
email@hidden; http://humbledown.org
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Macnetworkprog mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden