Hi Jens,
Normally, I would agree. Fortunately, this device isn't going through a router. It's a point-to-point connection with nothing in between. (I didn't build the device, either. :) The protocol is what it is.
It's a comment I hear frequently.
Enjoy what we may learn! (It's a heck of a way to cut one's teeth on "real time" network processing.)
bob.
On Feb 13, 2014, at 6:02 AM, Robert Monaghan <email@hidden> wrote:
I am working on a software package that reads custom network frames from a hardware device. This device works over 10GBaseT and can saturate the network pretty easily. As the data is image data, all of the data is needed. Dropped packets during a transfer mean that the entire image has to be re-downloaded.
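[For illustration: at these rates a lot of the loss tends to happen in the kernel's socket receive buffer rather than on the wire, so enlarging SO_RCVBUF and keeping per-packet work minimal is usually the first thing to try. A minimal sketch of such a receive loop, assuming a UDP-style datagram protocol and a made-up port number, since the device's actual protocol isn't specified here:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        const uint16_t kPort = 5000;  /* hypothetical; device protocol unknown */

        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Ask the kernel for a large receive buffer so bursts near
           10GbE line rate are less likely to overflow it. */
        int rcvbuf = 8 * 1024 * 1024;
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
            perror("setsockopt(SO_RCVBUF)");

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(kPort);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind"); return 1;
        }

        /* Tight receive loop: do as little as possible per packet and
           hand frames off to another thread for image reassembly. */
        static char buf[65536];
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n < 0) { perror("recv"); break; }
            /* ... enqueue buf[0..n) for the reassembly thread ... */
        }
        close(fd);
        return 0;
    }
]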
[Somewhat off-topic; I don't have the low-level POSIX/BSD chops to answer your specific questions]
The above requirements seem contradictory to me, as Ethernet is explicitly a lossy medium. Any packet-level network protocol has to work with the assumption that packets can and will get dropped. There can be collisions on the wire; the switch/router is free to drop packets if it can't keep up, as is your local network interface, etc.
These days, packet losses across one hop on an Ethernet LAN are pretty low. But they're nonzero, and if gigabits per second of data are being sent, I'd imagine you'd find dropped packets pretty regularly.
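[A cheap way to at least detect that loss is a sequence number on each frame. A minimal sketch, assuming the device stamps frames with a 32-bit counter, which isn't stated in the thread:

    #include <stdint.h>

    /* Returns how many frames were missed since the previous one.
       Unsigned subtraction handles wraparound at 2^32 correctly. */
    uint32_t frames_missed(uint32_t *expected, uint32_t seq)
    {
        uint32_t gap = seq - *expected;
        *expected = seq + 1;
        return gap;
    }

A nonzero return means the image in flight is already incomplete, so the receiver can request the re-download right away instead of discovering the hole at decode time.]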
This may not be your fault if you're not the developer of the hardware device, but it seems like someone screwed up in designing its network protocol.
—Jens