Re: Transferring large files reliably
- Subject: Re: Transferring large files reliably
- From: Jens Alfke <email@hidden>
- Date: Sat, 08 Oct 2016 11:35:05 -0700
On Oct 7, 2016, at 12:13 PM, Carl Hoefs <email@hidden> wrote:
> My iOS 9.3 app uploads large (100MB) video files to a backend OS X server for processing. If I use write(2) on a socket from a background thread, it works, but I get no feedback on the progress of the upload until it's completed.
Don’t send all the data in a single write() call. Send something like 100 KB at a time instead; when each write completes, update your progress state and start the next write.
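
In C it could look roughly like this (an untested sketch, not code from your app; `sock`, CHUNK_SIZE, and the progress callback are placeholder names, and write(2) is looped because it may accept fewer bytes than you gave it):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

#define CHUNK_SIZE (100 * 1024)  /* ~100 KB per write, as suggested above */

/* Write exactly len bytes to the socket; write(2) may send less per call. */
static ssize_t write_fully(int sock, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = write(sock, buf + sent, len - sent);
        if (n < 0) {
            if (errno == EINTR)
                continue;        /* interrupted by a signal; just retry */
            return -1;           /* real error */
        }
        sent += (size_t)n;
    }
    return (ssize_t)sent;
}

/* Stream the open file to the socket one chunk at a time, reporting
   progress after every chunk via a caller-supplied callback. */
int upload_file(int sock, FILE *file, long total,
                void (*progress)(long sent, long total))
{
    char buf[CHUNK_SIZE];
    long sent = 0;
    size_t n;

    while ((n = fread(buf, 1, sizeof buf, file)) > 0) {
        if (write_fully(sock, buf, n) < 0)
            return -1;
        sent += (long)n;
        progress(sent, total);   /* update the UI / progress state */
    }
    return ferror(file) ? -1 : 0;
}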
As a bonus, this doesn’t require that you have all 100MB in memory at once; you can read each chunk from disk as you need it. (For optimal performance you can read from disk on a second thread, into a second memory buffer. It’s a simple producer/consumer task; a sketch follows below.)
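
The producer/consumer part might look something like this (again just a sketch, using pthreads and two fixed buffers; it reuses write_fully() from the previous snippet, and error handling is kept minimal):

#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>

#define CHUNK_SIZE (100 * 1024)
#define NSLOTS 2                         /* double-buffering: read one chunk
                                            from disk while sending the other */

static ssize_t write_fully(int sock, const char *buf, size_t len);  /* see above */

typedef struct {
    char   data[CHUNK_SIZE];
    size_t len;                          /* bytes in the slot; 0 marks end of file */
    int    full;                         /* 1 = ready to send, 0 = free to fill */
} slot_t;

static slot_t          slots[NSLOTS];
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

/* Producer: runs on a second thread, reading the file into free slots. */
static void *reader_thread(void *arg)
{
    FILE *file = arg;
    for (int i = 0; ; i = (i + 1) % NSLOTS) {
        pthread_mutex_lock(&lock);
        while (slots[i].full)
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);

        size_t n = fread(slots[i].data, 1, CHUNK_SIZE, file);

        pthread_mutex_lock(&lock);
        slots[i].len  = n;
        slots[i].full = 1;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);

        if (n == 0)                      /* EOF (or read error) ends the stream */
            break;
    }
    return NULL;
}

/* Consumer: runs on the uploading thread, sending filled slots in order. */
static int send_slots(int sock)
{
    for (int i = 0; ; i = (i + 1) % NSLOTS) {
        pthread_mutex_lock(&lock);
        while (!slots[i].full)
            pthread_cond_wait(&cond, &lock);
        size_t n = slots[i].len;
        pthread_mutex_unlock(&lock);

        if (n == 0)
            return 0;                    /* reader hit end of file: done */
        if (write_fully(sock, slots[i].data, n) < 0)
            return -1;

        pthread_mutex_lock(&lock);
        slots[i].full = 0;               /* hand the slot back to the reader */
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }
}

You’d start reader_thread() with pthread_create(), passing the FILE*, and call send_slots() from whatever thread is doing the upload; progress reporting goes in the consumer loop as in the first snippet.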
What chunking doesn’t fix by itself is interruption: if the transfer dies partway through, it has to restart from the beginning. To fix that you need a protocol that can handle restarting a partial upload, as HTTP can. It sounds like your server uses a minimal protocol where you just open a socket, write the body of the file, and close the socket; if you can’t change the server, you won’t be able to support resumed uploads.
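
If you do get to change the server someday, even a tiny handshake is enough to resume. Purely as an illustration (this is a made-up protocol, not HTTP and not anything your server actually does): have the server send, right after accepting the connection, the number of bytes it already has for the file, and have the client seek past them before uploading the rest:

#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical handshake: read the 8-byte big-endian byte count the server
   already holds for this file. Entirely illustrative. */
static int read_resume_offset(int sock, uint64_t *offset)
{
    unsigned char buf[8];
    size_t got = 0;
    while (got < sizeof buf) {
        ssize_t n = read(sock, buf + got, sizeof buf - got);
        if (n <= 0)
            return -1;
        got += (size_t)n;
    }
    *offset = 0;
    for (int i = 0; i < 8; i++)
        *offset = (*offset << 8) | buf[i];
    return 0;
}

/* Skip what the server already has, then continue with the chunked upload
   (upload_file() from the first snippet; progress accounting elided). */
int resume_upload(int sock, FILE *file, long total,
                  void (*progress)(long sent, long total))
{
    uint64_t offset;
    if (read_resume_offset(sock, &offset) < 0)
        return -1;
    if (fseeko(file, (off_t)offset, SEEK_SET) != 0)
        return -1;
    return upload_file(sock, file, total, progress);
}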
—Jens