Delivered-To: darwin-dev@lists.apple.com

On Oct 19, 2004, at 9:41 AM, Chris Kacoroski wrote:

> Thanks for that information! I will look at xtar... I think this
> solution is working well:

I have been testing backup options to Linux servers extensively and found that xtar is really the only option. See the following thread from the BackupPC list:

http://news.gmane.org/gmane.comp.sysutils.backup.backuppc.general/cutoff=2608

1) For initial bulk backups, I mount a volume on my Linux server via NFS or Samba, then use "ditto --rsrc" to move the data over. That way, the NFS/Samba client on the Mac side takes care of filename mappings and resource forks. I tried piping ditto output through ssh as shown in the ditto man page, but for 100 GB transfers this was too fragile. Really, any network connection is risky when moving 100 GB, so sometimes I use external hard disks instead, saving the data from the Macintosh onto EXT2 partitions as pax archives. (If you have a developer release of 10.4.x, ditto has an additional option, "--nocache", that prevents out-of-memory errors on large transfers.)

2) For subsequent incremental backups, I use rsync+hfsmode tunneled through ssh.

My "ultimate" solution would probably be to patch resource-fork support into the "star" (ess-tar) program. star strives for full POSIX compliance, and it is the *only* tar/pax/cpio program I have run across that does not choke on weird file names or very large files. I use star on my FreeBSD and Linux servers for full and incremental system dumps, and the archives are fully portable across different OS types and system architectures.
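The bulk-copy step in 1) could look something like this. The mount point and source path below are hypothetical; "--rsrc" tells ditto to preserve resource forks and HFS metadata.

```shell
# Hypothetical paths: /Volumes/backup is an NFS or SMB share exported
# by the Linux server and mounted on the Mac; adjust SRC to taste.
SRC="/Users/shared/projects"
DST="/Volumes/backup/projects"

# Copy the tree, preserving resource forks; the NFS/SMB client handles
# filename mapping on the wire.
ditto --rsrc "$SRC" "$DST"
```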
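The external-disk alternative is just pax in write mode onto the EXT2 volume. Paths here are hypothetical.

```shell
# Write the tree as a pax archive onto a hypothetical external EXT2
# disk mounted at /Volumes/ext2disk.
pax -w -f /Volumes/ext2disk/projects.pax /Users/shared/projects

# Later, on the Linux server, unpack with:
#   pax -r -f /mnt/ext2disk/projects.pax
```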
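The incremental step in 2) is a one-liner. Note that "rsync+hfsmode" means a patched rsync build that understands HFS+ resource forks (e.g. the RsyncX build); the "--eahfs" flag below belongs to that patch, not to stock rsync. Host and paths are hypothetical.

```shell
# Incremental sync over an ssh tunnel; --eahfs (resource-fork support)
# exists only in the HFS-patched rsync, not in the stock one.
rsync -av --eahfs -e ssh /Users/shared/projects/ backup@linuxserver:/srv/backups/projects/
```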
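For reference, star's basic invocation looks like tar's but with dump-style "key=value" arguments; a create/extract round-trip might look like this (paths hypothetical; the incremental dump options are omitted here, see star(1)).

```shell
# Create an archive of a tree, then extract it elsewhere.
star -c f=/srv/dumps/projects.star /home/projects
star -x f=/srv/dumps/projects.star
```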
Boyd Waters