On 10 Jun 2006, at 11:53, Richard Rönnbäck wrote:
I am trying to find the fastest possible way of getting information for specific file types and turn it into a tab separated text file for import to a database.
Fastest to write or fastest to execute?
The resulting files may then look like this:
4136284 1120 -rw-r--r-- 1 richardr wheel 527927 May 1 17:53 /Users/richardr/Desktop/LOGOTYPE/rr_logo_100x100 endast symbol.psd
1746034 1160 -rwx------ 1 richardr richardr 593360 Mar 26 2001 /Users/richardr/Desktop/Städa/Indesign/Bakgrunder/åker.psd
You are taking the approach of getting all the info you want, and then parsing it for your output.
One could change that approach by putting nothing more than the absolute path of the files,
one per row, into the result file.
From there, it is perhaps easier to run through the result file, knowing that you could
'stat' each file: either with the 'stat' command, by asking Finder + 'System Events' via AppleScript,
or by using stat in Bash/Tcl/Perl/whatever.
Of course, it may be good to use POSIX paths, or to surround each line with '"' to handle
filenames with spaces in them.
Files with characters like äåö in them have to be POSIX-path escaped.
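One way to sidestep the quoting and escaping problem entirely (my suggestion, not part of the recipes above) is to separate the paths with NUL bytes instead of newlines; a minimal sketch with an invented demo directory:

```shell
# Sketch, assuming GNU/BSD find and xargs with -print0/-0 support.
# NUL-separated paths survive spaces and characters like äåö unescaped.
mkdir -p /tmp/psd_demo2
touch "/tmp/psd_demo2/åker test.psd"    # space plus Swedish characters
find /tmp/psd_demo2 -name '*.psd' -print0 | xargs -0 ls -ld
```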
So I'd either:
1 - make the list of just filenames.
2 - read that file line by line, ask Finder for the info, and output from AppleScript to another file
or
1 - make the list of just filenames.
2 - read that file line by line, and make another file with the POSIX paths of the filenames
3 - run a shell tool like stat on the filenames to build your result file
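The second recipe might be sketched like this, assuming GNU stat (on Linux); on Mac OS X the BSD stat takes `-f` with different format letters instead of `--printf`, and the directory, field choice, and output paths here are all just examples:

```shell
# Sketch of the "list then stat" recipe, assuming GNU stat.
mkdir -p /tmp/psd_demo3
printf 'x' > /tmp/psd_demo3/a.psd
# 1 - make the list of just filenames
find /tmp/psd_demo3 -name '*.psd' > /tmp/psd_list.txt
# 2/3 - read it line by line and let stat emit tab-separated fields:
#       size <TAB> owner <TAB> mtime <TAB> path
while IFS= read -r f; do
    stat --printf '%s\t%U\t%y\t%n\n' "$f"
done < /tmp/psd_list.txt > /tmp/psd_result.txt
cat /tmp/psd_result.txt
```

The resulting file is already tab-separated, so it should import straight into the database without further parsing.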
This all assumes 'fastest to write', as in a one-shot thing.