Grin. Yeah, but find 28,000 of them in a 30,000 character string, replace each with another character in the same position in a mail message, and color it, and you're looking at 0.093 x 20,000 = 1/2 hour plus (or am I missing something in your suggestion? Can you post the code?).
My way drops that to a few seconds, vanilla (which is what the client wanted).
In Alex's case, he's not looking for a defined string, merely getting every 4th field in every row. I think his bottleneck is reading the nth part of his reference string 'MyTable' when n gets large. If he wants to use vanilla, then the only way I can see is to split the reference string into chunks so n never becomes too large.
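Something along these lines is the kind of chunking I mean, though the tab/return delimiters and the name myTable are only guesses at his actual data:

set fieldsPerRow to 4
set AppleScript's text item delimiters to return
set rowList to text items of myTable -- one small chunk per row
set AppleScript's text item delimiters to tab
set wantedFields to {}
repeat with aRow in rowList
	-- within a single row the text item index never exceeds fieldsPerRow, so the lookup stays cheap
	set end of wantedFields to text item fieldsPerRow of (contents of aRow)
end repeat
set AppleScript's text item delimiters to {""}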
It's an interesting dilemma.
Regards
Santa
On 16/11/2010, at 3:31 PM, Christopher Stone wrote:

I just tried splitting a 450K character document with Text Item Delimiters and then getting the length of the pieces with a +1 offset. 0.093 seconds to do the split, get the length of the pieces, and return the 1st "•" as a test. I'm not sure if this methodology will work for you, but I'd do damn near anything to not iterate through characters - even learn some Perl.
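In rough form the split-and-measure idea looks something like this (bigText and the offset bookkeeping below are assumptions, not the code that was actually timed):

set AppleScript's text item delimiters to "•"
set thePieces to text items of bigText
set AppleScript's text item delimiters to {""}
set bulletOffsets to {}
set runningOffset to 0
repeat with i from 1 to (count thePieces) - 1
	-- each "•" sits one character past the end of the piece before it, hence the +1
	set runningOffset to runningOffset + (length of (item i of thePieces)) + 1
	set end of bulletOffsets to runningOffset
end repeat
-- item 1 of bulletOffsets is then the character position of the 1st "•"
-- (for a very long list you'd normally park thePieces in a script object property for speed)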
And what, you ask, was the beginning of it all?
And it is this......
Existence that multiplied itself
For sheer delight of being
And plunged with numberless trillions of forms
So that it might find itself innumerably
Sri Aurobindo