Re: Where is NSList?
- Subject: Re: Where is NSList?
- From: Stephen Checkoway <email@hidden>
- Date: Thu, 29 Jul 2004 12:55:48 -0700 (PDT)
On Thu, 29 Jul 2004, David Blanton wrote:
> What is the definition of O(1), O(lg n) and Theta(n)?
The big-Oh notation is simply an asymptotic upper bound on the running
time (or space) of an algorithm, or in this case, the time it takes to
perform an operation on a data structure. There are really five such
notations in common use. The definitions are as follows:
g(n) = O(f(n)) means that there exist constants c, M > 0 such that for
all n > M, g(n) <= c*f(n).
g(n) = Omega(f(n)) means that there exist constants c, M > 0 such that
for all n > M, g(n) >= c*f(n).
g(n) = Theta(f(n)) means g(n) = O(f(n)) and g(n) = Omega(f(n)).
Two less commonly used are small o and small omega.
g(n) = o(f(n)) means that for every constant c > 0 there exists M > 0 such
that for all n > M, g(n) < c*f(n).
g(n) = omega(f(n)) means that for every constant c > 0 there exists M > 0
such that for all n > M, g(n) > c*f(n).
The common ones are
O(1) - constant time
O(log n) - log time
O(n) - linear time
O(n log n) - log-linear time
O(n^2) - quadratic
O(n^p) - polynomial time
O(a^n) - exponential time.
There are obviously a lot more, such as O(log* n) or the inverse Ackermann
function, but those are usually reserved for specialized analysis of
algorithms.
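To tie the common ones above to code, here is a tiny C++ sketch (toy
functions of my own, not from any library) where the loop structure makes
the growth rate easy to read off:

#include <cstddef>
#include <vector>

// O(1): constant time - a single array index, independent of n.
// (Assumes v is non-empty.)
int firstElement(const std::vector<int>& v) {
    return v[0];
}

// O(n): linear time - one pass over all n elements.
long sumAll(const std::vector<int>& v) {
    long total = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        total += v[i];
    return total;
}

// O(n^2): quadratic time - look at every pair of elements.
std::size_t countEqualPairs(const std::vector<int>& v) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j])
                ++count;
    return count;
}

Doubling n leaves the cost of the first call unchanged, roughly doubles the
second, and roughly quadruples the third.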
> (If it helps get an answer I do have a math degree from Berkeley so I will
> understand the definitions.
In math, you've probably encountered O(), even if it wasn't explicitly stated
in that form. I know I ran into it a few times during my senior-year analysis
courses and also in a lower-level applied math course.
> I just don't know the functions O() and Theta() )
As a few examples:
5n^3 - n log n = Theta(n^3)
log(n^2) = Theta(log n)
Theta is a tight bound. In both of those examples I could have said that
they were o(2^n) or O(2^n) or omega(1) or Omega(1).
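To see how the first of those follows from the definitions, here is one
possible choice of constants (my own choice; many others work):

Upper bound: for all n > 1, n log n >= 0, so 5n^3 - n log n <= 5n^3;
taking c = 5 and M = 1 gives 5n^3 - n log n = O(n^3).
Lower bound: for all n > 1, n log n <= n * n^2 = n^3, so
5n^3 - n log n >= 5n^3 - n^3 = 4n^3; taking c = 4 and M = 1 gives
5n^3 - n log n = Omega(n^3).
Both together give Theta(n^3).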
What many people fail to realize (including some in this thread) is that
this is only supposed to be a rough first estimate: constants are completely
ignored in the analysis, but in practice they really do matter. Let me give
an example:
Quicksort has a worst-case running time of Theta(n^2), while merge sort and
heap sort have a worst-case bound of Theta(n log n), which is clearly better.
However, on typical inputs both merge sort and heap sort are slower in
practice than quicksort.
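If you want to see the constants at work, here is a rough sketch of the kind
of timing test I mean. The quicksort is plain textbook Lomuto-partition code
of my own (not any library's), and std::stable_sort stands in for merge sort,
which it is typically implemented as:

#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

// Textbook quicksort: last element as pivot (Lomuto partition).
// Theta(n^2) in the worst case, but fast on random data in practice.
void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi)
        return;
    int pivot = a[hi];
    int i = lo;
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot)
            std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}

int main() {
    const int n = 1000000;
    std::vector<int> data(n);
    for (int i = 0; i < n; ++i)
        data[i] = std::rand();            // random input, not the worst case

    std::vector<int> a = data;
    std::vector<int> b = data;

    std::clock_t t0 = std::clock();
    quicksort(a, 0, n - 1);               // Theta(n^2) worst case
    std::clock_t t1 = std::clock();
    std::stable_sort(b.begin(), b.end()); // Theta(n log n) worst case
    std::clock_t t2 = std::clock();

    std::cout << "quicksort:   " << double(t1 - t0) / CLOCKS_PER_SEC << " s\n"
              << "stable_sort: " << double(t2 - t1) / CLOCKS_PER_SEC << " s\n";
    return 0;
}

On random input like this the quicksort usually wins even though its
worst-case bound is worse; feed it already-sorted input and the picture
reverses (and the recursion gets very deep).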
Here's an example related to data structures rather than algorithms that
operate on them. An implementation of a stack (or a queue) using a linked
list has a best, average, and worst case running time of O(1) for push/pop
(enqueue/dequeue). An array-based implementation, on the other hand, only
has an amortized running time of O(1), since pushing too many elements
causes the array to be resized, which takes linear time. However, in
practice, it is much faster to use an array and maintain an index (or two
indices in the case of a queue) rather than a pointer to the top (or head
and tail).
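To make the amortized claim concrete, here is a rough sketch of the kind of
array-based stack I mean (a toy class of my own, not an STL or Foundation
container). The only interesting part is that push() grows the array by a
multiple of its current size:

#include <cassert>
#include <cstddef>

// Toy array-backed stack of ints. push() is O(1) amortized: most pushes
// just store into the array, and the occasional resize copies all 'count'
// elements but also doubles 'capacity', so the copying cost is spread over
// the next 'capacity' cheap pushes.
class ArrayStack {
public:
    ArrayStack() : data(new int[1]), capacity(1), count(0) {}
    ~ArrayStack() { delete [] data; }

    void push(int x) {
        if (count == capacity) {
            // Grow by a constant multiple (here 2x), not by a constant.
            std::size_t newCapacity = capacity * 2;
            int* bigger = new int[newCapacity];
            for (std::size_t i = 0; i < count; ++i)   // O(n) copy, but rare
                bigger[i] = data[i];
            delete [] data;
            data = bigger;
            capacity = newCapacity;
        }
        data[count++] = x;       // the common O(1) case
    }

    int pop() {
        assert(count > 0);
        return data[--count];    // always O(1): just move the index
    }

    bool empty() const { return count == 0; }

private:
    ArrayStack(const ArrayStack&);             // not copyable in this sketch
    ArrayStack& operator=(const ArrayStack&);

    int* data;
    std::size_t capacity;
    std::size_t count;
};

The accounting: starting from capacity 1, n pushes trigger resizes that copy
1 + 2 + 4 + ... elements, which sums to less than 2n in total, so the copying
adds O(1) per push on average even though one individual push can cost O(n).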
If that seems unreasonable (as it did at first to me), then give it a shot
using the STL or write your own classes and time it; a sketch of such a test
follows below. The amortized constant time will beat the linked list nearly
every time. The key is that you must grow the array by some constant
_multiple_ of its length, not simply by a constant. Doubling the size of the
array is usually how I do it.
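Here is a minimal sketch of that timing experiment using the STL directly:
std::vector (an array that grows by a multiple, so amortized O(1) push_back)
against std::list (a doubly linked list, O(1) push_back but one allocation
per node). The numbers will vary by machine and compiler, but the vector
version usually wins by a wide margin:

#include <cstddef>
#include <ctime>
#include <iostream>
#include <list>
#include <vector>

int main() {
    const std::size_t n = 10000000;     // ten million pushes and pops

    std::clock_t t0 = std::clock();
    std::vector<int> v;                 // array-based stack
    for (std::size_t i = 0; i < n; ++i)
        v.push_back((int)i);            // amortized O(1)
    while (!v.empty())
        v.pop_back();                   // O(1)
    std::clock_t t1 = std::clock();

    std::list<int> l;                   // linked-list stack
    for (std::size_t i = 0; i < n; ++i)
        l.push_back((int)i);            // O(1), plus a node allocation
    while (!l.empty())
        l.pop_back();                   // O(1), plus a node deallocation
    std::clock_t t2 = std::clock();

    std::cout << "vector as stack: " << double(t1 - t0) / CLOCKS_PER_SEC << " s\n"
              << "list as stack:   " << double(t2 - t1) / CLOCKS_PER_SEC << " s\n";
    return 0;
}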
- Steve
_______________________________________________
cocoa-dev mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/cocoa-dev
Do not post admin requests to the list. They will be ignored.