complexity theory - Asymptotic runtime for an algorithm


I've decided to try a problem analyzing the worst-case runtime of an algorithm, to gain practice. Since I'm a beginner, I need help expressing the answer in the right way. I came across a problem in a book that uses the following algorithm:

Input: a set of n points (x1, y1), . . . , (xn, yn), with n ≥ 2. Output: the squared distance of the closest pair of points. ClosePoints:

1. if n = 2 then return (x1 − x2)^2 + (y1 − y2)^2
2. else
3.   d ← ∞
4.   for i ← 1 to n − 1
5.     for j ← i + 1 to n
6.       t ← (xi − xj)^2 + (yi − yj)^2
7.       if t < d then
8.         d ← t
9. return d
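In case it helps to experiment, here is a direct Python translation of the pseudocode above (a sketch; the function name `close_points` and the list-of-tuples input format are my own choices, not from the book):

```python
def close_points(points):
    """Return the squared distance of the closest pair of points.

    points: a list of (x, y) tuples with len(points) >= 2.
    """
    n = len(points)
    if n == 2:  # line 1: base case for exactly two points
        (x1, y1), (x2, y2) = points
        return (x1 - x2) ** 2 + (y1 - y2) ** 2
    d = float("inf")                # line 3: no pair seen yet
    for i in range(n - 1):          # line 4: i = 1 .. n - 1 (0-based here)
        for j in range(i + 1, n):   # line 5: j = i + 1 .. n
            xi, yi = points[i]
            xj, yj = points[j]
            t = (xi - xj) ** 2 + (yi - yj) ** 2  # line 6
            if t < d:               # line 7
                d = t               # line 8
    return d                        # line 9
```

For example, `close_points([(0, 0), (3, 4)])` returns 25.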

My question is: how can I offer a proof that T(n) = O(n^2), T(n) = Ω(n^2), and T(n) = Θ(n^2), where T(n) represents the worst-case runtime? I know that f is O(g) if and only if there exist n0 ∈ ℕ and c > 0 in ℝ such that for all n ≥ n0 we have f(n) ≤ c·g(n), and f is Ω(g) if there exist n0 ∈ ℕ and c > 0 in ℝ such that for all n ≥ n0 we have f(n) ≥ c·g(n).

Now, I know the algorithm performs c · n(n − 1)/2 iterations of the inner loop, yielding T(n) = (c/2)·n^2 − (c/2)·n. The first 3 lines are executed O(1) times. Line 4 loops n − 1 times, which is O(n). Line 5 loops n − i times for each i, which is also O(n). Does each line of the inner loop's body (lines 6–7) take (n − 1)(n − i) time in total, or O(1) per execution, and why? The number of times line 8 (d ← t) is performed varies, but it must be at most O(n^2).

So, how should I write a complete proof that T(n) = O(n^2), T(n) = Ω(n^2), and T(n) = Θ(n^2)? Thanks in advance.

Count the number of times t changes value. Since assigning to t is the innermost operation performed, finding how many times that happens lets you find the complexity of the entire algorithm.

i = 1 => j runs n − 1 times (t changes value n − 1 times)
i = 2 => j runs n − 2 times
...
i = n − 1 => j runs 1 time

So the number of times t changes is 1 + 2 + ... + (n − 1). This sum equals n(n − 1)/2, which is dominated by 0.5 · n^2.
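A quick empirical check of that count (a sketch of my own; it counts executions of the inner-loop body rather than timing anything):

```python
def inner_iterations(n):
    """Count how many times the inner loop body runs for input size n."""
    count = 0
    for i in range(1, n):               # i = 1 .. n - 1
        for j in range(i + 1, n + 1):   # j = i + 1 .. n
            count += 1
    return count

# The count matches the closed form n(n - 1)/2 for every size tried.
for n in (2, 5, 10, 100):
    assert inner_iterations(n) == n * (n - 1) // 2
```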

Now find appropriate constants and you can prove Ω(n^2), O(n^2), and hence Θ(n^2).
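For concreteness, one valid choice of constants (assuming T(n) is proportional to the n(n − 1)/2 iteration count): since n(n − 1)/2 ≤ 0.5·n^2 for all n ≥ 1, taking c = 0.5 and n0 = 1 gives O(n^2); and since n − 1 ≥ n/2 whenever n ≥ 2, we get n(n − 1)/2 ≥ 0.25·n^2, so c = 0.25 and n0 = 2 gives Ω(n^2). A tiny numeric sanity check of both inequalities:

```python
# Verify 0.25 * n^2 <= n(n - 1)/2 <= 0.5 * n^2 for n >= 2.
for n in range(2, 1000):
    t = n * (n - 1) / 2        # exact inner-loop iteration count
    assert t <= 0.5 * n ** 2   # O(n^2) with c = 0.5, n0 = 1
    assert t >= 0.25 * n ** 2  # Ω(n^2) with c = 0.25, n0 = 2
```

Together the two bounds give T(n) = Θ(n^2).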

