Thursday, October 30, 2008

week.eight

on monday, we discussed the topic of 'named repetition'. essentially, if a function has no loop/recursion, we can trace its statements one by one and prove it correct by stating the return value. otherwise, if the function contains a loop, we find a loop invariant: a statement about the loop's variables that holds before the loop starts and after every iteration. there are two things to prove when proving a program with loop(s): partial correctness and termination. we applied this to proving a program on powers/exponents as an example. as it turns out, the loop invariant is what drives the proof of partial correctness, and proving the loop invariant itself usually depends on the precondition rather than the postcondition. in the end, a correct iterative/recursive program means that the precondition implies the postcondition.
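here's my own reconstruction of the powers/exponents example (the exact code in lecture may differ), with the loop invariant written out as comments:

```python
def power(b, n):
    """precondition: n is a natural number.
    postcondition: returns b ** n."""
    result = 1
    i = 0
    # loop invariant: result == b ** i and 0 <= i <= n
    while i < n:
        result = result * b
        i = i + 1
        # invariant restored: result == b ** i still holds
    # at loop exit: invariant holds and i == n, so result == b ** n
    return result
```

partial correctness then amounts to: invariant holds initially, each iteration preserves it, and (invariant + exit condition) implies the postcondition.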

on wednesday, we continued on with our pow program. proving partial correctness pretty much works like proving by simple induction (using the loop invariant), and to prove that the loop (and consequently the program) terminates, we find a strictly decreasing sequence of natural numbers E_i, where i is the loop index, and prove that E_(i+1) < E_i. what i think is the hardest part of proving an iterative program is finding a suitable loop invariant, because invariants could be anything and some are better than others - the challenge is finding the best one. one thing i find for sure is that the expression(s) in the loop invariant will involve some (if not all) of the program's variables - but it is also important that we don't use these variables explicitly in the induction; rather, we write a variable, say v, as v_i, where v_i is the value of v after the ith iteration.
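to make the termination argument concrete, here's a small check of my own (not from the slides): for the pow loop, the quantity E_i = n - i is a natural number that strictly decreases each iteration, which is exactly what forces the loop to stop.

```python
def power_with_variant(b, n):
    """same pow loop, instrumented to verify the decreasing sequence E_i = n - i."""
    result, i = 1, 0
    prev_E = None
    while i < n:
        E = n - i               # the candidate decreasing sequence
        assert E >= 0           # E_i is a natural number
        if prev_E is not None:
            assert E < prev_E   # E_(i+1) < E_i: strictly decreasing
        prev_E = E
        result *= b
        i += 1
    return result
```

since a strictly decreasing sequence of natural numbers can't go on forever, the loop must terminate.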

on friday, i was late =D. i shall refer to the course slides to catch up (hopefully).

Thursday, October 23, 2008

week.seven

reminder: problem set #4 due on friday!

on monday, prof. heap was m.i.a. - something to do with heffalumps? but we kick-started a new, familiar (from 165) topic: program correctness. basically, a program specification consists of two things: a precondition ("correct input"; the statement that expresses what the correct input is) and a postcondition (symmetrically, the "correct output"). i think in this course we focus our attention on programs that involve some sort of repetition (recursion and loops) - i think it'd be fun discussing what constitutes a loop/recursion and how they really behave. we started by going through the code for a recursive binary search. in essence, to prove that recBS is correct is to show that the precondition - 1 <= index of the first element <= index of the last element <= the length of the array, and the array is sorted - implies the postcondition: return an index i such that first index <= i <= last index and the element of the array at index i is the element sought after, if such an i exists; return 0 or an error message otherwise. and the flavour of induction used is PCI (complete induction).
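a hedged sketch of what recBS looks like (my reconstruction; the lecture's 1-based version returns 0 on failure, while this 0-based version returns -1):

```python
def recBS(A, x, first, last):
    """precondition: 0 <= first, last <= len(A) - 1, and A[first..last] is sorted.
    postcondition: returns i with first <= i <= last and A[i] == x,
    or -1 if no such i exists."""
    if first > last:
        return -1                           # empty segment: x is not present
    mid = (first + last) // 2
    if A[mid] == x:
        return mid
    elif A[mid] < x:
        return recBS(A, x, mid + 1, last)   # sortedness: x can only be on the right
    else:
        return recBS(A, x, first, mid - 1)  # sortedness: x can only be on the left
```

the complete-induction proof works on the size of the segment last - first + 1, since each recursive call is on a strictly smaller segment.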

on wednesday, when danny came back, he went over the proof of correctness of recursive binary search. unfortunately, i was late for class - just in time for a new topic (sigh). we also proved the correctness of a greatest common divisor program using the remainder lemma. when i first looked at the whole proof on the lecture slides, it was pretty intimidating because there were cases to be taken care of and it just looked too complex for my frail proving brain. i might have to do some reading in the course notes to better understand it.


on friday, we continued on with proving the greatest common divisor program with parameter n and m. more specifically, we proved the following claim:
P(m): \forall n \in N, n > m \implies egcd(n, m) returns gcd(n, m)
Claim: \forall m \in N, P(m)
we carried out the proof using simple induction on m. what was most important to me from this lecture is the method of using proven results (lemmas) to carry out a proof that has some commonality with the lemma(s), so the proof becomes shorter and easier to read (i suppose).
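the egcd program behind the claim probably looks something like this (a minimal sketch of my own; the lecture's exact code may differ). the remainder lemma - gcd(n, m) == gcd(m, n mod m) for m > 0 - is what justifies the recursive call:

```python
def egcd(n, m):
    """precondition: n, m are natural numbers with n > m.
    postcondition: returns gcd(n, m)."""
    if m == 0:
        return n                # gcd(n, 0) == n
    return egcd(m, n % m)       # by the remainder lemma; note n % m < m,
                                # so the precondition holds for the call
```

the second argument strictly decreases on each call, which is also where termination comes from.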

Saturday, October 18, 2008

week.six

reminder: problem set #3 due on friday!

on monday, i ate turkey?

on wednesday, we touched on a recursive sorting algorithm - mergeSort. in particular, we were interested in determining the T(n) of mergeSort. we achieved this by the technique of "unwinding". through expanding/unwinding T(2^k), we were able to obtain the general expression (with n = 2^k, so k = lg(n)):
T(2^k) = 2^k + kd2^k = n + dn(lg(n))
moreover, we proved the claim (using PSI):
\forall k \in N, P(k), where P(k): T(2^k) = 2^k + kd2^k
also, we touched on the topic of "non-powers of 2" (using floor & ceil to land in between powers of 2). we also wanted to prove that T(n) is monotonic (nondecreasing), so that its values at non-powers of 2 are sandwiched between its values at the neighbouring powers of 2. the flavour of induction was PCI. overall, i didn't have too much of a problem understanding the materials discussed in this class - it just might take reading it over to understand it more.
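here's a quick numeric check of my own (with an arbitrary constant d = 3) that unwinding the mergeSort recurrence T(1) = 1, T(n) = 2T(n/2) + dn really does give the closed form T(2^k) = 2^k + kd2^k:

```python
d = 3  # an arbitrary choice for the constant d

def T(n):
    """the mergeSort recurrence, defined for n a power of 2."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + d * n

# compare the recurrence against the unwound closed form for k = 0..10
for k in range(11):
    n = 2 ** k
    assert T(n) == n + k * d * n    # T(2^k) == 2^k + k*d*2^k
```

of course a finite check is no substitute for the PSI proof, but it's a nice way to catch an algebra slip in the unwinding.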

on friday, yet again i was late - must've been because of the week after thanksgiving, so the law of inertia still applied to my physique. but this time it wasn't because i slept in - it's because of the doggone 207 A1. consequently, i did not hand in problem set #3 because i completely forgot about it. down with procrastination from now on...i hope. we got our tests back at the end of class - i didn't do so bad (considering i was 25 minutes late =P). in short, the class talked about the master theorem and the time complexity of mergeSort. i will definitely go through the class slides...

Thursday, October 9, 2008

week.five

reminder: test on friday!

on monday, we derived a closed form for the recurrence H(n). in the case where n > 1, we find the characteristic (quadratic) equation first, then its roots: from h^2 = 3h - 2 we got h_0 = 1 and h_1 = 2. plugging the initial values into H(n) = \alpha h_0^n + \beta h_1^n:
H(0) = 0 = \alpha h_0^0 + \beta h_1^0 = \alpha + \beta => \alpha = -\beta (*)
H(1) = 2 = \alpha h_0^1 + \beta h_1^1 = \alpha + 2\beta = \alpha - 2\alpha = -\alpha (by (*))
from this we determined \alpha = -2 and \beta = 2 (these were checked against H(n), and proven true). next, we saw that if two trees, T_1 and T_2, are disjoint (i.e.: share no node(s) in common), then a new tree can be formed by placing a new root, r, on top of the two trees - denoted (r, T_1, T_2). my thought on this was "oh, no...trees...this makes me wanna be not environmentally-friendly".
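a sanity check of my own that \alpha = -2, \beta = 2 really do solve the homogeneous recurrence behind h^2 = 3h - 2 (i.e. H(n) = 3H(n-1) - 2H(n-2) with H(0) = 0, H(1) = 2):

```python
def H(n):
    """the recurrence: H(0) = 0, H(1) = 2, H(n) = 3H(n-1) - 2H(n-2)."""
    if n == 0:
        return 0
    if n == 1:
        return 2
    return 3 * H(n - 1) - 2 * H(n - 2)

def closed(n):
    # H(n) = alpha * h_0**n + beta * h_1**n with alpha = -2, beta = 2
    return -2 * 1 ** n + 2 * 2 ** n

# the two agree on the first several values
for n in range(15):
    assert H(n) == closed(n)
```

simplifying, the closed form is just H(n) = 2^(n+1) - 2.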

on wednesday, we started a sort of corollary flavour of induction - structural induction. in this kind of proof, we show that a predicate holds for every object of a recursively defined structure, say binary trees. in class, we proved the claim: (note: T_b = the set of binary trees, h(T) = height of T)
\forall T \in T_b, P(T), where P(T): |nodes(T)| <= 2^(h(T) + 1) - 1
for the empty tree, \lambda, P(\lambda) was shown to be true. through algebra and some known facts, we were able to carry out the induction step to prove that P(T) holds for all T. we also used structural induction on the claim about "matched parentheses": (note: MP = the set of matched parentheses strings, s = string, L(s)/R(s) = number of left/right parentheses in s)
\forall s \in MP, every initial segment s' of s has L(s') >= R(s').
i personally had a really hard time figuring out and making sense of the proof itself - mainly because it was unclear to me whether an initial segment counts as a substring of the string.
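to convince myself of what the matched-parentheses claim even says, here's a small checker of my own: it walks every initial segment of a string and verifies L(s') >= R(s') along the way.

```python
def prefix_property(s):
    """return True iff every initial segment s' of s has L(s') >= R(s')."""
    left = right = 0
    for ch in s:
        if ch == '(':
            left += 1
        elif ch == ')':
            right += 1
        if left < right:        # some prefix has more ')' than '('
            return False
    return True
```

every matched string passes; a string like "())(" fails because its 3-character prefix "())" already has more right parens than left.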

on friday, we did the term test #1. i was 25 minutes late =|. but i think i managed to get some decent grades. let's just see what happens...

Thursday, October 2, 2008

week.four

reminder: term test #1 on october 10, 2008!

so a1 was due today. and like my other assignments i've had in 165, i've probably flunked it =(. it was pretty difficult having to derive some proofs for the claims in that assignment. i should start utilizing office hours for the sake of passing second year with flying colours.

on monday, we touched on a familiar subject in mathematics: the fibonacci sequence. more specifically, we proved the claim P(n): the sum of the first n fibonacci numbers is F(n + 2) - 1. the flavour of induction chosen to prove this claim is PCI. the proof is broken down into two cases: n < 2 and n >= 2. through mathematical manipulation, we were able to carry out this proof by noting also that F(n + 2) = F(n) + F(n + 1) (by the definition of the fibonacci sequence). i am never too fond of the fibonacci sequence in general, but seeing that it's been brought up a myriad of times in computer science and mathematics courses alike, i guess i have to learn to, if anything, love it.
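a quick numeric check of my own for P(n), under the usual indexing F(0) = 0, F(1) = F(2) = 1 (if the lecture used a different indexing, the statement shifts accordingly):

```python
def F(n):
    """iterative fibonacci: F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# check P(n): F(1) + F(2) + ... + F(n) == F(n + 2) - 1
for n in range(1, 20):
    assert sum(F(i) for i in range(1, n + 1)) == F(n + 2) - 1
```

the proof's key step shows up directly: adding F(n+1) to a sum equal to F(n+1) + F(n) - 1... wait, to F(n+2) - 1, gives F(n+3) - 1 by the defining identity.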

on wednesday, we continued a little bit with fibonacci summation. but more specifically, today's lesson was on time complexity. we applied this topic to the case of "zero-pair-free binary strings" (i.e.: binary strings containing no pair of adjacent 0's). after doing some test cases for this problem's time complexity (0 <= n <= 3), we discovered that the recursive nature of the function zpf(n) implies that its time complexity function, T(n), would also be recursive. furthermore, T(n) has the same behaviour as that of a fibonacci sequence. seeing how time complexities could also be recursive caught my attention because in 165, time complexities were calculated by steps - how much work every line/loop does determines its T(n). also, the process of unwinding is something new that was introduced. it will take practice for me to get comfortable with unwinding because it's a long process to "unwind" and then derive a general expression - it gets tedious and i get lost easily in the midst of it.
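i'm not sure exactly what the lecture's zpf(n) computed, but here's a related reconstruction of my own showing where the fibonacci-like behaviour comes from: a brute-force count of zero-pair-free strings of length n matches the recurrence Z(n) = Z(n-1) + Z(n-2) with Z(0) = 1, Z(1) = 2 (a length-n string ends in 1 after any zpf string of length n-1, or in 10 after any zpf string of length n-2).

```python
from itertools import product

def count_zpf(n):
    """brute force: count length-n binary strings with no '00' substring."""
    return sum(1 for bits in product("01", repeat=n)
               if "00" not in "".join(bits))

def Z(n):
    """the fibonacci-like recurrence for the same count."""
    if n == 0:
        return 1
    if n == 1:
        return 2
    return Z(n - 1) + Z(n - 2)

for n in range(10):
    assert count_zpf(n) == Z(n)
```

a naive recursive implementation of Z makes two recursive calls per level, which is why its T(n) ends up satisfying a fibonacci-style recurrence too.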

on friday, quite honestly, i arrived to class late yet again (TGIF came too early...), but i was able to come in while prof heap was unwinding the T(n) of recBinSearch(), which, in short, looks awfully similar to Fib(n). i'll be sure to check out the slides to go over it again...