Wednesday, December 3, 2014

Diagonals (Problem solving)

1. Understanding the Problem
In this particular problem, we are presented with a grid of m rows and n columns with a line drawn diagonally through it. We want to know how many of the grid's squares the line passes through, and we want to come up with a formula that determines that number for any given m and n.
The unknowns here are m and n, so our formula will need to include these as variables.
We are given an example figure with m=4 and n=6. Below is a recreation of the figure, along with a simpler version with m=2 and n=3.

2. Devising a Plan
First, we should notice that this pattern seems to continue. To be sure, we created an extended version as seen below.
Therefore, we should keep this repeating pattern in mind when coming up with a formula. From our simpler figure above, we see that there are 4 squares covered by the line for every 2 rows and 3 columns. This can be extended: every additional block of 2 rows and 3 columns adds another 4 crossed squares.

3. Carrying out the Plan
A few other things we should consider are:
  • There is 1 of 1 square crossed by the line when m=1 and n=1.
  • There are 2 of 2 squares when m=1 and n=2.
  • There are 3 of 4 squares when m=2 and n=2.
  • There are 4 of 6 squares when m=2 and n=3.
  • There are 4 of 9 squares when m=3 and n=3.
  • There are 5 of 12 squares when m=3 and n=4.
  • There are 5 of 16 squares when m=4 and n=4.
  • There are 7 of 20 squares when m=4 and n=5.
  • There are 7 of 25 squares when m=5 and n=5.
  • There are 8 of 30 squares when m=5 and n=6.

From this, we can draw the conclusion that it is possible to have the same number of squares crossed by the diagonal line for different m and n. This proves a bit troublesome. A piece-wise function, or a formula that relies on cases, will be required. So I'll stop here.
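If I ever come back to this, a quick brute-force check like the one below (my own sketch, not part of the course problem) could tabulate counts for many m and n. Note that it only counts squares whose interior the line actually crosses, so it may disagree with a tally that also includes squares the line merely touches at a corner.

```python
from fractions import Fraction

def squares_crossed(m, n):
    """Brute-force count of the unit squares whose interior the corner-to-corner
    diagonal of an m-by-n grid passes through (exact arithmetic via Fraction)."""
    count = 0
    for col in range(n):                          # square columns 0 .. n-1
        # height of the line y = (m/n) * x over this column
        y_low = Fraction(m * col, n)
        y_high = Fraction(m * (col + 1), n)
        for row in range(m):                      # square rows 0 .. m-1
            # the line crosses the square's interior iff the overlap of
            # (y_low, y_high) with (row, row + 1) has positive length
            if min(y_high, row + 1) > max(y_low, row):
                count += 1
    return count

print(squares_crossed(2, 3))   # 4, matching the simpler figure above
print(squares_crossed(4, 6))   # the count for the example figure
```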


4. Looking back

The Last (Week 12)

Since this is (almost) the final week of the course, I decided to talk about my general impression of the course instead (since I didn't bother trying to pretend I know what diagonalization is).

I entered this course with a mostly negative attitude, as people who had taken it before told me it was exclusively math and actually consisted of no programming at all. On the first day, when the professor talked about ambiguous claims, I found it somewhat interesting. As the semester progressed, the topics alternated between more interesting things, such as claims expressed in words, and duller things, such as proving delta-epsilon claims. Halfway through the term, I found this course to be more straightforward than I had initially thought, especially when compared to a certain math course which shall not be named.

Despite the complete lack of programming, I found this course useful, as it helped me in several other courses. The most helpful thing I learned was the ability to generate the body of proofs, which was especially useful for a certain math course. As I became better at linking a few things together to solve the "thinking" portion of proofs, I began to see mathematical proofs in a new light. I used to believe that the bodies of proofs were simply ass-pulls with little to no reasoning behind them, impossible to produce for anyone who had not memorized every formula and theorem in existence. I still think that about my math course, but this course was a bit different: it was much easier to work through the thinking aspect and then write out the body of the proof.

I found the logical manipulation rules useful for writing clearer programs in my other computer science course. I found myself constantly applying or undoing De Morgan's law on the conditions of my if statements. This post gives a few more points about this course in general that I left out and that I agree with.
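For example, a condition I kept rewriting looked something like this (the function and variable names here are made up, just to illustrate the equivalence):

```python
def should_deny(logged_in, has_permission):
    # the condition as I might first write it
    return not (logged_in and has_permission)

def should_deny_demorgan(logged_in, has_permission):
    # the same condition after applying De Morgan's law:
    # not (A and B)  ==  (not A) or (not B)
    return (not logged_in) or (not has_permission)

# quick check that the two forms agree on every input
for a in (True, False):
    for b in (True, False):
        assert should_deny(a, b) == should_deny_demorgan(a, b)
```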

One thing that I disliked was that, when introducing new topics, the professor seemed to assume too much prior knowledge from the students. While many students seem able to follow regardless, others cannot. I do not believe that the students who cannot follow the professor's introduction are any less intelligent; rather, they have had little to no exposure to the assumed knowledge. In my case, it usually takes until the following tutorial for me to understand what was going on. Perhaps it is just that I don't tend to pay attention well to things I don't understand.

On a parting note, I think that the tests in this course are kind of nice.

The End

Tuesday, December 2, 2014

Halting the Slogs (Week 11)

It was great that this was a shorter week. However, I didn't actually do anything productive over that extended break. I guess relaxing can count as something; after all, it's good to alleviate stress once in a while.
During this week, we touched on the last topic in this course, covered in Chapter 5 of the course notes. Just like with any new topic, I immediately got lost and stopped paying attention for a short while until I could actually grasp the ideas being discussed. However, with a certain test for a certain course coming up, I found myself neglecting this week's notes completely.
I had kind of thought that everything in the universe could be computed given enough knowledge of it and enough time to implement the algorithm. This week brought to light that there exist things for which no one could ever implement an algorithm, no matter how knowledgeable they are about the task and no matter how much time they are given. This struck me as rather strange. It is certainly true that you cannot implement everything using Python, or perhaps any other programming language, but I wonder if the same can be said about describing an algorithm in words. If it can be described in words, then surely, one day, it should be possible to implement algorithms that are impossible today.
One example given in class of an uncomputable function is halt. This student provides a proof for it. The function takes in a function and some other argument, then calls that function on the argument. It returns True if the call halts and False if it does not. The problem is apparent: if the call does not halt, meaning it never returns control back to us, we will never get an answer. The fact that the call does not halt means we never reach the line that returns False, so this approach cannot implement the function. It may work well for calls that halt, but it cannot determine when a call does not halt. As said in class, it is impossible to tell whether a piece of code is just taking a really long time to execute or is caught in an infinite loop.
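Here is roughly what that doomed naive attempt looks like in Python (my own sketch of the idea described above, not the actual proof from class):

```python
def naive_halt(f, i):
    """The doomed naive attempt described above: just call f(i) and report back."""
    f(i)            # if f(i) never halts, we are stuck on this line forever
    return True     # this line is only reached when f(i) halts
    # there is no line we could ever reach that returns False,
    # which is exactly why this is not a real implementation of halt

def loop_forever(x):
    while True:
        pass

print(naive_halt(abs, -3))     # prints True, since abs(-3) halts
# naive_halt(loop_forever, 0)  # would never return at all
```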

Sunday, November 16, 2014

Oh, Omega (Week 10)

This week, we worked on proving the negation of big-Oh, omitted a lot of the bookends in the structure of our proofs, defined big-Omega and big-Theta, and did proofs involving general functions.
Proving that a function f is not in big-Oh of a function g is simply proving the negation of the big-Oh statement. So the only really big difference is that we have to assume an arbitrary c in the positive real numbers and an arbitrary B in the natural numbers, and we get to choose an n in the natural numbers. Then we somehow show that n is at least B and that f(n) is greater than c times g(n).
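Written out (my own rendering, assuming the definition we were given in lecture), the statement and its negation look like this:

```latex
% f is in O(g):
\exists c \in \mathbb{R}^{+},\ \exists B \in \mathbb{N},\ \forall n \in \mathbb{N},\ n \ge B \Rightarrow f(n) \le c \cdot g(n)

% f is not in O(g): every quantifier flips and the implication becomes a conjunction
\forall c \in \mathbb{R}^{+},\ \forall B \in \mathbb{N},\ \exists n \in \mathbb{N},\ n \ge B \wedge f(n) > c \cdot g(n)
```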
Something really different this week is that we started omitting most of the bookends. Although those bookends were generally tedious, I found that they made the proof look more "whole", and now that they are gone, we have a lot more empty space and a sudden conclusion right after the last step of the body. I know the professor said that this is how proofs are actually written, but it still feels uncomfortable to look at.
Big-Omega simply establishes a lower bound for a function. As for big-Theta, I have no idea what it is used for. For big-Omega, in the body of its proof, we instead build a chain of inequalities going in the other direction from big-Oh. The rest is fortunately the same as big-Oh.
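For reference, this is how I understand the big-Omega definition (and, as far as I can tell, big-Theta just means that the big-Oh and big-Omega bounds hold at the same time):

```latex
% f is in Omega(g): some scaled copy of g is eventually a lower bound for f
\exists c \in \mathbb{R}^{+},\ \exists B \in \mathbb{N},\ \forall n \in \mathbb{N},\ n \ge B \Rightarrow f(n) \ge c \cdot g(n)
```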
Proving and disproving big-Oh and big-Omega for general functions is similar to doing so for any other functions. Really, the only difference is that we use generic functions that are bounded above or below by other functions.
For extra insight on Big-Oh and the likes, see this person's post.

Days of Big-Oh (Week 9)

I felt like I had a better understanding of the material in Week 9. Suddenly, big-Oh seemed much less complicated once we started proving that functions are in big-Oh of other functions. It's a lot clearer when we say that a function f being in big-Oh of a function g means that after some break point along the x-axis of a graph, all values of f are at most the corresponding values of g. This is also a great way to see what it means for a function f to be "bounded above" by a function g.
I was surprised by how short and clear the example proof that 3n^2 + 2n is in big-Oh of n^2 was. When we started with the proof structure, I made sure to leave a lot of space before the concluding line. Then the professor simply wrote the body in 3 lines. I can say that I didn't see that coming. Looking back at my notes now, I can see a large gap of white space in the proof, which is filled with doodles to take up the rest of the space.
One thing that was not as clear to me, however, was choosing the B. I can see that we simply choose c to be whatever coefficient ends up in front of the n^2 after we add up all the terms in terms of n^2. To figure out the B, I looked over two of the example proofs, where B was chosen to be 0 in one and 1 in the other. I noticed that in the second proof (3n^2 + 2n + 5 in big-Oh of n^2) there was a comment saying that n must be at least 1. So B must come from the assumptions made during the chain of inequalities. I also noticed that we tend to choose B = 0 when establishing an upper bound for functions without a constant term; however, when constants are introduced, B cannot just be 0 anymore. This is from examining only three proofs, though, so I cannot simply generalize it.
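For instance, one valid choice of constants for that second proof (the lecture's exact numbers may have differed) falls straight out of the chain of inequalities:

```latex
% assuming n >= 1 (needed so that 5 <= 5n^2), which is exactly where B = 1 comes from:
3n^2 + 2n + 5 \;\le\; 3n^2 + 2n^2 + 5n^2 \;=\; 10n^2
% so c = 10 and B = 1 work: n \ge 1 \Rightarrow 3n^2 + 2n + 5 \le 10 n^2
```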
Overall, I found Week 9 to be much more explanatory than the week before it. The week was shorter because of the test on Wednesday, but I still felt like I learned more in those two days than in all the earlier days that dealt with big-Oh.

Sunday, November 9, 2014

Worst Case Scenario (Week 8)

I have neglected to post anything for a while because I had been preoccupied with studying for multiple tests and as such simply forgot about this.
All throughout week 8, I really had no idea what I was doing anyway, so I wouldn't have had much to say. Generally speaking, I didn't understand how Big O works or what we were even trying to prove. The counting using tick marks beside each line of a search or sort algorithm seemed really strange, and I couldn't see the purpose in counting the steps of such algorithms. I knew that we wanted to determine the worst case, which is the scenario in which the program takes the most steps to execute. For the linear search algorithm, that would be when the item being searched for isn't actually in the list at all. In that case, the program has to loop through every item in the list L until it reaches the end, only to confirm that the item was not found.
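For reference, here is a plain Python version of the kind of linear search we were counting steps for (my own rendering, not the exact code from lecture):

```python
def linear_search(L, x):
    """Return the index of x in L, or -1 if x is not in L."""
    i = 0
    while i < len(L):
        if L[i] == x:
            return i
        i += 1
    return -1    # worst case: x is absent, so the loop checked every item first

print(linear_search([5, 2, 9], 9))    # 2
print(linear_search([5, 2, 9], 7))    # -1, after looking at all three items
```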
One of the things that threw me off was the step counting for the while loops. It just looked like a bunch of mismatched i's and j's to me. Another thing was proving that the worst case of insertion sort is in Big O of n^2. The only thing I got from that was the basic proof structure. The thinking section in the middle did not make sense to me; I was simply copying down notes in class. I guess the most confusing part was listing the contributions of each line and then suddenly jumping to some sort of equations in n. The Omega part was strange too.
I decided to let myself learn this concept naturally over time, like I always do with things I don't understand right away. So I left all of this alone until the following tutorial. As my TA went over the tutorial question sheet, I began to feel that it was simpler than what we did in lecture. However, that was only because the tutorial questions didn't actually talk about Big O; they were only about counting steps, which seemed simple enough this time around. I realized that each line inside a while loop executes once per iteration, so the loop contributes steps equal to the number of lines inside it times the number of iterations, plus one for the final check of the loop condition before it stops executing.
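Here's a toy example of that rule with the tally written in (exactly what counts as a step depends on the course's conventions, so this is just my reading of it):

```python
def sum_first(n):
    """Add up 0 .. n-1 while tallying executed lines as 'steps'."""
    steps = 0
    total = 0;  steps += 1      # assignment before the loop
    i = 0;      steps += 1      # another assignment before the loop
    while i < n:
        steps += 1              # a loop check that passed
        total += i; steps += 1  # body line 1
        i += 1;     steps += 1  # body line 2
    steps += 1                  # the final loop check that fails and exits
    steps += 1                  # the return line
    return total, steps

print(sum_first(5))   # (10, 19): 2 setup + 3 lines * 5 iterations + 1 last check + 1 return
```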
Fortunately, things became a little clearer in week 9.

Tuesday, October 28, 2014

Disprove (Week 7)

Last week, we learned how to prove a statement false. As it turns out, the simplest way to prove something false is to prove its negation true, since if the negation is true, then the original statement, which is the negation of the negation, has to be false.
If a statement begins with a for-all quantifier, we can quickly assess whether it might be false by looking for a counterexample. If the statement is an implication, we try to think of something that satisfies the antecedent but not the consequent. If such a case does exist, we can conclude the statement is false, since the for-all quantification is no longer met. Following this, we negate the statement and proceed to prove the negated statement as we would any other proof. For our false for-all quantified statement, the negation is a there-exists statement, and we would most likely pick, as our example, the same one we used to figure out that the original statement was false in the first place.
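A made-up example (not one from the course) of how this plays out:

```latex
\text{Claim: } \forall n \in \mathbb{N},\ 2 \mid n \Rightarrow 4 \mid n
\quad \text{(suspicious: } n = 2 \text{ satisfies the antecedent but not the consequent)}

\text{Negation: } \exists n \in \mathbb{N},\ 2 \mid n \wedge 4 \nmid n
\quad \text{(proved by picking that same counterexample, } n = 2\text{)}
```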