Wednesday 2 April 2014

Sorting and Efficiency

First of all, apologies for the white background behind the text. I can't fix this at all. 

Sorting is any process of arranging elements according to a specific scheme. Over the recent lectures, we students have been learning about sorting algorithms and their implementations within the world of computer science; more specifically, we have learned about sorting numerically, from smallest to largest. Because there are plenty of sorting algorithms, and even more implementations of those algorithms, the question of "Which is most efficient?" comes into play pretty quickly. Through labs and lectures we have kept track of the time it takes to sort lists of random elements using different sorting methods in an attempt to answer this question.

With the question of efficiency, students were taught big O notation. Big O notation describes how an algorithm's running time grows in relation to the size of its input. Consider the following possibilities for a big O runtime:

O(c)
O(n)
O(log(n))
O(n log(n))
O(n²)
O(cⁿ)
O(nⁿ)

Three sorting methods, bubble sort, selection sort, and insertion sort, were introduced in CSC108. Although these algorithms work, they are unsatisfyingly inefficient. Efficiency, being a key goal in the field of computer science, must then be found in other algorithms, namely merge sort, quick sort, and Timsort.

Bubble Sort
Assume x to be an unsorted list. Bubble sort, starting at index i = 0, compares adjacent elements. If the earlier element (x[i]) is larger than the later element (x[i+1]), they are swapped. This continues until i reaches len(x) - 1, at which point one "pass" has been completed. The last element (x[-1]) is now the largest element in the list. More passes are necessary to position all the other values appropriately. Although it is simple, this sorting algorithm performs incredibly poorly in comparison to the others, as its worst-case runtime complexity is O(n²).

Example
x = [27, 3, 5, 4, 12, 9]

First Pass
[3, 27, 5, 4, 12, 9]
[3, 5, 27, 4, 12, 9]
[3, 5, 4, 27, 12, 9]
[3, 5, 4, 12, 27, 9]
[3, 5, 4, 12, 9, 27]

As you can see, 27 has been repeatedly compared with its adjacent element and, in a single pass, has moved from the very beginning of the list to the end.
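To make the passes concrete, here's a quick sketch of bubble sort in Python (my own version, not the exact code from lecture):

```python
def bubble_sort(x: list) -> None:
    """Sort list x in place using bubble sort."""
    for end in range(len(x) - 1, 0, -1):
        # One pass: the largest unsorted element "bubbles" up to index end.
        for i in range(end):
            if x[i] > x[i + 1]:
                x[i], x[i + 1] = x[i + 1], x[i]
```

Each pass needs one fewer comparison than the last, but that still adds up to roughly n²/2 comparisons in the worst case.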

Insertion Sort
Assume x to be an unsorted list. Insertion sort separates x into a sorted sublist and an unsorted sublist; by the end of the process only the sorted sublist remains. On each step, it takes the first element of the unsorted sublist and inserts it into its correct position within the sorted sublist, shifting the larger sorted elements over as needed. At worst, the runtime complexity of insertion sort is O(n²), just as is the worst-case runtime of bubble sort.

Example
x = [27, 3, 5, 4, 12, 9]

[27, 3, 5, 4, 12, 9]
[3, 27, 5, 4, 12, 9]
[3, 5, 27, 4, 12, 9]
[3, 4, 5, 27, 12, 9]
[3, 4, 5, 12, 27, 9]
[3, 4, 5, 9, 12, 27]
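Here's a sketch of insertion sort in Python (again my own version, shifting elements over rather than repeatedly swapping):

```python
def insertion_sort(x: list) -> None:
    """Sort list x in place using insertion sort."""
    for i in range(1, len(x)):
        current = x[i]  # first element of the unsorted sublist
        j = i
        # Shift larger sorted elements right until current's spot is found.
        while j > 0 and x[j - 1] > current:
            x[j] = x[j - 1]
            j -= 1
        x[j] = current
```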

Selection Sort
Assume x to be an unsorted list. Similar to insertion sort, selection sort makes use of both an unsorted and a sorted sublist within x. The sorted sublist is initially empty. Selection sort repeatedly moves the smallest element of the unsorted sublist into the sorted sublist, until all that remains is the sorted sublist. At worst, the runtime complexity of selection sort is O(n²), as with the previous two algorithms. However, it is interesting to also note that at best, and on average, selection sort performs the same as in its worst-case scenario.

Example
x = [27, 3, 5, 4, 12, 9]

[27, 3, 5, 4, 12, 9]
[3, 27, 5, 4, 12, 9]
[3, 4, 27, 5, 12, 9]
[3, 4, 5, 27, 12, 9]
[3, 4, 5, 9, 27, 12]
[3, 4, 5, 9, 12, 27]
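A sketch of selection sort in Python. Note this is the common swap-based version, whereas the trace above removes each minimum and reinserts it; both variants are O(n²):

```python
def selection_sort(x: list) -> None:
    """Sort list x in place using selection sort."""
    for i in range(len(x) - 1):
        # Find the smallest element in the unsorted sublist x[i:].
        smallest = i
        for j in range(i + 1, len(x)):
            if x[j] < x[smallest]:
                smallest = j
        # Move it to the end of the sorted sublist.
        x[i], x[smallest] = x[smallest], x[i]
```

The inner loop always scans the entire unsorted sublist, which is exactly why selection sort's best, average, and worst cases all perform the same.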

Merge Sort
Merge sort works by dividing x into two halves, dividing those halves into halves, and so on, until only single-element sublists remain; it then merges the sublists back together in sorted order. At worst, the runtime complexity of merge sort is O(n log(n)).
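A sketch of merge sort in Python (my own recursive version, not lecture code):

```python
def merge_sort(x: list) -> list:
    """Return a new list containing the elements of x in sorted order."""
    if len(x) <= 1:
        return x[:]  # a list of 0 or 1 elements is already sorted
    mid = len(x) // 2
    left = merge_sort(x[:mid])
    right = merge_sort(x[mid:])
    # Merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])  # one of these two leftovers is empty
    merged.extend(right[j:])
    return merged
```

The list is halved O(log n) times, and each level of halving does O(n) work merging, which is where the O(n log(n)) comes from.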

Quick Sort
Quick sort works by choosing an arbitrary "pivot" element and placing all smaller list elements before the pivot and all larger list elements after it. It then recursively does the same to the sublists before and after the pivot, and so on, until the entire list is sorted. At worst, the runtime complexity of quick sort is O(n²).


Example
x = [27, 3, 5, 4, 12, 9]

[27, 3, 5, 4, 12, 9]  27 will be the pivot
[3, 5, 4, 12, 9, 27] 3 will be the pivot
[3, 5, 4, 12, 9, 27] 5 will be the pivot
[3, 4, 5, 12, 9, 27] 12 will be the pivot
[3, 4, 5, 9, 12, 27]
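A sketch of quick sort in Python, choosing the first element as the pivot like the trace above (a serious implementation would usually partition in place):

```python
def quick_sort(x: list) -> list:
    """Return a new list containing the elements of x in sorted order."""
    if len(x) <= 1:
        return x[:]
    pivot = x[0]  # arbitrary pivot choice, as in the example
    smaller = [item for item in x[1:] if item < pivot]
    larger = [item for item in x[1:] if item >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)
```

Always picking the first element is what makes the worst case O(n²): on an already-sorted list, every pivot ends up with all of the remaining elements on one side.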

Tim Sort
This sorting algorithm, a hybrid of merge sort and insertion sort, has been Python's standard sort for quite some time now. By utilizing each of those sorting algorithms where it works best, Timsort is able to efficiently sort lists of many sizes.
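In practice, this means we get Timsort for free every time we call Python's built-ins:

```python
x = [27, 3, 5, 4, 12, 9]
print(sorted(x))  # Timsort, returns a new list: [3, 4, 5, 9, 12, 27]
x.sort()          # Timsort again, but sorts x in place
print(x)          # [3, 4, 5, 9, 12, 27]
```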

I feel this chapter of the CSC148 curriculum is frustrating and, although I understand its importance, completely uninteresting. 



Monday 10 March 2014

Trees Suck

You heard it here first folks, yes it is in fact true: trees suck. They're difficult to manipulate, and that difficulty carries over into the second assignment and the third exercise, sadly.

It's called a tree because the structure is a metaphor for one. Generally, you begin at the root, which holds a value, and the tree carries on through left and right "subtrees", which in turn contain their own left and right subtrees, until a child that is None is reached, ending that branch. Trees and their implementations can vary greatly. Below is an example of a (binary) Tree object's initialization given to us in the readings.

class Tree:
    def __init__(self, cargo, left=None, right=None):
        self.cargo = cargo
        self.left = left
        self.right = right

Cargo can be of any type; the parameter is left unspecific for this reason. For example, a Tree's cargo could be an integer, like so: Tree(3). It could also be a string: Tree('x'). As you may notice, there is no need to specify a left or right subtree; by default, both are set to None. The above examples behave like so in the Python shell.

>>> int_example = Tree(3)
>>> str_example = Tree('Hey')
>>> int_example.cargo
3
>>> str_example.cargo
'Hey'
>>> int_example.left
>>> int_example.right
>>> str_example.left
>>> str_example.right
 
Why use a tree in the first place, you may ask? Well, they're a very convenient way to store information. Traversing a tree in a particular manner can return a result most helpful to the user. You could, for example, print a tree preorder, where the contents of the root appear before the contents of the children, and the contents of the left subtree are printed before the contents of the right subtree. So the traversal goes from root, to left subtree, to right subtree. 

def print_tree_preorder(tree):
    if tree is None:
        return
    print(tree.cargo, end=" ")
    print_tree_preorder(tree.left)
    print_tree_preorder(tree.right)

>>> tree = Tree("+", Tree(1), Tree("*", Tree(2), Tree(3)))
>>> print_tree_preorder(tree)
+ 1 * 2 3 

Voila! Now anyone who wants a preorder representation of their expression tree can have one (yes, trees can be manipulated to deal with various expressions in various ways!).
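For contrast, here's an inorder traversal (left subtree, then root, then right subtree), sketched using the same Tree class from the readings, repeated here so the snippet runs on its own. For the expression tree above, it recovers the familiar infix form:

```python
class Tree:
    def __init__(self, cargo, left=None, right=None):
        self.cargo = cargo
        self.left = left
        self.right = right

def print_tree_inorder(tree):
    """Print the left subtree's cargo, then the root's, then the right's."""
    if tree is None:
        return
    print_tree_inorder(tree.left)
    print(tree.cargo, end=" ")
    print_tree_inorder(tree.right)

# >>> tree = Tree("+", Tree(1), Tree("*", Tree(2), Tree(3)))
# >>> print_tree_inorder(tree)
# 1 + 2 * 3
```

(Without adding parentheses the infix form can lose grouping information, but for this particular expression it reads correctly.)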

I think they can be cool, for lack of a better word, but understanding how to use them the way I want to use them for A2 and E3 is killing me. I also regret to say that I don't know if I'll be able to figure E3 out in time. Aside from it being completely confusing to me, I do not have much time to focus on CSC148 this week. So cheers, until next time. 


Friday 21 February 2014

Recursion

Recursion, simply stated, is the use of something within itself. At face value, this definition can make recursion seem either simple, or daunting; however, in either case, recursion proves very powerful. At one point in a recursive function, the function will call itself to reach a certain goal. Functions using concepts such as for loops can be rewritten as recursive functions. The following example will demonstrate this.

def repeater(a: str, b: int) -> str:
    """ Return a string containing string a repeated integer b times

    >>> repeater('hey', 3)
    'heyheyhey'
    >>> repeater('S', 5)
    'SSSSS'
    """
    new_string = ""
    
    for i in range(b):
        new_string += a
        
    return new_string

How This Works: a is accumulated b times in a new string through iterations of a for loop. 

def repeater_recursive(a: str, b: int) -> str:
    """ Return a string containing string a repeated integer b times
    
    >>> repeater_recursive('hey', 3)
    'heyheyhey'
    >>> repeater_recursive('S', 5)
    'SSSSS'
    """
    if b == 0:
        return ""
    else:
        return a + repeater_recursive(a, b - 1)

How This Works: Consider the if statement. If parameter b is 0, a is to repeat 0 times, which is why an empty string is returned; this is the base case. Otherwise, we continue to the else statement, and this is where the recursive fun begins. The expression a + repeater_recursive(a, b - 1) prepends one copy of a to whatever the rest of the call produces. The second argument is 1 less than it was before because we have already accounted for a single copy of a. The recursive call then goes through the if-else statement again, and again, with b shrinking each time. Once b reaches 0, no more copies of a need to accumulate into the returned string, the empty string is returned, the stacked calls join their copies of a together, and the function ends.

During this week I'll be focused on beginning Stage 1 of A2 while studying for the midterm. I think it's good to see a Stage 1 of A2 due at a time prior to the next. I typically wait too late to begin an assignment, as I did with A1, and this causes me to fail in many aspects. This time around I'm determined to overcome this issue of mine. So stay tuned, CSC148 TAs, for you haven't heard the last of me.


Sunday 26 January 2014

Inheritance, Exceptions, & The Whole 9 Yards

During this past week's lecture we discussed two important programming concepts, their practical uses, and finally their implementations. These two concepts would be inheritance and exceptions. In addition to this, we were given our very first assignment (hurrah) and the second exercise which, from what I've glanced at, is about the effective use of both these concepts in one swift blow.

I personally believe the concepts of both are fairly simple, however just looking at the instructions for the second exercise I'm a little worried because I don't feel the lecture prepared me enough to even write my own exceptions. Plus under the amount of work I'm given from other courses I haven't gotten around to doing the readings. I'll definitely try squeezing those in, maybe during a commute, and check out some tutorials if need be.

In regards to the assignment, I'm kind of intimidated but I hope after thoroughly reading through the handout a couple times and asking questions I'll be comfortable enough to ace it.

I'm pretty preoccupied so sorry for the brief nature of this post. In other exciting news, Her was great. I miss the winter break.


Monday 20 January 2014

Object-Oriented Programming

As my first exciting post on this bedazzling so-called "SLOG", or courSe LOG (I personally don't understand why S of all letters was chosen over C, O, U, R, or E), I'll be introducing myself and discussing my rather limited experience with object-oriented programming.

For starters, my name is Omar & I'm a student enrolled in CSC148 (the latter is obvious enough if you're even reading this, so). Well then, enough with the introductions, let's get to the juicy and tender meat of my SLOG. Something about that sentence humours me.

The prerequisite to CSC148, CSC108, introduced numerous fundamental programming concepts such as if statements, various loops, lists, dictionaries, and so on and so forth; all essential to the novice programmer. The inverted format and perfect pacing of CSC108, coupled with my previous, albeit also limited, experience in Java, made it a really great experience, which somewhat worries me because this course seems like it could be infinitely more difficult and less enjoyable. Near the end of the semester CSC108 briefly touched upon object-oriented programming, and it was the perfect segue into CSC148, which apparently covers most, if not all, of the fun of object-oriented programming along with other new concepts such as recursion, exceptions, and ADTs (Abstract Data Types).

The basic idea of object-oriented programming makes enough logical sense that I can see why it's been created in the first place and why it has become such an important part of programming. I like that programmers are able to create objects which relate to real world concepts and ideas. For example, in class and through readings a class for points on a graph was written and its implementations demonstrated. That would have saved me a lot of time in grade 9 mathematics, but so would've paying attention and doing homework.

So hopefully I survive the semester, and I'm definitely excited to see what mind numbing trickery Assignment 1 has in store for me and my peers. That was sarcasm by the way, I'm not excited whatsoever. But yeah, that's all folks!