Sorting is any process of arranging elements according to a specific scheme. Over the recent lectures, we students have been learning about sorting algorithms and their implementations in computer science; more specifically, we have learned about sorting elements in ascending numerical order. Because there are many sorting algorithms, and even more implementations of them, the question "Which is most efficient?" comes into play pretty quickly. Through labs and lectures we have kept track of the time it takes for random elements to be sorted using different sorting methods in an attempt to answer this question.
With the question of efficiency, students were taught big O notation. Big O notation describes how an algorithm's runtime grows in relation to the size of its input. Consider the following possibilities for a big O runtime:
O(c)
O(n)
O(log n)
O(n log n)
O(n²)
O(cⁿ)
O(nⁿ)
Three sorting methods, bubble sort, selection sort, and insertion sort, were introduced in CSC108. Although these algorithms work, they are unsatisfyingly inefficient. Efficiency, being a key goal in the field of computer science, must then be found in other algorithms, namely merge sort, quick sort, and Timsort.
Bubble Sort
Assume x to be an unsorted list. Bubble sort, starting at index i = 0, compares adjacent elements. If the earlier element (x[i]) is larger than the later element (x[i+1]), they are swapped. This continues until i reaches the end of the list, at which point one "pass" has completed. The last element (x[-1]) is now the largest element in the list. More passes are necessary to position all remaining values appropriately. Although it is simple, this sorting algorithm performs incredibly poorly in comparison to the others, as its worst-case runtime complexity is O(n²).
Example
x = [27, 3, 5, 4, 12, 9]
First Pass
[3, 27, 5, 4, 12, 9]
[3, 5, 27, 4, 12, 9]
[3, 5, 4, 27, 12, 9]
[3, 5, 4, 12, 27, 9]
[3, 5, 4, 12, 9, 27]
As you can see, 27 has been repeatedly compared with its adjacent element and has moved from the front of the list to the end over the course of one pass.
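The passes above can be sketched in Python. This is a minimal implementation for illustration (the function name is my own, not from the course):

```python
def bubble_sort(x):
    """Sort x in place by repeatedly swapping adjacent out-of-order elements."""
    # After each pass, the largest remaining element has "bubbled" to the end,
    # so each pass can stop one position earlier than the last.
    for end in range(len(x) - 1, 0, -1):
        for i in range(end):
            if x[i] > x[i + 1]:
                x[i], x[i + 1] = x[i + 1], x[i]
    return x

print(bubble_sort([27, 3, 5, 4, 12, 9]))  # [3, 4, 5, 9, 12, 27]
```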
Insertion Sort
Assume x to be an unsorted list. Insertion sort separates x into an unsorted sublist and a sorted sublist; by the end of the process only the sorted sublist remains. On each step, it takes the next element from the unsorted sublist and inserts it into its appropriate position in the sorted sublist, shifting larger elements over as needed. At worst, the runtime complexity of insertion sort is O(n²), the same as the worst-case runtime for bubble sort.
Example
x = [27, 3, 5, 4, 12, 9]
[27, 3, 5, 4, 12, 9]
[3, 27, 5, 4, 12, 9]
[3, 5, 27, 4, 12, 9]
[3, 4, 5, 27, 12, 9]
[3, 4, 5, 12, 27, 9]
[3, 4, 5, 9, 12, 27]
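The trace above can be written as a short Python function (a minimal sketch, not the course's official solution):

```python
def insertion_sort(x):
    """Sort x in place by growing a sorted sublist at the front of the list."""
    # x[:1] is trivially sorted; insert each remaining element into place.
    for i in range(1, len(x)):
        key = x[i]
        j = i - 1
        # Shift sorted elements larger than key one position to the right.
        while j >= 0 and x[j] > key:
            x[j + 1] = x[j]
            j -= 1
        x[j + 1] = key
    return x

print(insertion_sort([27, 3, 5, 4, 12, 9]))  # [3, 4, 5, 9, 12, 27]
```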
Selection Sort
Assume x to be an unsorted list. Similar to insertion sort, selection sort makes use of both an unsorted and a sorted sublist within x. The sorted sublist is initially empty. Selection sort repeatedly moves the smallest element from the unsorted sublist into the sorted sublist, until all that remains is the sorted sublist. At worst, the runtime complexity of selection sort is O(n²), as with the previous two algorithms. However, it is interesting to note that at best, and on average, selection sort performs the same as its worst case, since it must scan the entire unsorted sublist on every step.
Example
x = [27, 3, 5, 4, 12, 9]
[27, 3, 5, 4, 12, 9]
[3, 27, 5, 4, 12, 9]
[3, 4, 27, 5, 12, 9]
[3, 4, 5, 27, 12, 9]
[3, 4, 5, 9, 27, 12]
[3, 4, 5, 9, 12, 27]
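Here is a sketch of the common swap-based variant of selection sort (the trace above shifts elements into place rather than swapping, so its intermediate states differ, but the end result is the same):

```python
def selection_sort(x):
    """Sort x in place by repeatedly selecting the minimum of the unsorted part."""
    for i in range(len(x)):
        # Find the index of the smallest element in the unsorted sublist x[i:].
        min_idx = i
        for j in range(i + 1, len(x)):
            if x[j] < x[min_idx]:
                min_idx = j
        # Swap it into position i, extending the sorted sublist by one.
        x[i], x[min_idx] = x[min_idx], x[i]
    return x

print(selection_sort([27, 3, 5, 4, 12, 9]))  # [3, 4, 5, 9, 12, 27]
```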
Merge Sort
Merge sort works by dividing x into two halves, dividing those halves in turn, and so on until only single-element sublists remain; it then merges the sorted sublists back together in order. At worst, the runtime complexity of merge sort is O(n log n).
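The divide-and-merge process can be sketched recursively in Python (a minimal illustration, not an optimized implementation):

```python
def merge_sort(x):
    """Return a new sorted list by recursively splitting and merging x."""
    if len(x) <= 1:
        return x  # a list of 0 or 1 elements is already sorted
    mid = len(x) // 2
    left = merge_sort(x[:mid])
    right = merge_sort(x[mid:])
    # Merge the two sorted halves by repeatedly taking the smaller front element.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([27, 3, 5, 4, 12, 9]))  # [3, 4, 5, 9, 12, 27]
```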
Quick Sort
Quick sort works by choosing an arbitrary "pivot" and placing all smaller list elements before the pivot and all larger list elements after it. It then recursively does the same to the sublists on either side of the pivot, and so on, until the entire list is sorted. At worst, the runtime complexity of quick sort is O(n²), though on average it runs in O(n log n).
Example
x = [27, 3, 5, 4, 12, 9]
[27, 3, 5, 4, 12, 9] 27 will be the pivot
[3, 5, 4, 12, 9, 27] 3 will be the pivot
[3, 5, 4, 12, 9, 27] 5 will be the pivot
[3, 4, 5, 12, 9, 27] 12 will be the pivot
[3, 4, 5, 9, 12, 27]
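A simple sketch in Python, using the first element as the pivot to match the trace above (real implementations usually partition in place and choose pivots more carefully to avoid the O(n²) worst case):

```python
def quick_sort(x):
    """Return a new sorted list by partitioning around a pivot and recursing."""
    if len(x) <= 1:
        return x
    pivot = x[0]  # first element as pivot, as in the example trace
    smaller = [e for e in x[1:] if e < pivot]
    larger = [e for e in x[1:] if e >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([27, 3, 5, 4, 12, 9]))  # [3, 4, 5, 9, 12, 27]
```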
Tim Sort
This sorting algorithm has been Python's standard sort for quite some time now. A hybrid that combines merge sort and insertion sort, Timsort is able to efficiently sort lists of many sizes.
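Since Timsort is built into Python, using it takes no implementation at all; both the `sorted` built-in and `list.sort` use it under the hood:

```python
x = [27, 3, 5, 4, 12, 9]

# sorted() returns a new sorted list, leaving x untouched.
print(sorted(x))  # [3, 4, 5, 9, 12, 27]

# list.sort() sorts in place and returns None.
x.sort()
print(x)  # [3, 4, 5, 9, 12, 27]
```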
I feel this chapter of the CSC148 curriculum is frustrating and, although I understand its importance, completely uninteresting.