Saturday 29 March 2014

Week 11: Sorting and Efficiency

     In programming we are always looking for the most efficient way to execute a program. By setting aside variables such as hardware, programming language and other random factors, we can analyze how a specific algorithm performs given the size of its input. If we define the size of the input as n and the running time of our algorithm as f(n), then saying f(n) is O(g(n)) means that f(n) grows no faster than cg(n) for some constant c. This notation, known as big-oh, gives an upper bound on the algorithm's running time in terms of the size of the input. It is also important to note that because we are looking at how an algorithm scales as n grows, a sufficiently large input allows us to ignore constants and lower-order terms. For example, a program that runs in 5n^2 + 4n + 3 steps is simply called O(n^2). Different algorithms have different runtimes, which in turn can be categorized by their functions' growth. Slower growth (which is good) means that an algorithm takes less additional time as the input n increases. A constant-time algorithm completes its task regardless of how large the input is, while an exponential runtime increases horrendously fast (which is bad).
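
To make this concrete, here is a minimal sketch (the function name step_count is my own, just for illustration) showing why the lower-order terms of 5n^2 + 4n + 3 can be ignored as n grows:

    def step_count(n):
        """Hypothetical step count for an algorithm that is O(n^2)."""
        return 5 * n**2 + 4 * n + 3

    # The ratio approaches the constant 5 as n grows, which is why
    # the lower-order terms can be ignored and we call this O(n^2).
    for n in [10, 100, 1000, 10000]:
        print(n, step_count(n) / n**2)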

   By establishing a good understanding of big-oh, we can determine how different sorting algorithms perform against each other. In the case of this class, sorting deals with unique integers in a list, ordering them from smallest to largest. I will be analyzing quick sort, merge sort and insertion sort, which are all common sorting algorithms used today.

Quick Sort
      The initial step of quick sort is to choose a pivot and rearrange the list so that integers smaller than the pivot end up on its left and integers larger than the pivot end up on its right. Those two smaller partitions then get their own pivots, and the process repeats until each partition is a single integer element. If all goes correctly and a proper pivot is selected each time, the average case run time is O(nlogn). This can be justified because partitioning does O(n) work at each level, and repeatedly splitting the list in half produces about logn levels (the divide and conquer idea).
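
A minimal sketch of the idea in Python (my own illustration, not the exact code from class; it simply picks the first element as the pivot):

    def quick_sort(lst):
        """Sort a list of unique integers using quick sort."""
        if len(lst) <= 1:  # a partition of one element is already sorted
            return lst
        pivot = lst[0]
        smaller = [x for x in lst[1:] if x < pivot]
        larger = [x for x in lst[1:] if x > pivot]
        return quick_sort(smaller) + [pivot] + quick_sort(larger)

    print(quick_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]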

Merge Sort
     Merge sort can be thought of similarly to how quick sort functions. However, each element in the list is first treated as its own sorted list of length one. We then merge pairs of lists together by repeatedly comparing the front element of one list against the other and taking the smaller. This is done until merging the recursive pairs of lists produces a single sorted list. Like quick sort, merge sort also has O(nlogn) behaviour.
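
A sketch of how this might look in Python (again my own version, written as the top-down recursive variant rather than the bottom-up pairing described above):

    def merge(left, right):
        """Merge two sorted lists into one sorted list."""
        result = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        return result + left[i:] + right[j:]

    def merge_sort(lst):
        """Sort a list by recursively splitting and merging halves."""
        if len(lst) <= 1:
            return lst
        mid = len(lst) // 2
        return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))

    print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]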

Insertion Sort
     In insertion sort we treat the list as having a sorted section on the left and an unsorted section on the right. The first element of the list is considered sorted, and we iterate through the list comparing each unsorted element to the sorted elements. If the unsorted element is larger than the last element of the sorted section, we leave it where it is. However, if it is smaller, we check each element of the sorted section to see where the unsorted element fits. Once the sorting process has been completed, the sorted section spans the entire list from left to right. The average case run time of insertion sort is O(n^2) because for each of the n unsorted elements we may iterate through up to n elements of the sorted portion to place it. If the list is already sorted, the time complexity of insertion sort is O(n) because every element is already in its proper place.
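
A minimal in-place sketch in Python (my own illustration of the procedure described above):

    def insertion_sort(lst):
        """Sort lst in place; everything left of i is always sorted."""
        for i in range(1, len(lst)):
            current = lst[i]
            j = i - 1
            # shift larger sorted elements right to make room
            while j >= 0 and lst[j] > current:
                lst[j + 1] = lst[j]
                j -= 1
            lst[j + 1] = current

    nums = [5, 2, 8, 1, 9, 3]
    insertion_sort(nums)
    print(nums)  # [1, 2, 3, 5, 8, 9]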

Monday 10 March 2014

Week 8: Linked List and Assignment 2

This week we focused more on LinkedList and the general functionality of wrapper classes. In our new LinkedList class, we implemented an LListNode class that allows for a cleaner implementation of nodes. Doing so made it easier for us to delete, reverse and insert new values in our LinkedList by using the attributes of LListNode. Assignment 2 part one was also extended to Sunday, but luckily I had already finished before the due date. The first part of A2 was pretty straightforward, going over the topics of classes and inheritance.
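
A rough sketch of how such a pair of classes might fit together (the attribute and method names here are my guesses, not necessarily the ones from class):

    class LListNode:
        """A single node holding a value and a reference to the next node."""
        def __init__(self, value, nxt=None):
            self.value = value
            self.nxt = nxt

    class LinkedList:
        """A wrapper class that keeps a reference to the front node."""
        def __init__(self):
            self.front = None

        def insert(self, value):
            """Insert a new value at the front of the list."""
            self.front = LListNode(value, self.front)

        def reverse(self):
            """Reverse the list by redirecting each node's reference."""
            prev, cur = None, self.front
            while cur is not None:
                cur.nxt, prev, cur = prev, cur, cur.nxt
            self.front = prev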

Friday 28 February 2014

Week 7: More Recursion!

     In this week's slog post I'll be talking a bit more in depth about the topic of recursion. As I mentioned in the previous blog post, recursion is a tool that allows a function to call itself in order to solve smaller subproblems. It must contain a base case to stop the recursion, or else it would call itself infinitely and crash the program. Recursion can be thought of much like how loops behave in programming. Where a loop iterates through a list or repeats as long as a boolean condition is true, recursion takes a similar approach of checking whether a condition is met and recursing when it is. If we were to use the built-in sum function in Python, we could only take the sum of integers in a flat list. However, with the assistance of recursion, it is possible to get the sum of integers from a list whose elements are integers or even nested lists of integers. Thinking back to how recursion works: if we have a for loop that iterates over each element of the list and checks whether the element is a list, we can add every element that is not a list (in our case, an integer) directly to a running total. If the element is a list, the function recursively calls itself with that sublist as its argument and adds the returned sum of that sublist to the total. Moreover, because of this powerful property of recursion, no matter how deep a list may be, the function will still work properly.
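
Here is a small sketch of that idea (the name nested_sum is mine; the version from class may differ):

    def nested_sum(lst):
        """Return the sum of all integers in a possibly nested list."""
        total = 0
        for element in lst:
            if isinstance(element, list):
                total += nested_sum(element)  # recurse into the sublist
            else:
                total += element              # a plain integer
        return total

    print(nested_sum([1, [2, 3], [4, [5, 6]]]))  # 21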

     The applications of recursion are especially essential to data structures such as linked lists and trees. Both linked lists and trees depend on the fundamental use of nodes and values. A node is an element that holds a value and refers to other objects, possibly other nodes. We can visualize this concept like a chain, where one node refers to another until we reach a node that refers to nothing. Furthermore, in the implementation of trees, a node may refer to two or more children, and so forth. This makes recursion a natural fit for processing such data because linked lists and trees essentially contain nested objects of the same type. If we were to do a tree traversal (visiting each node of a tree), recursion works because it lets us check each child of a node and whether or not it contains anything. If a child holds a value we can print it or append it to a list, and if it is itself a tree object we can go further into that node and repeat the same process.
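
As a sketch, assuming a simple Tree class where each node stores a value and a list of children (my own minimal setup, not the exact class from lecture):

    class Tree:
        """A node with a value and a (possibly empty) list of child trees."""
        def __init__(self, value, children=None):
            self.value = value
            self.children = children or []

    def preorder(tree):
        """Return a list of values by visiting each node, parent first."""
        values = [tree.value]
        for child in tree.children:
            values.extend(preorder(child))  # recurse into each subtree
        return values

    t = Tree(1, [Tree(2, [Tree(4)]), Tree(3)])
    print(preorder(t))  # [1, 2, 4, 3]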


Tuesday 4 February 2014

Late Week 4: Recursion

     In week 4 we focused on the powerful concept of recursion. Recursion allows us to solve large problems by breaking them down into smaller subproblems; the simplest subproblem, which can be solved directly, is known as the base case. In programming we write functions that call themselves in order to recursively handle each instance of the problem. In class, for example, we had a problem that dealt with finding the maximum number in a list that may contain nested lists. We would check each element of the list to see whether its type was a list or an integer. If it was a list, the function would make a recursive call and run the same procedure on that sublist. If the element was not a list, we would collect the number through a list comprehension. We can also see that this non-list case is our base case, which helps us avoid any possibility of an infinite recursive call.
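
A sketch of that class example using a comprehension (my own reconstruction, not the exact code we wrote):

    def nested_max(lst):
        """Return the maximum integer in a possibly nested list."""
        return max(nested_max(x) if isinstance(x, list) else x
                   for x in lst)

    print(nested_max([3, [7, [1, 9]], 5]))  # 9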

Tuesday 21 January 2014

Week 3: Object-Oriented Programming

     In modern-day programming, object-oriented programming (OOP) is one of the most widely recognized methods of creating new software. It combines simplicity and functionality by utilizing objects (datatypes that represent their real-life counterparts). Objects must follow a specific guideline based on their class. Classes, another fundamental part of OOP, contain the blueprint of properties and methods that objects need in order to function. Properties such as age, height and birthdate are descriptions that a class may contain, while methods are actions such as run, jump or eat. One blueprint can then produce various different objects, each with its own unique set of property values and the same shared methods. By operating with objects, which can be thought of as mini programs themselves, we get a more efficient and organised way to design a program.
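
For instance, here is a tiny sketch of a class with the kinds of properties and methods described above (the Person class and its details are just my illustration):

    class Person:
        """A blueprint for person objects with properties and methods."""
        def __init__(self, age, height):
            self.age = age        # property: a description of the object
            self.height = height  # property

        def run(self):
            """Method: an action this object can perform."""
            print('A person of height', self.height, 'is running!')

    alice = Person(20, 165)
    bob = Person(30, 180)   # same blueprint, different property values
    alice.run()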