Foundations of Computer Science/Abstraction and Recursion

Abstraction and Recursion

Programming is easy as long as the programs are small. Inevitably our programs will grow larger and larger as we create them to solve increasingly complex problems. One technique we use to keep our algorithms and programs simple is abstraction, an idea widely used in many fields such as art, math, and engineering. In this chapter we will study how to apply this technique in algorithm design and programming.

An abstraction removes details to help us focus our attention. For instance, a car presents a set of simple controls as an interface to its drivers. As long as we know how to use the interface we can drive the car without knowing how it operates under the hood. The internal workings of a car are unnecessary detail to drivers, unless they are race car drivers who need that kind of knowledge to drive the car most efficiently. This interface hasn't changed much since the first car was made. Abstraction also generalizes concepts by extracting common features from specific examples. The car interface extracts common car features (what a driver needs to know to drive a car) from all kinds of cars. Once you learn how to drive one car you have learned how to drive all cars of the same type. This is a powerful idea. Such abstraction also gives car makers the freedom to change the internal design of a car without affecting the users.

Abstraction in Computing

We have learned that algorithm design is an integral part of problem solving using computing and a first step in programming. The hardest part isn't the programming/coding itself but keeping track of details in large programs. There are two primary ways to keep our programs "small": chunking and layering, which are two metaphors for abstraction. Chunking breaks down (decomposes) functionality into smaller units and lets the units interact with each other through a well-defined interface. For instance, in Snap! you can implement an algorithm as a block, which can then be used anywhere in your script as long as you call the block with a proper sequence of parameters according to its interface. Layering separates the functional units (blocks) into layers to separate concerns and simplify interaction patterns, making the complexity more manageable. The following figure illustrates the idea of layering.


By organizing functional units (blocks) into layers we can simplify the interactions and allow concurrent development of the layers.

In the figure each layer relies on the layer below it to function and provides services to the layer above it. For example, a unit in layer 1 is only allowed to call units in layer 2 directly below it. All interactions are limited to pairs of layers that are adjacent in the stack. We could replace a layer completely with a new implementation without affecting the rest of the stack, which achieves modularity. On the contrary, if arbitrary interactions are allowed we may end up with a tightly coupled system, as shown in the following figure.


Without any restriction any unit (block) can call any other unit.
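
To make the layering idea concrete, here is a minimal text-based sketch in Python (not Snap!), with hypothetical function names of our own: the top layer calls only the interface of the layer directly below it, so either layer can be reimplemented without touching the other.

 # Layer 2: hides the storage details behind a small interface.
 def read_record(record_id):
     # Stand-in implementation; could be swapped for a file or database
     # without changing layer 1.
     return {"id": record_id, "name": "example"}

 # Layer 1: relies only on layer 2's interface, not on how it works inside.
 def show_record(record_id):
     record = read_record(record_id)
     print(record["name"])

 show_record(42)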

Abstraction Examples

Using abstraction to achieve simplification and generalization is truly a powerful organizing idea. Recall the thought experiment in chapter one, in which we built a machine that can potentially solve groups of equations. The machine was built through abstraction: we constructed larger building blocks from smaller (more elementary) ones and treated each block as a unit, ignoring its internal details.

The Snap! environment allows us to construct programs in a similar fashion. When a block is defined it becomes a new (user-defined) building block. The block can be arbitrarily complex, but to use it we only need to know its interface - its name and its list of parameters. The unnecessary details are hidden from the users, greatly simplifying the thinking involved in programming.

Let's look at a concrete example. To make a sprite draw different equilateral shapes we could create a block for drawing each type of shape, such as triangles, squares, and pentagons. A block in Snap! is an abstraction that hides details and represents a certain behavior/intention. The following block draws a square with a specific size.


This Snap! block draws a square with a specific size.
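
Since the block itself appears only as a picture, here is a rough equivalent sketched in Python's standard turtle module (the name draw_square is our own choice, not part of the original block):

 import turtle

 def draw_square(size):
     # Repeat four times: draw one side, then turn 90 degrees.
     for _ in range(4):
         turtle.forward(size)
         turtle.right(90)

 draw_square(100)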

To draw a triangle we can use a similar logic structure.


This Snap! block draws an equilateral triangle.

The draw triangle block repeats three times to draw each side with a turn of 120 degrees. Do you know why the sprite has to turn 120 degrees to form a 60-degree angle between two sides? I hope you have noticed that the same logic structure can be used to create blocks for drawing any equilateral shape. Instead of creating a block for each shape we can generalize the task into one block that can draw any such shape. This block needs an additional piece of information - the number of sides.


This Snap! block can draw any equilateral shape.

With the number of sides we can determine the internal angle of the shape (the sprite actually turns by the exterior angle, which is 360 degrees divided by the number of sides), which is all we need to draw the shape. Please check out this resource if you are not sure how to calculate the internal angle from the number of sides. This block can serve as an abstraction of the task of drawing equilateral shapes (polygons). You may have noticed that the length of the sides is hard-coded (typed in as a constant, not a parameter). What if we want to draw shapes of different sizes? We can further generalize the function of the block by adding another parameter and using it to control the side length.


This Snap! block draws any equilateral polygon of any size.
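
As a rough Python turtle sketch of this fully generalized block (again with names of our own choosing, not from the original blocks):

 import turtle

 def draw_polygon(sides, size):
     # Each side is followed by a turn of the exterior angle, 360 / sides degrees.
     for _ in range(sides):
         turtle.forward(size)
         turtle.right(360 / sides)

 draw_polygon(5, 80)   # a pentagon with sides of length 80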

Through this example, we have demonstrated defining blocks to abstract away details of a task and generalizing a solution to solve more problems.

Recursion

Recursion is a pattern that is self-similar - the whole consists of smaller parts that are structurally similar to the whole. For example, a tree consists of branches that look like smaller trees. Similarly, a directory tree of a file system on a computer and an ancestry tree in genealogy exhibit the same pattern. The following figure shows a recursive tree.


Tree created using the Logo programming language and relying heavily on recursion.

Self-similarity allows us to define concepts that exhibit such a pattern in a more concise and elegant way. A tree can be either a trunk with no branches or a trunk with a number of branches, each of which is a tree. This definition covers all possible tree structures. How would you describe the following picture?

If you were to do it by delving into finer and finer details repeatedly, it would never end. Can you define the picture recursively? Another example is the definition of the factorial function in math: 1! = 1 and n! = n × (n − 1)! for all n > 1. This recursive definition not only defines factorial but also describes a way to calculate it. For example, 5! can be calculated from 4!, which is 4 times 3!, which is 3 times 2!, which is 2 times 1!, which is 1 by definition.
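
As a quick sketch of the same definition in Python (not part of the original text):

 def factorial(n):
     if n == 1:                      # base case: 1! = 1
         return 1
     return n * factorial(n - 1)     # recursive case: n! = n * (n - 1)!

 print(factorial(5))                 # prints 120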

If the problem we are solving is self-similar - solving the whole problem is a matter of solving parts that are similar to the whole - the solution we are defining for the whole can be used to solve the subproblems (recursion). The beauty of a recursive solution is that the definition of the problem is the solution, as shown in the factorial example. To design a recursive solution we practice wishful thinking: as we describe the solution to the whole problem we wish it were fully defined and could be used to solve the smaller subproblems. In computer programming this is called recursive thinking/programming. To program recursively, we describe the solution to a problem as a function/procedure/block, in which we break the bigger problem into smaller ones and use the function we are defining to solve the smaller problems. If the problem is finite, eventually the smaller problems are so simple that they are directly solvable. In such cases the recursion stops; we call those cases base cases. By the time our program reaches all base cases, we will have solved the whole problem, because all the subproblems are solved, including the problem we started with.

In any recursive function two essential parts must exist: base cases and recursive cases. The recursive cases break a bigger problem into smaller ones. The base cases stop the recursion by solving the directly solvable problems. If both parts exist and are structured properly, the algorithm (function) can solve problems of any size by asking clones of itself to solve partial problems. Recursive problem solving is a powerful idea because it simplifies our thinking: if you can define a problem recursively you can solve it recursively. Recursive solutions are often more elegant and easier to verify, but this approach only lends itself to problems that can be defined recursively.

Before we study some concrete examples, we will introduce the concept of a function, which makes recursive solutions more manageable. A block in Snap! is considered a function (similar to a math function) if it has the following properties:

  • a function takes an arbitrary number of inputs (zero or more)
  • a function always returns/reports exactly one result value
  • for the same input a function always reports the same result value
  • the execution of a function has no side effects on the environment

With such restrictions, functions in Snap! are blocks that perform a task in isolation (in their own world) and hand off the result to be further processed. According to this definition, which blocks in the following list are functions?


Some blocks in this list are functions.
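
As a point of comparison outside Snap!, here is a small Python sketch (hypothetical examples of our own) of one definition that meets these criteria and one that does not:

 # A function in this sense: the same input always gives the same result,
 # and nothing outside the function is changed.
 def square(x):
     return x * x

 # Not a function in this sense: it changes state outside itself (a side
 # effect), so the environment is different after every call.
 total = 0
 def add_to_total(x):
     global total
     total = total + x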

Recursion Examples

Consider the binary search algorithm:

Find an item (target) in a sorted list
 if the list is empty, the target cannot be found
 consider the item in the middle, if it matches the target you are done 
 otherwise, decide which half to search next

Searching one half of the ordered list is just a smaller version of the overall problem, so we can apply the same algorithm, wishing that it is fully defined. This process goes on until the list is empty or the search target is found.

Clearly the base cases are

  • if the list is empty, then the target cannot be found
  • if the item in the middle of the list matches the target, then the target is found

The recursive cases are (assume the list is sorted in ascending order)

  • if the item in the middle of a list is smaller than the search target, continue searching in the second half of the list
  • otherwise, continue searching in the first half of the list
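
Putting the base cases and recursive cases together, here is a minimal recursive sketch in Python (the function name and the list-slicing approach are our own choices, not taken from the original Snap! blocks):

 def binary_search(items, target):
     # items must be sorted in ascending order
     if not items:                   # base case: empty list, target cannot be found
         return False
     middle = len(items) // 2
     if items[middle] == target:     # base case: the middle item matches the target
         return True
     if items[middle] < target:      # recursive case: search the second half
         return binary_search(items[middle + 1:], target)
     return binary_search(items[:middle], target)    # recursive case: search the first half

 print(binary_search([1, 3, 5, 7, 9, 11], 7))   # True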