Algorithms


An algorithm is a detailed, formally defined set of steps to obtain a result - one that can be implemented in a programming language. It has defined inputs and outputs, and is expected to complete in finite time.
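
As a concrete illustration (a standard example, not one the text names), Euclid's algorithm for the greatest common divisor has exactly these properties: defined inputs, a defined output, and guaranteed termination.

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: defined inputs (two non-negative integers),
        a defined output (their greatest common divisor), and guaranteed
        termination, since b strictly decreases on every iteration."""
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(48, 18))  # 6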

Any algorithm implies some processing cost - in terms of space and time. The complexity of an algorithm is measured as a function of the amount of data it processes, denoted n. The space and time costs are called S(n) and T(n) respectively, and the overall complexity is called C(n).

This helps us identify the cost of the algorithm for data of size n. The complexity of some algorithms grows linearly with n. Others grow with n² - denoted as C(n²) - and so on. There are even algorithms that grow with eⁿ - denoted by C(eⁿ).
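
A minimal sketch of how these growth rates differ in practice, using simple step counters (the function names are illustrative, not from the text): a single pass over the data grows with n, while a nested pass over all pairs grows with n².

    def linear_scan(items):
        """T(n) grows linearly: one pass over the data."""
        steps = 0
        for _ in items:
            steps += 1
        return steps

    def all_pairs(items):
        """T(n) grows with n squared: every element is paired
        with every element, for n * n steps in total."""
        steps = 0
        for i in range(len(items)):
            for j in range(len(items)):
                steps += 1
        return steps

    data = list(range(1000))
    print(linear_scan(data))  # 1000 steps
    print(all_pairs(data))    # 1000000 steps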

When we compare two algorithms, we need to look into these aspects. We check our constraints on space, time and overall complexity, and then evaluate the algorithms based on their S(n), T(n) and C(n). The resource requirements are often not simple: we have best-case, worst-case and average-case requirements.
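
Linear search is a standard example of this split (again, not one the text names): its best case is a single comparison, its worst case is n comparisons, and its average case is around n/2.

    def linear_search(items, target):
        """Best case: target is first (1 step). Worst case: target is
        last or absent (n steps). Average case: about n/2 steps."""
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1

    data = [7, 3, 9, 1, 5]
    print(linear_search(data, 7))  # best case: found on the first comparison
    print(linear_search(data, 5))  # worst case: found on the last comparison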

Each algorithm is unique, but algorithms are often classified based on their approach to the problem. There is no right or wrong, or better or worse, approach here. These are terms that make it easier to express and communicate the specific steps of an algorithm. In practice, an algorithm may combine different approaches at different steps.

  • Greedy Algorithm: At each step, a greedy algorithm makes the choice that looks best at the moment - the locally optimal choice - in the hope that these choices add up to a good overall solution (see the first sketch after this list).
  • Lazy Algorithm: This approach postpones computation until the result is actually needed. It tends to reduce resource utilization by avoiding effort on things that are never required at all (see the second sketch after this list).
  • Divide and Conquer: As the name suggests, this approach works by dividing the problem into smaller subproblems, over and over, until we have very small atomic pieces that can be solved easily; the partial solutions are then combined into the final result (see the third sketch after this list).
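
A minimal greedy sketch, assuming a coin-change problem with illustrative denominations (the problem and the values are assumptions, not from the text). The greedy step is taking the largest coin that still fits, at every turn:

    def greedy_change(amount, coins=(25, 10, 5, 1)):
        """Greedy choice: at each step, take the largest coin that fits.
        For these denominations this happens to be optimal; for arbitrary
        denominations a greedy approach may not be."""
        result = []
        for coin in coins:  # coins are listed largest first
            while amount >= coin:
                amount -= coin
                result.append(coin)
        return result

    print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]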
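
A minimal lazy sketch using a Python generator (the mechanism is an assumption; the text does not prescribe one). Values are computed only when they are actually requested, so effort on unneeded items is avoided entirely:

    def expensive_results(items):
        """Lazily yields processed values; nothing is computed until asked for."""
        for item in items:
            yield item ** 2  # stands in for an expensive computation

    results = expensive_results(range(1_000_000))
    # Only the first three values are ever computed; the rest cost nothing.
    for _ in range(3):
        print(next(results))  # 0, 1, 4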
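
A minimal divide-and-conquer sketch using merge sort as the classic instance (an assumption; the text does not name a specific algorithm). The list is divided until single-element pieces remain, which are trivially sorted, and the sorted halves are then combined:

    def merge_sort(items):
        """Divide: split in half until trivially small; combine: merge sorted halves."""
        if len(items) <= 1:  # atomic piece: already sorted
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Combine: merge the two sorted halves into one sorted list.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]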