Edge Detection


When we look around, the first thing we tend to do is identify the objects around us based on their shapes and sizes. And how do we identify those shapes and sizes? By picking out the edges in the scene. An edge is a point or a line that marks an abrupt change in color or brightness, and it is the first thing we notice whenever we look around.

If we want to build an application that "sees" and identifies objects, we too need to start with the edges. Although this is a trivial task for our eyes, it is not so simple for a software application. To the software, an image is just a long array of bytes, possibly several megabytes long. How do we parse this array to find the edges in the image? How do we decide whether two objects in two images are the same? How do we tell that two photographs show the same person? Humans have absolutely no problem doing that.
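To make the question concrete, here is a minimal sketch of one classic idea: treat the image as a 2-D grid of grayscale intensities and flag the pixels where the intensity changes sharply between neighbors. The image array and the threshold value below are illustrative assumptions, not a definitive implementation; practical systems usually rely on more robust operators such as Sobel or Canny.

import numpy as np

def detect_edges(img, threshold=30.0):
    """Mark pixels where the grayscale intensity changes sharply.

    img: 2-D array of grayscale values (0-255).
    Returns a boolean array of the same shape (True = edge pixel).
    """
    img = img.astype(float)

    # Intensity differences between horizontally and vertically adjacent pixels.
    dx = np.zeros_like(img)
    dy = np.zeros_like(img)
    dx[:, :-1] = img[:, 1:] - img[:, :-1]
    dy[:-1, :] = img[1:, :] - img[:-1, :]

    # Gradient magnitude: how drastic the local change is at each pixel.
    magnitude = np.hypot(dx, dy)

    # An "edge" is any pixel whose change exceeds the chosen threshold.
    return magnitude > threshold

# Example: a dark square on a light background; edges appear along its border.
image = np.full((8, 8), 200.0)
image[2:6, 2:6] = 50.0
print(detect_edges(image).astype(int))

Even this toy version shows the core idea: an edge detector does nothing more than measure how quickly pixel values change and keep the places where that change is large.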

For many years, researchers worked on this problem and gave us many different ways of identifying the edges and then attempting to recover the shapes. But none of these techniques proved useful enough to go far beyond the laboratory.

However, with recent developments in neural networks and deep learning, such applications have reached the masses. Face recognition is a fabulous application of concepts that started with edge detection and computer vision. Today's computer vision has surpassed most boundaries of our imagination, yet edge detection remains at its core.