Image Processing
One major technical tool to understand is how a computer handles images. We have to know how images are represented, how we can edit them, and which techniques are available to rearrange them.
Image Representation
Grayscale
For a computer, an image is nothing more than a large collection of small squares. These squares are called "pixels". In a grayscale image, each pixel carries a number $n$, usually with $n \in \{0, 1, \ldots, 255\}$. This number $n$ represents the exact gray value of that square in the image. This means that in a grayscale image we can use 256 different gray levels, where 255 denotes a white pixel and 0 a black one. We could of course use more than 256 levels of gray. With the convention above, every pixel uses exactly 1 byte (8 bits) of memory (due to the binary system of the computer: $2^8 = 256$). If you need more gray levels in your image, this is not a problem; you simply use more memory to store the picture. Keep in mind, however, that this becomes demanding for large images, and quite often the output device (e.g. your monitor) cannot display more than these 256 gray levels anyway.
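As a minimal sketch of this representation in Python with NumPy (the small gradient image is purely illustrative), a grayscale image is just a matrix of 8-bit values:

```python
import numpy as np

# A small synthetic grayscale image: an 8x8 gradient from black (0) to white (255)
pixels = np.linspace(0, 255, 64, dtype=np.uint8).reshape(8, 8)

print(pixels.shape)                 # (8, 8): height x width
print(pixels.dtype)                 # uint8 -> exactly 1 byte per pixel
print(pixels.min(), pixels.max())   # 0 (black) ... 255 (white)
print(pixels.nbytes)                # 64 bytes: one byte for each of the 8*8 pixels
```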
Colour
Representing a colour image is only slightly more complicated than a grayscale one. All you have to know is that the computer works with an additive colour mixture of the three primary colours Red, Green and Blue. These are the so-called RGB colours.
These images are also stored pixel by pixel, but now every pixel holds three values between 0 and 255, one for each colour channel. So we have $256^3 = 16{,}777{,}216$ different colours that can be represented. As with grayscale images, no colour means black and full intensity in all channels means white: the colour (0,0,0) is black, whereas (0,0,255) is blue and (255,255,255) is white.
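A small illustrative sketch in Python with NumPy, building a tiny RGB image by hand from the triples mentioned above:

```python
import numpy as np

# A 2x2 RGB image: each pixel is an (R, G, B) triple of values in [0, 255]
image = np.array([
    [[  0,   0,   0], [  0,   0, 255]],   # black, blue
    [[255,   0,   0], [255, 255, 255]],   # red,   white
], dtype=np.uint8)

print(image.shape)   # (2, 2, 3): height x width x 3 colour channels
print(image[0, 1])   # [  0   0 255] -> pure blue
```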
Image Filtering
1D Filter
In many technical applications we find some primitive basis in which features can easily be described. In the one-dimensional case filters are not a big deal, and we can use such filters to modify signals and images. The so-called Savitzky-Golay filter allows us to smooth incoming signals. The filter was described in 1964 by Abraham Savitzky and Marcel J. E. Golay. It is a finite impulse response (FIR) filter.
For a better understanding, let us look at an example. In 1D we usually deal with vectors. One such given vector we call $x$, and it holds $x = (x_1, x_2, \ldots, x_n)$ with $x_i \in \mathbb{R}$. Our purpose is to smooth this vector $x$. To do so, all we need is another vector $w = (w_{-k}, \ldots, w_0, \ldots, w_k)$ of length $m + 1 = 2k + 1$; this vector we call a weight vector.
With $y_i = \sum_{j=-k}^{k} w_j \, x_{i+j}$ for $i = k+1, \ldots, n-k$ we now have a smoothed vector $y$. This vector is smoother than the original one, because every entry is a weighted average over several neighbouring entries of $x$: each new entry depends on entries to the left and to the right of the entry being smoothed. One major drawback of this approach is that the new vector $y$ has only $n - 2k = n - m$ entries instead of the $n$ entries of the original vector $x$.
Plotting this new vector yields essentially the same curve as before, only with a smaller amplitude of its fluctuations. The overall shape of the signal is preserved, but it fluctuates less.
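A sketch of this kind of smoothing in Python, assuming NumPy and SciPy are available (the noisy test signal and the chosen window sizes are purely illustrative): the weighted average above can be written as a convolution, and SciPy also provides a ready-made Savitzky-Golay filter.

```python
import numpy as np
from scipy.signal import savgol_filter

# Noisy test signal: a sine wave plus random noise (illustrative data only)
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * rng.standard_normal(200)

# Simple smoothing with a symmetric weight vector w (here: a moving average, k = 2)
k = 2
w = np.ones(2 * k + 1) / (2 * k + 1)
y = np.convolve(x, w, mode="valid")   # y has n - 2k entries, as described above

# Savitzky-Golay smoothing: window of 11 samples, local polynomials of degree 3
y_sg = savgol_filter(x, window_length=11, polyorder=3)

print(len(x), len(y), len(y_sg))      # 200, 196, 200
```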
2D Filter
Going from the 1D case to the 2D case is done by simply replacing vectors with matrices. As already mentioned, for a computer or for a software tool such as MATLAB, a gray-level image is nothing more than a huge matrix filled with natural numbers, usually between 0 and 255.
The weight vector now becomes a weight matrix (often called a kernel). But we still apply the filter in the same way: by summing up the element-wise products of the kernel and the image patch it covers.
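A minimal sketch of such a 2D filter in Python with NumPy and SciPy; the 3×3 averaging kernel and the tiny test image are illustrative choices:

```python
import numpy as np
from scipy.ndimage import convolve

# A small synthetic gray-level image (values between 0 and 255)
image = np.array([
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
], dtype=float)

# 3x3 averaging kernel: every output pixel is the mean of its 3x3 neighbourhood
kernel = np.ones((3, 3)) / 9.0

# Each output pixel is the sum of element-wise products of the kernel and the
# image patch it covers (borders are handled by nearest-pixel padding)
smoothed = convolve(image, kernel, mode="nearest")
print(np.round(smoothed, 1))
```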
Dilation and Erosion
Linear filters, as seen before, are commutative. To cite Wikipedia: "One says that x commutes with y under ∗ if:
- $x \ast y = y \ast x$"
In other words, it does not matter how many linear filters you use or in which order you apply them. For example, if a Savitzky-Golay smoothing filter is applied to some data, followed by a second Savitzky-Golay filter that calculates the first derivative, the result is the same as when the order of the two filters is reversed. It even holds that there is a single filter which does the same as the two applied in sequence.
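A small numerical check of this commutativity, sketched in Python with NumPy (the smoothing and derivative kernels are arbitrary illustrative choices, not the actual Savitzky-Golay coefficients):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.standard_normal(100)

smooth = np.array([1, 2, 3, 2, 1]) / 9.0   # a simple smoothing kernel
deriv  = np.array([1, 0, -1]) / 2.0        # a central-difference derivative kernel

# Apply the two linear filters in both possible orders
a = np.convolve(np.convolve(signal, smooth, mode="full"), deriv, mode="full")
b = np.convolve(np.convolve(signal, deriv, mode="full"), smooth, mode="full")

# A single combined filter does the same job as the two applied in sequence
combined = np.convolve(smooth, deriv, mode="full")
c = np.convolve(signal, combined, mode="full")

print(np.allclose(a, b), np.allclose(a, c))   # True True
```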
In contrast, morphological operations on an image are non-linear operations, and the final result depends on the order in which they are applied. Any image is defined by its pixel values $x_{ij}$. Further, the image is assumed here to be a black-and-white (binary) image, so we have $x_{ij} \in \{0, 1\}$.
To define a morphological operation we have to choose a structuring element SE, for example a 3×3 matrix of pixels, i.e. a small neighbourhood within the image.
The definition of erosion E says:
- $E(M) = \begin{cases} 0 & \text{if } x_{ij} = 0 \text{ for at least one pixel } x_{ij} \in M, \\ 1 & \text{otherwise.} \end{cases}$
In words: let M denote the set of pixel values covered by the structuring element centered on the pixel under consideration. If any pixel in M has the value 0, the erosion sets that pixel to zero; otherwise E(M) = 1.
For the dilation D the opposite holds: if any value in M is 1, the dilation of M, D(M), is set to 1.
- $D(M) = \begin{cases} 1 & \text{if } x_{ij} = 1 \text{ for at least one pixel } x_{ij} \in M, \\ 0 & \text{otherwise.} \end{cases}$
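A minimal sketch of erosion and dilation on a binary image, assuming Python with NumPy and SciPy; the 3×3 structuring element and the small test image are illustrative:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

# A small binary test image: a 3x4 block of ones containing a single hole
image = np.zeros((7, 7), dtype=bool)
image[2:5, 2:6] = True
image[3, 4] = False

# 3x3 structuring element SE (all ones): the pixel itself plus its 8 neighbours
se = np.ones((3, 3), dtype=bool)

eroded  = binary_erosion(image, structure=se)   # stays 1 only if all pixels under SE are 1
dilated = binary_dilation(image, structure=se)  # becomes 1 if any pixel under SE is 1

print(image.astype(int), eroded.astype(int), dilated.astype(int), sep="\n\n")
```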
Compositions of Dilation and Erosion: Opening and Closing of Images
There are two compositions of dilation and erosion, one called opening and the other called closing. The opening of an image is an erosion followed by a dilation, $O(M) = D(E(M))$, while the closing is a dilation followed by an erosion, $C(M) = E(D(M))$.
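A sketch of opening and closing as compositions of the two operations above, again assuming SciPy; with the default border handling, the built-in binary_opening and binary_closing should match the chained calls:

```python
import numpy as np
from scipy.ndimage import (binary_erosion, binary_dilation,
                           binary_opening, binary_closing)

rng = np.random.default_rng(2)
image = rng.random((8, 8)) > 0.6          # a random binary image for illustration
se = np.ones((3, 3), dtype=bool)          # 3x3 structuring element

# Opening: erosion followed by dilation (removes small isolated foreground specks)
opened = binary_dilation(binary_erosion(image, se), se)
# Closing: dilation followed by erosion (fills small holes in the foreground)
closed = binary_erosion(binary_dilation(image, se), se)

# Compare with SciPy's built-in composite operations
print(np.array_equal(opened, binary_opening(image, se)))
print(np.array_equal(closed, binary_closing(image, se)))
```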