DIGITAL IMAGE PROCESSING
Subject Code: 15CS753    IA Marks: 20
Number of Lecture Hours/Week: 3    Exam Marks: 80
Total Number of Lecture Hours: 40    Exam Hours: 03
CREDITS – 03
DIGITAL IMAGE PROCESSING
■ Text Books:
■ 1. Rafael C G., Woods R E. and Eddins S L, Digital Image Processing,
Prentice Hall, 3rd Edition, 2008.
■ Reference Books:
1. Milan Sonka, “Image Processing, analysis and Machine Vision”, Thomson
Press India Ltd, Fourth Edition.
2. Fundamentals of Digital Image Processing- Anil K. Jain, 2nd Edition,
Prentice Hall of India.
3. S. Sridhar, Digital Image Processing, Oxford University Press, 2nd Ed,
2016.
DIGITAL IMAGE PROCESSING
■ Module – 1 8 Hours
■ Introduction Fundamental Steps in Digital Image Processing,
■ Components of an Image Processing System,
■ Sampling and Quantization,
■ Representing Digital Images (Data structure),
■ Some Basic Relationships Between Pixels- Neighbors and Connectivity of pixels
in image,
■ Applications of Image Processing:
– Medical imaging,
– Robot vision,
– Character recognition,
– Remote Sensing
Image processing
■ Signal processing is a discipline that deals with the analysis and processing
of analog and digital signals, covering the storing, filtering, and other
operations performed on signals.
■ These signals include transmission signals, sound or voice signals,
image signals, and other signals.
■ Image processing is the field that deals with signals for which the input is an
image and the output is also an image.
■ As the name suggests, it deals with the processing of images.
Image processing
■ What is an Image?
■ An image may be defined as a two-dimensional function, f(x,y),
– where x and y are spatial (plane) coordinates, and
– the amplitude of f at any pair of coordinates (x, y) is called the
intensity or gray level of the image at that point.
■ When x, y, and the intensity values of f are all finite, discrete
quantities, we call the image a digital image.
■ The field of digital image processing refers to processing digital
images by means of a digital computer.
■ A digital image is composed of a finite number of elements, each of
which has a particular location and value.
■ These elements are called picture elements, or pixels.
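As a minimal sketch of this definition (using Python with NumPy, an assumed tooling choice rather than anything prescribed by the text), a digital image is just a 2-D array of finite, discrete intensity values:

import numpy as np

# A tiny 4 x 4 digital image: f(x, y), with x as the row index and
# y as the column index, quantized to 8 bits (0 = black, 255 = white).
f = np.array([[  0,  50, 100, 150],
              [ 50, 100, 150, 200],
              [100, 150, 200, 250],
              [150, 200, 250, 255]], dtype=np.uint8)

x, y = 2, 3                 # a pair of discrete spatial coordinates
print(f[x, y])              # intensity (gray level) at (x, y) -> 250
print(f.shape)              # (M, N): a finite number of pixels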
Image processing
■ Images play the most important role in human perception.
■ However, while we human beings are limited to the visual band of the
electromagnetic (EM) spectrum, imaging machines cover almost the
entire EM spectrum, ranging from gamma rays to radio waves.
■ They can operate on images generated by sources that humans are not
accustomed to associating with images.
■ These include ultrasound, electron microscopy, and computer-generated
images.
■ Thus, digital image processing covers a wide and varied field of applications.
Image processing
■ There is no fixed boundary regarding where image processing stops and
other related areas, such as image analysis and computer vision, start.
■ Sometimes a distinction is made by defining image processing as a
discipline in which both the input and output of a process are images.
■ But this is a limiting and artificial boundary.
– Under such a definition, even the trivial task of computing the average
intensity of an image (which yields a single number) would not be
considered an image processing operation.
■ On the other hand, there are fields such as computer vision, whose ultimate
goal is to use computers to emulate human vision, including learning and
being able to make inferences and take actions based on visual inputs.
Image processing
■ There are no clear-cut boundaries between image processing and computer vision.
■ But we may consider three types of computerized processes: low-, mid-, and
high-level processes.
■ Low-level processes involve primitive operations such as image
preprocessing to reduce noise, contrast enhancement, and image sharpening.
■ Here both inputs and outputs are images.
■ Mid-level processing on images involves tasks such as segmentation
(partitioning an image into regions or objects), description of those objects
to reduce them to a form suitable for computer processing, and classification
(recognition) of individual objects.
■ Here, inputs generally are images, but the outputs are attributes extracted
from those images (e.g., edges, contours, and the identity of individual
objects).
Image processing
■ Higher-level processing involves “making sense” of an ensemble of
recognized objects, as in image analysis, and performing the cognitive
functions normally associated with vision.
■ Here we can see that a logical overlap between image processing and image
analysis exists: the recognition of individual regions or objects in an image.
■ Thus, we can say digital image processing includes processes whose inputs
and outputs are images and, also processes that extract attributes from
images, up to and including the recognition of individual objects.
Image processing
■ To clarify these concepts, let us consider the example of optical character recognition (OCR).
■ This involves:
■ Acquiring an image of the area containing the text,
■ Preprocessing that image,
■ Extracting (segmenting) the individual characters,
■ Describing the characters in a form suitable for computer processing, and
■ Recognizing those individual characters; all of these steps are in the scope of DIP.
Image processing
■ Making sense of the content of the page may be seen as a part of image
analysis and even computer vision, depending on the level of complexity
implied by the statement “making sense.”
Fundamental Steps in Digital Image Processing
■ We saw two broad categories of image processing
– methods whose input and output are images, and
– methods whose inputs may be images but whose outputs are attributes
extracted from those images.
■ This organization is summarized in the following figure.
[Figure: Fundamental steps in digital image processing]
Image processing
■ The diagram does not imply that every process is applied to an image.
■ It gives an idea of all the methodologies that can be applied to images for
different purposes and possibly with different objectives.
■ Let us briefly discuss each of these methods.
■ Image acquisition
■ This is the first process in the figure.
■ We will see more methods of image acquisition in subsequent sections.
■ Acquisition could be as simple as being given an image that is already in
digital form.
Image processing
■ Image enhancement:
■ The process of manipulating an image so that the result is more suitable than the
original for a specific application.
■ The word “specific” establishes at the outset that enhancement techniques are
problem oriented.
■ Thus, for example, a method that is quite useful for enhancing X-ray images may
not be the best approach for enhancing satellite images taken in the infrared band
of the electromagnetic spectrum.
■ This process enhances image quality: as an output of image enhancement,
certain parts of the image are highlighted, while unnecessary parts are
removed or blurred.
■ Image restoration:
■ This also deals with improving the appearance of an image, but it is objective;
■ that is, restoration techniques are mostly based on mathematical or
probabilistic models of image degradation.
Image processing
■ Enhancement, in contrast, is based on human subjective preferences regarding
what constitutes a “good” enhancement result.
■ Color image processing
■ An area that has been gaining in importance because of the significant
increase in the use of digital images over the Internet.
■ Wavelets:
■ These are the foundation for representing images in various degrees of
resolution.
■ In particular, wavelets are used for image data compression and for pyramidal
representation, in which images are subdivided successively into smaller
regions.
Image processing
■ Compression:
■ Deals with techniques for reducing the storage required to save an image, or
the bandwidth required to transmit it.
■ Although storage technology has improved over the years, the same cannot
be said for transmission capacity.
■ Image compression is familiar to most users of computers in the form of
image file extensions, such as the jpg file extension used in the JPEG
(Joint Photographic Experts Group) image compression standard.
Image processing
■ Morphological processing
■ deals with tools for extracting image components that are useful in the
representation and description of shape.
■ Segmentation procedures partition an image into its constituent parts or
objects.
■ Representation and description -usually follows the output of a segmentation
stage, which usually is raw pixel data, constituting either the boundary of a
region (i.e., the set of pixels separating one image region from another) or
all the points in the region itself.
■ In either case, the data has to be converted to a form suitable for computer
processing
Image processing
■ Recognition is the process that assigns a label (e.g., “vehicle”) to an object
based on its descriptors.
■ This is the last stage of digital image processing, where the methods for
recognition of individual objects are developed.
■ Knowledge about a problem domain is coded into an image processing
system in the form of a knowledge base.
■ This knowledge may be as simple as detailing regions of an image where the
information of interest is known to be located, thus limiting the search that
has to be conducted in seeking that information.
Image processing
■ The knowledge base also can be quite complex, such as an interrelated list
of all major possible defects in a materials inspection problem or an image
database containing high-resolution satellite images of a region in
connection with change-detection applications.
■ In addition to guiding the operation of each processing module, the
knowledge base also controls the interaction between modules.
■ This distinction is shown in the figure by the use of double-headed arrows
between the processing modules and the knowledge base, as opposed to
single-headed arrows linking the processing modules.
[Figure: Components of a general-purpose image processing system]
Components of an Image Processing System
■ Image sensing:
■ Two elements are required for acquiring a digital image:
– A physical device that is sensitive to the energy radiated by the object we
wish to image.
– The second, called a digitizer, is a device for converting the output of the
physical sensing device into digital form
■ E.g.:
■ In a digital video camera, the sensors produce an electrical output
proportional to light intensity.
■ The digitizer converts these outputs to digital data.
Components of an Image Processing System
■ Specialized image processing hardware:
■ Consists of the digitizer plus hardware that performs other primitive
operations, such as an ALU, that performs arithmetic and logical operations
in parallel on entire images.
■ This type of hardware is also called a front-end subsystem, and its most
distinguishing characteristic is speed.
■ This unit performs functions that require fast data throughputs that the
typical main computer cannot handle.
■ The computer
■ This can be a general-purpose computer ranging from a PC to a
supercomputer.
■ In general-purpose image processing systems, any well-equipped PC-type
machine is suitable for off-line image processing tasks.
Components of an Image Processing System
■ Image processing Software:
■ consists of specialized modules that perform specific tasks.
■ A well-designed package enables the user to write a minimum of code that
utilizes the available specialized modules.
■ More sophisticated software packages allow the integration of those
modules and general-purpose software commands from at least one
computer language.
Components of an Image Processing System
■ Mass storage:
■ Is a must in image processing applications.
■ Storage is measured in bytes, Kbytes, Mbytes, Gbytes, and Tbytes.
■ An image of size 1024 X 1024 pixels, in which the intensity of each pixel is
an 8-bit quantity, requires 1 MB of storage space if the image is not
compressed (1024 × 1024 pixels × 1 byte per pixel = 2^20 bytes = 1 MB).
■ When dealing with a large number of images, providing adequate storage in
an image processing system can be a challenge.
■ Digital storage for image processing applications falls into three categories:
– Short-term storage for use during processing,
– On-line storage for relatively fast recall, and
– Archival storage, characterized by infrequent access.
Components of an Image Processing System
■ One method of providing Short-term storage is computer memory.
■ Another is by specialized boards, called frame buffers, that store one or
more images and can be accessed rapidly, usually at video rates (e.g., at 30
complete images per second).
■ Frame buffers are located in the specialized image processing hardware unit
(as shown in Fig.)
■ On-line storage generally takes the form of magnetic disks or optical-media
storage.
■ Finally, archival storage is characterized by massive storage requirements
but infrequent need for access.
■ Magnetic tapes and optical disks housed in “jukeboxes” are the usual media
for archival applications.
Components of an Image Processing System
■ Image displays used today are mainly color TV monitors.
■ Monitors are driven by the outputs of image and graphics display cards that
are an integral part of the computer system.
■ In some cases, it is necessary to have stereo displays, and these are
implemented in the form of headgear containing two small displays
embedded in goggles worn by the user.
■ Hardcopy devices are used for recording images.
■ E.g.: laser printers, film cameras, heat-sensitive devices, inkjet units, and
digital units, such as optical and CD-ROM disks.
Components of an Image Processing System
■ Networking is a default function in any computer system in use today.
■ Because of the large amount of data inherent in image processing
applications, the main issue is the bandwidth available for image transmission.
■ In dedicated networks, this is not a problem, but communications with
remote sites via the Internet are not always as efficient.
■ Fortunately, this situation is improving quickly as a result of optical fiber
and other broadband technologies
Sampling and Quantization
■ Before proceeding with this step, we must have acquired the image of
interest using any of the available methods.
■ The output of most sensors is a continuous voltage waveform whose
amplitude and spatial behavior are related to the physical phenomenon being
sensed.
■ To create a digital image, we need to convert the continuous sensed data into
digital form.
■ This involves two processes: sampling and quantization.
Sampling and Quantization
■ Basic Concepts
■ An image may be continuous with respect to the x- and y-coordinates, and
also in amplitude.
■ Let us say the figure below shows a continuous image f that we want to convert
to digital form.
Sampling and Quantization
■ To convert it to digital form, we have to sample the function in both
coordinates and in amplitude.
■ Digitizing the coordinate values is called sampling.
■ Digitizing the amplitude values is called quantization.
■ The plot of amplitude (intensity level) values of the continuous image f
along the line segment AB is shown below.
■ The random variations are due to image noise
Sampling and Quantization
■ To sample this function, we take equally spaced samples along line AB, as shown
in Fig. below.
■ The samples are shown as small white squares superimposed on the function.
■ The spatial location of each sample is indicated by a vertical tick mark in the
bottom part of the figure.
Sampling and Quantization
■ The set of these discrete locations gives the sampled function.
■ The values of the samples still span (vertically) a continuous range of
intensity values.
■ In order to form a digital function, the intensity values also must be
converted (quantized) into discrete quantities.
■ The right side of Fig. shows the intensity scale divided into eight discrete
intervals, ranging from black to white.
■ The vertical tick marks indicate the specific value assigned to each of the
eight intensity intervals.
■ The continuous intensity levels are quantized by assigning one of the eight
values to each sample.
■ The assignment is made depending on the vertical proximity of a sample to a
vertical tick mark.
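The following sketch illustrates the whole procedure on a hypothetical one-dimensional intensity profile (not the actual scan line from the figure): it takes equally spaced samples and then assigns each one to the nearest of eight levels, exactly the vertical-proximity rule described above.

import numpy as np

# A hypothetical continuous intensity profile along a scan line AB,
# with a little additive noise standing in for image noise.
s = np.linspace(0.0, 1.0, 500)
profile = 0.5 + 0.4 * np.sin(2 * np.pi * s) + 0.02 * np.random.randn(500)

# Sampling: keep equally spaced samples along AB.
samples = profile[::25]                       # every 25th point

# Quantization: snap each sample to the nearest of 8 discrete levels.
levels = np.linspace(0.0, 1.0, 8)
nearest = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
quantized = levels[nearest]
print(quantized)             # samples now take only 8 distinct values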
Sampling and Quantization
■ The digital samples resulting from both sampling and quantization are
shown in Fig.
■ Starting at the top of the image and carrying out this procedure line by line
produces a two-dimensional digital image.
■ It is implied in the figure that, in addition to the number of discrete levels used,
the accuracy achieved in quantization is highly dependent on the noise
content of the sampled signal.
Representing Digital Images
• Let f(s, t) represent a continuous image function of two continuous variables, s
and t.
• We convert this function into a digital image by sampling and quantization.
• Suppose that we sample the continuous image into a 2-D array, f(x, y), containing
M rows and N columns, where (x, y) are discrete coordinates.
• We use integer values for these discrete coordinates: x = 0, 1, 2, …, M-1 and
y = 0, 1, …, N-1.
• Thus, the value of the digital image at the origin is f(0, 0), and the next coordinate
value along the first row is f(0, 1).
• Notation (0, 1) implies the second sample along the first row.
• In general, the value of the image at any coordinates (x, y) is denoted f(x,y), where
x and y are integers.
Representing Digital Images
• Three basic ways to represent f(x, y):
• First method is a plot of the function f(x, y), with two axes determining spatial
location and the third axis being the values of f (intensities) as a function of the
two spatial variables x and y as shown below
• This representation is useful when working with gray-scale sets whose elements
are expressed as triplets of the form (x, y, z), where x and y are spatial coordinates
and z is the value of f at coordinates (x, y)
Representing Digital Images
• Second method is as shown below
• This is much more common.
• Here, the intensity of each point is proportional to the value of f at that point.
• In this figure, there are only three equally spaced intensity values.
• If the intensity is normalized to the interval [0, 1], then each point in the image has
the value 0, 0.5, or 1.
• A monitor or printer simply converts these three values to black, gray, or white,
respectively
Representing Digital Images
• The third representation is to display the numerical values of f(x, y) as an array
(matrix).
• In this example, f is of size 600 X 600 elements, or 360,000 numbers.
• The last two representations are the most useful.
• Image displays allow us to view results at a glance, and numerical arrays are used
for processing and algorithm development.
Sampling and Quantization
• We can write the representation of an M X N numerical array in equation form as

          [ f(0, 0)      f(0, 1)      ...  f(0, N-1)   ]
f(x, y) = [ f(1, 0)      f(1, 1)      ...  f(1, N-1)   ]
          [ ...          ...               ...         ]
          [ f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

• Both sides of this equation are equivalent ways of expressing a digital image.
• The right side is a matrix of real numbers.
• Each element of this matrix is called an image element, picture element, pixel, or
pel.
• We will use the terms image and pixel to denote a digital image and its elements,
respectively.
Sampling and Quantization
• It is also advantageous to use traditional matrix notation to denote a digital image
and its elements: A = [aij], where aij = f(x = i, y = j) = f(i, j).
• We can represent an image as a vector, v.
• For example, a column vector of size MN X 1 is formed by letting the first M
elements of v be the first column of A, the next M elements be the second column,
and so on.
• Alternatively, we can use the rows instead of the columns of A to form such a
vector.
• Either representation is valid, as long as we are consistent.
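A short sketch of the two stacking conventions, assuming NumPy (order="F" stacks the columns, order="C" stacks the rows):

import numpy as np

A = np.arange(6).reshape(2, 3)       # a 2 x 3 image: M = 2, N = 3

# Column vector of size MN x 1 built from the columns of A: the first
# M elements are column 0, the next M are column 1, and so on.
v_cols = A.flatten(order="F").reshape(-1, 1)

# The alternative convention uses the rows of A instead.
v_rows = A.flatten(order="C").reshape(-1, 1)

print(v_cols.ravel())    # [0 3 1 4 2 5]  (column-wise)
print(v_rows.ravel())    # [0 1 2 3 4 5]  (row-wise)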
Sampling and Quantization
• In the figure below we see that the origin of a digital image is at the top
left, with the positive x-axis extending downward and the positive y-
axis extending to the right.
• This is a conventional representation based on the fact that many image
displays (e.g., TV monitors) sweep an image starting at the top left and
moving to the right one row at a time.
Sampling and Quantization
• Mathematically, the first element of a matrix is by convention at the top
left of the array, so choosing the origin at that point makes sense.
• We henceforth show the axes pointing downward and to the right,
instead of to the right and up.
• The digitization process requires that decisions be made regarding the
values for M, N, and for the number, L, of discrete intensity levels.
• There are no restrictions placed on M and N, other than they have to be
positive integers.
• Due to storage and quantizing hardware considerations, the number of
intensity levels typically is an integer power of 2: L = 2^k.
Sampling and Quantization
• We assume that the discrete levels are equally spaced and that they are
integers in the interval [0,L-1]
• Sometimes, the range of values spanned by the gray scale is referred to
informally as the dynamic range.
• This is a term used in different ways in different fields.
• We define the dynamic range of an imaging system to be the ratio of the
maximum measurable intensity to the minimum detectable intensity
level in the system.
• As a rule, the upper limit is determined by saturation and the lower limit
by noise
Sampling and Quantization
• Saturation is the highest value beyond which all intensity levels are
clipped
• Noise in this case appears as a grainy texture pattern.
• Noise, especially in the darker regions of an image (e.g., the stem of the
rose) masks the lowest detectable true intensity level
Sampling and Quantization
• Dynamic range of an imaging system is defined as the ratio of the
maximum measurable intensity to the minimum detectable intensity
level in the system.
• As a rule, the upper limit is determined by saturation and the lower limit
by noise
• Dynamic range establishes the lowest and highest intensity levels that a
system can represent and, consequently, that an image can have.
• Closely associated with dynamic range is image contrast, which is defined as
the difference in intensity between the highest and lowest intensity levels in
an image.
Sampling and Quantization
• The number of bits required to store a digitized image is given by
• b = M × N × k
• When M = N, this equation becomes
• b = N² × k
• The table of values this produces for square images with various N and k is
regenerated in the sketch below.
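A small sketch of the formula for a few representative values (the numbers printed for N = 1024, k = 8 match the 1 MB figure quoted earlier):

# b = M * N * k bits; for a square image, b = N^2 * k.
def storage_bits(N: int, k: int) -> int:
    return N * N * k

for N in (32, 128, 512, 1024):
    for k in (1, 4, 8):
        b = storage_bits(N, k)
        print(f"N={N:5d}  k={k}  bits={b:>12,}  bytes={b // 8:>12,}")
# N = 1024, k = 8 gives 8,388,608 bits = 1,048,576 bytes = 1 MB.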
Sampling and Quantization
• An image with 2^k intensity levels is referred to as a “k-bit image.”
• E.g.: an image with 256 possible discrete intensity values is called an
8-bit image.
Sampling and Quantization
• We can see that 8-bit images of size 1024 X 1024 and higher demand
significant storage.
• Spatial resolution and Intensity resolution:
• We know that, pixel is the smallest element of an image and it can store a value
proportional to the light intensity at that particular location.
• Resolution is a common term used with images
• The resolution can be defined in many ways.
• Pixel resolution,
• Spatial resolution,
• Temporal resolution,
• Spectral resolution.
• Let us see pixel resolution first
Sampling and Quantization
• You might have seen, in PC display settings, monitor resolutions such as 800 x 600, 640 x 480, etc.
• In pixel resolution, the term resolution refers to the total number of pixels in a
digital image.
• If an image has M rows and N columns, then its resolution can be defined as M X N.
• If we define resolution as the total number of pixels, then pixel resolution can be defined
with a set of two numbers.
• The first number is the width of the picture (the number of pixel columns), and the second
number is the height of the picture (the number of pixel rows).
• We can say that the higher the pixel resolution, the higher the quality of the image.
• For example, we can state the pixel resolution of an image as 4500 X 5500.
Spatial and Intensity resolution
• Spatial resolution
• Spatial resolution can be defined as the smallest observable detail in an image.
• Can be defined in several ways
• Dots (pixels) per unit distance,
• Line pairs per unit distance
• Spatial resolution alone does not let us compare two different images to see
which one is clear and which one is not.
• If we have to compare two images, to see which one is clearer or which
has more spatial resolution, we have to compare two images of the same size.
Spatial and Intensity resolution
• Both pictures have the same dimensions, 227 X 222 pixels.
• When we compare them, we see that the picture on the left has more
spatial resolution, i.e., it is clearer than the picture on the right,
which is a blurred image.
Spatial and Intensity resolution
• Measuring spatial resolution
• Since spatial resolution refers to clarity, different measures have been
devised for different devices.
• Dots per inch (DPI) - usually used in monitors.
• Lines per inch (LPI) - usually used in laser printers.
• Pixels per inch (PPI) - the measure used for devices such as tablets, mobile
phones, etc.
• Let us see the effects of reducing spatial resolution in an image.
• The images in Figs. (a) through (d) are shown at 1250, 300, 150, and 72 dpi,
respectively.
Spatial and Intensity resolution
• As spatial resolution reduces, image size also reduces.
• Image (a) was of size 3692 X 2812 pixels, while image (d) was of size 213 X 162.
• To compare the images, the smaller images are brought back to the original size by
zooming them.
• Images (a) and (b) show little difference,
• but (c) and (d) show significant degradation.
Spatial and Intensity resolution
• Intensity resolution:
• Refers to the smallest observable change in the intensity level
• It is defined as the number of bits used to quantize the intensity
• We have seen that the number of intensity levels chosen is usually a power of 2.
• The most commonly used number is 8 bits.
• E.g.: an image with intensity quantized to 256 levels is said to have 8-bit intensity
resolution.
• Effects of varying the number of intensity levels in a digital image.
• Consider a CT scan image of size 452 X 374 displayed with k=8, (256 intensity
levels).
Spatial and Intensity resolution
• Following images are obtained by reducing the number of bits from k=7 to k=1
while keeping the image size constant at 452 X 374 pixels.
• The 256-, 128-, and 64-level images are visually identical for all practical
purposes
Spatial and Intensity resolution
• However, the 32-level image has an almost imperceptible set of very
fine ridge-like structures in areas of constant or nearly constant intensity
(particularly in the skull).
• This effect, caused by the use of an insufficient number of intensity
levels in smooth areas of a digital image, is called false contouring.
• False contouring generally is quite visible in images displayed using 16
or fewer uniformly spaced intensity levels, as seen in the following
images.
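Requantizing an 8-bit image to 2^k levels, which is how this false-contouring experiment can be reproduced, is a one-liner; a sketch assuming an 8-bit NumPy array:

import numpy as np

def requantize(img8, k):
    """Reduce an 8-bit image to 2**k equally spaced intensity levels."""
    step = 256 // (2 ** k)           # width of each quantization interval
    return (img8 // step) * step     # floor each pixel to its level

# A hypothetical smooth gradient; for small k it shows false contours.
gradient = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
print(np.unique(requantize(gradient, 5)).size)   # 32 levels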
Representing Digital Images
The results in the previous two examples illustrate the effects produced on image
quality by varying N and k independently.
However, these results only partially answer the question of how varying N and k
affects images, because we have not yet considered any relationships that might
exist between these two parameters.
An early study by Huang [1965] attempted to quantify experimentally the effects
on image quality produced by varying N and k simultaneously.
The experiment consisted of a set of subjective tests on images having very
little detail, moderate detail, and a large amount of detail.
Representing Digital Images
Sets of these three types of images were generated by varying N and k, and observers
were then asked to rank them according to their subjective quality.
Results were summarized in the form of so-called isopreference curves in the Nk-plane
Representing Digital Images
Each point in the Nk-plane represents an image
having values of N and k equal to the coordinates of
that point.
Points lying on an isopreference curve correspond to
images of equal subjective quality.
It was found that the isopreference curves tended to
shift right and upward, but their shapes in each of the
three image categories were similar to those in Fig
Shift up and right in the curves simply means larger
values for N and k, which implies better picture
quality.
Representing Digital Images
We can see that isopreference curves tend to become more vertical as
the detail in the image increases.
This result suggests that for images with a large amount of detail only a
few intensity levels may be needed.
 E.g.: The isopreference curve corresponding to the crowd is nearly
vertical.
This means, for a fixed value of N, the perceived quality for this type
of image is nearly independent of the number of intensity levels used
Some Basic Relationships Between Pixels
 Here we study several important relationships between pixels in a digital image.
 We denoted an image as f(x, y).
 Hereafter to refer to a particular pixel, we use lowercase letters, such as p and q.
 Neighbors of a Pixel
 A pixel p at coordinates (x, y) has four horizontal and vertical neighbors as shown
 Their coordinates are given by (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)
 This set of pixels, called the 4-neighbors of p, is denoted by N4(p)
Some Basic Relationships Between Pixels
 Each pixel is a unit distance from (x, y).
 Some of the neighbor locations of p lie outside the digital image if p is on the border of
the image.
 The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1),
(x-1, y-1).
 These are denoted by ND(p).
 These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by
N8(p).
 Some of the neighbor locations in ND(p) and N8(p) fall outside the image if p is on the
border of the image.
Some Basic Relationships Between Pixels
• Neighbors of a pixel:
• N4 - 4-neighbors
• ND - diagonal neighbors
• N8 - 8-neighbors (N4 ∪ ND)
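A minimal sketch that computes these neighbor sets for a pixel p = (x, y), discarding the locations that fall outside an M X N image as noted above:

def neighbors(x, y, M, N):
    """Return (N4, ND, N8) of pixel (x, y) in an M x N image,
    dropping neighbor locations that lie outside the image."""
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    inside = lambda p: 0 <= p[0] < M and 0 <= p[1] < N
    n4 = [p for p in n4 if inside(p)]
    nd = [p for p in nd if inside(p)]
    return n4, nd, n4 + nd        # N8(p) = N4(p) U ND(p)

# A corner pixel of a 4 x 4 image keeps only 3 of its 8 neighbors.
print(neighbors(0, 0, 4, 4))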
Some Basic Relationships Between Pixels
Adjacency, Connectivity, Regions, and Boundaries
Adjacency
Two pixels are connected if they are neighbors and their gray levels satisfy some
specified criterion of similarity.
For example, in a binary image two pixels are connected if they are 4-neighbors and
have the same value (0 or 1).
Let V be the set of intensity values used to define adjacency.
In a binary image, V= {1}, if we are referring to adjacency of pixels with value 1.
In a gray-scale image, the idea is the same, but set V typically contains more elements.
For example, in the adjacency of pixels with a range of possible intensity values 0 to
255, set V could be any subset of these 256 values.
We consider three types of adjacency:
Some Basic Relationships Between Pixels
 4-adjacency.
 Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p)
 8-adjacency.
 Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p)
 m-adjacency (mixed adjacency).
 Two pixels p and q with values from V are m-adjacent if
 (i) q is in N4(p) or
 (ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
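As a sketch of these rules, here the image is a hypothetical dict mapping (x, y) coordinates to intensities; the example shows two diagonal pixels that are 8-adjacent but not m-adjacent, because they are already linked through a common 4-neighbor whose value is in V:

def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def m_adjacent(img, p, q, V):
    """True if pixels p and q (with values from V) are m-adjacent."""
    if img[p] not in V or img[q] not in V:
        return False
    if q in n4(p):                       # condition (i)
        return True
    if q in nd(p):                       # condition (ii)
        common = n4(p) & n4(q)
        return not any(r in img and img[r] in V for r in common)
    return False

# A 2 x 3 binary patch as {(x, y): value}.
img = {(0, 0): 1, (0, 1): 1, (0, 2): 0,
       (1, 0): 0, (1, 1): 1, (1, 2): 1}
# (0, 1) and (1, 2) are diagonal, but (1, 1), a 4-neighbor of both,
# has value 1 in V, so they are NOT m-adjacent.
print(m_adjacent(img, (0, 1), (1, 2), {1}))   # False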
Some Basic Relationships Between Pixels
 We use the symbols ∩ and ∪ to denote set intersection and union, respectively.
 For any given sets A and B,
 their intersection is the set of elements that are members of both A and B.
 The union of these two sets is the set of elements that are members of A, of B, or of
both.
Some Basic Relationships Between Pixels
• Connectivity
• Consider a set of gray-scale values V = {0, 1, 2}.
• Let p be a pixel such that f(p) ∈ V.
• 4-connectivity:
• Consider a pixel q such that q is an N4-neighbor of p and f(q) ∈ V.
• Then we say q has 4-connectivity with p.
• 8-connectivity:
• Consider a pixel q such that q is an N8-neighbor of p and f(q) ∈ V.
• Then q has 8-connectivity with p.
Some Basic Relationships Between Pixels
• m-connectivity (mixed connectivity):
• Consider two pixels p and q such that f(p) ∈ V and f(q) ∈ V.
• Pixels p and q are said to have m-connectivity if
i. q ∈ N4(p), or
ii. q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
• Examples
Some Basic Relationships Between Pixels
• Path
• A (digital) path (or curve) from pixel p with coordinates (x0, y0) to pixel q with
coordinates (xn, yn) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1),
…, (xn, yn)
• where (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.
• Here n is the length of the path.
• If (x0, y0) = (xn, yn), the path is a closed path.
• We can define 4-, 8-, and m-paths based on the type of adjacency used.
Some Basic Relationships Between Pixels
• Examples on paths.
• Example # 1: Consider the image segment shown in the figure. Compute the lengths
of the shortest-4, shortest-8, and shortest-m paths between pixels p and q, where V = {1, 2}.
4 2 3 2 q
3 3 1 3
2 3 2 2
p 2 1 2 3
Some Basic Relationships Between Pixels
• Let us start finding a 4-path between p and q, as shown below.
• No complete 4-path exists, so the shortest-4 path does not exist.
Some Basic Relationships Between Pixels
• Shortest-8 path.
• Thus, the shortest-8 path length is 4.
Some Basic Relationships Between Pixels
• Shortest-m path
• Thus, the shortest-m path length is 5.
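These shortest 4- and 8-path lengths can be checked with a breadth-first search; the sketch below reproduces the results above (the m-path case would additionally need the mixed-adjacency test from earlier):

from collections import deque

# The 4 x 4 image segment from the example; p = (3, 0), q = (0, 3).
img = [[4, 2, 3, 2],
       [3, 3, 1, 3],
       [2, 3, 2, 2],
       [2, 1, 2, 3]]
V = {1, 2}

N4_STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
N8_STEPS = N4_STEPS + [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def shortest_path(p, q, steps):
    """BFS over pixels whose values are in V; returns length or None."""
    seen, frontier = {p}, deque([(p, 0)])
    while frontier:
        (x, y), n = frontier.popleft()
        if (x, y) == q:
            return n
        for dx, dy in steps:
            r = (x + dx, y + dy)
            if (0 <= r[0] < 4 and 0 <= r[1] < 4
                    and r not in seen and img[r[0]][r[1]] in V):
                seen.add(r)
                frontier.append((r, n + 1))
    return None                        # no path exists

print(shortest_path((3, 0), (0, 3), N4_STEPS))   # None: no 4-path
print(shortest_path((3, 0), (0, 3), N8_STEPS))   # 4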
Some Basic Relationships Between Pixels
• Connected in S
Let S represent a subset of pixels in an image.
Two pixels p with coordinates (x0, y0) and q with coordinates (xn, yn)
are said to be connected in S if there exists a path
(x0, y0), (x1, y1), …, (xn, yn)
where (xi, yi) ∈ S for 0 ≤ i ≤ n.
For every pixel p in S, the set of pixels in S that are connected to p is called a
connected component of S.
If S has only one connected component, then S is called a connected set.
Some Basic Relationships Between Pixels
Region of an image
Let R be a subset of pixels in an image
We call R a region of the image, if R is a connected set
Two regions, Ri and Rj, are said to be adjacent if their union forms a
connected set.
Regions that are not adjacent are said to be disjoint.
We consider 4- and 8- adjacency when referring to regions.
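Connected components (and hence regions) can be extracted with a breadth-first flood fill over the chosen adjacency; a minimal sketch using 4-adjacency on a set S of pixel coordinates:

from collections import deque

def connected_component(S, p):
    """All pixels of S (a set of (x, y) tuples) 4-connected to p."""
    comp, frontier = {p}, deque([p])
    while frontier:
        x, y = frontier.popleft()
        for r in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if r in S and r not in comp:
                comp.add(r)
                frontier.append(r)
    return comp

S = {(0, 0), (0, 1), (1, 1), (3, 3)}       # two separate groups
print(connected_component(S, (0, 0)))      # {(0, 0), (0, 1), (1, 1)}
# S has two connected components, so S is not a connected set.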
Some Basic Relationships Between Pixels
• In the image above, the two regions are adjacent if 8-adjacency is used
(under 4-adjacency they are not adjacent).
Some Basic Relationships Between Pixels
• Foreground and background of image
• Suppose that an image contains K disjoint regions, Rk, k = 1, 2, …, K, with
none of them touching the image border.
• Let Ru denote the union of all the K regions, and let (Ru)c denote its
complement
• The complement of a set S is the set of points that are not in S.
• We call all the points in Ru the foreground, and all the points in (Ru)c the
background of the image.
Some Basic Relationships Between Pixels
• The boundary (border or contour) of a region R
• The boundary of a region R is the set of pixels in the region that have one or more
neighbors that are not in R.
• If R happens to be an entire image, then its boundary is defined as the set of pixels
in the first and last rows and columns in the image.
• This extra definition is required because an image has no neighbors beyond its
borders
• Normally, when we refer to a region, we are referring to a subset of an image, and
any pixels in the boundary of the region that happen to coincide with the border of
the image are included implicitly as part of the region boundary.
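This definition translates directly into a sketch: keep the pixels of R that have at least one 4-neighbor outside R.

def boundary(R):
    """Pixels of region R (a set of (x, y) tuples) with one or
    more 4-neighbors not in R."""
    def n4(x, y):
        return ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
    return {p for p in R if any(r not in R for r in n4(*p))}

# A filled 3 x 3 square: the center (1, 1) is interior; the other
# eight pixels form the boundary.
R = {(x, y) for x in range(3) for y in range(3)}
print(sorted(boundary(R)))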
Some Basic Relationships Between Pixels
• Distance Measures
• Consider three pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively
• The distance between any two points is computed with a distance metric (or
distance measure),
• which must satisfy the following properties:
• (a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q,
• (b) D(p, q) = D(q, p), and
• (c) D(p, z) ≤ D(p, q) + D(q, z).
• The Euclidean distance between p and q is defined as
• De(p, q) = [(x − s)² + (y − t)²]^(1/2)
Some Basic Relationships Between Pixels
• City-block distance
• The D4 distance (also called city-block distance) between p and q is defined as:
D4 (p,q) = | x – s | + | y – t |
• Example:
• The pixels with distance D4 ≤ 2 from (x, y) form the following contours of constant
distance:
        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2
• The pixels with D4 = 1 are the 4-neighbors of (x, y).
Some Basic Relationships Between Pixels
• Pixels having a D4 distance from (x, y) less than or equal to some value r form a diamond
centered at (x, y).
• In case of Euclidean Distance, Pixels having a distance less than or equal to some value r
from (x,y) are the points contained in a disk of radius r centered at (x,y)
Some Basic Relationships Between Pixels
• Distance Measures.
• Chessboard Distance (D8 distance):
• The D8 distance between p,q is defined as D8(p, q) = max(|x-s|, |y-t|)
• In this case, pixels having D8 distance from (x, y) less than or equal to
some value r form a square centered at ( x, y).
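A side-by-side sketch of the three metrics (pure Python, no extra libraries):

import math

def d_e(p, q):    # Euclidean distance
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):    # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):    # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d_4(p, q), d_8(p, q))    # 5.0  7  4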
Some Basic Relationships Between Pixels
• Applications of Image Processing:
• Medical imaging,
• Robot vision,
• Character recognition,
• Remote Sensing
10/28/2020 14

DIGITAL image processing for 6th sem students

  • 1.
  • 2.
    DIGITAL IMAGE PROCESSING SubjectCode 15CS753 IA Marks 20 Number of Lecture Hours/Week 3 Exam Marks 80 Total Number of Lecture Hours 40 Exam Hours 03 CREDITS – 03 10/28/2020 2
  • 3.
    DIGITAL IMAGE PROCESSING ■Text Books: ■ 1. Rafael C G., Woods R E. and Eddins S L, Digital Image Processing, Prentice Hall, 3rd Edition, 2008. ■ Reference Books: 1. Milan Sonka, “Image Processing, analysis and Machine Vision”, Thomson Press India Ltd, Fourth Edition. 2. Fundamentals of Digital Image Processing- Anil K. Jain, 2nd Edition, Prentice Hall of India. 3. S. Sridhar, Digital Image Processing, Oxford University Press, 2nd Ed, 2016. 10/28/2020 3
  • 4.
    DIGITAL IMAGE PROCESSING ■Module – 1 8 Hours ■ Introduction Fundamental Steps in Digital Image Processing, ■ Components of an Image Processing System, ■ Sampling and Quantization, ■ Representing Digital Images (Data structure), ■ Some Basic Relationships Between Pixels- Neighbors and Connectivity of pixels in image, ■ Applications of Image Processing: – Medical imaging, – Robot vision, – Character recognition, – Remote Sensing 10/28/2020 4
  • 5.
    Image processing ■ Signalprocessing is a discipline that deals with analysis and processing of analog and digital signals and deals with storing , filtering , and other operations on signals. ■ These signals include transmission signals , sound or voice signals , image signals , and other signals. ■ The field that deals with the type of signals, for which the input is an image and the output is also an image is image processing. ■ As name suggests, it deals with the processing on images. 10/28/2020 5
  • 6.
    Image processing ■ Whatis an Image ?? ■ An image may be defined as a two-dimensional function, f(x,y), – where x and y are spatial (plane) coordinates, and – the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. ■ When x, y, and the intensity values of f are all finite, discrete quantities, we call the image a digital image. ■ The field of digital image processing refers to processing digital images by means of a digital computer. ■ A digital image is composed of a finite number of elements, each of which has a particular location and value. ■ These elements are called picture elements, or pixels. 10/28/2020 6
  • 7.
  • 8.
    Image processing ■ Imagesplay the most important role in human perception. ■ However, we human beings, are limited to the visual band of the electromagnetic (EM) spectrum, but imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. ■ They can operate on images generated by sources that humans are not accustomed to associating with images. ■ These include ultrasound, electron microscopy, and computer-generated images. ■ Thus, digital image processing covers a wide and varied field of applications 10/28/2020 1
  • 9.
    Image processing ■ Thereis no fixed boundary regarding where image processing stops and other related areas, such as image analysis and computer vision, start. ■ Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. ■ But this will be a limiting and artificial boundary. – Even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. ■ On the other hand, the fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. 10/28/2020 2
  • 10.
    Image processing ■ Thereare no clear-cut boundaries image processing and computer vision. ■ But we may consider three types of computerized processes low-, mid-, and high-level processes. ■ Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. ■ Here both inputs and outputs are images. ■ Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. ■ In this, inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). 10/28/2020 3
  • 11.
    Image processing ■ Higher-levelprocessing involves “making sense” of an ensemble of recognized objects, as in image analysis, and performing the cognitive functions normally associated with vision. ■ Here we can see that a logical overlap between image processing and image analysis exits and that is the area of recognition of individual regions or objects in an image. ■ Thus, we can say digital image processing includes processes whose inputs and outputs are images and, also processes that extract attributes from images, up to and including the recognition of individual objects 10/28/2020 4
  • 12.
    Image processing ■ Toclarify these concepts, Lets see example of OCR. ■ This involves ■ The processes of acquiring an image of the area containing the text, ■ Preprocessing that image, ■ Extracting (segmenting) the individual characters, ■ Describing the characters in a form suitable for computer processing, and ■ Recognizing those individual characters are in the scope of DIP 10/28/2020 5
  • 13.
    Image processing ■ Makingsense of the content of the page may be seen as a part of image analysis and even computer vision, depending on the level of complexity implied by the statement “making sense.” 10/28/2020 6
  • 14.
    Fundamental Steps inDigital Image Processing ■ We saw two broad categories of image processing – methods whose input and output are images, and – methods whose inputs may be images but whose outputs are attributes extracted from those images. ■ This organization is summarized in following Fig. 10/28/2020 7
  • 15.
  • 16.
    Image processing ■ Thediagram does not imply that every process is applied to an image. ■ It gives an idea of all the methodologies that can be applied to images for different purposes and possibly with different objectives. ■ Let us briefly discuss the overview of each of these methods ■ Image acquisition ■ The first process in Fig. ■ We see more methods of image acquisition in subsequent sections ■ Acquisition could be as simple as being given an image that is already in digital form. 10/28/2020 9
  • 17.
    Image processing ■ Imageenhancement: ■ The process of manipulating an image so that the result is more suitable than the original for a specific application. ■ The word “specific” establishes at the outset that enhancement techniques are problem oriented. ■ Thus, for example, a method that is quite useful for enhancing X-ray images may not be the best approach for enhancing satellite images taken in the infrared band of the electromagnetic spectrum. ■ This process enhances the quality. ■ As an output of image enhancement, certain parts of the image will be highlighted while it will remove or blur the unnecessary parts of the image. ■ Image restoration: ■ This also deals with improving the appearance of an image but it is objective, ■ That is, restoration techniques mostly based on mathematical or probabilistic models of image degradation. 10/28/2020 1
  • 18.
    Image processing ■ ButEnhancement, is based on human subjective preferences regarding what constitutes a “good” enhancement result. ■ Color image processing ■ An area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. ■ Wavelets: ■ These are the foundation for representing images in various degrees of resolution. ■ In particular, this is used for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions 10/28/2020 2
  • 19.
    Image processing ■ Compression: ■Deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. ■ Although storage technology has improved over the years, the transmission capacity might not have been improved. ■ Image compression is familiar to most users of computers in the form of image file extensions, such as the jpg file extension used in the ■ JPEG (Joint Photographic Experts Group) image compression standard. 10/28/2020 3
  • 20.
    Image processing ■ Morphologicalprocessing ■ deals with tools for extracting image components that are useful in the representation and description of shape. ■ Segmentation procedures partition an image into its constituent parts or objects. ■ Representation and description -usually follows the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. ■ In either case, the data has to be converted to a form suitable for computer processing 10/28/2020 4
  • 21.
    Image processing ■ Recognitionis the process that assigns a label (e.g., “vehicle”) to an object based on its descriptors. ■ This is the last stage of digital image processing where the methods for recognition of individual objects is developed ■ Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. ■ This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. 10/28/2020 5
  • 22.
    Image processing ■ :Theknowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem or an image database containing high-resolution satellite images of a region in connection with change-detection applications. ■ In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. ■ This distinction is made in Fig by means of use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules. 10/28/2020 6
  • 23.
    Components of anImage Processing System 10/28/2020 7
  • 24.
    Components of anImage Processing System ■ Image sensing: ■ Two elements required for acquiring digital image: – A physical device that is sensitive to the energy radiated by the object we wish to image. – The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form ■ E.g.: ■ In a digital video camera, the sensors produce an electrical output proportional to light intensity. ■ The digitizer converts these outputs to digital data. 10/28/2020 8
  • 25.
    Components of anImage Processing System ■ Specialized image processing hardware: ■ Consists of the digitizer plus hardware that performs other primitive operations, such as an ALU, that performs arithmetic and logical operations in parallel on entire images. ■ This type of hardware is also called a front-end subsystem, and its most distinguishing characteristic is speed. ■ This unit performs functions that require fast data throughputs that the typical main computer cannot handle. ■ The computer ■ This could be is a general-purpose computer ranging from a PC to a supercomputer. ■ For any general-purpose image processing systems, any well-equipped PC- type machine is suitable for off-line image processing tasks. 10/28/2020 9
  • 26.
    Components of anImage Processing System ■ Image processing Software: ■ consists of specialized modules that perform specific tasks. ■ A well-designed package facilitates the user to write minimum code that, utilizes the available specialized modules. ■ More sophisticated software packages allow the integration of those modules and general-purpose software commands from at least one computer language. 10/28/2020 10
  • 27.
    Components of anImage Processing System ■ Mass storage: ■ Is a must in image processing applications. ■ W.k.t. Storage is measured in bytes, Kbytes, Mbytes, Gbytes and Tbytes ■ An image of size 1024 X 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires 1MB of storage space if the image is not compressed. ■ When dealing with large number of images, providing adequate storage in an image processing system can be a challenge. ■ Digital storage for image processing applications are of three categories: – Short-term storage for use during processing, – On-line storage for relatively fast recall, and – Archival storage, characterized by infrequent access. 10/28/2020 11
  • 28.
    Components of anImage Processing System ■ One method of providing Short-term storage is computer memory. ■ Another is by specialized boards, called frame buffers, that store one or more images and can be accessed rapidly, usually at video rates (e.g., at 30 complete images per second). ■ Frame buffers are located in the specialized image processing hardware unit (as shown in Fig.) ■ On-line storage generally takes the form of magnetic disks or optical-media storage. ■ Finally, archival storage is characterized by massive storage requirements but infrequent need for access. ■ Magnetic tapes and optical disks housed in “jukeboxes” are the usual media for archival applications. 10/28/2020 12
  • 29.
    Components of anImage Processing System ■ Image displays used today are mainly color TV monitors. ■ Monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system. ■ In some cases, it is necessary to have stereo displays, and these are implemented in the form of headgear containing two small displays embedded in goggles worn by the user. ■ Hardcopy devices are used for recording images ■ E.g.: laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units, such as optical and CDROM disks. 10/28/2020 13
  • 30.
    Components of anImage Processing System ■ Networking is a default function in any computer system in use today. ■ Because of the large amount of data inherent in image processing applications, main issue is the bandwidth for image transmission ■ In dedicated networks, this is not a problem, but communications with remote sites via the Internet are not always as efficient. ■ Fortunately, this situation is improving quickly as a result of optical fiber and other broadband technologies 10/28/2020 14
  • 31.
Sampling and Quantization
■ Before proceeding with this step, we must have acquired the image of interest using any of the available methods.
■ The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior are related to the physical phenomenon being sensed.
■ To create a digital image, we need to convert the continuous sensed data into digital form.
■ This involves two processes: sampling and quantization.

Sampling and Quantization
■ Basic concepts
■ An image may be continuous with respect to the x- and y-coordinates, and also in amplitude.
■ Suppose the figure below shows a continuous image f that we want to convert to digital form.

Sampling and Quantization
■ To convert it to digital form, we have to sample the function in both coordinates and in amplitude.
■ Digitizing the coordinate values is called sampling.
■ Digitizing the amplitude values is called quantization.
■ The plot of amplitude (intensity level) values of the continuous image f along the line segment AB is shown below.
■ The random variations are due to image noise.

Sampling and Quantization
■ To sample this function, we take equally spaced samples along line AB, as shown in the figure below.
■ The samples are shown as small white squares superimposed on the function.
■ The spatial location of each sample is indicated by a vertical tick mark in the bottom part of the figure.

Sampling and Quantization
■ The set of these discrete locations gives the sampled function.
■ The values of the samples still span (vertically) a continuous range of intensity values.
■ To form a digital function, the intensity values also must be converted (quantized) into discrete quantities.
■ The right side of the figure shows the intensity scale divided into eight discrete intervals, ranging from black to white.
■ The vertical tick marks indicate the specific value assigned to each of the eight intensity intervals.
■ The continuous intensity levels are quantized by assigning one of the eight values to each sample, depending on the vertical proximity of the sample to a tick mark.

Sampling and Quantization
■ The digital samples resulting from both sampling and quantization are shown in the figure.
■ Starting at the top of the image and carrying out this procedure line by line produces a two-dimensional digital image.
■ The figure also implies that, in addition to the number of discrete levels used, the accuracy achieved in quantization depends strongly on the noise content of the sampled signal.
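A minimal sketch of the two steps applied to a single scan line, assuming NumPy is available; the sine-plus-noise signal below is a made-up stand-in for the intensity profile along AB, not the book's actual image:

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_levels = 32, 8                        # sample count; L = 8 intensity intervals
    t = np.linspace(0.0, 1.0, n_samples)               # sampling: equally spaced locations along AB
    signal = 0.5 + 0.4 * np.sin(2 * np.pi * t)         # "continuous" amplitude...
    signal += 0.05 * rng.standard_normal(n_samples)    # ...plus image noise
    signal = np.clip(signal, 0.0, 1.0)
    levels = np.round(signal * (n_levels - 1)).astype(int)  # quantization: nearest of the 8 levels
    print(levels)                                      # the digital (sampled + quantized) scan line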
Representing Digital Images
■ Let f(s, t) represent a continuous image function of two continuous variables, s and t.
■ We convert this function into a digital image by sampling and quantization.
■ Suppose we sample the continuous image into a 2-D array, f(x, y), containing M rows and N columns, where (x, y) are discrete coordinates.
■ We use integer values for these discrete coordinates: x = 0, 1, 2, ..., M-1 and y = 0, 1, ..., N-1.
■ Thus, the value of the digital image at the origin is f(0, 0), and the next coordinate value along the first row is f(0, 1).
■ The notation (0, 1) denotes the second sample along the first row.
■ In general, the value of the image at any coordinates (x, y) is denoted f(x, y), where x and y are integers.
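A minimal sketch of this indexing convention, assuming NumPy; the 4 X 5 array is a purely illustrative toy image:

    import numpy as np

    M, N = 4, 5
    f = np.arange(M * N).reshape(M, N)   # toy M x N "image"
    print(f[0, 0])    # value at the origin, f(0, 0)
    print(f[0, 1])    # second sample along the first row, f(0, 1)
    print(f.shape)    # (M, N) = (4, 5)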
Representing Digital Images
■ There are three basic ways to represent f(x, y).
■ The first method is a plot of the function, with two axes determining spatial location and the third axis being the values of f (intensities) as a function of the two spatial variables x and y, as shown below.
■ This representation is useful when working with gray-scale sets whose elements are expressed as triplets of the form (x, y, z), where x and y are spatial coordinates and z is the value of f at coordinates (x, y).

Representing Digital Images
■ The second method, shown below, is much more common.
■ Here, the intensity of each point is proportional to the value of f at that point.
■ In this figure there are only three equally spaced intensity values.
■ If the intensity is normalized to the interval [0, 1], then each point in the image has the value 0, 0.5, or 1.
■ A monitor or printer simply converts these three values to black, gray, or white, respectively.

Representing Digital Images
■ The third representation is to display the numerical values of f(x, y) as an array (matrix).
■ In this example, f is of size 600 X 600 elements, or 360,000 numbers.
■ The last two representations are the most useful: image displays allow us to view results at a glance, and numerical arrays are used for processing and algorithm development.
Sampling and Quantization
■ We can write the representation of an M X N numerical array in equation form as

      f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
                  f(1, 0)      f(1, 1)      ...  f(1, N-1)
                  ...
                  f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

■ Both sides of this equation are equivalent ways of expressing a digital image.
■ The right side is a matrix of real numbers.
■ Each element of this matrix is called an image element, picture element, pixel, or pel.
■ We use the terms image and pixel to denote a digital image and its elements, respectively.

Sampling and Quantization
■ It is also advantageous to use traditional matrix notation to denote a digital image and its elements: A = [a_ij], where a_ij = f(x = i, y = j) = f(i, j).
■ We can also represent an image as a vector, v.
■ For example, a column vector of size MN X 1 is formed by letting the first M elements of v be the first column of A, the next M elements be the second column, and so on.
■ Alternatively, we can use the rows instead of the columns of A to form such a vector.
■ Either representation is valid, as long as we are consistent.
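A minimal sketch of the column-stacked vector described above, assuming NumPy (column stacking corresponds to Fortran, or "F", order):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])                    # M = 2 rows, N = 3 columns
    v_cols = A.flatten(order="F")                # first M elements = first column: [1 4 2 5 3 6]
    v_rows = A.flatten(order="C")                # row-stacked alternative: [1 2 3 4 5 6]
    A_back = v_cols.reshape(A.shape, order="F")  # consistent use recovers A exactly
    print(v_cols, v_rows, np.array_equal(A, A_back))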
  • 50.
    Sampling and Quantization •In the figure below we see that the origin of a digital image is at the top left, with the positive x-axis extending downward and the positive y- axis extending to the right. • This is a conventional representation based on the fact that many image displays (e.g., TV monitors) sweep an image starting at the top left and moving to the right one row at a time. 10/28/2020 4
  • 51.
    Sampling and Quantization •Mathematically, the first element of a matrix is by convention at the top left of the array, so choosing the origin of at that point makes sense. • We henceforth show the axes pointing downward and to the right, instead of to the right and up. • The digitization process requires that decisions be made regarding the values for M, N, and for the number, L, of discrete intensity levels. • There are no restrictions placed on M and N, other than they have to be positive integers. • Due to storage and quantizing hardware considerations, the number of intensity levels typically is an integer power of 2, L = 2k 10/28/2020 5
  • 52.
    Sampling and Quantization •We assume that the discrete levels are equally spaced and that they are integers in the interval [0,L-1] • Sometimes, the range of values spanned by the gray scale is referred to informally as the dynamic range. • This is a term used in different ways in different fields. • We define the dynamic range of an imaging system to be the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. • As a rule, the upper limit is determined by saturation and the lower limit by noise 10/28/2020 6
  • 53.
    Sampling and Quantization •Saturation is the highest value beyond which all intensity levels are clipped • Noise in this case appears as a grainy texture pattern. • Noise, especially in the darker regions of an image (e.g., the stem of the rose) masks the lowest detectable true intensity level 10/28/2020 7
  • 54.
  • 55.
    Sampling and Quantization •Dynamic range of an imaging system is defined as the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. • As a rule, the upper limit is determined by saturation and the lower limit by noise • Dynamic range establishes the lowest and highest intensity levels that a system can represent and, consequently, that an image can have. • Image contrast is closely associated with this, which is defined as the difference in intensity between the highest and lowest intensity levels in an image. 10/28/2020 9
  • 56.
    Sampling and Quantization •The number of bits required to store a digitized image is given by • b = M *N * k • When M = N this equation becomes • b = N2*k • Following table shows the number of bits required to store square images with various values of N and k. 10/28/2020 10
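A quick sketch that reproduces such a table in plain Python (the ranges of N and k are typical textbook values, not fixed by the slide):

    # Bits to store a square N x N image with k bits per pixel: b = N^2 * k.
    for N in (32, 64, 128, 256, 512, 1024):
        bits = [N * N * k for k in range(1, 9)]   # b for k = 1..8
        print(N, bits)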
Sampling and Quantization
■ An image with 2^k intensity levels is referred to as a “k-bit image.”
■ E.g., an image with 256 possible discrete intensity values is called an 8-bit image.

Sampling and Quantization
■ We can see that 8-bit images of size 1024 X 1024 and higher have significant storage requirements.
■ Spatial resolution and intensity resolution:
■ We know that a pixel is the smallest element of an image; it stores a value proportional to the light intensity at that particular location.
■ Resolution is a common term used with images, and it can be defined in several ways:
– Pixel resolution,
– Spatial resolution,
– Temporal resolution,
– Spectral resolution.
■ Let us look at pixel resolution first.
Sampling and Quantization
■ You might have seen, in PC settings, monitor resolutions of 800 x 600, 640 x 480, etc.
■ In pixel resolution, the term resolution refers to the total count of pixels in a digital image.
■ If an image has M rows and N columns, its resolution can be defined as M X N.
■ If we define resolution as the total number of pixels, then pixel resolution is given by a pair of numbers: the first is the width of the picture (the number of pixel columns), and the second is the height of the picture (the number of pixel rows).
■ The higher the pixel resolution, the higher the quality of the image.
■ For example, we can give the pixel resolution of an image as 4500 X 5500.
Spatial and Intensity Resolution
■ Spatial resolution
■ Spatial resolution can be defined as the smallest observable detail in an image.
■ It can be quantified in several ways:
– Dots (pixels) per unit distance,
– Line pairs per unit distance.
■ Spatial resolution by itself does not let us compare two different types of images to see which one is clearer.
■ To compare two images for clarity or spatial resolution, we must compare images of the same size.

Spatial and Intensity Resolution
■ Both pictures have the same dimensions, 227 X 222.
■ Comparing them, we see that the picture on the left has more spatial resolution, i.e., it is clearer than the picture on the right, which is blurred.

Spatial and Intensity Resolution
■ Measuring spatial resolution
■ Since spatial resolution refers to clarity, different measures have been devised for different devices:
– Dots per inch (DPI): usually used for monitors.
– Lines per inch (LPI): usually used for laser printers.
– Pixels per inch (PPI): used for devices such as tablets, mobile phones, etc.
■ Let us see the effects of reducing spatial resolution in an image.
■ The images in Figs. (a) through (d) are shown at 1250, 300, 150, and 72 dpi, respectively.
Spatial and Intensity Resolution
■ As spatial resolution is reduced, image size also reduces.
■ Image (a) was of size 3692 X 2812 pixels, while image (d) was 213 X 162.
■ To compare the images, the smaller images are brought back to the original size by zooming.
■ Images (a) and (b) show little difference, but (c) and (d) show significant degradation.
Spatial and Intensity Resolution
■ Intensity resolution:
■ Refers to the smallest observable change in intensity level.
■ It is defined in terms of the number of bits used to quantize the intensity.
■ As we have seen, the number of intensity levels chosen is usually a power of 2; the most common choice is 8 bits.
■ E.g., an image whose intensity is quantized to 256 levels is said to have 8-bit intensity resolution.
■ Effects of varying the number of intensity levels in a digital image:
■ Consider a CT scan image of size 452 X 374 displayed with k = 8 (256 intensity levels).
Spatial and Intensity Resolution
■ [Figure: the 452 X 374 CT image displayed with k = 8, i.e., 256 intensity levels.]

Spatial and Intensity Resolution
■ The following images are obtained by reducing the number of bits from k = 7 to k = 1 while keeping the image size constant at 452 X 374 pixels.
■ The 256-, 128-, and 64-level images are visually identical for all practical purposes.

Spatial and Intensity Resolution
■ However, the 32-level image shows an almost imperceptible set of very fine ridge-like structures in areas of constant or nearly constant intensity (particularly in the skull).
■ This effect, caused by the use of an insufficient number of intensity levels in smooth areas of a digital image, is called false contouring.
■ False contouring generally is quite visible in images displayed using 16 or fewer uniformly spaced intensity levels, as seen in the following images.
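A minimal sketch of this requantization, assuming NumPy and an 8-bit grayscale array; the synthetic ramp below is a stand-in for the CT image and is close to the worst case for false contouring:

    import numpy as np

    def requantize(img, k):
        # Map an 8-bit image onto 2**k uniformly spaced levels (kept in the 8-bit range).
        step = 256 // (2 ** k)
        return (img // step) * step + step // 2   # representative value of each interval

    img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))  # smooth 64 x 256 ramp "image"
    for k in (7, 5, 3, 1):
        print(k, np.unique(requantize(img, k)).size)        # distinct levels: 128, 32, 8, 2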
Spatial and Intensity Resolution
■ [Figure: the same image displayed with 16 or fewer intensity levels, where the false contouring described above is clearly visible.]
Representing Digital Images
■ The results in the previous two examples illustrate the effects produced on image quality by varying N and k independently.
■ However, these results only partially answer the question of how varying N and k affects images, because we have not yet considered any relationships that might exist between the two parameters.
■ An early study by Huang [1965] attempted to quantify experimentally the effects on image quality of varying N and k simultaneously.
■ The experiment consisted of a set of subjective tests on images with very little detail, moderate detail, and a large amount of detail.

Representing Digital Images
■ Sets of these three types of images were generated by varying N and k, and observers were then asked to rank them according to their subjective quality.
■ The results were summarized in the form of so-called isopreference curves in the Nk-plane.

Representing Digital Images
■ Each point in the Nk-plane represents an image having values of N and k equal to the coordinates of that point.
■ Points lying on an isopreference curve correspond to images of equal subjective quality.
■ The isopreference curves tended to shift right and upward, but their shapes in each of the three image categories were similar to those in the figure.
■ A shift up and to the right simply means larger values of N and k, which implies better picture quality.

Representing Digital Images
■ We can see that isopreference curves tend to become more vertical as the detail in the image increases.
■ This suggests that for images with a large amount of detail, only a few intensity levels may be needed.
■ E.g., the isopreference curve corresponding to the crowd image is nearly vertical: for a fixed value of N, the perceived quality of this type of image is nearly independent of the number of intensity levels used.
Some Basic Relationships Between Pixels
■ Here we study several important relationships between pixels in a digital image.
■ We denote an image as f(x, y); to refer to particular pixels, we use lowercase letters such as p and q.
■ Neighbors of a pixel
■ A pixel p at coordinates (x, y) has four horizontal and vertical neighbors, as shown.
■ Their coordinates are (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1).
■ This set of pixels, called the 4-neighbors of p, is denoted by N4(p).

Some Basic Relationships Between Pixels
■ Each of these pixels is a unit distance from (x, y).
■ Some of the neighbor locations of p lie outside the digital image if p is on the border of the image.
■ The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) and are denoted by ND(p).
■ These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p).
■ Some of the neighbor locations in ND(p) and N8(p) fall outside the image if p is on the border of the image.

Some Basic Relationships Between Pixels
■ Neighbors of a pixel:
– N4: 4-neighbors
– ND: diagonal neighbors
– N8: 8-neighbors (N4 ∪ ND)
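A minimal sketch of the three neighbor sets in plain Python; the bounds check handles pixels on the image border, where some neighbors do not exist:

    def n4(x, y, M, N):
        cands = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return [(i, j) for i, j in cands if 0 <= i < M and 0 <= j < N]

    def nd(x, y, M, N):
        cands = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
        return [(i, j) for i, j in cands if 0 <= i < M and 0 <= j < N]

    def n8(x, y, M, N):
        return n4(x, y, M, N) + nd(x, y, M, N)   # N8 = N4 union ND

    print(n4(0, 0, 4, 4))        # border pixel: only 2 of the 4 neighbors exist
    print(len(n8(2, 2, 4, 4)))   # interior pixel: all 8 neighbors exist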
Some Basic Relationships Between Pixels
■ Adjacency, Connectivity, Regions, and Boundaries
■ Adjacency
■ Two pixels are connected if they are neighbors and their gray levels satisfy some specified criterion of similarity.
■ For example, in a binary image two pixels are connected if they are 4-neighbors and have the same value (0/1).
■ Let V be the set of intensity values used to define adjacency.
■ In a binary image, V = {1} if we are referring to adjacency of pixels with value 1.
■ In a gray-scale image the idea is the same, but set V typically contains more elements. For example, for the adjacency of pixels with a range of possible intensity values 0 to 255, set V could be any subset of these 256 values.
■ We consider three types of adjacency:

Some Basic Relationships Between Pixels
■ 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
■ 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
■ m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
– (i) q is in N4(p), or
– (ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Some Basic Relationships Between Pixels
■ We use the symbols ∩ and ∪ to denote set intersection and union, respectively.
■ For any given sets A and B, their intersection is the set of elements that are members of both A and B.
■ The union of the two sets is the set of elements that are members of A, of B, or of both.

Some Basic Relationships Between Pixels
■ Connectivity
■ Consider a set of gray-scale values V = {0, 1, 2}, and let p be a pixel whose intensity satisfies f(p) ∈ V.
■ 4-connectivity: if a pixel q is an N4-neighbor of p and f(q) ∈ V, then q has 4-connectivity with p.
■ 8-connectivity: if a pixel q is an N8-neighbor of p and f(q) ∈ V, then q has 8-connectivity with p.

Some Basic Relationships Between Pixels
■ m-connectivity (mixed connectivity): consider two pixels p and q such that f(p) ∈ V and f(q) ∈ V.
■ Pixels p and q have m-connectivity if
– (i) q ∈ N4(p), or
– (ii) q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
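A hedged sketch of the three adjacency tests, assuming NumPy and a small made-up binary image; the helpers repeat the neighbor definitions so the snippet is self-contained:

    import numpy as np

    def n4(p):
        x, y = p
        return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

    def nd(p):
        x, y = p
        return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

    def adjacent4(img, V, p, q):
        return img[p] in V and img[q] in V and q in n4(p)

    def adjacent8(img, V, p, q):
        return img[p] in V and img[q] in V and q in (n4(p) | nd(p))

    def adjacent_m(img, V, p, q):
        # m-adjacency: 4-adjacent, or diagonal with no shared 4-neighbor whose value is in V.
        if img[p] not in V or img[q] not in V:
            return False
        if q in n4(p):
            return True
        in_img = lambda r: 0 <= r[0] < img.shape[0] and 0 <= r[1] < img.shape[1]
        shared = [r for r in n4(p) & n4(q) if in_img(r) and img[r] in V]
        return q in nd(p) and not shared

    img = np.array([[0, 1, 1],
                    [0, 1, 0],
                    [0, 0, 1]])
    V = {1}
    print(adjacent8(img, V, (0, 1), (1, 1)))   # True: vertical 4-neighbors with values in V
    print(adjacent_m(img, V, (0, 2), (1, 1)))  # False: they share the 4-neighbor (0, 1) in V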
Some Basic Relationships Between Pixels
■ Path
■ A (digital) path (or curve) from pixel p with coordinates (x0, y0) to pixel q with coordinates (xn, yn) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn), where (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.
■ Here n is the length of the path.
■ If (x0, y0) = (xn, yn), the path is a closed path.
■ We can define 4-, 8-, and m-paths based on the type of adjacency used.
Some Basic Relationships Between Pixels
■ Examples on paths
■ Example 1: Consider the image segment shown below. Compute the lengths of the shortest-4, shortest-8, and shortest-m paths between pixels p and q, where V = {1, 2}.

          4   2   3   2 (q)
          3   3   1   3
          2   3   2   2
      (p) 2   1   2   3
Some Basic Relationships Between Pixels
■ Let us start finding the 4-path between p and q as shown below.
■ Every candidate route is blocked, so a shortest-4 path does not exist.

Some Basic Relationships Between Pixels
■ Shortest-8 path: the shortest-8 path has length 4.

Some Basic Relationships Between Pixels
■ Shortest-m path: the shortest-m path has length 5.
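These path lengths can be checked with a breadth-first search, sketched below in plain Python; the grid and V = {1, 2} are taken from the example above, and m-paths are omitted for brevity since they require the m-adjacency test:

    from collections import deque

    grid = [[4, 2, 3, 2],     # q is at (0, 3)
            [3, 3, 1, 3],
            [2, 3, 2, 2],
            [2, 1, 2, 3]]     # p is at (3, 0)
    V = {1, 2}

    def shortest_path(grid, V, p, q, diagonals=False):
        M, N = len(grid), len(grid[0])
        steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        if diagonals:
            steps += [(1, 1), (1, -1), (-1, 1), (-1, -1)]
        seen, frontier = {p}, deque([(p, 0)])
        while frontier:                      # BFS visits pixels in order of path length
            (x, y), d = frontier.popleft()
            if (x, y) == q:
                return d
            for dx, dy in steps:
                nxt = (x + dx, y + dy)
                if (0 <= nxt[0] < M and 0 <= nxt[1] < N
                        and nxt not in seen and grid[nxt[0]][nxt[1]] in V):
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
        return None                          # no path exists

    print(shortest_path(grid, V, (3, 0), (0, 3)))                   # None: no 4-path
    print(shortest_path(grid, V, (3, 0), (0, 3), diagonals=True))   # 4: shortest 8-path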
Some Basic Relationships Between Pixels
■ Connected in S
■ Let S represent a subset of pixels in an image.
■ Two pixels p with coordinates (x0, y0) and q with coordinates (xn, yn) are said to be connected in S if there exists a path (x0, y0), (x1, y1), ..., (xn, yn) where (xi, yi) ∈ S for 0 ≤ i ≤ n.
■ For every pixel p in S, the set of pixels in S that are connected to p is called a connected component of S.
■ If S has only one connected component, then S is called a connected set.
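A minimal sketch of extracting the connected component containing a seed pixel, reusing the same breadth-first idea with 4-adjacency (plain Python; representing S as a set of coordinates is purely for illustration):

    from collections import deque

    def component(S, seed):
        comp, frontier = {seed}, deque([seed])
        while frontier:
            x, y = frontier.popleft()
            for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):  # 4-neighbors
                if nbr in S and nbr not in comp:
                    comp.add(nbr)
                    frontier.append(nbr)
        return comp

    S = {(0, 0), (0, 1), (1, 1), (3, 3), (3, 4)}   # two components under 4-adjacency
    print(component(S, (0, 0)))        # {(0, 0), (0, 1), (1, 1)}
    print(len(component(S, (3, 3))))   # 2, so S is not a connected set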
Some Basic Relationships Between Pixels
■ Region of an image
■ Let R be a subset of pixels in an image.
■ We call R a region of the image if R is a connected set.
■ Two regions, Ri and Rj, are said to be adjacent if their union forms a connected set.
■ Regions that are not adjacent are said to be disjoint.
■ We consider 4- and 8-adjacency when referring to regions.
Some Basic Relationships Between Pixels
■ In the image above, the two regions are adjacent only if 8-adjacency is used.
Some Basic Relationships Between Pixels
■ Foreground and background of an image
■ Suppose an image contains K disjoint regions, Rk, k = 1, 2, ..., K, none of which touches the image border.
■ Let Ru denote the union of all K regions, and let (Ru)^c denote its complement.
■ The complement of a set S is the set of points that are not in S.
■ We call all the points in Ru the foreground, and all the points in (Ru)^c the background, of the image.

Some Basic Relationships Between Pixels
■ The boundary (border or contour) of a region R
■ The boundary of a region R is the set of pixels in the region that have one or more neighbors that are not in R.
■ If R happens to be an entire image, its boundary is defined as the set of pixels in the first and last rows and columns of the image.
■ This extra definition is required because an image has no neighbors beyond its border.
■ Normally, when we refer to a region we mean a subset of an image, and any pixels in the boundary of the region that happen to coincide with the border of the image are included implicitly as part of the region boundary.
Some Basic Relationships Between Pixels
■ Distance measures
■ Consider three pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively.
■ A function D is a distance metric (or distance measure) if it satisfies the following properties:
– D(p, q) ≥ 0, with D(p, q) = 0 iff p = q,
– D(p, q) = D(q, p), and
– D(p, z) ≤ D(p, q) + D(q, z).
■ The Euclidean distance between p and q is defined as De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2).

Some Basic Relationships Between Pixels
■ City-block distance
■ The D4 distance (also called city-block distance) between p and q is defined as D4(p, q) = |x - s| + |y - t|.
■ Example: the pixels with distance D4 ≤ 2 from (x, y) form the following contours of constant distance:

              2
            2 1 2
          2 1 0 1 2
            2 1 2
              2

■ The pixels with D4 = 1 are the 4-neighbors of (x, y).

Some Basic Relationships Between Pixels
■ Pixels having a D4 distance from (x, y) less than or equal to some value r form a diamond centered at (x, y).
■ In the case of the Euclidean distance, pixels having a distance less than or equal to r from (x, y) are the points contained in a disk of radius r centered at (x, y).

Some Basic Relationships Between Pixels
■ Chessboard distance (D8 distance)
■ The D8 distance between p and q is defined as D8(p, q) = max(|x - s|, |y - t|).
■ In this case, pixels having a D8 distance from (x, y) less than or equal to some value r form a square centered at (x, y).
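A minimal sketch of the three distance measures, matching the formulas above (plain Python):

    import math

    def d_euclidean(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])       # De: disk-shaped contours

    def d4(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1])        # city block: diamond contours

    def d8(p, q):
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))    # chessboard: square contours

    p, q = (0, 0), (3, 4)
    print(d_euclidean(p, q), d4(p, q), d8(p, q))          # 5.0 7 4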
Some Basic Relationships Between Pixels
■ Applications of image processing:
– Medical imaging,
– Robot vision,
– Character recognition,
– Remote sensing.