3. 3.1 Review
Geometric transformation vs. intensity transformation (spatial domain)
Geometric transformation: the pixel positions change, but the value at each corresponding position does not.
Intensity transformation: the pixel positions do not change, but the values do.
5. 3.2 Key points and difficulties of this class
Be familiar with the principal techniques used for intensity transformations
Learn the basic log and power-law transformations
Understand how the two transformations are implemented
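As a concrete illustration of the two transformations named above, here is a minimal NumPy sketch (the function names and the small test array are my own, not from the slides):

```python
import numpy as np

def log_transform(img, L=256):
    """Log transform s = c * log(1 + r), with c chosen so that
    r = L-1 maps to s = L-1 (expands dark values, compresses bright)."""
    c = (L - 1) / np.log(L)
    return np.round(c * np.log1p(img.astype(np.float64))).astype(np.uint8)

def gamma_transform(img, gamma, L=256):
    """Power-law transform s = (L-1) * (r/(L-1))**gamma.
    gamma < 1 brightens dark regions; gamma > 1 darkens them."""
    r = img.astype(np.float64) / (L - 1)
    return np.round((L - 1) * r ** gamma).astype(np.uint8)

dark = np.array([[0, 10, 50],
                 [100, 150, 255]], dtype=np.uint8)
print(log_transform(dark))         # dark values pushed up
print(gamma_transform(dark, 0.4))  # gamma < 1 gives a similar brightening
```

Both functions are lookup-style point operations: each output value depends only on the input value at the same position, which is exactly what distinguishes intensity transformations from geometric ones.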
15. Discussion
What are the advantages and disadvantages of these transformations?
Trial and error vs. a principled basis: the intensity distribution (its peaks and valleys)
Discuss the pros and cons of these methods in terms of:
Reasonableness
Degree of automation
Robustness
16. 3.3 Intensity Transformation
Trial and error
A principled basis: the intensity distribution
Discussion
You may ask: to achieve this result I can simply use Photoshop, so what is the point of learning these transformations?
Knowing the transformations lets us achieve effects in Photoshop more quickly and accurately, without a lot of trial and error
We can apply different transformations in different regions
We can apply different transformations to different grayscale ranges
20. 3.2 Histogram Processing
Image gray histogram
No spatial information is involved
The same histogram distribution may correspond to different images
Histograms are additive (the histogram of an image is the sum of the histograms of its parts)
Related to the amount of information in the image
21. Describing an image with its gray histogram
The grayscale of the image is concentrated in the brighter range, with a considerable part close to 1, resulting in overexposure of the image
The pixel distribution in the image is "polarized" (concentrated at the extremes), resulting in the loss of image details
To some extent, the distribution of an image's histogram is related to the quality of the image
3.4 Histogram equalization
22. A "clear" image
The histogram reflects the clarity of the image: when the histogram is evenly distributed, the image looks "clearer"
Histogram equalization:
each gray level should be used by a certain number of pixels
different objects should have distinguishable grayscale variations
3.4 Histogram equalization
24. original image → target image
For a random distribution, transform to a uniform distribution
original histogram → target histogram, gray levels $r, s \in [0, L-1]$, with $s = T(r)$
Before: $p(r_i) \neq p(r_j)$ in general; after: $p(s_i) = p(s_j)$
$P\big(T(r_i) < s < T(r_j)\big) = \int_{r_i}^{r_j} p(r)\,dr = \frac{1}{L-1}\big(T(r_j) - T(r_i)\big)$
if $r_j > r_i$, then $s_j > s_i$ (the transform is monotonic), so for any interval $[a, b]$, $P(a \le s \le b) \propto b - a$
$s = T(r) = (L-1)\int_0^r p(w)\,dw$
3.4 Histogram equalization
25. k indexes the distinct gray values in the original image; p(r_k) is the frequency of value r_k among all pixels of the original image.
Histogram (normalized):
Unique pixel values of f(x,y):  r_1    r_2        …    r_j            …    r_k
Frequency in f(x,y):            p_1    p_2        …    p_j            …    p_k
g(x,y)/(L−1):                   p_1    p_1+p_2    …    p_1+…+p_j      …    p_1+…+p_k
Discrete case: the gray values are quantized, and the quantization level closest to the computed value is taken as the final gray value
$s = T(r) = (L-1)\int_0^r p(w)\,dw \quad\Rightarrow\quad s_k = (L-1)\sum_{j=0}^{k} p(r_j)$
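The discrete procedure in the table above can be sketched in a few lines of NumPy (a minimal illustration, not the slides' own code; the small example image is invented):

```python
import numpy as np

def equalize(img, L=256):
    """Discrete histogram equalization:
    s_k = round((L-1) * sum_{j<=k} p(r_j)),
    where p is the normalized histogram of img."""
    p = np.bincount(img.ravel(), minlength=L) / img.size
    cdf = np.cumsum(p)                              # cumulative histogram
    lut = np.round((L - 1) * cdf).astype(np.uint8)  # mapping r -> s
    return lut[img]                                 # apply s = T(r) per pixel

img = np.array([[0, 0, 1, 1],
                [1, 1, 2, 2],
                [2, 3, 3, 3]], dtype=np.uint8)
print(equalize(img, L=4))  # gray levels spread toward the full range [0, 3]
```

Note that because all pixels with the same value map to the same output value, the resulting histogram is only approximately flat, which is relevant to discussion question 2 in the homework.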
3.4 Histogram equalization
36. 1. After histogram equalization, how does the number of gray levels of the new image change?
2. What are the advantages and disadvantages of gray histogram equalization? (Is human intervention required? Is it reversible? Is it valid in all cases?)
Purpose of Histogram Equalization
Principle of Histogram Equalization
Specific operation of histogram equalization
3.4 Histogram equalization
Summary and Discussion
39. 3.5 Histogram Processing
Comparison of image enhancement methods (transformation function):
Linear stretch:
  simple transformation
  can be transformed back to the original image
  parameters must be set manually
  poor generality
  less information loss
Histogram equalization:
  automated, no parameters required
  unable to restore the original image
  poor generality
42. Some improvement methods
3.5 Histogram Processing
LOCAL HISTOGRAM PROCESSING
Differences and consistency within a local area need to be preserved, but global equalization often destroys them, because values computed globally can differ markedly from values computed locally.
p.150-153
45. Histogram equalization takes any histogram, any pixel distribution, and matches it to one that is as uniform as possible.
3.5 Histogram Processing
𝒇𝒇(𝒙𝒙,𝒚𝒚)
Input
image
𝒈𝒈(𝒙𝒙, 𝒚𝒚)
Target
image
$s = T(r) = (L-1)\int_0^r p_r(u)\,du$
$s = G(z) = (L-1)\int_0^z p_z(v)\,dv$
Histogram matching (specification)
46. 3.5 Histogram Processing
Histogram matching (specification)
Just by doing two histogram equalizations, we can match any two desired distributions:
Step 1: Compute the histogram of the input image r and equalize it, obtaining the mapping s1 = T(r).
Step 2: Compute the histogram of the target image z and equalize it, obtaining the mapping s2 = G(z).
Step 3: For every value of s1, use the stored values of s2 from Step 2 to find the value closest to s1; store these mappings from s1 to z.
Step 4: For every pixel value r_k in the image, replace it with the corresponding z_k' using the mappings found in Step 3, to obtain the histogram-specified image.
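The four steps above can be sketched as follows (a minimal NumPy version; the random test images and function names are my own):

```python
import numpy as np

def equalize_lut(img, L=256):
    """Steps 1-2 helper: the equalization mapping s = round((L-1)*CDF(r))."""
    p = np.bincount(img.ravel(), minlength=L) / img.size
    return np.round((L - 1) * np.cumsum(p)).astype(np.int64)

def match_histogram(src, ref, L=256):
    """Steps 3-4: for each source value r, find the reference value z
    whose equalized value G(z) is closest to T(r)."""
    s1 = equalize_lut(src, L)   # T: equalize the source histogram
    s2 = equalize_lut(ref, L)   # G: equalize the reference histogram
    # for each value of s1, the z minimizing |G(z) - s1|
    lut = np.argmin(np.abs(s2[None, :] - s1[:, None]), axis=1)
    return lut[src].astype(src.dtype)

rng = np.random.default_rng(0)
dark = rng.integers(0, 60, (32, 32)).astype(np.uint8)       # low-key source
bright = rng.integers(180, 256, (32, 32)).astype(np.uint8)  # high-key reference
out = match_histogram(dark, bright)
print(dark.mean(), out.mean())  # the matched image moves toward the bright range
```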
47. The expressions of spatial domain processing
When the neighborhood is of size 1 × 1, the function T can be linear or non-linear; the new gray value can be obtained by transforming the original pixel value alone, or by transforming the neighborhood pixels.
48. 3.6 Fundamentals of Spatial Filtering
If a pixel value in an image is lost (or affected by noise), can we use information from elsewhere to estimate its value?
Globally: it can be approximated by the average of all values in the entire image
Locally: it can be approximated by the average of several nearby pixel values
Either way, the new value of each pixel is related to its location
49. Spatial filtering modifies an image by
replacing the value of each pixel by a
function of the values of the pixel and its
neighbors.
linear spatial filter: $Y = WX + b$
nonlinear spatial filter
3.6 Fundamentals of Spatial Filtering
50. The mechanics of linear spatial filtering
A linear spatial filter performs a sum-of-products operation between an image f and a filter kernel w
kernel:
an array;
its size defines the neighborhood of the operation;
its coefficients determine the nature of the filter;
also called a mask, template, or window
a kind of feature extractor
3.6 Fundamentals of Spatial Filtering
51. The mechanics of linear spatial filtering
The size of the kernel is odd (m = 2a + 1, n = 2b + 1), because we must ensure that the point currently being processed lies at the exact center.
3.6 Fundamentals of Spatial Filtering
52. 3.6 Fundamentals of Spatial Filtering
The mechanics of linear spatial filtering
with box kernels
of sizes 3 × 3,
11 × 11,
and 21 × 21
the larger the neighborhood, the
more pixels we are averaging
53. 3.6 Fundamentals of Spatial Filtering
Spatial correlation and convolution
Correlation consists of moving the center of a kernel over an image and computing the sum of products at each location.
VS
Spatial convolution is the same, except that the kernel is rotated by 180°.
When the values of a kernel are symmetric about its center, correlation and convolution yield the same result.
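The mechanics above can be sketched directly in NumPy (a minimal zero-padded, same-size implementation for illustration; filtering an impulse is the standard way to see the 180° rotation):

```python
import numpy as np

def correlate2d(f, w):
    """'Same'-size spatial correlation with zero padding."""
    m, n = w.shape
    a, b = m // 2, n // 2
    fp = np.pad(f, ((a, a), (b, b)))
    out = np.zeros_like(f, dtype=np.float64)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = np.sum(w * fp[i:i+m, j:j+n])
    return out

def convolve2d(f, w):
    """Convolution = correlation with the kernel rotated by 180 degrees."""
    return correlate2d(f, np.rot90(w, 2))

f = np.zeros((5, 5)); f[2, 2] = 1.0     # a discrete impulse
w = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
# Convolving with an impulse reproduces the kernel;
# correlating reproduces it rotated by 180 degrees.
print(convolve2d(f, w)[1:4, 1:4])
print(correlate2d(f, w)[1:4, 1:4])
```

For a center-symmetric kernel the two calls produce identical output, matching the remark above.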
54. 3.6 Fundamentals of Spatial Filtering
We can define correlation and convolution so that every element of w (instead of just its center) visits every pixel in f. This requires a starting configuration in which the lower-right corner of the kernel coincides with the origin of the image.
With this padding, the resulting full correlation or convolution array has size $S_v \times S_h$, where $S_v = m + M - 1$ and $S_h = n + N - 1$ for an M × N image and an m × n kernel.
55. 3.6 Fundamentals of Spatial Filtering
Spatial correlation and convolution
"Convolving a kernel with an image" is often used to denote the sliding, sum-of-products process.
When an image is filtered (i.e., convolved) sequentially, the multistage filtering can be done in a single filtering operation: the convolution kernels can be combined, and of course they can also be separated.
57. 3.7 Smoothing (Lowpass) Spatial Filters
Average kernel
Because random noise typically consists of sharp transitions in intensity, an obvious application of smoothing is noise reduction.
After smoothing, the difference between each pixel and its surrounding pixels is smaller than in the original, so a smoothing filter can be used to smooth the image and remove some false contours.
BOX FILTER KERNELS
Smoothing is used to reduce irrelevant detail in an image
The kernel should be normalized
with box kernels of sizes 3 × 3, 11 × 11, and 21 × 21
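A normalized box kernel is just a neighborhood mean; a minimal sketch (the 3 × 3 "noisy" test array is invented for illustration):

```python
import numpy as np

def box_filter(f, m):
    """Normalized m x m box (averaging) kernel with zero padding.
    Each output pixel is the mean of its m*m neighborhood."""
    a = m // 2
    fp = np.pad(f.astype(np.float64), a)
    out = np.empty_like(f, dtype=np.float64)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = fp[i:i+m, j:j+m].mean()
    return out

noisy = np.array([[10., 10., 10.],
                  [10., 100., 10.],
                  [10., 10., 10.]])
print(box_filter(noisy, 3))  # the spike at the center is spread out
```

Normalization (dividing by m*m) is what keeps the overall brightness of flat regions unchanged.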
59. 3.7 Smoothing (Lowpass) Spatial Filters
LOWPASS GAUSSIAN FILTER KERNELS
A circularly symmetric (also called isotropic) kernel
Distances from the center for various sizes of square kernels.
60. 3.7 Smoothing (Lowpass) Spatial Filters
LOWPASS GAUSSIAN FILTER KERNELS
K = 1, σ = 1
If all the kernels are Gaussian, we can use the results in the table to compute the standard deviation of the composite kernel (and thus define it), without actually performing the convolution of all the kernels.
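Sampling and normalizing the Gaussian gives the kernel directly (a minimal sketch; the function name is my own):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Isotropic Gaussian kernel G(s,t) = K * exp(-(s^2+t^2)/(2*sigma^2)),
    sampled on a size x size grid and normalized to sum to 1
    (so K drops out after normalization)."""
    ax = np.arange(size) - size // 2
    s, t = np.meshgrid(ax, ax)
    g = np.exp(-(s**2 + t**2) / (2 * sigma**2))
    return g / g.sum()

w = gaussian_kernel(5, 1.0)
print(w.round(4))  # largest at the center, circularly symmetric
```

The composite-kernel remark above rests on the fact that convolving two Gaussians of standard deviations σ1 and σ2 yields a Gaussian of standard deviation sqrt(σ1² + σ2²), so the composite can be defined without performing the convolutions.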
61. 3.7 Smoothing (Lowpass) Spatial Filters
Comparison
Gaussian kernel of size 21 × 21, standard deviation 3.5
Gaussian kernel of size 43 × 43, standard deviation 3.5
box kernels of sizes 11 × 11 and 21 × 21
62. 3.7 Smoothing (Lowpass) Spatial Filters
Comparison
with a box kernel of size 71 × 71
Gaussian kernel of size 151 × 151, with K = 1 and σ = 25
• the box filter produced linear smoothing, with the transition from black to white having the shape of a ramp
• the Gaussian filter yielded significantly smoother results around the edge transitions
63. 3.7 Smoothing (Lowpass) Spatial Filters
Applications
Using lowpass filtering and thresholding for region extraction
2566 × 2758 Hubble Telescope image
Result of lowpass filtering with a Gaussian kernel of size 151 × 151, σ = 25
Result of thresholding the filtered image
64. 3.7 Smoothing (Lowpass) Spatial Filters
Applications
Shading correction using lowpass filtering
Lowpass filtering is a rugged, simple
method for estimating shading patterns
512 × 512 image; Gaussian kernel four times the size of the squares, with K = 1 and σ = 128 (equal to the size of the squares)
65. 3.7 Smoothing (Lowpass) Spatial Filters
Order-statistic (nonlinear) filters
The response is based on ordering (ranking) the pixels contained in the region encompassed by the filter.
Smoothing is achieved by replacing the value of the center pixel with the value determined by the ranking result.
Median filter: replaces the value of the center pixel by the median of the intensity values in the neighborhood of that pixel; it forces points to be more like their neighbors.
66. 3.7 Smoothing (Lowpass) Spatial Filters
Order-statistic (nonlinear) filters: median filter
image corrupted by salt-and-pepper noise
result using a 19 × 19 Gaussian lowpass filter kernel with σ = 3
result using a 7 × 7 median filter
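The median filter's advantage on salt-and-pepper noise can be seen in a few lines (a minimal sketch with an invented test image; edge-replicated padding is my choice, not the slides'):

```python
import numpy as np

def median_filter(f, m):
    """m x m median filter: replace each pixel by the median of its
    neighborhood (edge-replicated padding)."""
    a = m // 2
    fp = np.pad(f, a, mode='edge')
    out = np.empty_like(f)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = np.median(fp[i:i+m, j:j+m])
    return out

# salt-and-pepper noise: isolated extremes are removed outright,
# not merely averaged down as a linear lowpass filter would do
img = np.full((5, 5), 50.0)
img[1, 1], img[3, 3] = 255.0, 0.0
print(median_filter(img, 3))
```

Because ranking discards outliers entirely, the two corrupted pixels vanish without blurring the rest of the image, which is why the 7 × 7 median result above beats the 19 × 19 Gaussian.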
67. 3.8 Sharpening (Highpass) Spatial Filters
Distribution of grayscale changes in the image
Scan line: the gray distribution of the image along the scan line, its first derivative, and its second derivative
69. 3.8 Sharpening (Highpass) Spatial Filters
Image gradient (first derivative)
The gradient of an image f at coordinates (x, y) is defined as the two-dimensional column vector $\nabla f = [\partial f/\partial x,\ \partial f/\partial y]^T$
The magnitude (length) of the vector $\nabla f$ is denoted $M(x, y) = \|\nabla f\|$
70. 3.8 Sharpening (Highpass) Spatial Filters
Image gradient: the derivative operation becomes a difference operation
For discrete images, differentiation can be approximated by differences:
$\|\nabla f\| = \sqrt{\big(f(x,y) - f(x+1,y)\big)^2 + \big(f(x,y) - f(x,y+1)\big)^2}$
Computationally, the squares and square root are approximated by absolute values:
$\|\nabla f\| \approx |f(x,y) - f(x+1,y)| + |f(x,y) - f(x,y+1)|$
The magnitude of the gradient is approximated as the absolute sum of the adjacent pixel differences along the horizontal and vertical axes.
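The absolute-difference approximation above translates directly to NumPy (a minimal sketch; the step-edge test image is invented):

```python
import numpy as np

def gradient_magnitude(f):
    """Approximate ||grad f|| by |f(x,y)-f(x+1,y)| + |f(x,y)-f(x,y+1)|
    using forward differences (the last row/column get difference 0)."""
    f = f.astype(np.float64)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:-1, :] = f[:-1, :] - f[1:, :]   # difference with the pixel below
    gy[:, :-1] = f[:, :-1] - f[:, 1:]   # difference with the pixel to the right
    return np.abs(gx) + np.abs(gy)

# a vertical step edge: the gradient responds only at the transition column
step = np.hstack([np.zeros((4, 3)), np.full((4, 3), 9.0)])
print(gradient_magnitude(step))
```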
71. 3.8 Sharpening (Highpass) Spatial Filters
Image Sharpening
① The pixel value of the new image is replaced directly by the gradient magnitude of the original image
② The output image is produced according to a gradient threshold
72. 3.8 Sharpening (Highpass) Spatial Filters
Image Sharpening using gradient
The edges of the image are enhanced, and some noise is also amplified
73. Roberts operator
3.8 Sharpening (Highpass) Spatial Filters
Image sharpening using the gradient
The Roberts operator sums the differences along the two diagonal (±45°) directions.
Because the area involved in the calculation is small, the edges it produces are weak.
74. 3.8 Sharpening (Highpass) Spatial Filters
Image sharpening using the gradient: 3 × 3 kernels
Maintaining directional consistency in the calculation, a 3 × 3 kernel can be viewed as a superposition of multiple 2 × 2 regions around the current pixel position.
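The Sobel kernels are the standard 3 × 3 gradient operators of this kind; a minimal sketch (the step-edge test image is invented, and only interior pixels are computed):

```python
import numpy as np

# Sobel kernels: 3x3 first-derivative approximations that also
# smooth perpendicular to the derivative direction
SOBEL_X = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(f):
    """|gx| + |gy| using the Sobel kernels (interior pixels only)."""
    f = f.astype(np.float64)
    h, w = f.shape
    out = np.zeros_like(f)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = f[i-1:i+2, j-1:j+2]
            out[i, j] = abs((SOBEL_X * win).sum()) + abs((SOBEL_Y * win).sum())
    return out

step = np.hstack([np.zeros((4, 3)), np.full((4, 3), 1.0)])
print(sobel_magnitude(step))  # strong response straddling the edge
```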
82. 3.8 Sharpening (Highpass) Spatial Filters
Second-order derivative of f(x): a flexible extension of the Laplace operator, e.g. the kernel
 1  -2   1
-2   4  -2
 1  -2   1
Background features can be "recovered" while still preserving the sharpening effect of the Laplacian by adding the Laplacian image to the original: $g(x,y) = f(x,y) + c\,\nabla^2 f(x,y)$, with c = −1.
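A sketch of Laplacian sharpening with c = −1, using the standard 4-neighbor Laplacian kernel (one common variant; the slide shows a different extension, and the step-edge test image is invented):

```python
import numpy as np

# standard 4-neighbor Laplacian kernel (negative center, hence c = -1)
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def sharpen(f, c=-1.0):
    """g(x,y) = f(x,y) + c * lap(x,y): subtracting the Laplacian (c = -1)
    sharpens the edges while the original image keeps the background."""
    f = f.astype(np.float64)
    h, w = f.shape
    lap = np.zeros_like(f)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap[i, j] = (LAPLACIAN * f[i-1:i+2, j-1:j+2]).sum()
    return f + c * lap

step = np.hstack([np.zeros((4, 2)), np.full((4, 2), 10.0)])
print(sharpen(step))  # over/undershoot appears on both sides of the edge
```

On a flat region the Laplacian is zero, so the background passes through unchanged; only the edge transition is exaggerated.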
91. 3.8 Combining Spatial Enhancement Methods
A nuclear whole-body bone scan image
Objective: show more of the skeletal detail
method: enhance the edges
Laplacian of image Sharpened image
92. 3.8 Combining Spatial Enhancement Methods
Objective: show more of the skeletal detail
method: enhance the edges and suppress noise
Sobel gradient of the image; Sobel image smoothed with a 5 × 5 box filter
Mask image formed by the product of (b) and (e).
93. 3.8 Combining Spatial Enhancement Methods
Objective: show more of the skeletal detail method: enhance the edges and suppress noise
Sharpened image obtained by adding images (a) and (f).
95. Homework. Deadline: before 9 April
1. Consider that the maximum value of an image $I_1$ is M and its minimum is m (m ≠ M). Give an intensity transform that maps $I_1$ onto $I_2$ such that the maximal value of $I_2$ is L and the minimal value is:
2. Why does global discrete histogram equalization not, in general, yield a flat (uniform) histogram?
A Because images are in color.
B Because the mathematical derivation of histogram equalization doesn't exist for discrete signals.
C In global histogram equalization, all pixels with the same value are mapped to the same value.
D Actually, global discrete histogram equalization always yields flat histograms by definition.
96. Homework
3. Discrete histogram equalization is an invertible operation, meaning we can recover the original image from the equalized one by inverting the operation, because:
A Actually, histogram equalization is in general non-invertible.
B There is a unique histogram equalization formula per image.
C Pixels with different values are mapped to pixels with different values.
D Images have unique histograms.
4. Given an image with only 3 pixels and 4 possible values for each pixel, determine the number of possible different images and the number of possible different histograms.
97. Homework
5. This is a 6 × 6 grayscale image I(x, y) with 4 gray levels (x = 0, 1, ..., 5; y = 0, 1, ..., 5); the value at each point in the figure represents the gray value of that pixel.
1) Calculate the histogram of the image
2) Using histogram equalization to process this image (write the
process details )
3) Write the new histogram after histogram equalization.
98. Homework
6. Which integer number minimizes
7. Which integer number minimizes
8. Applying a 3×3 averaging filter to an image a large (infinity) number of times is:
A Equivalent to replacing all the pixel values by 0.
B Equivalent to replacing all the pixel values by the average of the values in the
original image.
C The same as applying it a single time.
D The same as applying a median filter.
99. 9. In the original image used to generate the three blurred images shown, the vertical
bars are 5 pixels wide, 100 pixels high, and their separation is 20 pixels. The image was
blurred using square box kernels of sizes 23, 25, and 45 elements on the side,
respectively. The vertical bars on the left, lower part of (a) and (c) are blurred, but a
clear separation exists between them. However, the bars have merged in image (b),
despite the fact that the kernel used to generate this image is much smaller than the
kernel that produced image (c). Explain the reason for this.
Homework