The convolution filter applies a kernel operator to the 2-D image
matrix. The kernel is a small square matrix, typically 3x3, that
scans across the image matrix. The kernel is centered on a pixel,
and each kernel element multiplies its corresponding image pixel.
The sum of these products then determines the value of the pixel
in the destination image that corresponds to the pixel in the
source image at the center of the kernel. For example, an edge
detection kernel can consist of this 3x3 matrix:
     0.0  -1.0   0.0
    -1.0   4.0  -1.0
     0.0  -1.0   0.0
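The multiply-and-sum step can be sketched directly in Java (a standalone illustration of the arithmetic, not part of the ConvolveOp API; the class and method names here are our own):

```java
/** Illustrates the multiply-and-sum step of a 3x3 convolution. */
public class KernelDemo {

    /** Apply a 3x3 kernel centered on pixel (row, col) of src. */
    public static float convolveAt (float[][] kernel,
                                    float[][] src, int row, int col) {
        float sum = 0f;
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                // Kernel element (i,j) multiplies the image pixel
                // offset by (i-1, j-1) from the center.
                sum += kernel[i][j] * src[row + i - 1][col + j - 1];
            }
        }
        return sum;
    }

    public static void main (String[] args) {
        float[][] edge = {
            { 0f, -1f,  0f},
            {-1f,  4f, -1f},
            { 0f, -1f,  0f}
        };
        float[][] img = {
            {1f, 1f, 1f},
            {1f, 1f, 1f},
            {1f, 1f, 1f}
        };
        // A uniform region sums to zero under the edge kernel.
        System.out.println (convolveAt (edge, img, 1, 1)); // prints 0.0
    }
}
```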
A section of the source image data might look like the following
matrix, where for the sake of simplicity we give the pixels values
of only 0 and 1:

    1 1 1 1 1
    1 1 1 1 0
    1 1 1 0 0
    1 1 0 0 0
    1 0 0 0 0

If we center the kernel on the pixel in the second row, second
column of this image section, the sum of the products results in a
value of 0 for the corresponding pixel in the destination matrix.
    1 1 1 1 1         1 1 1 1 1
    1 1 1 1 0         1 0 1 1 0
    1 1 1 0 0   ==>>  1 1 1 0 0
    1 1 0 0 0         1 1 0 0 0
    1 0 0 0 0         1 0 0 0 0
If we instead center the kernel on the pixel in the third row,
third column, the sum of the products gives a value of 2 for the
corresponding destination pixel:

    1 1 1 1 1         1 1 1 1 1
    1 1 1 1 0         1 1 1 1 0
    1 1 1 0 0   ==>>  1 1 2 0 0
    1 1 0 0 0         1 1 0 0 0
    1 0 0 0 0         1 0 0 0 0
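The two worked results above can be checked numerically. This small sketch (our own helper class, not part of the applet shown later) applies the edge kernel to the 5x5 sample matrix:

```java
/** Checks the worked edge-kernel examples on the 5x5 sample matrix. */
public class EdgeExampleCheck {

    static final float[][] KERNEL = {
        { 0f, -1f,  0f},
        {-1f,  4f, -1f},
        { 0f, -1f,  0f}
    };

    static final float[][] IMAGE = {
        {1f, 1f, 1f, 1f, 1f},
        {1f, 1f, 1f, 1f, 0f},
        {1f, 1f, 1f, 0f, 0f},
        {1f, 1f, 0f, 0f, 0f},
        {1f, 0f, 0f, 0f, 0f}
    };

    /** Sum of kernel-pixel products, kernel centered on (row, col). */
    public static float convolveAt (int row, int col) {
        float sum = 0f;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                sum += KERNEL[i][j] * IMAGE[row + i - 1][col + j - 1];
        return sum;
    }

    public static void main (String[] args) {
        System.out.println (convolveAt (1, 1)); // uniform region: prints 0.0
        System.out.println (convolveAt (2, 2)); // near the edge:  prints 2.0
    }
}
```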
We want to apply the kernel to the entire image matrix. However,
a problem occurs at the borders of the image, where part of the
kernel hangs over the edge and has no valid pixels to multiply.
The convolution filter allows for two choices: the destination
border pixels are set to 0 (EDGE_ZERO_FILL) or are copied unchanged
from the source (EDGE_NO_OP). If we choose the zero edge fill for
our edge finding convolution, the resulting image matrix becomes
    0 0 0 0 0
    0 0 0 2 0
    0 0 2 2 0
    0 2 2 0 0
    0 0 0 0 0
You can see that when this kernel is applied throughout a large
complex image, the uniform areas will be set to zero while borders
between two areas of different intensities will become enhanced.
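The same zero-fill behavior can be seen with the actual ConvolveOp class on a grayscale BufferedImage, which runs without any GUI. This sketch uses 255 rather than 1 for the "on" pixels, since ConvolveOp clamps results to the pixel range (so the enhanced boundary pixels come out as 255 and negative sums as 0); the class name is our own:

```java
import java.awt.image.*;

/** Applies the edge kernel with EDGE_ZERO_FILL to a 5x5 gray image. */
public class ZeroFillDemo {

    public static BufferedImage filterSample () {
        int[][] pixels = {
            {255, 255, 255, 255, 255},
            {255, 255, 255, 255,   0},
            {255, 255, 255,   0,   0},
            {255, 255,   0,   0,   0},
            {255,   0,   0,   0,   0}
        };
        BufferedImage src = new BufferedImage (5, 5,
                                BufferedImage.TYPE_BYTE_GRAY);
        WritableRaster raster = src.getRaster ();
        for (int y = 0; y < 5; y++)
            for (int x = 0; x < 5; x++)
                raster.setSample (x, y, 0, pixels[y][x]);

        float[] edge = {
             0f, -1f,  0f,
            -1f,  4f, -1f,
             0f, -1f,  0f
        };
        ConvolveOp op = new ConvolveOp (new Kernel (3, 3, edge),
                                        ConvolveOp.EDGE_ZERO_FILL, null);
        // A null destination tells filter() to create a new image.
        return op.filter (src, null);
    }

    public static void main (String[] args) {
        Raster out = filterSample ().getRaster ();
        // Border pixels are zero-filled; a uniform interior sums to zero.
        System.out.println (out.getSample (0, 0, 0)); // prints 0
        System.out.println (out.getSample (1, 1, 0)); // prints 0
        // A pixel on the intensity boundary is enhanced: 4*255 - 2*255
        // = 510, clamped to 255.
        System.out.println (out.getSample (3, 1, 0)); // prints 255
    }
}
```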
Other kernels offer different effects. For example, a kernel such
as this one
     0.0  -1.0   0.0
    -1.0   6.0  -1.0
     0.0  -1.0   0.0
would enhance the edge regions but not zero out the uniform regions
and thus provide a sharpening effect.
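A quick check of this behavior, using the same ConvolveOp machinery on a uniform gray image (a sketch with our own class name; note that with a center weight of 6 the kernel elements sum to 2, so a uniform region is brightened rather than zeroed):

```java
import java.awt.image.*;

/** Applies the center-6 sharpening kernel to a uniform gray image. */
public class SharpenDemo {

    public static BufferedImage filterUniform () {
        // A 5x5 image with every pixel set to gray level 100.
        BufferedImage src = new BufferedImage (5, 5,
                                BufferedImage.TYPE_BYTE_GRAY);
        WritableRaster raster = src.getRaster ();
        for (int y = 0; y < 5; y++)
            for (int x = 0; x < 5; x++)
                raster.setSample (x, y, 0, 100);

        float[] sharpen = {
             0f, -1f,  0f,
            -1f,  6f, -1f,
             0f, -1f,  0f
        };
        ConvolveOp op = new ConvolveOp (new Kernel (3, 3, sharpen),
                                        ConvolveOp.EDGE_NO_OP, null);
        return op.filter (src, null);
    }

    public static void main (String[] args) {
        Raster out = filterUniform ().getRaster ();
        // Interior: 6*100 - 4*100 = 200, so the uniform area survives.
        System.out.println (out.getSample (2, 2, 0)); // prints 200
        // EDGE_NO_OP leaves the border pixels unchanged.
        System.out.println (out.getSample (0, 0, 0)); // prints 100
    }
}
```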
To create an edge detection instance of the ConvolveOp
class, we can use code like the following:

    float[] edgeMat =
        { 0.0f, -1.0f,  0.0f,
         -1.0f,  4.0f, -1.0f,
          0.0f, -1.0f,  0.0f };

    ConvolveOp edgeFinderOp =
        new ConvolveOp (new Kernel (3, 3, edgeMat),
                        ConvolveOp.EDGE_NO_OP, null);
(The last argument is an optional RenderingHints
object that you can use to adjust the color conversion.) We can
then apply this convolution tool to an image as in

    BufferedImage edgeImg = edgeFinderOp.filter (anImage, null);

The ConvolveOp
class requires that the source and destination be different
BufferedImage
objects. The example program EdgeDetectApplet
shown below applies this convolution to an image.
Note the use of a JSplitPane
component to show the source and destination images beside each
other. The source and filter output images are placed on labels
as icons, and each label is added to a JScrollPane,
which provides horizontal and vertical scroll bars if there
is insufficient room to display the entire image. The two scroll
panes are then inserted into the two panes of the JSplitPane.
EdgeDetectApplet

Resources: liftoff.jpg, saturn.jpg, saturnVoyager.jpg
import javax.swing.*;
import java.awt.*;
import java.awt.image.*;

/** Demonstrate convolution filtering with an edge filter. **/
public class EdgeDetectApplet extends JApplet {
  BufferedImage fSrcImage = null, fDstImage = null;

  public void init () {
    Container content_pane = getContentPane ();

    fSrcImage = getBufImage ("saturnVoyager.jpg");
    if (fSrcImage == null) {
      System.out.println ("Error in reading image file!");
      return;
    }

    edgeFilter ();

    ImageIcon src_icon = new ImageIcon (fSrcImage);
    ImageIcon dst_icon = new ImageIcon (fDstImage);
    JLabel src_display = new JLabel (src_icon);
    JLabel dst_display = new JLabel (dst_icon);
    JScrollPane src_pane = new JScrollPane (src_display);
    JScrollPane dst_pane = new JScrollPane (dst_display);

    // Use a JSplitPane to show the source and destination images.
    JSplitPane split_pane =
      new JSplitPane (JSplitPane.HORIZONTAL_SPLIT, true,
                      src_pane, dst_pane);
    split_pane.setResizeWeight (0.5);
    split_pane.setContinuousLayout (true);

    // Add the split pane to the contentPane.
    content_pane.add (split_pane);
  } // init

  /** Create the filter to use for the convolution. **/
  void edgeFilter () {
    float[] edge = {
       0f, -1f,  0f,
      -1f,  4f, -1f,
       0f, -1f,  0f
    };
    ConvolveOp op =
      new ConvolveOp (new Kernel (3, 3, edge),
                      ConvolveOp.EDGE_NO_OP, null);
    fDstImage = op.filter (fSrcImage, null);
  } // edgeFilter

  /**
   * Download the image file and convert it to a
   * BufferedImage object.
   **/
  BufferedImage getBufImage (String image_name) {
    // Get the image
    Image img = getImage (getCodeBase (), image_name);

    // and use a MediaTracker to load it before converting it to
    // a BufferedImage.
    try {
      MediaTracker tracker = new MediaTracker (this);
      tracker.addImage (img, 0);
      tracker.waitForID (0);
    } catch (InterruptedException e) {
      return null;
    }

    int width  = img.getWidth (this);
    int height = img.getHeight (this);
    BufferedImage buffered_image =
      new BufferedImage (width, height, BufferedImage.TYPE_INT_RGB);
    Graphics2D g2 = buffered_image.createGraphics ();
    g2.drawImage (img, 0, 0, null);
    g2.dispose ();
    return buffered_image;
  } // getBufImage

} // class EdgeDetectApplet
References & Web Resources
Latest update: March 8, 2006