Since I ran out of time in class (and you might want to do this before Tuesday's class)…

## A Good Discrete Blur Kernel

A good discrete “blur” kernel (or family of blur kernels of various sizes) can be made by repeated convolutions of the “unit box”.

In 1D, start with the smallest box, the "unit box": [1/2 1/2]. (It's called a box because when you graph it, it looks like a box.)

Convolve it with the unit box (in this case, with itself), and you get a filter of length 3: [1/4 1/2 1/4]. Or you could write it as 1/4 [1 2 1].

If you convolve this one with the unit box, you get 1/8 [1 3 3 1].

If you convolve this with the unit box, you get 1/16 [1 4 6 4 1].

And you can see how you can keep making bigger and bigger blur kernels by repeatedly doing this.
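As a quick check, this repeated convolution is easy to do with NumPy (a sketch; `np.convolve` performs the 1D convolution):

```python
import numpy as np

# Build the family of blur kernels by repeatedly convolving
# with the unit box [1/2, 1/2].
box = np.array([0.5, 0.5])

kernel = box                       # [1/2 1/2]
kernel = np.convolve(kernel, box)  # 1/4  [1 2 1]
kernel = np.convolve(kernel, box)  # 1/8  [1 3 3 1]
kernel = np.convolve(kernel, box)  # 1/16 [1 4 6 4 1]
```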

For 2D…

One approach is to apply the 1D kernels above in each direction separately. So, for example, you can apply the 1D kernel 1/4 [1 2 1] to each row, then apply it to all the columns.
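A sketch of that row-then-column idea in NumPy (the function name is mine; `mode="same"` keeps the image size by padding with zeros at the borders, which is one of several reasonable border choices):

```python
import numpy as np

def blur_separable(img):
    """Apply 1/4 [1 2 1] to every row, then to every column."""
    k = np.array([1, 2, 1]) / 4.0
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
```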

Or, you can make a kernel using the same process (convolutions of the unit box) which leads to:

| 1 | 2 | 1 |
|---|---|---|
| 2 | 4 | 2 |
| 1 | 2 | 1 |

(This needs to be scaled by 1/16, since we want the weights to add up to 1.) It turns out this is exactly the same as applying the two 1D kernels separately in each direction. Not all 2D convolutions can be done as two 1D convolutions, but this one can (as can most blur kernels), and doing two 1D convolutions can be much more efficient than one 2D convolution.
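You can see where that 2D kernel comes from by taking the outer product of the 1D kernel with itself (a sketch):

```python
import numpy as np

k1 = np.array([1, 2, 1]) / 4.0
k2d = np.outer(k1, k1)  # 3x3 kernel; 16 * k2d is the tic-tac-toe grid above
```

Because the 2D kernel factors this way, convolving with it is the same as filtering the rows and then the columns with the 1D kernel.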

## Choosing Kernels for Downsampling

When you downsample (say, reduce an image by a factor of 2), you need to use a blur to get rid of the high frequencies. There's a cool piece of math (thanks to the Nyquist theorem) that tells us exactly the right filter to use. Unfortunately, we aren't going to teach you that math. Instead, use some other blurring kernel that approximates it.

For downsampling by 2, a 3×3 blur kernel is almost good enough (a 5×5 will be a little better). If you use unweighted averaging (all weights a constant value), you will get more aliasing (and actually more excess blurring at the same time) than if you use the blur kernels described above. It turns out that the 1/4 [1 2 1] kernel (or its 2D equivalent shown in the tic-tac-toe grid above) is a pretty decent approximation of the "right" kernel for downsampling by 2. Certainly good enough for this assignment.
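A sketch of downsampling by 2 with that kernel (separable blur, then keep every other pixel; the function name and zero-padded borders are my choices):

```python
import numpy as np

def downsample2(img):
    """Blur with 1/4 [1 2 1] in each direction, then keep every other pixel."""
    k = np.array([1, 2, 1]) / 4.0
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred[::2, ::2]
```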

If you want to downsample by an amount other than 2, you'll need a "bigger" blur kernel. A rough rule of thumb (for the simple kernels above) is that the kernels should overlap when placed at adjacent samples. When there is 1 skipped pixel between kept samples, pick a kernel with at least 1 non-zero weight on each side of the center, and maybe 2. So, if I am going to take every third sample, a kernel of size 5 (the 1/16 [1 4 6 4 1] above) is just barely big enough. Using X for samples I am going to evaluate (and O for samples I'll skip):

```
O O X O O X O O X O O X O O X ...
1 4 6 4 1
      1 4 6 4 1
```

Notice how when I evaluate the convolution only at the target locations (no use computing it at places I am not sampling), the kernels overlap completely – nothing gets missed. Making the kernel a little bigger (so that sampling at one X picks up the neighboring X's) is a little bit too much blurring, but will lead to less aliasing.
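In 1D, the "only compute at the target locations" idea might look like this (a sketch; I pad by repeating the edge values, which is one of several reasonable border choices):

```python
import numpy as np

def downsample3_1d(signal):
    """Downsample by 3: evaluate 1/16 [1 4 6 4 1] only at every third sample."""
    k = np.array([1, 4, 6, 4, 1]) / 16.0
    padded = np.pad(signal, 2, mode="edge")
    centers = np.arange(0, len(signal), 3)
    # padded[c : c + 5] is the window of 5 original samples centered at c
    return np.array([padded[c:c + 5] @ k for c in centers])
```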

## Interpolating for Upsampling

When you want to upsample (have more samples than you had originally), you can do interpolation – though interpolation is actually a form of filtering, as we explained in class. Since we don't really have enough of a chance to see why filtering is a better way to think about it (it would let us do things better than linear interpolation), linear interpolation is just fine for the assignment. In fact, even nearest neighbor (pixel replication) is probably good enough to get full credit.
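For completeness, a 1D linear-interpolation upsampler is just a few lines with `np.interp` (a sketch; the function name is mine, and this roughly doubles the sample density):

```python
import numpy as np

def upsample2_1d(signal):
    """Upsample by ~2x using linear interpolation between samples."""
    old_x = np.arange(len(signal))
    # Place twice as many samples (minus one) across the same range,
    # so every other new sample lands exactly on an old one.
    new_x = np.linspace(0, len(signal) - 1, 2 * len(signal) - 1)
    return np.interp(new_x, old_x, signal)
```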