Sunday, February 14, 2010

Activity 3 : Hough Transform

In this activity, we explore the method behind the Hough Transform and why it can accurately detect lines in an image. The input image for this activity is shown below:


Input image

Since Matlab already has a built-in function for the Hough Transform, we only need to specify the parameters of the function. The built-in Matlab function hough outputs a transform matrix, an array of angles, and an array of distances from the origin. These outputs are then used in other Matlab functions such as houghpeaks and houghlines.

The houghpeaks function extracts the "peaks" in the transform matrix generated by hough, while the houghlines function extracts the lines in the binary edge image based on the angle and distance information together with the transform matrix. The output of the program is shown below.
Output image

Hough Transform

The Hough Transform works by converting an image (first binarized and edge-detected) from its rectangular (Cartesian) coordinate space into the normal or parametric space. Using the input image as an example, the following is the image converted to black-and-white and edge-detected.

Edge-detected input image.

Each of the remaining pixels in the image has a coordinate (x, y). For each point, we would like to identify all lines that pass through it, of which there are infinitely many. By making the angle of these lines discrete (say, at 2, 5, or 10 degree intervals), we obtain a finite list of lines for each point. Since the equation of a line is

y = mx + b,

we could describe the lines passing through each point in terms of the (m, b) parameters. However, this is problematic because the slope m becomes infinite for vertical lines. Fortunately, we can still represent every line by writing it in the normal or parametric form

x cos(θ) + y sin(θ) = ρ.

Since θ has already been discretized, each pixel with coordinates (x, y) yields a corresponding set of discrete ρ values. Here θ is the angle, measured from the positive x (+x) axis, of the normal to the line through (x, y), and ρ is the distance of that line from the origin. We repeat this step for all pixels, listing the resulting θ and ρ values.
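
A minimal sketch of this step for a single edge pixel (the coordinates here are hypothetical):

% Sketch: rho values of all discretized lines through one edge pixel.
x = 10; y = 20;                        % hypothetical pixel coordinates
theta = -90:2:88;                      % angles discretized at 2-degree intervals
rho = x*cosd(theta) + y*sind(theta);   % normal-form distance for each angle
plot(theta, rho)                       % traces one sinusoid in (theta, rho) space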

When tabulated, these ρ and θ values form an image similar to the one below, which shows the edge image above represented in parametric space with θ and ρ as the coordinates.

Edge detected image in normal or parametric space.

Each sinusoid traces the ρ values of one edge pixel as θ varies; each (x, y) pixel contributes one such curve. The image above is a tabulation of the θ and ρ values extracted from the edge image. The brighter a pixel in this image, the more curves intersect at that (θ, ρ), indicating collinear points in (x, y) space and hence the likely presence of a line.

This tabulation method is how the Hough Transform detects lines in an image. The table that results from this process is called the Hough Transform matrix. By selecting a threshold value, we can eliminate most cells and keep only the brightest ones; a bright cell is a candidate line through which many edge points pass in (x, y) space. The process of extracting lines from the parametric space and laying them back onto the Cartesian space is commonly called de-houghing.
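
A minimal sketch of the voting behind that table (not the built-in hough, just the idea), assuming BW is the edge-detected binary image:

% Sketch: manual Hough accumulator over a binary edge image BW.
theta = -90:1:89;
rhomax = ceil(norm(size(BW)));          % largest possible |rho| (image diagonal)
H = zeros(2*rhomax + 1, numel(theta));  % rows index rho, columns index theta
[yy, xx] = find(BW);                    % coordinates of the edge pixels
for p = 1:numel(xx)
    rho = round(xx(p)*cosd(theta) + yy(p)*sind(theta));
    for t = 1:numel(theta)
        H(rho(t) + rhomax + 1, t) = H(rho(t) + rhomax + 1, t) + 1;   % vote
    end
end
candidates = H > 0.5*max(H(:));         % crude threshold keeps the brightest cells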

Code Listing:
function [H, theta, rho, peaks] = hough_act(image)
I = imread(image);                          % load the (grayscale) input image
BW = edge(I, 'canny');                      % binarize via Canny edge detection
[H, theta, rho] = hough(BW);                % accumulator plus its theta/rho axes
peaks = houghpeaks(H, 20);                  % the 20 strongest accumulator cells
figure, imagesc(H)                          % view the parametric-space image
figure, imshow(BW)                          % view the edge image
lines = houghlines(BW, theta, rho, peaks);  % de-hough: recover the line segments
figure, imshow(I)
hold on
for k = 1:numel(lines)                      % overlay each detected segment
    x1 = lines(k).point1(1);
    y1 = lines(k).point1(2);
    x2 = lines(k).point2(1);
    y2 = lines(k).point2(2);
    line([x1 x2], [y1 y2], 'Color', 'g')
end
hold off

I would like to give myself a grade of 10 for this activity for successfully implementing the Hough Transform in Matlab and understanding the theory behind it.

References:
[1] Hough Transform Activity Sheet, Dr. Maricor Soriano.
[2] "Hough Transform", PlanetMath.org website, http://planetmath.org/encyclopedia/HoughTransform.html

Activity 2: Silhouette Processing

In this activity, we try to extract the contour of an object and replace the edge with its Freeman vector code. This method has been shown to be an effective tool in biometric identification using the gait pattern of persons: when the Freeman vector of a person's gait pattern is properly aligned, it shows consistency and a distinct periodicity that can be used to identify the person. The whole process is summarized by the image below. [1]

Block diagram of the method [1]

The first step in this activity is to take a picture of an object with relatively good contrast compared to its background. This would make the extraction of its contour/edge much easier. The image used for this activity is shown below.


Input image

After some preprocessing (converting the image to grayscale then binary, applying morphological dilation, and performing edge detection), the resulting image is shown below.

Resulting image after edge detection.

The image above is then used to extract the Freeman vector code. Starting from an arbitrarily chosen pixel, each pixel along the contour is assigned a value depending on its position relative to the previously processed pixel. The value is determined by the diagram below, where (x, y) is the current pixel location.


Freeman vector value determiner


The Freeman vector derived from the contour/edge information of the object is then further processed into a curvature string. This new string contains runs of positive, negative, and zero values that describe whether a part of the contour is convex, concave, or relatively straight, respectively. An intermediate vector sits between the Freeman vector and the curvature string.

From the Freeman vector, we take the difference between adjacent values; in this step, the first element of the vector is treated as following the last to maintain continuity. From this intermediate vector, we take a running sum of three adjacent values to obtain the curvature string. The process is illustrated below.

Curvature String extraction from the Freeman Vector Chain
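
A compact sketch of these two steps, using a short hypothetical chain code vector in place of the real one:

% Sketch: curvature string from a circular Freeman chain code vector.
fv = [2; 2; 3; 4; 4; 4; 5; 6];      % hypothetical chain codes
grad = diff([fv; fv(1)]);           % wrap-around first difference
gradc = [grad(end); grad; grad(1)]; % circular padding for the 3-wide window
curv = gradc(1:end-2) + gradc(2:end-1) + gradc(3:end);   % running sum of three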

The output of the program is shown below: positive string values mark convex sections of the contour, negative values mark concavities, and runs of zeros mark straight lines.

Positive values for convexities

Negative values for concavities

Zero string for straight lines

Code Listing:

function freeman(image)
image = rgb2gray(imread(image));   % load the image and convert to grayscale
BW = im2bw(image, 0.16);           % binarize with a fixed threshold
BW = imresize(BW, 0.5);
% se = strel('disk', 10);
% BW = imclose(BW, se);
se = strel('disk', 3);
BW = imdilate(BW, se);             % dilate to close small gaps in the silhouette
BW = edge(double(BW), 'canny');    % keep only the contour

figure, imshow(BW);

% search for the upper-leftmost contour pixel (row-major scan)
x1 = 0;
y1 = 0;
[rows, cols] = size(BW);
for row = 1:rows
    for col = 1:cols
        if BW(row, col) == 1
            x1 = row;
            y1 = col;
            break;                 % found the first pixel in this row
        end
    end
    if x1 > 0
        break;                     % stop the outer loop once a pixel is found
    end
end

% trace the boundary and convert each step into a Freeman chain code
pixels = bwtraceboundary(BW, [x1, y1], 'N');
counter = size(pixels, 1);
freemanvec = zeros(counter, 1);
for j = 2:counter
    diff_x = pixels(j,1) - pixels(j-1,1);
    diff_y = pixels(j,2) - pixels(j-1,2);
    if (diff_x == -1) && (diff_y == -1)
        freemanvec(j) = 1;
    elseif (diff_x == 0) && (diff_y == -1)
        freemanvec(j) = 2;
    elseif (diff_x == 1) && (diff_y == -1)
        freemanvec(j) = 3;
    elseif (diff_x == 1) && (diff_y == 0)
        freemanvec(j) = 4;
    elseif (diff_x == 1) && (diff_y == 1)
        freemanvec(j) = 5;
    elseif (diff_x == 0) && (diff_y == 1)
        freemanvec(j) = 6;
    elseif (diff_x == -1) && (diff_y == 1)
        freemanvec(j) = 7;
    elseif (diff_x == -1) && (diff_y == 0)
        freemanvec(j) = 8;
    end
end
freemanvec(1) = freemanvec(end);   % wrap around for continuity

% first difference of the chain code, treated as circular
gradient = zeros(counter, 1);
for i = 1:counter-1
    gradient(i) = freemanvec(i+1) - freemanvec(i);
end
gradient(counter) = gradient(1);

% running sum of three adjacent differences gives the curvature string
runsum = zeros(counter, 1);
for i = 2:counter-1
    runsum(i) = gradient(i-1) + gradient(i) + gradient(i+1);
end
runsum(1) = gradient(1) + gradient(2) + gradient(counter-1);

% overlay the curvature value at each boundary pixel
figure, plot([1 250], [1 250], 'Color', 'w')   % white line just to open the axes
for i = 1:counter
    row = pixels(i,1);
    col = pixels(i,2);
    text(col, row, num2str(runsum(i)));
end

I give myself a grade of 10 for accomplishing the objectives for this activity.


References:
[1] Soriano, et al., "Curve Spreads - a biometric from front-view gait video", Pattern Recognition Letters 25 (2004) 1595 - 1602
[2] Activity 2: Silhouette Processing Activity Sheet, Physics 305, M. Soriano

Activity 1 : Tracking using Adaptive Histogram - Color Locus

Given a picture, we can highlight parts of the image having the same color. This is achieved by backprojecting a specific region of the RG-histogram corresponding to the desired color. Extending this method to videos gives a technique for color-based object tracking, provided that the object to be tracked has a distinct color compared to its background/environment.

This method has been used in face-tracking applications. A person's face, regardless of race, can be tracked even under varying lighting, provided that the skin locus of the face has been properly modeled in RG colorspace [1].

In the example below, we try to highlight the green leaves in the picture. We first select a small patch containing the colored pixels that we would like to highlight.

A test image and a selected color patch to be highlighted.

Shown below is the mapping of the pixels of the selected patch onto the RG-histogram. It can be seen that the pixels lie in the green area of the RG-histogram. An image of the whole RG chromaticity range of values is shown beside it. The mapping of the selected color patch to the RG-histogram was achieved by histogram backprojection.

Left: The selected color patch mapped in the RG-Histogram Colorspace; Right: RG Chromaticity Range [2]
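
The chromaticity conversion behind this mapping is straightforward. A minimal sketch for one RGB patch (the filename is hypothetical; BINS = 32 matches the code listing further below):

% Sketch: map an RGB patch into a 32x32 RG-chromaticity histogram.
patch = im2double(imread('patch.png'));   % hypothetical color patch file
R = patch(:,:,1); G = patch(:,:,2); B = patch(:,:,3);
I = R + G + B + eps;                      % eps avoids division by zero
r = R ./ I;  g = G ./ I;                  % normalized chromaticities, r + g <= 1
BINS = 32;
rint = round(r*(BINS-1)) + 1;             % quantize to bin indices 1..BINS
gint = round(g*(BINS-1)) + 1;
H = accumarray([rint(:) gint(:)], 1, [BINS BINS]);   % 2-D histogram of the patch

Since r + g <= 1, only the lower triangle of the histogram can ever be populated, which is why the occupied bins form the triangular region seen above.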


To create the RG-histogram for the subject, a function was written that accepts color patches from the subject and outputs an RG-histogram based on the input patches. In the resulting RG-histogram, only the bins populated by the color patches are used in the backprojection. The resulting image is shown below:


The resulting image with the green pixels highlighted.


This method can be extended to track subjects in a video sequence. The whole process is summarized by the image below.


An image showing the block diagram of the algorithm. [1]

The program starts by selecting a patch of the image containing the color pixels that we want to track. In the paper of Soriano et al. [1], the method was used to build a robust face-tracking program, but the algorithm can be applied to other objects, provided that the object also has a distinct color compared to its background. Each frame of the video is backprojected onto the RG color histogram of the ROI. The image below shows the RG-histogram for the intended subject and the color patches from which it was derived.



The color patches used and their corresponding RG-histogram

It is expected that after this step, the object that we want to segment will be highlighted, together with any other pixels in the image having the same color. Binarization and morphological operations can then turn these pixels into blobs for easier manipulation. Small blobs are eliminated, and the remaining large blobs represent our subject. The first video below is a sample input to the program; the second is the result after RG-histogram backprojection of each frame.


Sample input video.


Resulting video after histogram backprojecting each video frame.
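
Before the tracking boxes are drawn, each backprojected frame is cleaned up into blobs. A minimal sketch of that step, assuming bp holds the backprojection result and frame the current video frame (the 700 px threshold mirrors the listing below):

% Sketch: turn one backprojected frame into a trackable blob with a box.
mask = bwareaopen(logical(bp), 700);   % discard blobs smaller than 700 pixels
mask = imfill(mask, 'holes');          % fill interior holes
% double(mask) acts as a one-label matrix, so one box encloses all blobs
box = regionprops(double(mask), 'BoundingBox');
imshow(frame);
rectangle('Position', box.BoundingBox, 'EdgeColor', 'yellow')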

The blobs in the second video above are the basis for the yellow boxes that track the glove. As an additional optimization, the program ignores most of each succeeding frame by focusing only on an area in close proximity to the current blob. This eliminates other candidates that share the color of the object being tracked but are located farther from the subject; it also speeds up the execution of the program.

A sample output of the program is shown below. It can be seen from the video that the glove is successfully tracked even though, in some frames, the face and the glove share similar shades of color.


Sample output from the program.

Code Listing:

function act1(input, outputname, locus)

start = cputime;
BINS = 32;               % bins per axis of the RG-histogram

rg_histo = locus';       % binary RG-locus produced by colorlocus (below)

%% Open the video file and backproject the RG-histogram onto each frame

video_input = mmreader(input);   % mmreader was later superseded by VideoReader
new_video(video_input.NumberOfFrames) = struct('cdata',[],'colormap',[]);

% first frame: backproject over the whole image
imageinput = read(video_input, 1);
imageinput = imresize(imageinput, 0.5);
origimage = imageinput;
[m, n] = size(imageinput(:,:,1));
bigR = double(imageinput(:,:,1)) + 0.00001;   % small offset avoids 0/0
bigG = double(imageinput(:,:,2)) + 0.00001;
bigB = double(imageinput(:,:,3)) + 0.00001;
bigI = bigR + bigG + bigB;
bigr = bigR ./ bigI;                          % normalized chromaticities
bigg = bigG ./ bigI;
bigrint = round(bigr*(BINS-1) + 1);           % quantize to histogram bins
biggint = round(bigg*(BINS-1) + 1);
newimage = zeros(m, n);
for k = 1:m
    for j = 1:n
        newimage(k,j) = rg_histo(bigrint(k,j), biggint(k,j));
    end
end
BW = bwareaopen(logical(newimage), 700, 8);   % 8-connectivity for a 2-D mask
BW = imfill(BW, 'holes');
[newbox, newboundary] = computeboundbox(BW);
imshow(origimage);
rectangle('Position', newbox, 'EdgeColor', 'yellow');
new_video(1) = getframe;

% succeeding frames: search only near the previous bounding box
for i = 2:video_input.NumberOfFrames
    imageinput = read(video_input, i);
    imageinput = imresize(imageinput, 0.5);
    [m, n] = size(imageinput(:,:,1));
    origimage = imageinput;
    a = newboundary(1);    % expanded search window: x, y, width, height
    b = newboundary(2);
    c = newboundary(3);
    d = newboundary(4);
    imageinput(:, 1:a, :) = 0;        % black out everything outside the window
    imageinput(1:b, :, :) = 0;
    imageinput(:, a+c:end, :) = 0;
    imageinput(b+d:end, :, :) = 0;
    bigR = double(imageinput(:,:,1)) + 0.00001;
    bigG = double(imageinput(:,:,2)) + 0.00001;
    bigB = double(imageinput(:,:,3)) + 0.00001;
    bigI = bigR + bigG + bigB;
    bigr = bigR ./ bigI;
    bigg = bigG ./ bigI;
    bigrint = round(bigr*(BINS-1) + 1);
    biggint = round(bigg*(BINS-1) + 1);
    newimage = zeros(m, n);
    for k = b:min(b+d, m)             % clamp so indices stay inside the frame
        for j = a:min(a+c, n)
            newimage(k,j) = rg_histo(bigrint(k,j), biggint(k,j));
        end
    end
    BW = bwareaopen(logical(newimage), 700, 8);
    BW = imfill(BW, 'holes');
    [newbox, newboundary] = computeboundbox(BW);
    imshow(origimage);
    rectangle('Position', newbox, 'EdgeColor', 'yellow');
    new_video(i) = getframe;
end
movie2avi(new_video, outputname);
endtime = cputime - start   % display the elapsed CPU time

%%

function [statsboundbox, newbox] = computeboundbox(image)
% Bounding box of the remaining blobs, plus a copy padded by 15 px on
% each side (clamped to the image borders) to use as the next search window.
[m, n] = size(image);
BW = bwareaopen(image, 500, 4);
BW = imfill(BW, 'holes');
% double(BW) acts as a label matrix with a single label, so regionprops
% returns one bounding box enclosing every remaining blob
statsboundbox1 = regionprops(double(BW), 'BoundingBox');
statsboundbox = statsboundbox1.BoundingBox;
newbox = zeros(4,1);
pixelincrease = 15;
if (statsboundbox(1) - pixelincrease) >= 1
    newbox(1) = round(statsboundbox(1) - pixelincrease);
else
    newbox(1) = 1;
end
if (statsboundbox(2) - pixelincrease) >= 1
    newbox(2) = round(statsboundbox(2) - pixelincrease);
else
    newbox(2) = 1;
end
if (statsboundbox(3) + pixelincrease*2) <= n
    newbox(3) = statsboundbox(3) + pixelincrease*2;
else
    newbox(3) = n;
end
if (statsboundbox(4) + pixelincrease*2) <= m
    newbox(4) = statsboundbox(4) + pixelincrease*2;
else
    newbox(4) = m;
end
newbox = newbox';

function [locusmatrix] = colorlocus
% Build a binary RG-locus from user-supplied color patch images.
BINS = 32;
locusmatrix = zeros(BINS, BINS);

colorpatch = input('Enter filename of colorpatch, press ''N'' to end. ', 's');
while ~strcmp(colorpatch, 'N')   % strcmp avoids elementwise char comparison
    % nonparametric segmentation: histogram the patch in rg-chromaticity
    image12 = imread(colorpatch);
    imageR = double(image12(:,:,1)) + 0.00001;
    imageG = double(image12(:,:,2)) + 0.00001;
    imageB = double(image12(:,:,3)) + 0.00001;
    imageI = imageR + imageG + imageB;
    r = imageR ./ imageI;        % normalized chromaticities
    g = imageG ./ imageI;

    rint = round(r*(BINS-1) + 1);   % quantize to histogram bins
    gint = round(g*(BINS-1) + 1);
    colors = gint(:) + (rint(:)-1)*BINS;
    patchhist = zeros(BINS, BINS);  % renamed from hist to avoid shadowing

    for row = 1:BINS
        for col = 1:(BINS-row+1)    % r + g <= 1, so only this triangle
            patchhist(row,col) = length(find(colors == (col + (row-1)*BINS)));
        end
    end
    locusmatrix = locusmatrix + patchhist';
    colorpatch = input('Enter filename of colorpatch, press ''N'' to end. ', 's');
end
figure, contour(locusmatrix > 0)
hold on
x = 0:1:BINS+1;
y = BINS+1:-1:0;
line(x, y);                      % r + g = 1 edge of the chromaticity triangle
title('RG Histogram of Color of Interest');
xlabel('R axis');
ylabel('G axis');
locusmatrix = locusmatrix > 0;   % binarize: each bin is in the locus or not


I give myself a grade of 10 in this activity for successfully implementing the algorithm together with recommended optimizations.

References:
[1] "Adaptive skin color modeling using the skin locus for selecting training pixels", Soriano, et al, Pattern Recognition 36 (2003) 681–690
[2] "Activity 12: Color Image Segmentation", Orly Tarun, http://otarun.blogspot.com/2009/08/activity-12-color-image-segementation.html

Monday, January 4, 2010

[Review] A4 : Enhancement by Histogram Manipulation

A histogram of an image provides a quick glance at the tonal information in the image. An image histogram can tell whether an image occupies a wide range of contrast or is clumped into a relatively narrow range [1].

Graphically, it plots the tonal (grayscale) value against the frequency of that value. The extreme left of the graph (small grayscale values) represents the darker (black) pixels, while the extreme right represents the brighter (white) pixels; the middle values represent the gray pixels. A histogram, when normalized, is called a probability distribution function (PDF): it replaces the frequency of each grayscale value with its probability of appearing in the image. When the PDF is summed cumulatively, the result is a cumulative distribution function (CDF). Examples of an image histogram, a PDF, and a CDF are shown below:

An Image Histogram



A Probability Distribution Function (PDF) of a certain image [3]


A Cumulative Distribution Function (CDF) of a certain image [3]
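
These three quantities follow directly from one another; a minimal sketch, assuming the stock cameraman.tif test image that ships with the Image Processing Toolbox:

% Sketch: histogram, PDF, and CDF of an 8-bit grayscale image.
img = imread('cameraman.tif');   % stock grayscale test image
counts = imhist(img);            % histogram: frequency of each gray level
pdf = counts / numel(img);       % normalize the frequencies into probabilities
cdf = cumsum(pdf);               % cumulative sum of the PDF gives the CDF
subplot(131), bar(counts), title('Histogram')
subplot(132), plot(pdf), title('PDF')
subplot(133), plot(cdf), title('CDF')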


Unfortunately, some images might be too dark (histogram concentrated on the left side) or too bright (histogram concentrated on the right side) for some applications. An example is shown below:


Different histogram distribution [2]

To solve this problem, we can alter an image's histogram in order to redistribute its tonal information (grayscale values) over a wider range of values or over a specific range of interest. This process is called histogram manipulation. The steps for histogram manipulation (also called CDF backprojection) are enumerated below [3], with a compact sketch after the list.
1. From the pixel value in the original image, find the corresponding CDF value.
2. Trace this CDF value in the desired CDF graph.
3. Find the corresponding new grayscale value with the given CDF value in the desired CDF graph (G(z)).
4. Replace all pixels having that value in the original image with this new value.
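
Equivalently, assuming a strictly increasing desired CDF (true for the linear case used first below), the four steps collapse into a single interp1 lookup; img, cdf, and desiredCDF are placeholder names, and the actual snippet used in this activity appears further below:

% Sketch: the four backprojection steps as one interpolation.
G = img;                                         % output image, same class as input
cdfvals = cdf(double(img(:)) + 1);               % step 1: CDF value of each pixel
newvals = interp1(desiredCDF, 0:255, cdfvals);   % steps 2-3: invert the desired CDF
G(:) = uint8(round(newvals));                    % step 4: write the new values back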

To implement the activity, I downloaded from the web an X-ray image with an uneven contrast distribution:


An X-ray Image

Below is a plot of its CDF, and beside it is a plot of the desired CDF for a more evenly distributed contrast. We can see that many of the pixels fall in the darker shades, as shown by the graph being skewed to the left.


And below is the snippet of the code for the backprojection:

xdomain = linspace(double(min(image(:))), double(max(image(:))), 256);
yrange = linspace(0, 1, 256);    % desired CDF: linear from 0 to 1

for i = 1:length(cdf)
    % pixels holding grayscale value i-1 (imhist bin i covers value i-1)
    location = find(image == i-1);
    currentcdf = cdf(i);

    % find the new grayscale value given the CDF value,
    % based on the linear desired CDF
    new_grayscale_value = max(round(xdomain(yrange <= currentcdf)));
    % replace the old grayscale values with the new one
    image2(location) = new_grayscale_value(1);
end

Below is the resulting histogram-equalized image from the backprojection process:
From the images above, we can see more details in the histogram-equalized image, as it "lights up" areas that were too dim to see in the original image.

Shown below is the resulting CDF of the histogram-equalized image. We can see that the new CDF is now more similar to the desired CDF, as it follows a more linear form.

The graphs below show the PDF of the original image and the PDF of the modified image. The second PDF shows that the pixel values are now distributed over a wider range of values.


Further, our eyes have a nonlinear response, such as exponential or logarithmic. Exponential and logarithmic CDFs can redistribute pixel values toward either the darker or the brighter shades, whichever is necessary for a specific application. Shown below are the CDF graphs for a histogram-equalized image, for an image with more bright pixels (exponential), and for one with more dark pixels (logarithmic).


The following lines of code were executed to come up with the CDFs shown above:

degree = 4;                                      % steepness of the nonlinear CDFs
exponential = exp(degree*yrange) - 1;
exponential = exponential ./ max(exponential);   % normalize to [0, 1]
logarithmic = 1 - exponential;
logarithmic = logarithmic(end:-1:1);             % mirror to get the logarithmic CDF

Exponential CDFs redistribute the pixels in a way that subdues the darker pixels and, in the process, highlights the brighter ones. A logarithmic CDF does the exact opposite. To illustrate this, the histogram-equalized image from the previous operations undergoes backprojection again, but this time with the exponential and logarithmic CDFs as the desired CDFs.
As expected, the exponential CDF highlighted the brighter pixels and subdued the darker ones, while the opposite happened with the logarithmic CDF. The PDFs and CDFs of the resulting images are shown below. For the exponential case, the pixels are distributed more toward the lighter side of the spectrum, while for the logarithmic case, they are distributed toward the darker side.


Complete Code Listing:

function act4(image)
image = imread(image);
image2 = image;
[pdf, cdf] = get_pdf_cdf(image);
% create the x-axis (grayscale range) and y-axis (desired linear CDF)
xdomain = linspace(double(min(image(:))), double(max(image(:))), 256);
yrange = linspace(0, 1, 256);

% plot the original cdf and the desired cdf
% figure
% subplot(121), plot(cdf)
% xlabel('Possible Grayscale Values')
% ylabel('CDF')
% title('Original CDF')
% subplot(122), plot(xdomain, yrange)
% xlabel('Possible Grayscale Values')
% ylabel('CDF')
% title('Desired CDF')

for i = 1:length(cdf)
    % pixels holding grayscale value i-1 (imhist bin i covers value i-1)
    location = find(image == i-1);
    currentcdf = cdf(i);

    % find the new grayscale value given the CDF value,
    % based on the linear desired CDF
    new_grayscale_value = max(round(xdomain(yrange <= currentcdf)));
    % replace the old grayscale values with the new one
    image2(location) = new_grayscale_value(1);
end

% figure
% subplot(121), imshow(image)
% title('Original Image')
% subplot(122), imshow(image2)
% title('Histogram Equalized')

[pdf2 cdf2] = get_pdf_cdf(image2);

% figure
% subplot(121), plot(cdf)
% title('Original Image')
% subplot(122), plot(cdf2)
% title('Histogram Equalized')

% % for the nonlinear desired CDF
image3 = image2;
image4 = image2;
degree = 4;
exponential = exp(degree*yrange)-1;
exponential = exponential./max(exponential);
logarithmic = 1 - exponential;
logarithmic = logarithmic(end:-1:1);

% % plot the original cdf and the desired cdf
% figure
% subplot(131), plot(cdf2)
% xlabel('Possible Grayscale Values')
% ylabel('CDF')
% title('Original CDF')
% subplot(132), plot(xdomain, exponential)
% xlabel('Possible Grayscale Values')
% ylabel('CDF')
% title('Desired CDF')
% subplot(133), plot(xdomain, logarithmic)
% xlabel('Possible Grayscale Values')
% ylabel('CDF')
% title('Desired CDF')
%
%
for i = 1:length(cdf2)
    % pixels holding grayscale value i-1 in the equalized image
    % (cdf2 belongs to image2, so the lookup uses image2 rather than image)
    location = find(image2 == i-1);
    currentcdf = cdf2(i);

    % find the new grayscale values given the CDF value,
    % based on the exponential and logarithmic desired CDFs
    new_grayscale_value3 = max(round(xdomain(exponential <= currentcdf)));
    new_grayscale_value4 = max(round(xdomain(logarithmic <= currentcdf)));
    % replace the old grayscale values with the new ones
    image3(location) = new_grayscale_value3(1);
    image4(location) = new_grayscale_value4(1);
end
%
% figure
% subplot(131), imshow(image)
% title('Original Image')
% subplot(132), imshow(image3)
% title('Exponential Histogram')
% subplot(133), imshow(image4)
% title('Logarithmic Histogram')
%
%
% figure
% subplot(131), imshow(image2)
% title('Histogram Equalized Image')
% subplot(132), imshow(image3)
% title('Exponential Histogram')
% subplot(133), imshow(image4)
% title('Logarithmic Histogram')
%
[pdf3 cdf3] = get_pdf_cdf(image3);
[pdf4 cdf4] = get_pdf_cdf(image4);
%
% % % plot the histogram equalized cdf and the exponential cdfs
% figure
% subplot(131), plot(cdf2)
% xlabel('Possible Grayscale Values')
% ylabel('CDF')
% title('Original CDF')
% subplot(132), plot(cdf3)
% xlabel('Possible Grayscale Values')
% ylabel('CDF')
% title('Resulting CDF')
% subplot(133), plot(cdf4)
% xlabel('Possible Grayscale Values')
% ylabel('CDF')
% title('Resulting CDF')

figure
subplot(121), plot(pdf)
xlabel('Possible Grayscale Values')
ylabel('Probability')
title('Original PDF')
subplot(122), plot(pdf2)
xlabel('Possible Grayscale Values')
ylabel('Probability')
title('Resulting PDF')

figure
subplot(121), plot(pdf3)
xlabel('Possible Grayscale Values')
ylabel('Probability')
title('Exponential PDF')
subplot(122), plot(pdf4)
xlabel('Possible Grayscale Values')
ylabel('Probability')
title('Logarithmic PDF')

function [pdf, cdf] = get_pdf_cdf(image)
% normalized histogram (PDF) and its running total (CDF)
count = imhist(image);
numpoints = sum(count);
pdf = count / numpoints;
cdf = cumsum(pdf);               % equivalent to the original accumulation loop

I give myself a grade of 9 for completing the requirements, though I did have a hard time looking for a way to backproject the new values, since the points in the graph take only discrete values. The '<=' sign did the trick. :)



[1] "Image Histogram", Wikipedia, http://en.wikipedia.org/wiki/Image_histogram
[3] Enhancement by Histogram Manipulation Activity Sheet, Dr. Maricor Soriano.