Activity 1: Tracking using Adaptive Histogram - Color Locus
Given a picture, we can highlight all parts of the image that share a particular color. This can be achieved by backprojecting a specific region of the RG-histogram corresponding to the desired color. Extending this method to videos yields a technique for color-based object tracking, provided that the object to be tracked has a distinct color compared to its background/environment.
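The core per-pixel operation is the conversion from RGB to normalized rg-chromaticity, which discards intensity and keeps only color. A minimal sketch in Python/NumPy (the function name is my own; the MATLAB listing below does the same computation):

```python
import numpy as np

def rg_chromaticity(image):
    """Convert an RGB image (H x W x 3) to normalized r and g
    chromaticity planes: r = R/(R+G+B), g = G/(R+G+B)."""
    img = image.astype(np.float64)
    # a small offset avoids division by zero on pure-black pixels,
    # mirroring the 0.00001 offset used in the listing below
    intensity = img[..., 0] + img[..., 1] + img[..., 2] + 1e-5
    return img[..., 0] / intensity, img[..., 1] / intensity
```

Because r + g + b = 1, only two coordinates are needed; a pure-red pixel maps to (r, g) close to (1, 0), and changes in brightness leave (r, g) nearly unchanged, which is what makes the representation robust to varying lighting.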
This method has been used in face-tracking applications. A person's face, regardless of race, can be tracked even under varying lighting, provided that the skin locus of the face has been properly modeled in RG-colorspace [1].
In the example below, we try to highlight the green leaves in the picture. We select a small patch containing the colored pixels that we would like to highlight in the picture.
A test image and a selected color patch to be highlighted.
Shown below is the mapping of the pixels in the selected patch onto the RG-histogram. It can be seen that the pixels lie in the green area of the RG-histogram. An image of the whole RG chromaticity range of values is shown beside it. The mapping of the selected color patch onto the RG-histogram was achieved by histogram backprojection.
Left: The selected color patch mapped in the RG-Histogram Colorspace; Right: RG Chromaticity Range [2]
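Backprojection itself is just a table lookup: each pixel's (r, g) pair is quantized into a bin index, and the pixel is replaced by that bin's histogram value. A sketch of this lookup (my own function name; BINS = 32 matches the listing, but the 0-based indexing differs from MATLAB's 1-based bins):

```python
import numpy as np

BINS = 32

def backproject(r, g, rg_histo):
    """Replace each pixel with the histogram value of its (r, g) bin.
    r, g are chromaticity planes in [0, 1]; rg_histo is BINS x BINS."""
    # quantize chromaticities into histogram bin indices 0..BINS-1
    ri = np.round(r * (BINS - 1)).astype(int)
    gi = np.round(g * (BINS - 1)).astype(int)
    return rg_histo[ri, gi]
```

With a binary histogram, the output is itself a binary mask: 1 where the pixel's color falls inside the modeled locus, 0 elsewhere.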
To create the RG-histogram for the subject, a function was written that accepts color patches from the subject and outputs an RG-histogram based on the input patches. In the resulting RG-histogram, only the bins occupied by pixels from the color patches will be used in the backprojection. The resulting image is shown below:
The resulting image with the green pixels highlighted.
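The locus-building step accumulates the rg-histograms of all supplied patches and then thresholds the result, so that any bin seen in the patches counts as "object color". A compact sketch of this accumulation (my own function name; the colorlocus function in the listing does the equivalent with explicit loops and 1-based bins):

```python
import numpy as np

BINS = 32

def build_locus(patches):
    """Accumulate a BINS x BINS rg-histogram over a list of RGB patches,
    then threshold it so any bin seen in the patches becomes 1."""
    locus = np.zeros((BINS, BINS))
    for patch in patches:
        img = patch.astype(np.float64)
        s = img.sum(axis=2) + 1e-5          # avoid division by zero
        ri = np.round(img[..., 0] / s * (BINS - 1)).astype(int)
        gi = np.round(img[..., 1] / s * (BINS - 1)).astype(int)
        # unbuffered accumulation: repeated (ri, gi) pairs all count
        np.add.at(locus, (ri.ravel(), gi.ravel()), 1)
    return (locus > 0).astype(np.uint8)     # binary locus, as in colorlocus()
```

Thresholding to a binary locus makes the backprojection a pure membership test, which is all the tracker needs; keeping the raw counts instead would give a soft (probability-like) backprojection.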
This method can be extended to track subjects in a video sequence. The whole process is summarized by the image below.
An image showing the block diagram of the algorithm. [1]
The program starts by selecting a patch in the image containing the color pixels that we want to track. In the paper of Soriano et al. [1], the method was used to build a robust face-tracking program, but their algorithm can be applied to other objects provided that the object also has a distinct color compared to its background. Each frame of the video is backprojected onto the RG color histogram of the ROI. The image below shows the RG-histogram for the intended subject and the color patches from which it was derived.
The color patches used and their corresponding RG-histogram
It is expected that after this step, the object we want to segment will be highlighted, together with any other pixels in the image having the same color. Binarization and morphological operations can then be applied to create blobs out of these pixels for easier manipulation. Small blobs are eliminated, and the remaining large blobs represent our subject. The first video below is a sample input to the program. The second video is the result after the RG-histogram backprojection was applied to each frame.
Sample input video.
Resulting video after histogram backprojecting each video frame.
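The small-blob elimination corresponds to MATLAB's bwareaopen: keep only connected components above a minimum area. A self-contained stand-in using a flood fill with 4-connectivity (my own implementation and names; the listing instead calls the toolbox function):

```python
import numpy as np
from collections import deque

def remove_small_blobs(mask, min_area):
    """Keep only connected components (4-connectivity) with at least
    min_area pixels -- a pure-Python stand-in for MATLAB's bwareaopen."""
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # flood-fill one blob, collecting its pixels
                blob, q = [], deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                # keep the blob only if it is large enough
                if len(blob) >= min_area:
                    for y, x in blob:
                        out[y, x] = True
    return out
```

The area threshold (700 pixels in the listing) is the knob that separates the glove-sized blob from stray same-colored pixels scattered around the frame.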
The blobs in the second video above are the basis for the yellow boxes tracking the glove. As an additional optimization, the program ignores most of each succeeding frame by focusing only on an area in close proximity to the current blob. This eliminates other candidates that have the same color as the object being tracked but are located farther from the subject. Another advantage of this optimization is that it speeds up the execution of the program.
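This proximity window is just the previous frame's bounding box grown by a fixed margin and clamped to the image, which is what the computeboundbox helper in the listing does. A sketch of the same idea (my own function name, 0-based coordinates rather than MATLAB's 1-based):

```python
def expand_box(box, margin, height, width):
    """Expand a bounding box (x, y, w, h) by `margin` pixels on each
    side, clamped to the image bounds -- the search window used to
    restrict backprojection to the neighborhood of the current blob."""
    x, y, w, h = box
    nx = max(x - margin, 0)
    ny = max(y - margin, 0)
    nw = min(w + 2 * margin, width - nx)
    nh = min(h + 2 * margin, height - ny)
    return nx, ny, nw, nh
```

The margin (15 pixels per side in the listing) bounds how far the object may move between frames; too small a margin loses a fast-moving subject, while too large a margin readmits distant same-colored distractors.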
A sample output of the program is shown below. It can be seen from the video that the glove was successfully tracked even though, in some frames, the face and the glove share similar shades of color.
Sample output from the program.
Code Listing:
function act1(inputfile, outputname, locus)
start = cputime;

% Create the assumed histogram from the supplied color locus
BINS = 32;
rg_histo = locus';

%% Open the video file and backproject the rg-histogram onto each frame
video_input = mmreader(inputfile);   % use VideoReader on newer MATLAB releases
new_video(video_input.NumberOfFrames) = struct('cdata', [], 'colormap', []);

% Process the first frame over the full image area
imageinput = read(video_input, 1);
imageinput = imresize(imageinput, 0.5);
origimage = imageinput;
[m, n] = size(imageinput(:,:,1));

% Normalized chromaticity coordinates; the small offset avoids division by zero
bigR = double(imageinput(:,:,1)) + 0.00001;
bigG = double(imageinput(:,:,2)) + 0.00001;
bigB = double(imageinput(:,:,3)) + 0.00001;
bigI = bigR + bigG + bigB;
bigr = bigR ./ bigI;
bigg = bigG ./ bigI;
bigrint = round(bigr*(BINS-1) + 1);
biggint = round(bigg*(BINS-1) + 1);

% Backproject: each pixel takes the histogram value of its (r, g) bin
newimage = zeros(m, n);
for k = 1:m
    for j = 1:n
        newimage(k,j) = rg_histo(bigrint(k,j), biggint(k,j));
    end
end

% Clean up the binary image and draw the tracking box
BW = bwareaopen(logical(newimage), 700, 8);   % remove blobs below 700 px, 8-connectivity
BW = imfill(BW, 'holes');
[newbox, newboundary] = computeboundbox(BW);
imshow(origimage);
rectangle('Position', newbox, 'EdgeColor', 'yellow');
new_video(1) = getframe;
% Process the remaining frames, searching only near the previous box
for i = 2:video_input.NumberOfFrames
    imageinput = read(video_input, i);
    imageinput = imresize(imageinput, 0.5);
    [m, n] = size(imageinput(:,:,1));
    origimage = imageinput;

    % Zero out everything outside the expanded bounding box from the
    % previous frame so only pixels near the current blob are considered
    a = newboundary(1);
    b = newboundary(2);
    c = newboundary(3);
    d = newboundary(4);
    imageinput(:, 1:a, :) = 0;
    imageinput(1:b, :, :) = 0;
    imageinput(:, a+c:end, :) = 0;
    imageinput(b+d:end, :, :) = 0;

    bigR = double(imageinput(:,:,1)) + 0.00001;
    bigG = double(imageinput(:,:,2)) + 0.00001;
    bigB = double(imageinput(:,:,3)) + 0.00001;
    bigI = bigR + bigG + bigB;
    bigr = bigR ./ bigI;
    bigg = bigG ./ bigI;
    bigrint = round(bigr*(BINS-1) + 1);
    biggint = round(bigg*(BINS-1) + 1);

    % Backproject only within the search window
    newimage = zeros(m, n);
    for k = b:b+d
        for j = a:a+c
            newimage(k,j) = rg_histo(bigrint(k,j), biggint(k,j));
        end
    end

    BW = bwareaopen(logical(newimage), 700, 8);
    BW = imfill(BW, 'holes');
    [newbox, newboundary] = computeboundbox(BW);
    imshow(origimage);
    rectangle('Position', newbox, 'EdgeColor', 'yellow');
    new_video(i) = getframe;
end

movie2avi(new_video, outputname);
endtime = cputime - start   % display elapsed CPU time
%%
%%
function [statsboundbox, newbox] = computeboundbox(image)
% Return the bounding box of the detected blob and a version expanded by
% a margin, clamped to the image bounds (the search window for the next frame)
[m, n] = size(image);
BW = bwareaopen(image, 500, 4);
BW = imfill(BW, 'holes');
statsboundbox1 = regionprops(double(BW), 'BoundingBox');
statsboundbox = statsboundbox1.BoundingBox;

newbox = zeros(4, 1);
pixelincrease = 15;   % margin added on each side of the box
if (statsboundbox(1) - pixelincrease) >= 1
    newbox(1) = round(statsboundbox(1) - pixelincrease);
else
    newbox(1) = 1;
end
if (statsboundbox(2) - pixelincrease) >= 1
    newbox(2) = round(statsboundbox(2) - pixelincrease);
else
    newbox(2) = 1;
end
if (statsboundbox(3) + pixelincrease*2) <= n
    newbox(3) = statsboundbox(3) + pixelincrease*2;
else
    newbox(3) = n;
end
if (statsboundbox(4) + pixelincrease*2) <= m
    newbox(4) = statsboundbox(4) + pixelincrease*2;
else
    newbox(4) = m;
end
newbox = newbox';
function [locusmatrix] = colorlocus
% Build a binary rg-histogram (color locus) from user-supplied color patches
BINS = 32;
locusmatrix = zeros(BINS, BINS);
colorpatch = input('Enter filename of colorpatch, press ''N'' to end. ', 's');
while ~strcmp(colorpatch, 'N')
    % Nonparametric segmentation: accumulate the patch's rg-histogram
    image12 = imread(colorpatch);
    imageR = double(image12(:,:,1)) + 0.00001;
    imageG = double(image12(:,:,2)) + 0.00001;
    imageB = double(image12(:,:,3)) + 0.00001;
    imageI = imageR + imageG + imageB;
    r = imageR ./ imageI;
    g = imageG ./ imageI;
    rint = round(r*(BINS-1) + 1);
    gint = round(g*(BINS-1) + 1);
    colors = gint(:) + (rint(:)-1)*BINS;
    hist = zeros(BINS, BINS);
    for row = 1:BINS
        for col = 1:(BINS-row+1)   % r + g <= 1, so only the lower triangle is populated
            hist(row,col) = length(find(colors == (col + (row-1)*BINS)));
        end
    end
    locusmatrix = locusmatrix + hist';
    colorpatch = input('Enter filename of colorpatch, press ''N'' to end. ', 's');
end

% Plot the accumulated locus over the rg-triangle (the r + g = 1 line)
figure, contour(locusmatrix > 0)
hold on
x = 0:BINS+1;
y = BINS+1:-1:0;
line(x, y);
title('RG Histogram of Color of Interest');
xlabel('R axis');
ylabel('G axis');

% Threshold: any bin seen in the patches becomes part of the locus
locusmatrix = locusmatrix > 0;
I give myself a grade of 10 in this activity for successfully implementing the algorithm together with recommended optimizations.
References:
[1] "Adaptive skin color modeling using the skin locus for selecting training pixels", Soriano, et al, Pattern Recognition 36 (2003) 681–690
[2] "Activity 12: Color Image Segmentation", Orly Tarun, http://otarun.blogspot.com/2009/08/activity-12-color-image-segementation.html