CSE 4360 / 5364 – Autonomous Robots

Homework 2: Computer Vision – Fall 2020

Due Date: Dec. 1, 2020

Problems marked with ∗ are mandatory only for students of CSE 5364 but will be graded for extra credit for students of CSE 4360.

Edge Detection, Template Matching, and Blob Coloring

Basic feature extraction and region growing techniques are important steps in the first stages of im-

age processing. In this homework assignment you are going to implement edge detection, template

matching, and blob coloring as basic techniques.

Programming will follow the same procedure as in the last assignment. You are provided with a C library containing facilities to display images, which will link with the file process_image.c. Your code for the assignment should be written in this file, in the function process_image(image, size, proc_img).

To start, you have to download the appropriate code repository for your machine. This directory contains the following files:

Imakefile This file is used to create a machine specific Makefile by typing xmkmf.

process_image.c This is the file you have to edit in order to implement your visual routines.

lib/libCamera.a This library contains the graphical interface.

pictures A directory containing a number of example images.

The Image Processing Environment

To generate the vision program you have to type make. This will create the graphical image interface

Vision. This interface should look as follows:

2020 Manfred Huber Page 1




The graphical interface consists of two separate windows containing a set of buttons and the actual

image display window. The most important features for this assignment are the Load, Save, and

Process buttons. Load will open a small window to enter the name of an image file for processing.

Save lets you write a processed image to a file. There is one peculiarity to this interface: do not hit return after entering the name of a file, as this will simply insert a newline into the filename. To load (or save) the file you have to press the ok button in the filename window.

The image files used here consist of two integers indicating the width and height of the image in pixels, followed by the array of pixel values (each pixel is an unsigned char, i.e., 0–255). A number of image files can be found in the pictures directory. If you want, you can also add your own pictures (a description of how to convert images into this format is given below).

The Process button will call the function process image (your routine) and the resulting processed

image will be displayed. The original image is overwritten and the processed image can be saved

using the Save button.

Generating your own Images (Only in Linux Version)

A number of test images are provided in the pictures subdirectory. However, you can also add your

own images. For this purpose, a conversion program that runs on Linux is provided. The program xv_cam allows you to transform a large number of image formats into the format used here (the LPR format). To do this, just load your favorite image and save it in the LPR format.

The only thing to consider is that the image interface will not use anything beyond a 512×512 image resolution (the rest of the image will be truncated when loading it into the Vision program).

In the same way, a processed image can be converted back into your favorite image format (such as GIF).


The Assignment

In this assignment you are to implement three basic vision processing techniques. To implement

these you have to write the required routines as the function process_image(). This function has the

following structure:

void process_image(image, size, proc_img)
unsigned char image[DIM][DIM];
int size[2];
unsigned char proc_img[DIM][DIM];



The parameters of this function are image, the original image; size, the width and height of this image; and proc_img, the image resulting from your processing. If this resulting image does not have the same dimensions as the original image, then you have to update size to the dimensions of the new image.
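Before implementing any of the actual routines, it can help to verify the plumbing with a trivial process_image that just copies the input. The sketch below is illustrative only: it defines DIM locally (in the real project it comes from the provided headers) and uses an ANSI-style prototype rather than the K&R style shown above.

```c
#define DIM 512  /* normally supplied by the Vision library headers */

/* Trivial process_image: copy the input unchanged into proc_img.
   size[0] is the width, size[1] the height; the first array index is
   assumed to be the row here. */
void process_image(unsigned char image[DIM][DIM], int size[2],
                   unsigned char proc_img[DIM][DIM])
{
    int r, c;

    for (r = 0; r < size[1]; r++)
        for (c = 0; c < size[0]; c++)
            proc_img[r][c] = image[r][c];
}
```

Loading an image, pressing Process with this routine in place, and seeing the unchanged image redisplayed confirms the toolchain works before any real processing is attempted.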





1. Implement Edge Detection Using Sobel Templates

Edge detection is one of the most basic feature extraction techniques. One of the most common approaches is convolution with edge templates representing different edge directions, with the Sobel templates among the most frequently used.

The 3×3 Sobel templates for vertical and horizontal edges are:

vertical:
  -1  0  1
  -2  0  2
  -1  0  1

horizontal:
  -1 -2 -1
   0  0  0
   1  2  1

For this part you are to implement edge detection using convolution with the two Sobel templates above. After computing the convolution, your code should display the resulting feature map in the result image (you will have to normalize the convolution results to fall into the range 0 to 255). Edge detection with each of the templates should be performed separately.

You should hand in a short description, the code (submitted electronically), and printouts of

the images resulting from your processing (one for each of the two edge orientation templates)

applied to the Nedderman Hall and the chess board images, nedderman.lpr and chess.lpr.
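A straightforward way to structure part 1 (shown here for the vertical template only; the horizontal case is symmetric) is to compute the raw convolution responses first, track their minimum and maximum, and then rescale linearly into 0–255. This is a sketch, not the required solution: the function name sobel_vertical is illustrative (the assignment wants the code inside process_image), DIM is defined locally for self-containment, and border pixels are simply left at zero.

```c
#define DIM 512  /* normally supplied by the Vision library headers */

/* Sketch: convolve with the vertical-edge Sobel template and rescale
   the responses into 0..255.  Border pixels are left at 0. */
void sobel_vertical(unsigned char image[DIM][DIM], int size[2],
                    unsigned char proc_img[DIM][DIM])
{
    static const int t[3][3] = { { -1, 0, 1 },
                                 { -2, 0, 2 },
                                 { -1, 0, 1 } };
    static int resp[DIM][DIM];          /* raw (signed) responses */
    int w = size[0], h = size[1];
    int r, c, i, j, v, min = 0, max = 0;

    for (r = 1; r < h - 1; r++)
        for (c = 1; c < w - 1; c++) {
            v = 0;
            for (i = -1; i <= 1; i++)
                for (j = -1; j <= 1; j++)
                    v += t[i + 1][j + 1] * image[r + i][c + j];
            resp[r][c] = v;
            if (v < min) min = v;
            if (v > max) max = v;
        }

    /* Linear rescale of the raw responses into the displayable range. */
    for (r = 0; r < h; r++)
        for (c = 0; c < w; c++)
            proc_img[r][c] = (max > min)
                ? (unsigned char)(255L * (resp[r][c] - min) / (max - min))
                : 0;
}
```

An alternative normalization would map the absolute response against the template's theoretical maximum (4 × 255); min/max rescaling is used here because it keeps the full display range regardless of image contrast.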

2. Implement Template Matching Using Normalized Convolution

Implement normalized convolution for template matching. To do this you should rename the process_image.c file from the previous part of the assignment and write a new process_image function. Your routine has to permit specifying a region of the image which will subsequently be used as the template (thus finding all instances of similar objects in the image). Using this template you are to implement template matching using normalized cross-correlation (convolution which adjusts for the image and template means as well as for the image and template variances, i.e., contrast). You can use either global or local image normalization. As in the previous part, you should write the result of the convolution back into the result image, normalizing the values to be between 0 and 255 (and of the correct data type).

You can select a portion of the image using the mouse. Pressing the left mouse button will set one corner of a rectangle and releasing the button will set the opposite corner. The image coordinates of this rectangle can be read from the variable roi, where roi.x and roi.y are the upper-left corner and roi.width and roi.height are the width and height of the selected rectangle.

Again, you should hand in a short description, the code (submitted electronically), and printouts

of the images resulting from your processing applied to the Nedderman Hall and the chess board

images, nedderman.lpr and chess.lpr.

3.∗ Implement Segmentation Using Blob Coloring

Here you have to implement blob coloring to identify regions with a common intensity in the image. To do this you should again rename the process_image.c file from the previous part of the assignment and write a new process_image function which performs blob coloring. The result of this operation should consist of a number of regions, each with its own unique color (intensity).

Again you should hand in a short description, the code (submitted electronically), and the result

of your algorithm on the image blocks2.lpr from the pictures directory.
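A simple way to implement blob coloring is a flood fill with an explicit stack (avoiding deep recursion on large regions): scan the image, and whenever an unlabeled pixel is found, grow a new region over all 4-connected pixels of the same intensity. The function name blob_color is illustrative and DIM is defined locally; note that with more than 255 regions the unsigned char labels wrap around, which a real solution might handle by merging or rescaling labels.

```c
#define DIM 512  /* normally supplied by the Vision library headers */

/* Sketch: blob coloring via flood fill.  Two pixels belong to the same
   blob if they are 4-connected and have the same intensity.  Each blob
   gets the next label value (labels wrap past 255). */
void blob_color(unsigned char image[DIM][DIM], int size[2],
                unsigned char proc_img[DIM][DIM])
{
    static int labeled[DIM][DIM];      /* visited marks */
    static int stack[DIM * DIM][2];    /* explicit fill stack */
    int w = size[0], h = size[1];
    int r, c, top, label = 0;

    for (r = 0; r < h; r++)
        for (c = 0; c < w; c++)
            labeled[r][c] = 0;

    for (r = 0; r < h; r++)
        for (c = 0; c < w; c++) {
            unsigned char v;

            if (labeled[r][c])
                continue;
            v = image[r][c];
            label++;                   /* start a new blob */
            top = 0;
            stack[top][0] = r;
            stack[top][1] = c;
            labeled[r][c] = 1;
            while (top >= 0) {
                static const int dr[4] = { -1, 1, 0, 0 };
                static const int dc[4] = { 0, 0, -1, 1 };
                int cr = stack[top][0], cc = stack[top][1];
                int k;

                top--;
                proc_img[cr][cc] = (unsigned char)(label % 256);
                for (k = 0; k < 4; k++) {
                    int nr = cr + dr[k], nc = cc + dc[k];

                    if (nr >= 0 && nr < h && nc >= 0 && nc < w &&
                        !labeled[nr][nc] && image[nr][nc] == v) {
                        labeled[nr][nc] = 1;
                        top++;
                        stack[top][0] = nr;
                        stack[top][1] = nc;
                    }
                }
            }
        }
}
```

The classic two-pass blob-coloring algorithm with an equivalence table would also satisfy the assignment; the flood-fill variant is shown because it needs no label-merging bookkeeping.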

