CS7380 F10: HW2

DUE: 11:45am, Friday October 29

Worth: 5% of your final grade

The course policies page outlines academic honesty as applied to this course, including policies for working with other students on assignments. It also gives the rules for on-time homework submission.

Goals

Assignment

There are three problems. Grading and turn-in details are given below.

Note: you will likely want to use the same installation of OpenCV, and the same camera, for the next assignment (HW3).

  1. In this problem you will install and test OpenCV. OpenCV is a popular free open-source library for a variety of computer vision tasks. It has a focus on providing optimized implementations of real-time algorithms (i.e. algorithms intended to run on every frame of a live video stream coming in at 10 or more frames per second), but it also includes some off-line algorithms. It is under very active development under the direction of its founder Gary Bradski and others, many of whom are now employed by Willow Garage.

    OpenCV is developed primarily in C/C++, but bindings also exist that allow you to use it from a variety of other programming environments, including Java, Python, and others (more below). OpenCV is evolving quickly; at the time of this writing the most recent stable release is version 2.1.0. In principle it is relatively portable across processor architectures and operating systems. In practice, it contains many optimized algorithms that are only enabled on Intel processors. It works well on most modern GNU/Linux distributions and versions of Microsoft Windows, and nearly as well on Macintosh OS X.

    Based on student input, for this assignment we will focus on supporting OpenCV on modern Intel-based Windows, GNU/Linux, and OS X systems, with application programming in C, C++, or Java.

    1. Familiarize yourself with the OpenCV documentation, including the official wiki and the C API documentation. (Note: there is also documentation for the C++ and Python APIs, which are included with OpenCV. Those mirror the C API, which is the most mature and which corresponds to the code listings in the book Learning OpenCV. All the example code we provide below is based on the C (not C++) API; even the C++ example code calls C API functions. The Java example uses the third-party JavaCV binding library.)
    2. Decide on (i) one or more computers and (ii) one or more cameras you will use for the rest of this homework. A large number of common USB webcams work with OpenCV; however, actual compatibility depends on the combination of camera and host platform you intend to use. If in doubt, or if you simply do not have any camera available, talk with the course staff. We will be able to lend out a limited number of cameras for this assignment. Also, if you work with the course staff and we are unable to get a camera working for you, you may complete the assignment using the provided video files only.
    3. Install OpenCV 2.1.0 on the computer(s) you identified in step 1.2. See below for details.
    4. Test your OpenCV installation, and its compatibility with your camera, by compiling (if necessary) and running the C++ cvdemo program we provide below. If it doesn’t work, (i) contact the course staff and (ii) go back to step 1.2.
    5. Decide on the programming environment you want to use to complete problems 2 and 3 of the homework assignment. There are a variety of possible choices, but based on student feedback we will mainly focus on supporting C, C++, and Java. Install any necessary language binding libraries (more info below), and test everything either by running one of the language-specific demo programs we provide below or, if necessary, by developing your own. You must be able to capture frames at a rate of at least 10 frames per second and display them on screen (in cvdemo, hit g with input focus on the video window to get debug output for that frame, including FPS); or, if you worked with the course staff and were not able to get a camera running, you must be able to capture and display frames from the provided video files. If you develop your own test program for this step, please include it with your hand-in.
  2. In this problem you will calibrate your camera. Print out this chessboard calibration target on a standard 8.5 by 11 inch sheet of paper. This chessboard has 10 corners horizontally and 7 vertically (these counts do not include the outermost corners, nor should they). The pitch (side length of each square) is approximately 1 inch (*).

    (*) The actual pitch will depend on whether the method you use to physically print it involves any scaling, as is common. In fact the precise value of the pitch does not matter too much. Many routines simply define the pitch as “1 physical unit”, and all the resulting data is in those units. For example, this assumption is made by the calibration program unless you use the “-s” flag. For our chessboard, this assumption is just fine and will give data in physical units of approximately one inch.

    1. Acquire 10 clear images of the chessboard from a variety of viewpoints (e.g. using cvdemo or the equivalent in your programming environment). In each image the entire chessboard must be visible, and should take up at least about 1/3 of the frame. Be sure the chessboard remains planar, either by placing it flat on a table or by pasting it to a flat board (the course staff may be able to provide some foamcore boards if necessary). (Note: On OS X only, there seems to be a bug in the calibration program that prevents it from reading input images from a list. If this affects you, then just use the live capture mode of the calibration program, and skip this step of image acquisition.)

    2. Use the calibration program (which comes as a sample with OpenCV; a copy is provided below) to process those images and generate a calibration file in YAML format (similar in concept to XML, but with fewer < and >). This calibration file is readable both by human and machine. The important values are the coordinates (cx, cy) of the principal point (the actual intersection of the optical axis of the lens system with the pixel array), the horizontal and vertical focal lengths fx and fy, and the first five distortion parameters k1, k2, p1, p2, and k3.

      fx, fy, cx, and cy are the camera intrinsics and are listed in the YAML file as the data portion of the camera_matrix. This is actually presented in YAML as the 9 data values of the matrix

          [ fx   0  cx ]
          [  0  fy  cy ]
          [  0   0   1 ]

      reading left to right across the top row, then across the middle row, and finally across the bottom row. (Note: many common cameras have square pixels, so it should not be surprising for fx and fy to be approximately equal. If one is within about 10% of the other, it is safe to assume they are actually the same for your camera, and you may get better results by re-running calibration with the flag “-a 1”.)

      k1, k2, p1, p2, and k3 are the distortion model coefficients (k1, k2, and k3 are radial, p1 and p2 are tangential) and are listed as the data portion of the distortion_coefficients in the YAML file.

      In the README for your hand-in, state (i) the make and model of the camera you’re using, (ii) its horizontal and vertical resolution in pixels, and (iii) the values you got for fx, fy, cx, cy, and the first four distortion parameters k1, k2, p1, and p2. Please give each value in scientific notation with four digits of precision after the decimal point.

    3. Now we’ll do some experiments to check whether the calibration data we got is believable. The four (or three, if you fixed fx = fy with the “-a 1” flag) camera intrinsics are all measured in pixels. Given the horizontal and vertical dimensions of your camera images, what value of the principal point (cx, cy) would you expect for an ideally constructed camera? By what percent do your measured values for these two quantities differ from the expected values? Use the formula percent error = 100 * |measured - expected| / expected. Put the answer in your README.

      To check the focal length(s) we will calculate the horizontal and vertical fields of view (FoV) of your camera. These are defined as the angles subtended at the center of projection (focal point) by the horizontal and vertical extents of the pixel grid. Given that we have the distance from the pixel grid to the center of projection measured in pixels (fx resp. fy), calculate the horizontal and vertical FoV of your camera in degrees as a function of fx, fy, and the horizontal and vertical resolution of the camera (for these calculations assume that cx and cy have the values you would expect if the camera were ideally constructed). Reasonable horizontal FoV values for most common cameras come out to a few tens of degrees; vertical FoV is typically somewhat smaller because there are typically fewer vertical pixels than horizontal, and pixels are typically square. (A small C sketch of this computation is included with the example code notes below.)

      Now we will make an independent physical measurement of the FoVs. You will need (a) either a measuring tape at least about 3m (10ft) long, or a piece of string and a ruler, and (b) some way to place marks on a wall, such as removable tape (or chalk on a blackboard). If you cannot get the needed materials, talk to the course staff and we’ll help you do the experiment. Place two marks on the wall separated by a measured horizontal distance s of about 2m (6ft). Using cvdemo, hold your camera so that it can see both marks and then back away from the wall until the marks are at the extreme left and right edges of the image. Measure the perpendicular distance dh from the camera to the wall. Then turn the camera 90 degrees and repeat the experiment, using the same mark separation s, but now record the distance dv from the camera to the wall. Use s, dh, and dv to calculate the actual horizontal and vertical FoV of the camera. Call these the “expected” values, and calculate the percent error relative to the values “observed” from the camera calibration above.

      Put all your FoV calculations and the numeric results in your README.

      The distortion parameters are a bit harder to understand intuitively. First, for most common webcams they should be fairly small in magnitude, i.e., the absolute value of each will typically be less than one, and often less than 0.1. Second, we can qualitatively check that applying them to undistort the actual camera images makes straight lines look straighter. If your camera did not have much distortion to start with, the effect may not be very impressive, but it is useful at least to verify that the result is no worse than what you started with. Use the cvundistort program provided below to test undistortion. Aim the camera at a known-straight object in the scene. You may toggle undistortion on/off by hitting u. With undistortion turned on, pause the input by hitting the spacebar. Then save both the raw captured image and the undistorted image by hitting c and p, respectively. Include these images with your hand-in.

  3. In this problem you will create your first OpenCV program.

    1. Using either the provided cvdemo example code (unmodified) or your own code, get a baseline program running that connects to a camera and displays the live video on-screen. Aim for a speed of at least 10 frames per second. (In cvdemo, hit g with input focus on the video window to get debug output for that frame, including FPS.)

    2. Modify the program to draw over the live video. For this assignment, the drawings do not have to relate to the content of the video at all; it just has to look like you are drawing on a “glass sheet” that happens to have a video playing underneath it. In particular, you must at least use the OpenCV API functions cvLine() and cvEllipse(), and you must draw at least one object each in red, green, and blue. (A rough sketch of such a program, using the C API, is given just below.)
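
The following is a minimal sketch of the kind of program problem 3 asks for, written against the C API. It is not the provided cvdemo/CvBase code; the camera index 0, the window name, and the drawing coordinates are arbitrary placeholders that you should adjust for your own setup.

    /* sketch only, not the provided cvdemo: capture from a camera, draw a few
       shapes over each frame, and display the result */
    #include <cv.h>
    #include <highgui.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
      CvCapture *cap = cvCaptureFromCAM(0);   /* 0 = first available camera */
      if (!cap) { fprintf(stderr, "no camera\n"); return 1; }
      cvNamedWindow("demo", CV_WINDOW_AUTOSIZE);
      for (;;) {
        IplImage *frame = cvQueryFrame(cap);  /* owned by the capture; do not release */
        if (!frame) break;
        /* "glass sheet" drawings; the coordinates are arbitrary */
        cvLine(frame, cvPoint(10, 10), cvPoint(200, 150), CV_RGB(255, 0, 0), 2, 8, 0);
        cvLine(frame, cvPoint(10, 150), cvPoint(200, 10), CV_RGB(0, 255, 0), 2, 8, 0);
        cvEllipse(frame, cvPoint(160, 120), cvSize(60, 40), 0.0, 0.0, 360.0,
                  CV_RGB(0, 0, 255), 2, 8, 0);
        cvShowImage("demo", frame);
        if (cvWaitKey(10) >= 0) break;        /* any key quits */
      }
      cvReleaseCapture(&cap);
      cvDestroyWindow("demo");
      return 0;
    }

Compile it the same way as the provided C examples (see Building the Examples below).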

Installing OpenCV

The official instructions are generally to compile OpenCV from source, with the exception that precompiled binaries are provided for users of Microsoft Visual Studio (aka MSVC) on Windows.

For this assignment, please try to install the latest release version of OpenCV, version 2.1.0. The official download files for Windows are here and for Unix (including OS X) are here.

Binary Packages

Again, with the exception of MSVC users, the official downloads are source releases, and you will need to compile them.

It is often much faster, and sometimes less error prone, to install pre-compiled binaries than to rebuild a project from its source code. Your Mileage May Vary, but we have found the following unofficial binary downloads of OpenCV 2.1.0 to be at least somewhat useful.

Windows Binaries

For 32-bit Windows users (this should include the 32-bit compatibility layer of 64-bit Windows 7, but we have not tested that), we have found two viable binary downloads. The first is the official MSVC binary build (accept the defaults, but tell it to add OpenCV to the PATH for all users). The second is this binary package (*) from the pyopencv project, which was built with the MinGW port of GNU C/C++ (gcc) (MinGW GUI installer here).

(*) Note: you will need a 7-Zip decompressor to unpack this file.

If you plan to code in C/C++ on Windows, then you can try one of those downloads. Of course, you will also want to install the corresponding compiler; see below for details.

If you plan to code in Java on Windows, then in theory you do not need the C/C++ compiler, but in practice you will need the run-time .dlls used by that compiler. In particular, you will need these .dlls just to run the pre-compiled executables of the example code, of which we provide two versions, one built with MSVC and one built with MinGW. For the former, install the appropriate MSVC 10 runtime for x86, x64, or ia64 (the last being unlikely unless you are running on some kind of server-class machine). For the latter (MinGW), the easiest solution we know of is to just install the MinGW system (follow the instructions below through step 4).

If you plan to code in C/C++ on Windows, the two main options are either MSVC or MinGW. In general MSVC is not free, though CCIS students at NEU can get a license. Microsoft does offer a “free evaluation” version of Visual Studio 2010 (aka Visual Studio 10) called Visual C++ 2010 Express.

If you want to use MinGW, follow these steps:

1. unpack opencv210_mingw.7z (link above from pyopencv project) into c:\opencv210_mingw
2. add c:\opencv210_mingw\bin to the PATH environment variable
   e.g. on XP "control panel->system->advanced->environment variables->PATH->edit"
3. install current MinGW (GCC 4.5.0) using GUI tool at link above,
   use defaults but add C++ and MSYS Basic System
4. manually add C:\MinGW\bin to the PATH environment variable (see above)
5. from within the MinGW shell, you can build your own C++ program like this
   g++ -o foo foo.cpp other.cpp -I/c/opencv210_mingw/include/opencv \
   -L/c/opencv210_mingw/lib -lml210 -lcvaux210 -lhighgui210 -lcv210 -lcxcore210 
6. or, if you are coding in C only, substitute gcc for g++ in the command line
   (and if you want to save yourself a lot of typing and use make, see below)

Known bugs: video writing (as we do in the demo program cvrecord) seems very buggy or totally broken with either MinGW or MSVC. Reading video from file seems broken with MinGW but works with MSVC.

Linux Binaries

There are many Linux distributions, many of which would potentially require different binary packages. We mainly use Ubuntu 10.04. Binary packages of OpenCV 2.1.0 are available here, but we do not recommend them, as they were not compiled with libv4l, and seem to have serious issues reading from many common webcams. Please see the instructions to compile from source below.

OS X Binaries

We are not aware of any precompiled binaries of OpenCV 2.1.0 for OS X. Please see the instructions to compile from source below.

Windows Source

If you were unable to get OpenCV working with the binary downloads above, your next option is to build it from source. You will need cmake and MSVC (we used MSVC++ 2010 Express).

1. unzip OpenCV-2.1.0-win.zip, it will create the directory OpenCV-2.1.0
2. create another directory OpenCV-build
3. run cmake-gui (e.g. from the Start menu)
4. Browse Source... to OpenCV-2.1.0
5. Browse Build... to OpenCV-build
6. Configure
7. select Visual Studio 10, accept other defaults
8. wait until you get a window full of red highlight
9. accept all default build options but enable BUILD_EXAMPLES
10. Configure
11. Generate
12. close cmake-gui, open MSVC
13. open project OpenCV-build/ALL_BUILD.vcxproj
14. wait for MSVC to parse every file related to the project
15. right click on ALL_BUILD, select "Build"

Note: we spent some time trying to get OpenCV 2.1.0 itself to recompile with MinGW, and so far have not been successful (various build errors, compiler errors, and runtime crashes). It does work to just use the precompiled binaries from the pyopencv project, as described above.

Linux Source

For Linux, the official instructions for building OpenCV from source are here. However, for Ubuntu 10.04, we find the instructions here much better, with the caveat that you should

sudo apt-get install libv4l-dev

first.

OS X Source

For OS X, the official instructions for building OpenCV from source are here. You will need Apple’s free development environment Xcode. One easy option that worked for us is to use the MacPorts system:

1. install xcode
2. install macports
3. sudo port selfupdate
4. sudo port install opencv
5. sudo port install pkgconfig

Known bugs: video writing (as we do in the demo program cvrecord) seems very buggy, as does setting capture resolution with some cameras. C++ example programs often segfault on close. The Java example program does not accept keypresses from the GUI.

Language Bindings

OpenCV can be used from a variety of programming languages in addition to C/C++. It ships with bindings for python and octave (a free open-source system that is similar to Matlab). It is also possible to call it from Matlab, see here for some details. Several different bindings to Java are available as separate projects; the most complete seems to be JavaCV.

Based on student response, we are focusing on support only for C, C++, and Java for this assignment. If you will be using C or C++, there is nothing you need to do other than to install the OpenCV libraries and header files (which you did above) and to get your compiler and linker working with them.

If you are using Java,

  1. Read the documentation for JavaCV.

  2. Download Java Native Access 3.2.5 jna.jar.

  3. Download JavaCV 20100730 and unzip it (readme, wiki, source).

  4. Compile your code with the javacv.jar and jna.jar in the CLASSPATH. The makefile provided with the Java examples can help with this:

    make LIB_DIR=/whatever CvDemo.class
    
  5. If you experience crashes and are on a 32 bit (not 64 bit) architecture, it may help to install a build of OpenCV 2.1.0 with SSE optimizations turned off (see note in the JavaCV documentation). On Windows the MinGW precompiled binaries already have SSE off.

Example Code

We have prepared and tested a variety of sample programs. Some we have written from scratch, others are provided with OpenCV. These should be useful to you in three ways:

  1. You can study how we wrote them to help you understand how to structure your own code.

  2. You can use code from them directly in your own projects. In particular, if you are coding in C++ or Java then the corresponding implementation of CvBase should be useful as a base class for the program you need to develop for problem 3 (and also for the next assignment, HW3). The C module cvbase should be similarly useful if you are coding in C. You must note in your README what, if any, of the provided code you used as part of your own code.

  3. For problems 1 and 2 you will need to capture images, calibrate your camera, and perform undistortion. The cvdemo and calibration programs (or some equivalent) will be needed for those tasks.

Building the Examples

First, pre-built binaries of the example code are provided for some platforms. Note that these are generally dynamically linked against the OpenCV libraries (.dlls on Windows), so you must have correctly installed OpenCV for them to work. Links to the corresponding source code are given below.

However, it’s instructive to know how to recompile the examples from the provided source code. You could also use the same approach to compile your own code, of course.

General instructions for compiling the C++ examples are given at the top of cvdemo.cpp, and similar instructions for the C examples are in cvdemo.c. Some small variation should work for most command line compilers (including MSVC’s cl, but see below).

You may also find the makefiles (C++, C, java) useful. Just type e.g.

make cvdemo

and it should do the right thing on Linux and OS X (provided you followed the instructions above).

On Windows you can

  1. install MinGW as described above (even if you are going to use MSVC to build your code, you need it just to get a standard make)
  2. install pkg-config by first installing the gtk+ runtime and then copying bin/pkg-config.exe from here to c:\MinGW\bin
  3. create the directory c:\MinGW\lib\pkgconfig and either copy opencv-mingw.pc or opencv-msvc.pc to c:\MinGW\lib\pkgconfig\opencv.pc
  4. if using MinGW use the command line

    make cvdemo
    

    in the MinGW Shell; or if using MSVC first add the line

    call "c:\Program Files\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
    

    at the top of the file c:\MinGW\msys\1.0\msys.bat, then (re-)start the MinGW Shell, and use the command line

    make CXX=cl CC=cl cvdemo
    

C++

Except for calibration.cpp, these are all derived from a common base class called CvBase which is provided in cvbase.hpp and cvbase.cpp.

cvdemo.cpp is a very simple executable demonstration program that shows the basic functionality of CvBase. You can use it to view frames from a camera or video file and to save snapshots to .png image files. Run it with the command line argument “-h” to get help on the other possible command line arguments. Hit h with input focus over the video window to get help on the available key commands in the GUI.

cvrecord.cpp is another demonstration program built on CvBase. It has much the same functionality as cvdemo, but also saves all the captured frames to an output video file.

cvundistort.cpp is yet another demonstration program built on CvBase. It has much the same functionality as cvdemo, but also allows the captured frames to be undistorted if a camera calibration file is available in the YAML format output from calibration. Note that at least two command line parameters are always required to run cvundistort: the first gives the OpenCV index of the camera to use (-1 to use the first available camera), and the second gives the name of (or path to) the calibration file.
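
The core of what such a program does can be sketched in a few C API calls. This is not the cvundistort source; the calibration file name is a placeholder (pass whichever YAML file calibration wrote for you), and only the node names camera_matrix and distortion_coefficients are assumed to match the calibration output described in problem 2.

    /* sketch only, not the provided cvundistort: read a calibration YAML file
       and undistort live frames using cvInitUndistortMap + cvRemap */
    #include <cv.h>
    #include <highgui.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
      const char *calib = (argc > 1) ? argv[1] : "calibration.yml"; /* placeholder name */
      CvFileStorage *fs;
      CvMat *K, *D;
      CvCapture *cap;
      IplImage *frame, *undist, *mapx, *mapy;

      fs = cvOpenFileStorage(calib, NULL, CV_STORAGE_READ);
      if (!fs) { fprintf(stderr, "cannot open %s\n", calib); return 1; }
      K = (CvMat *) cvReadByName(fs, NULL, "camera_matrix", NULL);
      D = (CvMat *) cvReadByName(fs, NULL, "distortion_coefficients", NULL);
      if (!K || !D) { fprintf(stderr, "bad calibration file\n"); return 1; }

      cap = cvCaptureFromCAM(0);
      if (!cap || !(frame = cvQueryFrame(cap))) { fprintf(stderr, "no camera\n"); return 1; }

      /* per-pixel undistortion maps, computed once for this camera */
      mapx = cvCreateImage(cvGetSize(frame), IPL_DEPTH_32F, 1);
      mapy = cvCreateImage(cvGetSize(frame), IPL_DEPTH_32F, 1);
      cvInitUndistortMap(K, D, mapx, mapy);
      undist = cvCloneImage(frame);

      cvNamedWindow("undistorted", CV_WINDOW_AUTOSIZE);
      while ((frame = cvQueryFrame(cap)) != NULL) {
        cvRemap(frame, undist, mapx, mapy,
                CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));
        cvShowImage("undistorted", undist);
        if (cvWaitKey(10) >= 0) break;
      }
      cvReleaseFileStorage(&fs);
      cvReleaseCapture(&cap);
      return 0;
    }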

calibration.cpp is the camera calibration program that comes with OpenCV 2.1.0. Run it with no command line arguments to get help.
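
For problem 2 you will need to pull the individual numbers back out of the YAML file that calibration writes. A minimal sketch of how that might look with the C API follows; the default file name and the 640x480 resolution are placeholders that you should replace with your own calibration file and your camera’s actual resolution. It also computes the fields of view discussed in problem 2.

    /* sketch: print fx, fy, cx, cy and the distortion coefficients from a
       calibration YAML file, then compute horizontal and vertical FoV.
       The file name and the 640x480 resolution are placeholders. */
    #include <cv.h>
    #include <math.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
      const char *calib = (argc > 1) ? argv[1] : "calibration.yml";
      int width = 640, height = 480;          /* replace with your resolution */
      CvFileStorage *fs;
      CvMat *K, *D;
      double fx, fy, cx, cy, hfov, vfov;
      int i, n;

      fs = cvOpenFileStorage(calib, NULL, CV_STORAGE_READ);
      if (!fs) { fprintf(stderr, "cannot open %s\n", calib); return 1; }
      K = (CvMat *) cvReadByName(fs, NULL, "camera_matrix", NULL);
      D = (CvMat *) cvReadByName(fs, NULL, "distortion_coefficients", NULL);
      if (!K || !D) { fprintf(stderr, "bad calibration file\n"); return 1; }

      fx = cvmGet(K, 0, 0); cx = cvmGet(K, 0, 2);
      fy = cvmGet(K, 1, 1); cy = cvmGet(K, 1, 2);
      printf("fx=%.4e fy=%.4e cx=%.4e cy=%.4e\n", fx, fy, cx, cy);

      /* the coefficients may be stored as a row or a column vector */
      n = (D->rows > D->cols) ? D->rows : D->cols;
      for (i = 0; i < n; i++)
        printf("d%d=%.4e\n", i, (D->rows > D->cols) ? cvmGet(D, i, 0)
                                                    : cvmGet(D, 0, i));

      /* FoV: angle subtended by the pixel grid at the center of projection */
      hfov = 2.0 * atan(0.5 * width  / fx) * 180.0 / CV_PI;
      vfov = 2.0 * atan(0.5 * height / fy) * 180.0 / CV_PI;
      printf("horizontal FoV %.1f deg, vertical FoV %.1f deg\n", hfov, vfov);

      cvReleaseFileStorage(&fs);
      return 0;
    }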

C

cvbase is a C module that is a translation of the above CvBase C++ base class. It is provided in cvbase.h and cvbase.c. The C version of cvdemo gives an example of its use.

cvdemo.c is a version of the C++ cvdemo written in C, using the cvbase module.

lkdemo.c is the pyramid Lucas-Kanade demonstration program that ships as a sample with OpenCV 2.1.0. Up to one command line parameter may be specified, which is either the path to a video file to use as input or the OpenCV camera index. The default is to use camera 0. The available GUI key and mouse actions are printed to the console after the program starts.

Java

CvBase.java is a version of the C++ CvBase written in Java.

CvDemo.java is a version of the C++ cvdemo written in Java.

Example Data

If you cannot get a camera to reliably capture images into OpenCV, even after working with the course staff, you may use this example data as a substitute for camera input. Several still images and a video sequence are provided. All data was acquired with a Logitech Webcam Pro 9000.
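
If you do use the provided files, the only change needed in code like the sketches above is to open the video file instead of a camera. A minimal sketch (the file name example.avi is a placeholder for one of the provided files):

    /* sketch: display frames from a video file instead of a camera */
    #include <cv.h>
    #include <highgui.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
      const char *name = (argc > 1) ? argv[1] : "example.avi"; /* placeholder name */
      CvCapture *cap = cvCaptureFromFile(name);
      IplImage *frame;
      if (!cap) { fprintf(stderr, "cannot open %s\n", name); return 1; }
      cvNamedWindow("playback", CV_WINDOW_AUTOSIZE);
      while ((frame = cvQueryFrame(cap)) != NULL) {  /* NULL at end of file */
        cvShowImage("playback", frame);
        if (cvWaitKey(33) >= 0) break;               /* roughly 30 fps pacing */
      }
      cvReleaseCapture(&cap);
      cvDestroyWindow("playback");
      return 0;
    }

Note the known MinGW caveat above about reading video from file.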

Turn In

We will consider all three problems to be “code”, so you may work with a partner. In fact, we encourage it. Follow the instructions on the assignments page to submit your work.

Grading

Out of 100 total possible points, 30 will be allocated to problem 1, 40 to problem 2, and 30 to problem 3.