cvCalibrateCamera2(objectPoints, imagePoints, pointCounts, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, flags)
assumes a chessboard-style calibration object. Its parameters are:
objectPoints - coordinates of points on the calibration object, expressed in a frame attached to the object (so these are determined by the design of the chessboard pattern)
imagePoints, pointCounts - the observed pixel locations corresponding to all of the object points, and the number of points in each view; typically there are multiple views of the object in different (unknown) poses
imageSize - size of the camera image in pixels
cameraMatrix - written on output to be the reconstructed camera intrinsic matrix
distCoeffs - a vector of four or five additional camera intrinsic parameters (the distortion coefficients), also written on output; more on these below. The number of coefficients computed, 4 or 5, is determined by the size of the matrix passed in to receive them.
rvecs, tvecs - optional parameters, default NULL. If non-NULL, these provide storage to return the pose of the calibration object in the camera frame for each of the provided views. More on these below.
flags - various options to control the calibration algorithm can be specified here.
cvCalibrateCamera2() can also estimate these four or five distortion parameters given the same input data about chessboard corners as above.
The estimated coefficients are written into the distCoeffs parameter in the order k1, k2, p1, p2 and, when five are requested, k3 (the k's are radial distortion coefficients, the p's tangential).
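As a rough sketch of a calibration call (the board dimensions, view count, and image size below are illustrative, and the chessboard corners are assumed to have already been detected, for example with cvFindChessboardCorners(), and packed into the point matrices):
int nViews = 10;                     // number of captured views of the chessboard
int nCorners = 9 * 6;                // inner corners per view, for a hypothetical 9x6 board
int total = nViews * nCorners;

CvMat objectPoints = cvCreateMat(total, 3, CV_32FC1);  // (X, Y, 0) in the chessboard frame
CvMat imagePoints  = cvCreateMat(total, 2, CV_32FC1);  // detected corner pixels, same order
CvMat pointCounts  = cvCreateMat(nViews, 1, CV_32SC1); // nCorners for every view

CvMat cameraMatrix = cvCreateMat(3, 3, CV_32FC1);      // output: intrinsic matrix
CvMat distCoeffs   = cvCreateMat(5, 1, CV_32FC1);      // output: five coefficients requested
CvMat rvecs        = cvCreateMat(nViews, 3, CV_32FC1); // output: one rotation vector per view
CvMat tvecs        = cvCreateMat(nViews, 3, CV_32FC1); // output: one translation per view

// ... fill objectPoints, imagePoints and pointCounts from the corner detections ...

cvCalibrateCamera2(objectPoints, imagePoints, pointCounts,
        cvSize(640, 480),                              // image size in pixels
        cameraMatrix, distCoeffs, rvecs, tvecs, 0);    // flags = 0: default behaviour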
cvInitUndistortMap(intrinsics, distortions, mapx, mapy)
is then used to create a fast lookup datastructure for performing this nonlinear warping for every frame. Its parameters are:
intrinsics, distortions - the camera matrix and distortion coefficients as computed by cvCalibrateCamera2().
mapx, mapy - 32-bit floating point single-channel images of the same size (width and height in pixels) as the camera images. These are storage space for the generated lookup datastructures. Each pixel of mapx and mapy holds, respectively, the x and y coordinate in the distorted source image from which the corresponding undistorted pixel should be sampled.
cvRemap(src, dest, map1, map2, flags, fillval)
can then be used to perform the remapping for each frame. Its parameters are:
src, dest - the raw camera image and the undistorted image, respectively; they must have the same size, depth, and number of channels.
map1, map2 - the precomputed mapx and mapy datastructures.
flags, fillval - flags selects the pixel interpolation algorithm; fillval is the value to use when the source pixel would have been outside the image.
Here is a code snippet demonstrating the use of these APIs:
IplImage uimg = null, mapx = null, mapy = null;

protected IplImage process(IplImage frame) {
    if (uimg == null) {
        // The maps depend only on the calibration, not on the frame content,
        // so allocate and compute them once on the first frame and reuse them.
        uimg = cvCreateImage(cvGetSize(frame), frame.depth(), frame.channels());
        mapx = cvCreateImage(cvGetSize(frame), IPL_DEPTH_32F, 1);
        mapy = cvCreateImage(cvGetSize(frame), IPL_DEPTH_32F, 1);
        cvInitUndistortMap(intrinsics, distortions, mapx, mapy);
    }
    // Warp the raw frame into the undistorted image using the precomputed maps.
    cvRemap(frame, uimg, mapx, mapy,
            CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));
    return uimg;
}

protected void release() {
    if (uimg != null) { uimg.release(); uimg = null; }
    if (mapx != null) { mapx.release(); mapx = null; }
    if (mapy != null) { mapy.release(); mapy = null; }
    super.release();
}
TBD figures
The rvecs, tvecs outputs from cvCalibrateCamera2() can be used to recover the pose of the calibration object in the camera frame for each view. Each tvec is the translation of the object frame origin expressed in camera coordinates, and each rvec is a compact three-element rotation vector: its direction is the rotation axis and its magnitude is the rotation angle.
cvRodrigues2(src, dest, jacobian)
implements the conversion between this rotation-vector representation and a 3x3 rotation matrix. Its parameters are:
src, dest - either a rotation vector and a rotation matrix or the reverse; the direction of the conversion is determined by the shapes of the matrices passed in.
jacobian - if non-null then the Jacobian of the transformation is stored here.
cvCalibrateCamera2() computes these extrinsic parameters using a method called homography estimation.
solvePnP(objectPoints, imagePoints, intrinsics, distortions, rvec, tvec)
computes the same kind of pose from a single view of the object, given a camera whose intrinsics and distortion coefficients are already known.
Take the rvec, tvec pair for this chessboard as reported by either cvCalibrateCamera2() or solvePnP() and use these to construct the rigid transformation that maps points from the chessboard frame into the camera frame.
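A minimal sketch of that last step (it assumes rvec and tvec are 3x1 CV_32FC1 matrices for a single view, for example as filled in by solvePnP(); the 4x4 layout shown is the standard homogeneous [R | t] form):
// Expand the compact rotation vector into a full 3x3 rotation matrix.
CvMat R = cvCreateMat(3, 3, CV_32FC1);
cvRodrigues2(rvec, R, null);

// Assemble the 4x4 homogeneous transform T that maps chessboard-frame points
// into the camera frame:  T = [ R t ; 0 0 0 1 ]
CvMat T = cvCreateMat(4, 4, CV_32FC1);
cvSetZero(T);
for (int r = 0; r < 3; r++) {
    for (int c = 0; c < 3; c++) {
        T.put(r, c, R.get(r, c));
    }
    T.put(r, 3, tvec.get(r));    // translation goes in the last column
}
T.put(3, 3, 1.0);                // bottom row is (0, 0, 0, 1)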