NAME

QccVIDMeshMotionEstimationWarpMesh, QccVIDMeshMotionEstimationSearch, QccVIDMeshMotionEstimationCreateCompensatedFrame - routines for motion estimation and compensation using regular triangle meshes

SYNOPSIS

#include "libQccPack.h"

int QccVIDMeshMotionEstimationWarpMesh(const QccRegularMesh *reference_mesh, QccRegularMesh *current_mesh, const QccIMGImageComponent *motion_vectors_horizontal, const QccIMGImageComponent *motion_vectors_vertical);

int QccVIDMeshMotionEstimationSearch(const QccIMGImageComponent *current_frame, QccRegularMesh *current_mesh, const QccIMGImageComponent *reference_frame, const QccRegularMesh *reference_mesh, QccIMGImageComponent *motion_vectors_horizontal, QccIMGImageComponent *motion_vectors_vertical, int block_size, int window_size, int subpixel_accuracy, int constrained_boundary, int exponential_kernel);

int QccVIDMeshMotionEstimationCreateCompensatedFrame(QccIMGImageComponent *motion_compensated_frame, const QccRegularMesh *current_mesh, const QccIMGImageComponent *reference_frame, const QccRegularMesh *reference_mesh, int subpixel_accuracy);

DESCRIPTION

QccVIDMeshMotionEstimationSearch() and QccVIDMeshMotionEstimationCreateCompensatedFrame() perform motion estimation and compensation, respectively, between two video frames using a regular triangle mesh rather than the square blocks of ubiquitous block-based motion estimation/compensation (e.g., QccVIDMotionEstimationFullSearch(3)). This regular triangle mesh is created by dividing the reference frame into square blocks and then splitting each block along its diagonal.

For motion estimation via QccVIDMeshMotionEstimationSearch(), the triangle vertices, or "control points," of the regular mesh are tracked from the reference frame to the current frame via a simple, block-based motion-estimation strategy due to Eckert et al. In this approach, motion into the current frame is estimated by centering a small block at each vertex in the reference-frame mesh and finding the best-matching block in the current frame.

Once motion is estimated in this manner and a field of motion vectors determined, QccVIDMeshMotionEstimationWarpMesh() can be used to create a motion-compensated version of the mesh in the current frame from the mesh in the reference frame. In essence, QccVIDMeshMotionEstimationWarpMesh() "warps" the reference-frame mesh into the current frame by adding to each vertex of the mesh its corresponding motion vector.

Once the motion-compensated mesh is available in the current frame, QccVIDMeshMotionEstimationCreateCompensatedFrame() can be used to perform motion compensation between the frames. That is, QccVIDMeshMotionEstimationCreateCompensatedFrame() uses affine transforms between the two meshes to construct a motion-compensated prediction of the current frame from the reference frame. This motion-compensated frame is constructed by creating, for each triangle in the reference-frame mesh, an affine transform that maps the triangle into the current-frame mesh. This affine transform is then used to map the pixels corresponding to the triangle in the reference frame into the current frame, with bilinear interpolation between the surrounding four integer-pixel locations used to resolve subpixel positions produced by the affine mapping.

Motion Estimation

QccVIDMeshMotionEstimationSearch() performs a motion-estimation search to produce a motion-vector field between reference_frame and current_frame. reference_mesh is the regular mesh in the reference frame, and QccVIDMeshMotionEstimationSearch() estimates the motion of the vertices of this mesh, producing a motion-vector field which is returned in motion_vectors_horizontal and motion_vectors_vertical. QccVIDMeshMotionEstimationSearch() calls QccVIDMeshMotionEstimationWarpMesh() (see below) to produce the corresponding motion-compensated mesh in the current frame, which is returned as current_mesh.

block_size gives the size of the square block that is centered at each mesh vertex in order to determine the vertex motion, following the block-based vertex-motion estimation procedure outlined by Eckert et al.

window_size gives the size of the motion-estimation search window about the current vertex location.
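The vertex-motion search can be sketched as a straightforward block-matching loop. The following self-contained code is illustrative only; the library's actual search follows Eckert et al. and may differ in matching criterion, kernel weighting, and border handling. It centers a block at a vertex and finds the displacement within the search window minimizing the sum of absolute differences:

```c
#include <math.h>

#define FRAME_ROWS 8
#define FRAME_COLS 8

static int clampi(int v, int lo, int hi)
{
  return (v < lo) ? lo : (v > hi) ? hi : v;
}

/* Sum of absolute differences between a block centered at (row, col) in the
   reference frame and at (row + dr, col + dc) in the current frame; pixels
   outside the frame are clamped to the nearest border pixel */
static double block_sad(double ref[FRAME_ROWS][FRAME_COLS],
                        double cur[FRAME_ROWS][FRAME_COLS],
                        int row, int col, int dr, int dc, int block_size)
{
  int half = block_size / 2;
  double sad = 0.0;
  for (int i = -half; i <= half; i++)
    for (int j = -half; j <= half; j++) {
      int r1 = clampi(row + i, 0, FRAME_ROWS - 1);
      int c1 = clampi(col + j, 0, FRAME_COLS - 1);
      int r2 = clampi(row + dr + i, 0, FRAME_ROWS - 1);
      int c2 = clampi(col + dc + j, 0, FRAME_COLS - 1);
      sad += fabs(ref[r1][c1] - cur[r2][c2]);
    }
  return sad;
}

/* Integer-pixel search over a (2*window + 1)-square neighborhood; the
   displacement with the smallest SAD becomes the vertex's motion vector */
void vertex_motion(double ref[FRAME_ROWS][FRAME_COLS],
                   double cur[FRAME_ROWS][FRAME_COLS],
                   int row, int col, int block_size, int window,
                   int *best_dr, int *best_dc)
{
  double best = 1e30;
  *best_dr = 0;
  *best_dc = 0;
  for (int dr = -window; dr <= window; dr++)
    for (int dc = -window; dc <= window; dc++) {
      double sad = block_sad(ref, cur, row, col, dr, dc, block_size);
      if (sad < best) {
        best = sad;
        *best_dr = dr;
        *best_dc = dc;
      }
    }
}
```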

subpixel_accuracy is one of QCCVID_ME_FULLPIXEL, QCCVID_ME_HALFPIXEL, QCCVID_ME_QUARTERPIXEL, or QCCVID_ME_EIGHTHPIXEL, indicating full-, half-, quarter-, or eighth-pixel accuracy. If anything other than integer-pixel accuracy is used, QccVIDMotionEstimationCreateReferenceFrame(3) must be called on both current_frame and reference_frame to interpolate them to the appropriate subpixel accuracy prior to calling QccVIDMeshMotionEstimationSearch().

If constrained_boundary is 1, QccVIDMeshMotionEstimationSearch() constrains all vertices that lie on the boundary of the reference frame to have zero-valued motion vectors. In doing so, the resulting current_mesh is guaranteed to cover the entire current_frame with no "gaps." If constrained_boundary is 0, no such guarantee is in place, and motion vectors for the image-boundary vertices can take on any value, perhaps moving into the interior of the image or beyond the bounds of the image. This latter, unconstrained approach may permit better motion estimation at the expense of some "gaps" possibly arising in the corresponding motion-compensated frame.
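The constrained-boundary behavior amounts to zeroing the motion vectors of boundary vertices. A minimal sketch follows, assuming motion-vector fields stored row-major with one entry per mesh vertex (an illustrative layout, not the QccIMGImageComponent structure itself):

```c
/* With the constrained-boundary option, vertices on the frame boundary keep
   zero-valued motion vectors so that the warped mesh still covers the
   entire current frame with no gaps */
void constrain_boundary(double *mv_horizontal, double *mv_vertical,
                        int mesh_rows, int mesh_cols)
{
  for (int r = 0; r < mesh_rows; r++)
    for (int c = 0; c < mesh_cols; c++)
      if (r == 0 || r == mesh_rows - 1 || c == 0 || c == mesh_cols - 1) {
        mv_horizontal[r * mesh_cols + c] = 0.0;
        mv_vertical[r * mesh_cols + c] = 0.0;
      }
}
```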

If exponential_kernel is 1, an exponential function is used to create a kernel for the block-based search process that estimates the motion of the mesh vertices. This exponential kernel gives greatest weight to the pixels at the center of the block (i.e., those corresponding to the vertex of interest itself) and exponentially decreasing weight to pixels distant from the center. If exponential_kernel is 0, all pixels in the block are weighted equally in the motion-estimation search. See Eckert et al. and Schroder and Mech.

Motion Compensation

QccVIDMeshMotionEstimationCreateCompensatedFrame() constructs the motion-compensated prediction of the current frame from reference_frame using affine transforms between the reference-frame mesh, reference_mesh, and the current-frame mesh, current_mesh. The motion-compensated frame is returned in motion_compensated_frame, which must be allocated prior to calling QccVIDMeshMotionEstimationCreateCompensatedFrame().

subpixel_accuracy is one of QCCVID_ME_FULLPIXEL, QCCVID_ME_HALFPIXEL, QCCVID_ME_QUARTERPIXEL, or QCCVID_ME_EIGHTHPIXEL, indicating full-, half-, quarter-, or eighth-pixel accuracy. If anything other than integer-pixel accuracy is used, QccVIDMotionEstimationCreateReferenceFrame(3) must be called on reference_frame to interpolate it to the appropriate subpixel accuracy prior to calling QccVIDMeshMotionEstimationCreateCompensatedFrame(). On the other hand, motion_compensated_frame must be the same size as the original current and reference frames in all cases (i.e., it is not interpolated to subpixel accuracy).

QccVIDMeshMotionEstimationCreateCompensatedFrame() uses QccTriangleCreateAffineTransform(3) to construct an affine transform between each pair of reference-frame and current-frame triangles, and QccPointAffineTransform(3) to map a pixel from the reference frame to the current frame.

Mesh Warping

QccVIDMeshMotionEstimationWarpMesh() constructs a mesh in the current frame, current_mesh, from a mesh in the reference frame, reference_mesh, by adding motion vectors to each vertex of reference_mesh. The motion vectors are specified by motion_vectors_horizontal and motion_vectors_vertical and are usually obtained via QccVIDMeshMotionEstimationSearch(). current_mesh must be allocated to the same size as reference_mesh prior to calling QccVIDMeshMotionEstimationWarpMesh().
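The warping operation amounts to a per-vertex vector addition, sketched below with illustrative types (the actual QccRegularMesh structure is not shown here; a flat vertex array stands in for it):

```c
/* Illustrative stand-in for a mesh vertex */
typedef struct { double x, y; } MeshVertex;

/* Warp a reference mesh into the current frame: each current-mesh vertex
   is the corresponding reference-mesh vertex displaced by its motion
   vector, taken from the horizontal and vertical motion-vector fields */
void warp_mesh(const MeshVertex *reference, MeshVertex *current,
               const double *mv_horizontal, const double *mv_vertical,
               int num_vertices)
{
  for (int i = 0; i < num_vertices; i++) {
    current[i].x = reference[i].x + mv_horizontal[i];
    current[i].y = reference[i].y + mv_vertical[i];
  }
}
```

The triangle connectivity is unchanged by warping; only the vertex positions move, which is why the current mesh must be allocated to the same size as the reference mesh beforehand.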

RETURN VALUE

These routines return 0 on success, and 1 on failure.

SEE ALSO

QccVIDMotionVectorsEncode(3), QccVIDMotionVectorsDecode(3), QccRegularMesh(3), mesh_memc(1), QccPackVID(3), QccPackENT(3), QccPack(3)

Y. Altunbasak, A. M. Tekalp, and G. Bozdagi, "Two-Dimensional Object-based Coding Using a Content-based Mesh and Affine Motion Parameterization," in Proceedings of the International Conference on Image Processing, Washington, DC, October 1995, vol. 2, pp. 394-397.

M. Eckert, D. Ruiz, J. I. Ronda, and N. Garcia, "Evaluation of DWT and DCT for Irregular Mesh-based Motion Compensation in Predictive Video Coding," in Visual Communications and Image Processing, K. N. Ngan, T. Sikora, and M.-T. Sun, Eds., Proc. SPIE 4067, June 2000, pp. 447-456.

K. Schroder and R. Mech, "Combined Description of Shape and Motion in an Object Based Coding Scheme Using Curved Triangles," in Proceedings of the International Conference on Image Processing, Washington, DC, October 1995, vol. 2, pp. 390-393.

Y. Wang, S. Cui, and J. E. Fowler, "3D Video Coding Using Redundant-Wavelet Multihypothesis and Motion-Compensated Temporal Filtering," in Proceedings of the International Conference on Image Processing, Barcelona, Spain, September 2003, vol. 2, pp. 755-758.

Y. Wang, S. Cui, and J. E. Fowler, "3D Video Coding with Redundant-Wavelet Multihypothesis," IEEE Transactions on Circuits and Systems for Video Technology, submitted July 2003; revised April 2004 and March 2005.

AUTHOR

Copyright (C) 1997-2021 James E. Fowler
