int QccWAVtce3DEncode(const QccIMGImageCube *image, QccBitBuffer *buffer, int transform_type, int temporal_num_levels, int spatial_num_levels, double alpha, const QccWAVWavelet *wavelet, int target_bit_cnt);
int QccWAVtce3DDecodeHeader(QccBitBuffer *buffer, int *transform_type, int *temporal_num_levels, int *spatial_num_levels, int *num_frames, int *num_rows, int *num_cols, double *image_mean, int *max_coefficient_bits, double *alpha);
int QccWAVtce3DDecode(QccBitBuffer *buffer, QccIMGImageCube *image, int transform_type, int temporal_num_levels, int spatial_num_levels, double alpha, const QccWAVWavelet *wavelet, double image_mean, int max_coefficient_bits, int target_bit_cnt);
QccWAVtce3DEncode() encodes an image cube, image, using a 3D generalization of the TCE algorithm. The original TCE algorithm was developed for 2D images by Tian and Hemami; it was later extended to 3D by Zhang et al. The TCE (tarp coding using classification to achieve embedding) algorithm is based on the tarp algorithm (see QccWAVTarpEncode(3)), but is designed to provide better rate-distortion performance when used in an embedded fashion; i.e., when decoding is performed at a rate less than that at which the bitstream was produced. See "ALGORITHM" below for more detail.
image is the image cube to be coded and buffer is the output bitstream. buffer must be of QCCBITBUFFER_OUTPUT type and opened via a prior call to QccBitBufferStart(3).
QccWAVtce3DEncode() supports the use of both wavelet-packet and dyadic wavelet-transform decompositions. If transform_type is QCCWAVSUBBANDPYRAMID3D_DYADIC, a dyadic DWT is used; if transform_type is QCCWAVSUBBANDPYRAMID3D_PACKET, a wavelet-packet DWT is used. temporal_num_levels and spatial_num_levels give the number of levels of wavelet decomposition to perform for both transform types; for a dyadic transform, temporal_num_levels should equal spatial_num_levels. wavelet is the wavelet to use for decomposition.
The performance of the 3D-TCE algorithm is controlled in part by the parameter alpha, which gives the learning rate of the density-estimation process implemented by the tarp filter used in one of the coding passes of the TCE algorithm.
The bitstream output from the 3D-TCE encoder is embedded, meaning that any prefix of the bitstream can be decoded to give a valid representation of the image. The 3D-TCE encoder essentially produces output bits until the number of bits output reaches target_bit_cnt, the desired (target) total length of the output bitstream in bits, and then it stops. Note that this is the bitstream length in bits, not the rate of the bitstream (which would be expressed in bits per voxel).
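As an illustration of typical encoder-side usage, the following sketch is not part of the QccPack distribution; the file names, decomposition depths, alpha value, and 1.0 bit-per-voxel target rate are merely assumed for the example, and error handling is reduced to bare returns. It reads an image cube, opens the output bitstream, and codes the cube with a dyadic transform:

#include <string.h>
#include "libQccPack.h"

int main(void)
{
  QccIMGImageCube image;
  QccBitBuffer buffer;
  QccWAVWavelet wavelet;
  int target_bit_cnt;

  QccIMGImageCubeInitialize(&image);
  QccBitBufferInitialize(&buffer);
  QccWAVWaveletInitialize(&wavelet);

  /* Illustrative wavelet choice: 9-7 biorthogonal, symmetric extension */
  if (QccWAVWaveletCreate(&wavelet,
                          "CohenDaubechiesFeauveau.9-7.lft", "symmetric"))
    return 1;

  /* Read the image cube to be coded (file name assumed) */
  strcpy(image.filename, "input.icb");
  if (QccIMGImageCubeRead(&image))
    return 1;

  /* Open the output bitstream */
  strcpy(buffer.filename, "output.bits");
  buffer.type = QCCBITBUFFER_OUTPUT;
  if (QccBitBufferStart(&buffer))
    return 1;

  /* Target bitstream length in bits: 1.0 bit per voxel (assumed rate) */
  target_bit_cnt =
    (int)(1.0 * image.num_frames * image.num_rows * image.num_cols);

  /* Dyadic transform, so temporal and spatial depths are equal */
  if (QccWAVtce3DEncode(&image, &buffer,
                        QCCWAVSUBBANDPYRAMID3D_DYADIC,
                        3, 3, 0.6, &wavelet, target_bit_cnt))
    return 1;

  QccBitBufferEnd(&buffer);
  QccIMGImageCubeFree(&image);
  QccWAVWaveletFree(&wavelet);
  return 0;
}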
QccWAVtce3DDecodeHeader() decodes the header information in a bitstream previously produced by QccWAVtce3DEncode(). The input bitstream is buffer, which must be of QCCBITBUFFER_INPUT type and opened via a prior call to QccBitBufferStart(3).
The header information is returned in transform_type (either QCCWAVSUBBANDPYRAMID3D_DYADIC or QCCWAVSUBBANDPYRAMID3D_PACKET to indicate a dyadic or wavelet-packet transform decomposition, respectively), temporal_num_levels (number of levels of wavelet decomposition in the temporal direction), spatial_num_levels (number of levels of wavelet decomposition in the spatial directions), num_frames (size of the image cube in the temporal direction), num_rows (vertical size of image cube), num_cols (horizontal size of image cube), image_mean (the mean value of the original image cube), max_coefficient_bits (indicates the precision, in number of bits, of the wavelet coefficient with the largest magnitude), and alpha (the value of the learning rate).
QccWAVtce3DDecode() decodes the bitstream buffer, producing the reconstructed image cube, image. The bitstream must already have had its header read by a prior call to QccWAVtce3DDecodeHeader() (i.e., call QccWAVtce3DDecodeHeader() first and then QccWAVtce3DDecode()). If target_bit_cnt is QCCENT_ANYNUMBITS, decoding stops when the end of the input bitstream is reached; otherwise, decoding stops when target_bit_cnt bits have been read from the input bitstream.
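The corresponding decoder-side sketch below is likewise illustrative rather than distributed code; the wavelet is assumed to match the one used at the encoder, and file names are assumed. It reads the header, allocates the reconstruction to the dimensions the header reports, and decodes the entire bitstream:

#include <string.h>
#include "libQccPack.h"

int main(void)
{
  QccIMGImageCube image;
  QccBitBuffer buffer;
  QccWAVWavelet wavelet;
  int transform_type, temporal_num_levels, spatial_num_levels;
  int num_frames, num_rows, num_cols, max_coefficient_bits;
  double image_mean, alpha;

  QccIMGImageCubeInitialize(&image);
  QccBitBufferInitialize(&buffer);
  QccWAVWaveletInitialize(&wavelet);

  /* Same (assumed) wavelet as used at the encoder */
  if (QccWAVWaveletCreate(&wavelet,
                          "CohenDaubechiesFeauveau.9-7.lft", "symmetric"))
    return 1;

  /* Open the input bitstream */
  strcpy(buffer.filename, "output.bits");
  buffer.type = QCCBITBUFFER_INPUT;
  if (QccBitBufferStart(&buffer))
    return 1;

  /* The header must be read first */
  if (QccWAVtce3DDecodeHeader(&buffer, &transform_type,
                              &temporal_num_levels, &spatial_num_levels,
                              &num_frames, &num_rows, &num_cols,
                              &image_mean, &max_coefficient_bits, &alpha))
    return 1;

  /* Allocate the reconstruction using the header dimensions */
  image.num_frames = num_frames;
  image.num_rows = num_rows;
  image.num_cols = num_cols;
  if (QccIMGImageCubeAlloc(&image))
    return 1;

  /* Decode to the end of the bitstream (no early truncation) */
  if (QccWAVtce3DDecode(&buffer, &image, transform_type,
                        temporal_num_levels, spatial_num_levels,
                        alpha, &wavelet, image_mean,
                        max_coefficient_bits, QCCENT_ANYNUMBITS))
    return 1;

  QccBitBufferEnd(&buffer);

  /* Write the reconstruction (file name assumed) */
  strcpy(image.filename, "reconstruction.icb");
  if (QccIMGImageCubeWrite(&image))
    return 1;

  QccIMGImageCubeFree(&image);
  QccWAVWaveletFree(&wavelet);
  return 0;
}

To exploit the embedded property and decode only a prefix of the bitstream, pass a smaller bit count in place of QCCENT_ANYNUMBITS.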
In the TCE algorithm, nonzero-parent coefficients and run coefficients from the fractional-bitplane approach of Ordentlich et al. are combined to form a single class, the zero-run coefficients. For each bitplane, the TCE system then performs three passes: 1) adaptive arithmetic coding of bits of nonzero-neighbor coefficients; 2) tarp filtering and non-adaptive arithmetic coding of zero-run coefficients; and 3) encoding of refinement bits with an adaptive arithmetic coder. The sign bits of coefficients are coded when needed with a probability of 0.5. The encoding/decoding ends when the target rate is reached. Tian and Hemami describe several approaches to improving the probability estimate of the tarp filtering of the second pass.
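The toy program below is only a rough, hypothetical sketch of this pass structure; it works on a 1-D coefficient array, prints bits instead of arithmetic coding them, and uses a simple first-order recursion to suggest the role of the learning rate alpha. It does not reproduce the actual three-dimensional tarp filter or context modeling of the references.

#include <stdio.h>
#include <math.h>

#define N 16

int main(void)
{
  double coeff[N];
  int significant[N] = {0};
  int in_pass1[N];
  int n, threshold;
  double alpha = 0.6;   /* illustrative learning rate */
  double p;             /* running zero-probability estimate */

  for (n = 0; n < N; n++)
    coeff[n] = 100.0 * sin(n) * cos(3.0 * n);   /* toy coefficients */

  for (threshold = 64; threshold >= 1; threshold /= 2)
    {
      printf("\nbitplane (threshold %d)\n", threshold);

      /* Pass 1: insignificant coefficients with a significant neighbor;
         coded with an adaptive arithmetic coder in the real algorithm */
      printf("  pass 1: ");
      for (n = 0; n < N; n++)
        {
          in_pass1[n] = !significant[n] &&
            ((n > 0 && significant[n - 1]) ||
             (n < N - 1 && significant[n + 1]));
          if (in_pass1[n])
            {
              int bit = (fabs(coeff[n]) >= threshold);
              printf("%d ", bit);
              if (bit)
                significant[n] = 1;   /* sign would follow with P = 0.5 */
            }
        }

      /* Pass 2: the zero-run class (everything still insignificant and
         not handled in pass 1); a tarp-style recursion carries the
         zero-probability estimate p with learning rate alpha, and each
         bit is coded non-adaptively against p */
      printf("\n  pass 2: ");
      p = 0.5;
      for (n = 0; n < N; n++)
        if (!in_pass1[n] && !significant[n])
          {
            int bit = (fabs(coeff[n]) >= threshold);
            printf("%d(p0=%.2f) ", bit, p);
            p = alpha * p + (1.0 - alpha) * (bit ? 0.0 : 1.0);
            if (bit)
              significant[n] = 1;
          }

      /* Pass 3: refinement bits of coefficients already significant
         from earlier bitplanes; adaptive arithmetic coding in the
         real algorithm */
      printf("\n  pass 3: ");
      for (n = 0; n < N; n++)
        if (significant[n] && fabs(coeff[n]) >= 2.0 * threshold)
          printf("%d ", ((int)(fabs(coeff[n]) / threshold)) & 1);
      printf("\n");
    }
  return 0;
}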
C. Tian and S. S. Hemami, "An Embedded Image Coding System Based on Tarp Filter with Classification," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Montreal, Quebec, Canada, May 2004, vol. 3, pp. 49-52.
J. Zhang, J. E. Fowler, and G. Liu, "Lossy-to-Lossless Compression of Hyperspectral Imagery Using 3D-TCE and an Integer KLT," IEEE Geoscience and Remote Sensing Letters, vol. 5, pp. 814-818, October 2008.
E. Ordentlich, M. Weinberger, and G. Seroussi, "A Low-Complexity Modeling Approach for Embedded Coding of Wavelet Coefficients," in Proceedings of the IEEE Data Compression Conference, Snowbird, UT, March 2002, pp. 23-32.