Research and Development

New technology, software, and algorithm research and development are presented on this page.
Development is carried out so as NOT to conflict with existing algorithm patents.

---- New development, Image compression technologies ----
KIFF ... Block DCT compressor.
CHP! ... Lossless block compressor.
FVQC ... Fractals VQ compressor.

Kt Interchange File Format
Kt Interchange File Format (KIFF) was developed by KiTa laboratory.
It is a superior image compression technology.
KIFF can keep quality about the same as JPEG (in some cases, even better),
and KIFF can compress the data to roughly half the size of JPEG.
RGB-YUV conversion, block encoding, two-dimensional DCT, and optimized Huffman coding:
KIFF combines these methods into a superior image compression technology
that provides compact data with good quality.

Compression algorithm of KIFF
Step.1 RGB->YUV transformation.
First, the amount of information is reduced by converting the data from an RGB image
to YUV. This step has a big influence on the compression ratio achieved by the steps
that follow. This step can also be skipped as an option.
When this step is skipped, compression and decompression will be faster,
but compression efficiency will get worse.
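KIFF's exact conversion matrix is not published, so as an illustration the step above can be sketched with the standard BT.601 full-range RGB-to-YUV formulas (an assumption, not the KIFF-specific coefficients):

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (components 0-255) to YUV.

    Uses the standard BT.601 full-range coefficients; KIFF's own
    matrix is not documented, so this is only illustrative."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    u = -0.147 * r - 0.289 * g + 0.436 * b  # blue-difference chroma
    v = 0.615 * r - 0.515 * g - 0.100 * b   # red-difference chroma
    return y, u, v
```

For a neutral gray pixel the two chroma components come out near zero, which is why most of the information concentrates in the Y plane and the U/V planes compress well.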

Step.2 Block encoding.
The image is quantized in 8x8 block units according to a Q-matrix set up in advance.
In addition, the image quality and compression ratio can be adjusted through a
coefficient specified by the application.
This step largely determines the output file size and the image quality.
Zig-zag scanning within each block improves compression efficiency.
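The zig-zag scan mentioned above reorders an 8x8 block diagonal by diagonal so that low-frequency coefficients come first and the (mostly zero) high-frequency coefficients cluster at the end, where run-length coding collapses them. A JPEG-style ordering can be sketched as:

```python
def zigzag_order(n=8):
    """Return the (row, col) visiting order of an n x n block, diagonal
    by diagonal, alternating direction as in JPEG-style zig-zag scans."""
    def key(rc):
        r, c = rc
        d = r + c                       # which anti-diagonal
        return (d, c if d % 2 == 0 else r)  # alternate traversal direction
    return sorted(((r, c) for r in range(n) for c in range(n)), key=key)

def zigzag_scan(block):
    """Flatten an 8x8 block (list of rows) into zig-zag order."""
    return [block[r][c] for r, c in zigzag_order(len(block))]
```

Whether KIFF uses exactly the JPEG ordering is not stated; the point is that any such scan groups the trailing zeros together.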

Step.3 Two dimensions DCT.
A two-dimensional DCT is applied to each 8x8 block of every plane (R-G-B or Y-U-V).
The high-frequency components of the signal are suppressed.
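The 2D DCT itself is standard; a direct (slow but clear) implementation of the orthonormal DCT-II on an 8x8 block looks like this. Real codecs use fast factorizations, so treat this purely as a reference sketch:

```python
import math

def dct2(block):
    """Naive two-dimensional DCT-II (orthonormal) of an 8x8 block.

    For a constant block all energy lands in the DC coefficient
    out[0][0]; every AC coefficient is (numerically) zero."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = cu * cv * s
    return out
```

Suppressing high frequencies then amounts to quantizing the coefficients with large u + v more coarsely.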

Step.4 Run-length and Huffman compression.
The coefficients are compressed with run-length and Huffman coding optimized for KIFF.
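KIFF's run-length and Huffman tables are not published, but the run-length half of the idea is simple: collapse runs of repeated values (typically the zig-zag-scanned zeros) into (value, count) pairs, which the Huffman coder then entropy-codes. A minimal sketch:

```python
def run_length_encode(values):
    """Collapse runs of repeated values into (value, count) pairs.

    Illustrative only; the actual KIFF symbol format is undocumented."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1           # extend the current run
        else:
            runs.append([v, 1])        # start a new run
    return [(v, n) for v, n in runs]
```

A zig-zag-scanned block that ends in a long tail of zeros shrinks to a single pair, which is where most of the gain comes from.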

Decompression algorithm of KIFF
A commentary on decompression is omitted, because decompression simply traces the
compression algorithm in reverse.

KIFF Header
[KIFF](4) Signature('KIFF')
[*]   (1) Mode(RGB=1,YUV=0)
[*]   (1) Q-Matrix(KIFF original=0,JPEG standard=1,for CG&Animation=2)
[*]   (1) Q-Factor(0~15)
[*]   (1) reserved(0)
[****](4) Horizontal image size
[****](4) Vertical image size
[....](-) Compressed data
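The header layout above can be read with a short parser. The byte order of the two size fields is not stated in the text, so little-endian is assumed here; adjust the format string if the files turn out to be big-endian:

```python
import struct

def parse_kiff_header(data):
    """Parse the 16-byte KIFF header laid out above.

    Endianness of the size fields is an assumption (little-endian);
    the field order follows the header table in the text."""
    sig, mode, qmatrix, qfactor, reserved = struct.unpack_from('<4s4B', data, 0)
    if sig != b'KIFF':
        raise ValueError('not a KIFF file')
    width, height = struct.unpack_from('<2I', data, 8)
    return {'mode': mode,          # RGB=1, YUV=0
            'q_matrix': qmatrix,   # 0=KIFF original, 1=JPEG standard, 2=CG&Animation
            'q_factor': qfactor,   # 0~15
            'width': width,
            'height': height}
```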

You will notice that KIFF goes through a process that looks like JPEG encoding.
But KIFF differs from JPEG at several steps.
First, the quantization coefficient matrix that is output is calculated to fit the
image being encoded. This improves coding efficiency and image quality.
Second, the entropy coding by Huffman and run-length is optimized as well, so a
higher compression ratio than JPEG can be expected.
JPEG sometimes uses down-sampling (4:2:2) of the color-difference components,
but KIFF does not down-sample the color-difference components.

The feature of KIFF.
KIFF keeps quality about the same as JPEG, and KIFF can compress to a smaller
amount of data than JPEG.
It can be called a compression technology with very good efficiency.
The secret is in the two-step quantization and the entropy coding optimized for KIFF.
When encoding, the user can adjust the parameter settings to suit the intended use.

The weakness of KIFF
The KIFF algorithm is lossy compression.
KIFF will never restore the original image perfectly, because the image is compressed
with quantization. Block noise appears across the image from the block-based encoding
and DCT, and mosquito noise occurs along the outlines in the image.
However, these can hardly be noticed if the quantization coefficients are adjusted.

Is KIFF an Interchange File Format?
Although KIFF is the abbreviation of "Kt Interchange File Format", it actually has no
IFF interchangeability at all. The plan at the start of development was to comply with
the IFF form, but in consideration of data size and intended use, IFF compliance was
abandoned halfway through.
There was some thought of changing the name before release because it is misleading.
But KIFF was the name we had grown accustomed to during the long research period,
so the name was not changed. KIFF is not an IFF.

Intel MMX Technology, Streaming SIMD Extensions (SSE)
A processor which has MMX or SSE will automatically select the special KIFF codec.
This decreases the CPU load if your processor is an MMX Pentium or a Pentium III.

Download KIFF library and sample source code.
version 0.08
WinCE(H/PC 2.0 MIPS)
version 0.07
WinCE(H/PC 2.0 SH3)
version 0.07
WinCE(H/PC 2.0 x86emu)
version 0.07
WinCE(P/PC 2.01 MIPS)
version 0.07
WinCE(P/PC 2.01 SH3)
version 0.07
WinCE(P/PC 2.01 x86emu)
version 0.07
(*)H/PC = Handheld PC, P/PC = Palm-size PC

About the use of KIFF library
Patent FREE.
We will not assert our copyright over your use of this KIFF library.
Use it for your paint tools, for your game textures, for skins, etc...
Feel free to use this wonderful KIFF technology!!

CH compressor for Picture!
CHP is a lossless image compression technology optimized specially
for two-dimensional CG and animation.

Compression process of CHP
Step.1 Block search and transformation
The image data is divided into small rectangular blocks, and the block data is
rearranged so that it can be compressed efficiently by the run-length and Huffman
coding in the next step.
CHP also automatically chooses the memory allocation and the block size that best
suit the characteristics of the image data.
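CHP's actual search and relocation scheme is not published, but the core of a block search like the one described is grouping blocks with identical patterns, since repeated 2D-CG blocks can then be stored once. A hypothetical sketch:

```python
def find_repeated_blocks(pixels, width, height, bs):
    """Split a row-major image into bs x bs blocks and group the
    positions of blocks that share an identical pixel pattern.

    Illustrative only; CHP's real search and rearrangement are
    not documented."""
    seen = {}
    for by in range(0, height, bs):
        for bx in range(0, width, bs):
            block = tuple(pixels[(by + y) * width + (bx + x)]
                          for y in range(bs) for x in range(bs))
            seen.setdefault(block, []).append((bx, by))
    return seen
```

Flat CG and animation frames contain many such duplicate blocks, which is why a lossless block scheme suits them better than photographs.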

Step.2 Run-length and huffman compression
The block data transformed in the previous step is compressed with run-length and
Huffman coding optimized for CHP images.

Decompression process of CHP
Step.1 Run-length and huffman decompression
Fundamentally, decompression just follows the compression algorithm in reverse.
In the first step, CHP decompresses the run-length and Huffman coding.

Step.2 Block transformation
The block data transformed during compression is restored to its original arrangement.
Compared with compression, decompression is fast, because it does not need to search
for image blocks with the same pattern the way compression does.

CHP Header
[CHP!](4) Signature('CHP!')
[**]  (2) Offset for bitmap
[*]   (1) Block size
[*]   (1) Color depth
[**]  (2) Horizontal image size
[**]  (2) Vertical image size
[....](-) compressed data

The feature of CHP
CHP is different from lossy compression algorithms such as JPEG,
because CHP adopts a lossless form.
Therefore, the compressed image data will be restored completely.
Also, CHP supports 1, 4, 8, 16, 24, and 32-bit images.

The weakness of CHP
CHP performs the block search by brute force, so it takes a great deal of time.
But you will get the result you expect.

Download CHP library and sample source code.
version 0.01

Fractals Vector Quantization Compressor
Fractals Vector Quantization Compressor (FVQC) is an excellent graphics image format.
It takes hints from fractal and VQ compression technology.
FVQC can compress an image to 1/8 ~ 1/100 of its size while keeping high quality.

Compression algorithm of FVQC
Step.1 RGB->YUV transformation
The data is converted from an RGB image to YUV.
Refer to KIFF for this step.

Step.2 An approximate block and image pattern detection
The image is divided into small 8x8 blocks, and blocks that approximate one another
are detected.
Furthermore, the block pattern data is subdivided, and mirrored figures are examined
from various angles.
Then an average is taken to a certain degree.

Step.3 Vector quantization
First, candidates for the block which becomes the base are looked up.
Once a base block is decided, only the difference between it and each of the
remaining blocks is encoded.
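The base-block-plus-difference idea above can be sketched roughly as follows. FVQC's real codebook construction is not published, so picking the most common block as the base is purely a stand-in assumption:

```python
from collections import Counter

def encode_vq_difference(blocks):
    """Pick the most common block as the base and store only the
    per-element differences for every block.

    A stand-in for FVQC's undocumented base-block selection."""
    base = Counter(blocks).most_common(1)[0][0]
    deltas = [tuple(b - a for b, a in zip(blk, base)) for blk in blocks]
    return base, deltas

def decode_vq_difference(base, deltas):
    """Rebuild the blocks by adding each delta back onto the base."""
    return [tuple(a + d for a, d in zip(base, delta)) for delta in deltas]
```

When many blocks resemble the base, the deltas are small and near zero, so the Huffman stage in the next step compresses them well.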

Step.4 Optimized huffman compression
The block data transformed in the previous step is compressed with Huffman coding
optimized for FVQC images.

Decompression algorithm of FVQC
Decompression simply follows the compression algorithm in reverse. But the block
pattern detection done in Step 2 is skipped, and the base block detection for vector
quantization done in Step 3 is also skipped.
Decompression of FVQC is very simple; therefore, decompression is much faster than
compression.

FVQC Header
FVQC is still being developed.
Therefore, the format cannot be opened to the public yet.

The characteristics of FVQC
FVQC is based on VQ compression, adopting hints from fractal algorithms.
FVQC realizes a very high compression ratio.
Furthermore, high image quality is kept regardless of whether the source is a
photograph, CG, or an animated cartoon.
FVQC could be called the ultimate compression format. And the FVQC decompressor is
very fast. You will probably be surprised at its marvelous decompression speed.

The weakness of FVQC
FVQC is more powerful than other compressed image formats for complex images.
But when the delta component is too large, the whole image sometimes grows dim.
However, this can be avoided by adjusting the optional Fractals VQ level parameter.

FVQC also needs an enormous amount of time,
because the image is analyzed from various angles to realize high quality and a high
compression ratio.
But you will be surprised to get a result that more than repays the cost.

About the future of FVQC
FVQC is still being developed at present.
Development continues every day with the aim of improving image quality and
compression speed.
I also intend to apply this technology to the compression of movies and of sound.
But I must say, it is still an armchair theory.
Therefore, your support is necessary for us.
Thank you.

Download FVQC program.
File types
version 0.00e

This software cannot guarantee compatibility with the next version.
In this test version, the width and height of the bitmap must be multiples of 8;
for example, 320x240.

[Lossy compression]
Lossy compression is a way of compressing an image that causes a loss.
Because of this loss, the image is never restored perfectly when it is decompressed.
In most cases the loss is used to increase compression efficiency, because the image
components that are difficult for human beings to distinguish are down-sampled.
Note: Conversely, compression that does not cause a loss is called "lossless
compression".

[Block noise]
Resolving an image into small rectangles such as 4x4 or 8x8 and compressing them is
called block encoding. When an image is encoded with this method, consistency of
brightness can no longer be maintained between adjacent blocks, and as a result the
borderline between blocks can sometimes be seen.
This is block noise.

[Mosquito noise]
In DCT encoding, an image is resolved into frequency components, and the signal is
shaved from the high-frequency components before compression. This cuts down the
detailed high-frequency vibration patterns that carry important meaning around the
outlines and color boundaries where high-frequency components are concentrated.
The result sometimes appears as noise.
This is mosquito noise.

[IFF]
IFF is the abbreviation of Interchange File Format. IFF is a standard for keeping a
format interchangeable between PC platforms.
Stored data is composed of chunks. In IFF, the arrangement is generally big-endian.
The characters IFF or IF are appended to the name, as in the extension AIFF.

[VQ compression]
VQ is the abbreviation of Vector Quantization.
An image is divided into small blocks, and each block is substituted by one of a set
of decided patterns; this is a way of reducing the amount of data and compressing it.
It is well known that VQ compression technology is used for texture compression in
the NEC Power-VR chip.