<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>lz4 1.7.2 Manual</title>
</head>
<body>
<h1>lz4 1.7.2 Manual</h1>
<hr>
<a name="Contents"></a><h2>Contents</h2>
<ol>
<li><a href="#Chapter1">Introduction</a></li>
<li><a href="#Chapter2">Tuning parameter</a></li>
<li><a href="#Chapter3">Private definitions</a></li>
<li><a href="#Chapter4">Simple Functions</a></li>
<li><a href="#Chapter5">Advanced Functions</a></li>
<li><a href="#Chapter6">Streaming Compression Functions</a></li>
<li><a href="#Chapter7">Streaming Decompression Functions</a></li>
</ol>
<hr>
<a name="Chapter1"></a><h2>Introduction</h2><pre>
  LZ4 is a lossless compression algorithm, providing compression speed of 400 MB/s per core,
  scalable with multi-core CPUs. It features an extremely fast decoder, with speed in
  multiple GB/s per core, typically reaching RAM speed limits on multi-core systems.

  The LZ4 compression library provides in-memory compression and decompression functions.
  Compression can be done in:
    - a single step (described as Simple Functions)
    - a single step, reusing a context (described in Advanced Functions)
    - unbounded multiple steps (described as Streaming compression)

  lz4.h provides block compression functions. It gives full buffer control to the user.
  Block compression functions alone are not sufficient to send compressed data to another party,
  since it is still necessary to provide metadata (such as the compressed size),
  and each application can do so in whichever way it wants.
  For interoperability, there is an LZ4 frame specification (doc/lz4_Frame_format.md).
  A companion library is provided to take care of it ; see lz4frame.h.
<BR></pre>

<h3>Version</h3><pre><b>int LZ4_versionNumber (void);
const char* LZ4_versionString (void);
</b></pre><BR>
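
<p>A minimal runtime version-check sketch (the output format is illustrative):</p>
<pre>
#include &lt;stdio.h&gt;
#include "lz4.h"

int main(void)
{
    /* LZ4_versionNumber() returns major*10000 + minor*100 + release ;
       LZ4_versionString() returns a human-readable string. */
    printf("liblz4 version : %s (number %d)\n", LZ4_versionString(), LZ4_versionNumber());
    return 0;
}
</pre><BR>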
<a name="Chapter2"></a><h2>Tuning parameter</h2><pre></pre>

<pre><b>#define LZ4_MEMORY_USAGE 14
</b><p> Memory usage formula : N->2^N bytes (examples : 10 -> 1KB; 12 -> 4KB; 16 -> 64KB; 20 -> 1MB; etc.)
 Increasing memory usage improves compression ratio.
 Reduced memory usage can improve speed, thanks to cache effects.
 Default value is 14, for 16KB, which nicely fits into the Intel x86 L1 cache.
 
</p></pre><BR>

<a name="Chapter3"></a><h2>Private definitions</h2><pre>
 Do not use these definitions.
 They are exposed to allow static allocation of `LZ4_stream_t` and `LZ4_streamDecode_t`.
 If you use these definitions in your code, it will break when you upgrade LZ4 to a new version.
<BR></pre>

<pre><b>typedef struct {
    uint32_t hashTable[LZ4_HASH_SIZE_U32];
    uint32_t currentOffset;
    uint32_t initCheck;
    const uint8_t* dictionary;
    uint8_t* bufferStart;   </b>/* obsolete, used for slideInputBuffer */<b>
    uint32_t dictSize;
} LZ4_stream_t_internal;
</b></pre><BR>
<pre><b>typedef struct {
    const uint8_t* externalDict;
    size_t extDictSize;
    const uint8_t* prefixEnd;
    size_t prefixSize;
} LZ4_streamDecode_t_internal;
</b></pre><BR>
<pre><b>typedef struct {
    unsigned int hashTable[LZ4_HASH_SIZE_U32];
    unsigned int currentOffset;
    unsigned int initCheck;
    const unsigned char* dictionary;
    unsigned char* bufferStart;   </b>/* obsolete, used for slideInputBuffer */<b>
    unsigned int dictSize;
} LZ4_stream_t_internal;
</b></pre><BR>
<pre><b>typedef struct {
    const unsigned char* externalDict;
    size_t extDictSize;
    const unsigned char* prefixEnd;
    size_t prefixSize;
} LZ4_streamDecode_t_internal;
</b></pre><BR>
<a name="Chapter4"></a><h2>Simple Functions</h2><pre></pre>

<pre><b>int LZ4_compress_default(const char* source, char* dest, int sourceSize, int maxDestSize);
</b><p>    Compresses 'sourceSize' bytes from buffer 'source'
    into already allocated 'dest' buffer of size 'maxDestSize'.
    Compression is guaranteed to succeed if 'maxDestSize' >= LZ4_compressBound(sourceSize).
    It also runs faster, so it's a recommended setting.
    If the function cannot compress 'source' into a more limited 'dest' budget,
    compression stops *immediately*, and the function result is zero.
    As a consequence, 'dest' content is not valid.
    This function never writes outside the 'dest' buffer, nor reads outside the 'source' buffer.
        sourceSize  : Max supported value is LZ4_MAX_INPUT_SIZE
        maxDestSize : full or partial size of buffer 'dest' (which must be already allocated)
        return : the number of bytes written into buffer 'dest' (necessarily <= maxDestSize)
              or 0 if compression fails 
</p></pre><BR>
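
<p>A minimal compression sketch, pairing LZ4_compressBound() with LZ4_compress_default() (buffer contents and sizes are illustrative):</p>
<pre>
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;
#include "lz4.h"

int compress_example(void)
{
    const char* src = "sample data sample data sample data sample data";
    int srcSize = (int)strlen(src) + 1;              /* keep the terminating NUL for simplicity */

    int maxDestSize = LZ4_compressBound(srcSize);    /* worst-case compressed size */
    char* dst = (char*)malloc((size_t)maxDestSize);
    if (dst == NULL) return -1;

    int cSize = LZ4_compress_default(src, dst, srcSize, maxDestSize);
    /* cSize >  0 : number of compressed bytes written into dst
       cSize == 0 : compression failed (cannot happen here, since maxDestSize >= LZ4_compressBound(srcSize)) */

    free(dst);
    return cSize;
}
</pre><BR>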

<pre><b>int LZ4_decompress_safe (const char* source, char* dest, int compressedSize, int maxDecompressedSize);
</b><p>    compressedSize : is the precise full size of the compressed block.
    maxDecompressedSize : is the size of destination buffer, which must be already allocated.
    return : the number of bytes decompressed into destination buffer (necessarily <= maxDecompressedSize)
             If the destination buffer is not large enough, decoding will stop and the function will return an error code (<0).
             If the source stream is detected malformed, the function will stop decoding and return a negative result.
             This function is protected against buffer overflow exploits, including malicious data packets.
             It never writes outside output buffer, nor reads outside input buffer.
</p></pre><BR>
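
<p>A decompression sketch for a single block ; 'cData', 'cSize' and the known decompressed size 'origSize' are assumed to be provided by the application (for example transmitted as metadata alongside the block):</p>
<pre>
#include &lt;stdlib.h&gt;
#include "lz4.h"

int decompress_example(const char* cData, int cSize, int origSize)
{
    char* regen = (char*)malloc((size_t)origSize);
    if (regen == NULL) return -1;

    int dSize = LZ4_decompress_safe(cData, regen, cSize, origSize);
    /* dSize >= 0 : number of bytes decoded into regen (here, expected to equal origSize)
       dSize <  0 : malformed input, or destination buffer too small */

    free(regen);
    return dSize;
}
</pre><BR>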

<a name="Chapter5"></a><h2>Advanced Functions</h2><pre></pre>

<pre><b>int LZ4_compressBound(int inputSize);
</b><p>    Provides the maximum size that LZ4 compression may output in a "worst case" scenario (input data not compressible)
    This function is primarily useful for memory allocation purposes (destination buffer size).
    Macro LZ4_COMPRESSBOUND() is also provided for compilation-time evaluation (stack memory allocation for example).
    Note that LZ4_compress_default() compresses faster when the dest buffer size is >= LZ4_compressBound(srcSize)
        inputSize  : max supported value is LZ4_MAX_INPUT_SIZE
        return : maximum output size in a "worst case" scenario
              or 0, if input size is too large ( > LZ4_MAX_INPUT_SIZE)
</p></pre><BR>
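
<p>A sketch of the compile-time macro used for stack or static allocation (the 4 KB block size is illustrative):</p>
<pre>
#include "lz4.h"

#define MY_BLOCK_SIZE 4096
/* LZ4_COMPRESSBOUND() is evaluated at compile time, so it can size an array directly. */
static char cmpBuffer[LZ4_COMPRESSBOUND(MY_BLOCK_SIZE)];
</pre><BR>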

<pre><b>int LZ4_compress_fast (const char* source, char* dest, int sourceSize, int maxDestSize, int acceleration);
</b><p>    Same as LZ4_compress_default(), but allows selection of an "acceleration" factor.
    The larger the acceleration value, the faster the algorithm, but also the lower the compression ratio.
    It's a trade-off. It can be fine tuned, with each successive value providing roughly +3% to speed.
    An acceleration value of "1" is the same as regular LZ4_compress_default().
    Values <= 0 will be replaced by ACCELERATION_DEFAULT (see lz4.c), which is 1.
</p></pre><BR>
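
<p>A tiny sketch of the acceleration parameter ; the value 8 is purely illustrative, and the buffers are assumed to be prepared as for LZ4_compress_default():</p>
<pre>
#include "lz4.h"

/* acceleration = 1 : same behaviour as LZ4_compress_default()
   acceleration = 8 : noticeably faster, lower compression ratio */
int compress_accelerated(const char* src, char* dst, int srcSize, int maxDestSize)
{
    return LZ4_compress_fast(src, dst, srcSize, maxDestSize, 8);   /* 0 means compression failed */
}
</pre><BR>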

<pre><b>int LZ4_sizeofState(void);
int LZ4_compress_fast_extState (void* state, const char* source, char* dest, int inputSize, int maxDestSize, int acceleration);
</b><p>    Same compression function, just using an externally allocated memory space to store the compression state.
    Use LZ4_sizeofState() to know how much memory must be allocated,
    and allocate it on 8-byte boundaries (using malloc() typically).
    Then, provide it as 'void* state' to the compression function.
</p></pre><BR>
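
<p>A sketch of caller-managed state ; in a real application the state would typically be allocated once and re-used, the helper name is hypothetical:</p>
<pre>
#include &lt;stdlib.h&gt;
#include "lz4.h"

int compress_with_ext_state(const char* src, char* dst, int srcSize, int maxDestSize)
{
    void* state = malloc((size_t)LZ4_sizeofState());   /* malloc() returns suitably aligned memory */
    if (state == NULL) return 0;

    int cSize = LZ4_compress_fast_extState(state, src, dst, srcSize, maxDestSize, 1);

    free(state);
    return cSize;   /* 0 means compression failed, as with LZ4_compress_default() */
}
</pre><BR>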

<pre><b>int LZ4_compress_destSize (const char* source, char* dest, int* sourceSizePtr, int targetDestSize);
</b><p>    Reverses the logic : compresses as much data as possible from the 'source' buffer
    into the already allocated buffer 'dest' of size 'targetDestSize'.
    This function either compresses the entire 'source' content into 'dest' if it's large enough,
    or fills the 'dest' buffer completely with as much data as possible from 'source'.
        *sourceSizePtr : will be modified to indicate how many bytes were read from 'source' to fill 'dest'.
                         New value is necessarily <= old value.
        return : Nb bytes written into 'dest' (necessarily <= targetDestSize)
              or 0 if compression fails
</p></pre><BR>
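
<p>A sketch of filling a fixed-size packet ; the 512-byte packet size and the helper name are illustrative:</p>
<pre>
#include "lz4.h"

/* Hypothetical helper : fill a 512-byte packet with as much of 'src' as fits. */
int fill_packet(const char* src, int srcAvailable, char* packet512)
{
    int consumed = srcAvailable;      /* on input : number of bytes available in 'src' */
    int cSize = LZ4_compress_destSize(src, packet512, &consumed, 512);
    /* on return : 'consumed' tells how many bytes of 'src' were actually compressed,
       'cSize' is the number of bytes written into the packet (0 if compression failed) */
    return cSize;
}
</pre><BR>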

<pre><b>int LZ4_decompress_fast (const char* source, char* dest, int originalSize);
</b><p>    originalSize : is the original and therefore uncompressed size
    return : the number of bytes read from the source buffer (in other words, the compressed size)
             If the source stream is detected malformed, the function will stop decoding and return a negative result.
             Destination buffer must be already allocated. Its size must be a minimum of 'originalSize' bytes.
    note : This function fully respects memory boundaries for properly formed compressed data.
           It is a bit faster than LZ4_decompress_safe().
           However, it does not provide any protection against intentionally modified data stream (malicious input).
           Use this function in trusted environment only (data to decode comes from a trusted source).
</p></pre><BR>
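
<p>A sketch for trusted data only ; the compressed block and its exact decompressed size are assumed to come from a trusted source:</p>
<pre>
#include &lt;stdlib.h&gt;
#include "lz4.h"

int decompress_trusted(const char* cData, int origSize)
{
    char* regen = (char*)malloc((size_t)origSize);
    if (regen == NULL) return -1;

    int readBytes = LZ4_decompress_fast(cData, regen, origSize);
    /* readBytes >= 0 : number of compressed bytes consumed from cData
       readBytes <  0 : malformed data detected (no protection against malicious input) */

    free(regen);
    return readBytes;
}
</pre><BR>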

<pre><b>int LZ4_decompress_safe_partial (const char* source, char* dest, int compressedSize, int targetOutputSize, int maxDecompressedSize);
</b><p>    This function decompresses a compressed block of size 'compressedSize' at position 'source'
    into destination buffer 'dest' of size 'maxDecompressedSize'.
    The function tries to stop the decompression operation as soon as 'targetOutputSize' has been reached,
    reducing decompression time.
    return : the number of bytes decoded in the destination buffer (necessarily <= maxDecompressedSize)
       Note : this number can be < 'targetOutputSize' should the compressed block to decode be smaller.
              Always check how many bytes were decoded.
             If the source stream is detected malformed, the function will stop decoding and return a negative result.
             This function never writes outside of the output buffer, and never reads outside of the input buffer. It is therefore protected against malicious data packets.
</p></pre><BR>
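
<p>A sketch of partial decoding, for example to inspect only the first bytes of a block ; the 64-byte target is illustrative:</p>
<pre>
#include "lz4.h"

int peek_block_start(const char* cData, int cSize, char* out, int outCapacity)
{
    int decoded = LZ4_decompress_safe_partial(cData, out, cSize,
                                              64 /* targetOutputSize */, outCapacity);
    /* 'decoded' may be smaller than 64 if the block itself decodes to fewer bytes ;
       a negative value means the input was detected malformed */
    return decoded;
}
</pre><BR>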

<a name="Chapter6"></a><h2>Streaming Compression Functions</h2><pre></pre>

<pre><b>typedef struct {
  union {
    long long table[LZ4_STREAMSIZE_U64];
    LZ4_stream_t_internal internal_donotuse;
  };
} LZ4_stream_t;
</b><p> Information structure to track an LZ4 stream.
 important : init this structure's content before first use !
 note : only allocate this structure directly (static or stack allocation) if you are statically linking LZ4.
        If you are using liblz4 as a DLL, please use the construction methods below instead.
 
</p></pre><BR>

<pre><b>void LZ4_resetStream (LZ4_stream_t* streamPtr);
</b><p>  Use this function to initialize an allocated `LZ4_stream_t` structure.
 
</p></pre><BR>

<pre><b>LZ4_stream_t* LZ4_createStream(void);
int           LZ4_freeStream (LZ4_stream_t* streamPtr);
</b><p>  LZ4_createStream() will allocate and initialize an `LZ4_stream_t` structure.
  LZ4_freeStream() releases its memory.
  In the context of a DLL (liblz4), please use these methods rather than the static struct.
  They are more future proof, in case of a change of `LZ4_stream_t` size.
 
</p></pre><BR>
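
<p>A sketch of both allocation strategies (static linking vs DLL usage):</p>
<pre>
#include "lz4.h"

void stream_allocation_sketch(void)
{
    /* Statically linked : a local LZ4_stream_t is fine, but must be initialized. */
    LZ4_stream_t localState;
    LZ4_resetStream(&localState);

    /* liblz4 as a DLL : prefer the library's constructor / destructor. */
    LZ4_stream_t* heapState = LZ4_createStream();   /* allocated and already initialized */
    /* ... use either state with the streaming compression functions ... */
    LZ4_freeStream(heapState);
}
</pre><BR>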

<pre><b>int LZ4_loadDict (LZ4_stream_t* streamPtr, const char* dictionary, int dictSize);
</b><p>  Use this function to load a static dictionary into LZ4_stream.
  Any previous data will be forgotten, only 'dictionary' will remain in memory.
  Loading a size of 0 is allowed.
  Return : dictionary size, in bytes (necessarily <= 64 KB)
 
</p></pre><BR>

<pre><b>int LZ4_compress_fast_continue (LZ4_stream_t* streamPtr, const char* src, char* dst, int srcSize, int maxDstSize, int acceleration);
</b><p>  Compresses buffer content 'src', using data from previously compressed blocks as a dictionary to improve compression ratio.
  Important : Previous data blocks are assumed to still be present and unmodified !
  'dst' buffer must be already allocated.
  If maxDstSize >= LZ4_compressBound(srcSize), compression is guaranteed to succeed, and runs faster.
  If not, and if the compressed data cannot fit into 'dst', compression stops and the function returns zero.
 
</p></pre><BR>

<pre><b>int LZ4_saveDict (LZ4_stream_t* streamPtr, char* safeBuffer, int dictSize);
</b><p>  If the previously compressed data block is not guaranteed to remain available at its memory location,
  save it into a safer place (char* safeBuffer).
  Note : you don't need to call LZ4_loadDict() afterwards ;
         the dictionary is immediately usable, so you can call LZ4_compress_fast_continue() right away.
  Return : saved dictionary size in bytes (necessarily <= dictSize), or 0 if error.
 
</p></pre><BR>
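
<p>A double-buffered streaming compression sketch ; readBlock() and writeBlock() are hypothetical I/O callbacks, the optional dictionary and the block size are illustrative:</p>
<pre>
#include "lz4.h"

#define BLOCK_SIZE (64 * 1024)

extern int  readBlock (char* buf, int maxSize);       /* hypothetical : returns bytes read, 0 at end of input */
extern void writeBlock(const char* buf, int size);    /* hypothetical : stores the block size plus its data */

void stream_compress_sketch(const char* dict, int dictSize)
{
    static char inBuf[2][BLOCK_SIZE];                  /* two buffers : the previous block stays valid */
    static char cmpBuf[LZ4_COMPRESSBOUND(BLOCK_SIZE)];
    LZ4_stream_t state;
    int idx = 0;

    LZ4_resetStream(&state);
    LZ4_loadDict(&state, dict, dictSize);              /* optional preset dictionary (NULL / 0 if none) */

    for (;;) {
        int inSize = readBlock(inBuf[idx], BLOCK_SIZE);
        if (inSize <= 0) break;

        int cSize = LZ4_compress_fast_continue(&state, inBuf[idx], cmpBuf,
                                               inSize, (int)sizeof(cmpBuf), 1);
        if (cSize <= 0) break;
        writeBlock(cmpBuf, cSize);

        idx ^= 1;   /* alternate buffers so the previously compressed block remains unmodified */
    }
}
</pre>
<p>If the previous input block cannot be kept at its memory location (for example when a single buffer is reused), LZ4_saveDict() can copy the last 64 KB of history into a separate buffer before the next LZ4_compress_fast_continue() call.</p><BR>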

<a name="Chapter7"></a><h2>Streaming Decompression Functions</h2><pre></pre>

<pre><b>typedef struct {
  union {
    unsigned long long table[LZ4_STREAMDECODESIZE_U64];
    LZ4_streamDecode_t_internal internal_donotuse;
  };
} LZ4_streamDecode_t;
</b></pre><BR>
<pre><b>LZ4_streamDecode_t* LZ4_createStreamDecode(void);
int                 LZ4_freeStreamDecode (LZ4_streamDecode_t* LZ4_stream);
</b><p> Information structure to track an LZ4 stream during decompression.
 init this structure's content using LZ4_setStreamDecode() or memset() before first use !

 In the context of a DLL (liblz4), please prefer usage of the construction methods below.
 They are more future proof, in case the size of LZ4_streamDecode_t changes in a future version.
 LZ4_createStreamDecode() will allocate and initialize an LZ4_streamDecode_t structure.
 LZ4_freeStreamDecode() releases its memory.
 
</p></pre><BR>

<pre><b>int LZ4_setStreamDecode (LZ4_streamDecode_t* LZ4_streamDecode, const char* dictionary, int dictSize);
</b><p>  Use this function to instruct the decoder where to find the dictionary.
  Setting a size of 0 is allowed (same effect as reset).
  @return : 1 if OK, 0 if error
 
</p></pre><BR>

<pre><b>int LZ4_decompress_safe_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* source, char* dest, int compressedSize, int maxDecompressedSize);
int LZ4_decompress_fast_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* source, char* dest, int originalSize);
</b><p>    These decoding functions allow decompression of multiple blocks in "streaming" mode.
    Previously decoded blocks *must* remain available at the memory position where they were decoded (up to 64 KB)
    In the case of a ring buffer, the decoding buffer must be either :
    - Exactly same size as encoding buffer, with same update rule (block boundaries at same positions)
      In which case, the decoding & encoding ring buffer can have any size, including very small ones ( < 64 KB).
    - Larger than encoding buffer, by a minimum of maxBlockSize more bytes.
      maxBlockSize is implementation dependent. It's the maximum size you intend to compress into a single block.
      In which case, encoding and decoding buffers do not need to be synchronized,
      and encoding ring buffer can have any size, including small ones ( < 64 KB).
    - _At least_ 64 KB + 8 bytes + maxBlockSize.
      In which case, encoding and decoding buffers do not need to be synchronized,
      and encoding ring buffer can have any size, including larger than decoding buffer.
    Whenever these conditions cannot be met, save the last 64 KB of decoded data into a safe buffer,
    and indicate where it is saved using LZ4_setStreamDecode()
</p></pre><BR>
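
<p>The matching decompression sketch for the double-buffered scheme above ; readCompressedBlock() is a hypothetical callback returning one compressed block at a time, and 'dict'/'dictSize' must match whatever the encoder loaded (NULL / 0 if none):</p>
<pre>
#include "lz4.h"

#define BLOCK_SIZE (64 * 1024)

extern int readCompressedBlock(char* buf, int maxSize);   /* hypothetical : returns compressed size, 0 at end */

void stream_decompress_sketch(const char* dict, int dictSize)
{
    static char decBuf[2][BLOCK_SIZE];                      /* same geometry as the encoder's buffers */
    static char cmpBuf[LZ4_COMPRESSBOUND(BLOCK_SIZE)];
    LZ4_streamDecode_t state;
    int idx = 0;

    LZ4_setStreamDecode(&state, dict, dictSize);             /* init ; same dictionary as the encoder */

    for (;;) {
        int cSize = readCompressedBlock(cmpBuf, (int)sizeof(cmpBuf));
        if (cSize <= 0) break;

        int dSize = LZ4_decompress_safe_continue(&state, cmpBuf, decBuf[idx],
                                                 cSize, BLOCK_SIZE);
        if (dSize < 0) break;                                /* malformed block */
        /* consume decBuf[idx][0..dSize) here ; it must stay valid while the next block is decoded */
        idx ^= 1;
    }
}
</pre><BR>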

<pre><b>int LZ4_decompress_safe_usingDict (const char* source, char* dest, int compressedSize, int maxDecompressedSize, const char* dictStart, int dictSize);
int LZ4_decompress_fast_usingDict (const char* source, char* dest, int originalSize, const char* dictStart, int dictSize);
</b><p>Advanced decoding functions :
    These decoding functions work the same as
    a combination of LZ4_setStreamDecode() followed by LZ4_decompress_*_continue().
    They are stand-alone : they neither need nor update an LZ4_streamDecode_t structure.
</p></pre><BR>
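
<p>A stand-alone single-block decode sketch ; the dictionary is assumed to be the same one used at compression time:</p>
<pre>
#include "lz4.h"

int decode_block_with_dict(const char* cData, int cSize,
                           char* dst, int dstCapacity,
                           const char* dict, int dictSize)
{
    int dSize = LZ4_decompress_safe_usingDict(cData, dst, cSize, dstCapacity, dict, dictSize);
    /* negative result : malformed data or dst too small ; no LZ4_streamDecode_t is needed */
    return dSize;
}
</pre><BR>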

</body>
</html>