package lz4
Import Path
github.com/pierrec/lz4/v4 (on go.dev)
Dependency Relation
imports 10 packages, and is imported by 3 packages
Involved Source Files
compressing_reader.go
Package lz4 implements reading and writing lz4 compressed data.
The package supports both the LZ4 stream format,
as specified in http://fastcompression.blogspot.fr/2013/04/lz4-streaming-format-final.html,
and the LZ4 block format, defined at
http://fastcompression.blogspot.fr/2011/05/lz4-explained.html.
See https://github.com/lz4/lz4 for the reference C implementation.
options.go
options_gen.go
reader.go
state.go
state_gen.go
writer.go
Code Examples
{
	// Stream-compress s through a pipe: the writer side compresses into pw,
	// the reader side decompresses from pr to stdout.
	s := "hello world"
	r := strings.NewReader(s)
	pr, pw := io.Pipe()
	zw := lz4.NewWriter(pw)
	zr := lz4.NewReader(pr)
	go func() {
		// Compress the input, finish the frame, then close the pipe.
		_, _ = io.Copy(zw, r)
		_ = zw.Close()
		_ = pw.Close()
	}()
	// Decompress from the pipe as data arrives.
	_, _ = io.Copy(os.Stdout, zr)
}
{
	// Compress and then uncompress a block of repeated, highly compressible data.
	s := "hello world"
	data := []byte(strings.Repeat(s, 100))

	// Sizing the destination with CompressBlockBound guarantees compression succeeds.
	buf := make([]byte, lz4.CompressBlockBound(len(data)))
	var c lz4.Compressor
	n, err := c.CompressBlock(data, buf)
	if err != nil {
		fmt.Println(err)
	}
	if n >= len(data) {
		// The compressed output is not smaller than the input.
		fmt.Printf("`%s` is not compressible", s)
	}
	buf = buf[:n] // compressed data

	// The destination must be large enough for the uncompressed data.
	out := make([]byte, 10*len(data))
	n, err = lz4.UncompressBlock(buf, out)
	if err != nil {
		fmt.Println(err)
	}
	out = out[:n] // uncompressed data

	fmt.Println(string(out[:len(s)]))
}
Package-Level Type Names (total 8)
type BlockSize
BlockSize defines the size of the blocks to be compressed.
( BlockSize) String() string
BlockSize : expvar.Var
BlockSize : fmt.Stringer
func BlockSizeOption(size BlockSize) Option
const Block1Mb
const Block256Kb
const Block4Mb
const Block64Kb
type CompressingReader
Apply applies useful options to the lz4 encoder.
Close simply invokes the underlying stream Close method. This method is
provided for the benefit of Go http client/server, which relies on Close
for goroutine termination.
Read allows reading of lz4 compressed data.
Reset makes the stream usable again; mostly handy to reuse lz4 encoder
instances.
Source exposes the underlying source stream for introspection and control.
*CompressingReader : github.com/prometheus/common/expfmt.Closer
*CompressingReader : io.Closer
*CompressingReader : io.ReadCloser
*CompressingReader : io.Reader
func NewCompressingReader(src io.ReadCloser) *CompressingReader
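A minimal sketch, not taken from the package's own examples, of how CompressingReader might be used: it wraps a plain source so that reads from it yield LZ4 frame data, which an lz4.Reader can then decode. Only the API listed above is assumed; the input string is arbitrary.

package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"

	"github.com/pierrec/lz4/v4"
)

func main() {
	// Wrap a plain source so that reading from it yields LZ4 frame data.
	src := io.NopCloser(strings.NewReader(strings.Repeat("hello world ", 1000)))
	zr := lz4.NewCompressingReader(src)

	var frame bytes.Buffer
	if _, err := io.Copy(&frame, zr); err != nil {
		panic(err)
	}
	_ = zr.Close() // invokes Close on the underlying source

	// The output is a regular LZ4 frame, so lz4.NewReader can decode it.
	var out bytes.Buffer
	if _, err := io.Copy(&out, lz4.NewReader(&frame)); err != nil {
		panic(err)
	}
	fmt.Println(out.Len()) // 12 * 1000 = 12000 bytes recovered
}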
type CompressionLevel
CompressionLevel defines the level of compression to use. Higher levels compress better but are slower.
( CompressionLevel) String() string
CompressionLevel : expvar.Var
CompressionLevel : fmt.Stringer
func CompressBlockHC(src, dst []byte, depth CompressionLevel, _, _ []int) (int, error)
func CompressionLevelOption(level CompressionLevel) Option
const Fast
const Level1
const Level2
const Level3
const Level4
const Level5
const Level6
const Level7
const Level8
const Level9
const github.com/parquet-go/parquet-go/compress/lz4.DefaultLevel
const github.com/parquet-go/parquet-go/compress/lz4.Fast
const github.com/parquet-go/parquet-go/compress/lz4.Fastest
const github.com/parquet-go/parquet-go/compress/lz4.Level1
const github.com/parquet-go/parquet-go/compress/lz4.Level2
const github.com/parquet-go/parquet-go/compress/lz4.Level3
const github.com/parquet-go/parquet-go/compress/lz4.Level4
const github.com/parquet-go/parquet-go/compress/lz4.Level5
const github.com/parquet-go/parquet-go/compress/lz4.Level6
const github.com/parquet-go/parquet-go/compress/lz4.Level7
const github.com/parquet-go/parquet-go/compress/lz4.Level8
const github.com/parquet-go/parquet-go/compress/lz4.Level9
type Compressor
A Compressor compresses data into the LZ4 block format.
It uses a fast compression algorithm.
A Compressor is not safe for concurrent use by multiple goroutines.
Use a Writer to compress into the LZ4 stream format.
CompressBlock compresses the source buffer src into the destination dst.
If compression is successful, the first return value is the size of the
compressed data, which is always >0.
If dst has length at least CompressBlockBound(len(src)), compression always
succeeds. Otherwise, the first return value is zero. The error return is
non-nil if the compressed data does not fit in dst, but it might fit in a
larger buffer that is still smaller than CompressBlockBound(len(src)). The
return value (0, nil) means the data is likely incompressible and a buffer
of length CompressBlockBound(len(src)) should be passed in.
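A sketch of the contract described above, not from the package's examples: the destination is deliberately capped at the input size, so a (0, nil) result signals likely-incompressible data and a non-nil error signals output that did not fit.

package main

import (
	"fmt"
	"strings"

	"github.com/pierrec/lz4/v4"
)

func main() {
	data := []byte(strings.Repeat("lz4 block ", 500))

	var c lz4.Compressor // fast block compressor; not safe for concurrent use

	// Offer a destination no larger than the input: per the documentation above,
	// a block that does not shrink yields (0, nil) and can be kept raw instead.
	dst := make([]byte, len(data))
	n, err := c.CompressBlock(data, dst)
	if err != nil {
		panic(err) // compressed data did not fit in dst
	}
	if n == 0 {
		fmt.Println("likely incompressible, keeping raw bytes")
		return
	}
	fmt.Printf("compressed %d bytes into %d\n", len(data), n)
}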
type CompressorHC
A CompressorHC compresses data into the LZ4 block format.
Its compression ratio is potentially better than that of a Compressor,
but it is also slower and requires more memory.
A CompressorHC is not safe for concurrent use by multiple goroutines.
Use a Writer to compress into the LZ4 stream format.
Level is the maximum search depth for compression.
Values <= 0 mean no maximum.
CompressBlock compresses the source buffer src into the destination dst.
If compression is successful, the first return value is the size of the
compressed data, which is always >0.
If dst has length at least CompressBlockBound(len(src)), compression always
succeeds. Otherwise, the first return value is zero. The error return is
non-nil if the compressed data does not fit in dst, but it might fit in a
larger buffer that is still smaller than CompressBlockBound(len(src)). The
return value (0, nil) means the data is likely incompressible and a buffer
of length CompressBlockBound(len(src)) should be passed in.
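An illustrative sketch, not from the package's examples, of the high-compression variant with its Level field set; higher levels trade speed for compression ratio.

package main

import (
	"fmt"
	"strings"

	"github.com/pierrec/lz4/v4"
)

func main() {
	data := []byte(strings.Repeat("lz4 block ", 500))

	// Level bounds the match search depth; values <= 0 mean no maximum.
	c := lz4.CompressorHC{Level: lz4.Level5}
	dst := make([]byte, lz4.CompressBlockBound(len(data)))

	n, err := c.CompressBlock(data, dst)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d -> %d bytes\n", c.Level, len(data), n)
}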
type Option
Option defines the parameters to set up an LZ4 Writer or Reader.
String returns a string representation of the option with its parameter(s).
Option : expvar.Var
Option : fmt.Stringer
func BlockChecksumOption(flag bool) Option
func BlockSizeOption(size BlockSize) Option
func ChecksumOption(flag bool) Option
func CompressionLevelOption(level CompressionLevel) Option
func ConcurrencyOption(n int) Option
func LegacyOption(legacy bool) Option
func OnBlockDoneOption(handler func(size int)) Option
func SizeOption(size uint64) Option
func (*CompressingReader).Apply(options ...Option) (err error)
func (*Reader).Apply(options ...Option) (err error)
func (*Writer).Apply(options ...Option) (err error)
var DefaultBlockSizeOption
var DefaultChecksumOption
var DefaultConcurrency
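A sketch of how the options listed above might be combined on a Writer; Apply is called before the first Write, and only option constructors shown in this documentation are used.

package main

import (
	"bytes"
	"fmt"

	"github.com/pierrec/lz4/v4"
)

func main() {
	var buf bytes.Buffer
	zw := lz4.NewWriter(&buf)

	// Apply options before any data is written.
	err := zw.Apply(
		lz4.BlockSizeOption(lz4.Block256Kb),
		lz4.CompressionLevelOption(lz4.Level3),
		lz4.ChecksumOption(true),
		lz4.OnBlockDoneOption(func(size int) { fmt.Println("block done:", size) }),
	)
	if err != nil {
		panic(err)
	}

	if _, err := zw.Write([]byte("hello options")); err != nil {
		panic(err)
	}
	if err := zw.Close(); err != nil {
		panic(err)
	}
	fmt.Println("frame size:", buf.Len())
}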
type Reader
Reader allows reading an LZ4 stream.
(*Reader) Apply(options ...Option) (err error)
(*Reader) Read(buf []byte) (n int, err error)
Reset clears the state of the Reader r such that it is equivalent to its
initial state from NewReader, but instead reading from reader.
No access to reader is performed.
Size returns the size of the underlying uncompressed data, if set in the stream.
WriteTo efficiently uncompresses the data from the Reader's underlying source to w.
*Reader : github.com/gobwas/ws.HandshakeHeader
*Reader : github.com/gogo/protobuf/proto.Sizer
*Reader : io.Reader
*Reader : io.WriterTo
func NewReader(r io.Reader) *Reader
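A brief sketch, not from the package's examples, of decoding a frame with WriteTo and then reusing the Reader via Reset.

package main

import (
	"bytes"
	"io"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	// Build a small frame to decode.
	var frame bytes.Buffer
	zw := lz4.NewWriter(&frame)
	_, _ = zw.Write([]byte("hello reader\n"))
	_ = zw.Close()
	data := frame.Bytes()

	zr := lz4.NewReader(bytes.NewReader(data))
	// WriteTo uncompresses straight into the destination writer.
	if _, err := zr.WriteTo(os.Stdout); err != nil {
		panic(err)
	}

	// Reset returns the Reader to its initial state, now reading another frame.
	zr.Reset(bytes.NewReader(data))
	_, _ = io.Copy(io.Discard, zr)
}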
type Writer
Writer allows writing an LZ4 stream.
(*Writer) Apply(options ...Option) (err error)
Close closes the Writer, flushing any unwritten data to the underlying writer
without closing it.
Flush any buffered data to the underlying writer immediately.
ReadFrom efficiently reads from r and compresses the data into the Writer destination.
Reset clears the state of the Writer w such that it is equivalent to its
initial state from NewWriter, but instead writing to writer.
Reset keeps the previous options unless overwritten by the supplied ones.
No access to writer is performed.
w.Close must be called before Reset or pending data may be dropped.
(*Writer) Write(buf []byte) (n int, err error)
*Writer : github.com/apache/thrift/lib/go/thrift.Flusher
*Writer : github.com/miekg/dns.Writer
*Writer : github.com/parquet-go/parquet-go/compress.Writer
*Writer : github.com/prometheus/common/expfmt.Closer
*Writer : internal/bisect.Writer
*Writer : io.Closer
*Writer : io.ReaderFrom
*Writer : io.WriteCloser
*Writer : io.Writer
func NewWriter(w io.Writer) *Writer
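A brief sketch of the Writer lifecycle implied above, not taken from the package's examples: ReadFrom compresses a whole source, Close finishes the frame, and Reset reuses the encoder for a new destination.

package main

import (
	"bytes"
	"fmt"
	"strings"

	"github.com/pierrec/lz4/v4"
)

func main() {
	var out bytes.Buffer
	zw := lz4.NewWriter(&out)

	// ReadFrom compresses everything read from the source.
	if _, err := zw.ReadFrom(strings.NewReader(strings.Repeat("hello writer ", 1000))); err != nil {
		panic(err)
	}
	// Close finishes the frame; the underlying writer itself stays open.
	if err := zw.Close(); err != nil {
		panic(err)
	}
	fmt.Println("frame size:", out.Len())

	// Per the documentation above, Close must precede Reset.
	var out2 bytes.Buffer
	zw.Reset(&out2)
	_, _ = zw.Write([]byte("second frame"))
	_ = zw.Close()
}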
Package-Level Functions (total 17)
BlockChecksumOption enables or disables block checksum (default=false).
BlockSizeOption defines the maximum size of compressed blocks (default=Block4Mb).
ChecksumOption enables/disables all blocks or content checksum (default=true).
CompressBlock is equivalent to Compressor.CompressBlock.
The final argument is ignored and should be set to nil.
This function is deprecated. Use a Compressor instead.
CompressBlockBound returns the maximum size of a given buffer of size n, when not compressible.
CompressBlockHC is equivalent to CompressorHC.CompressBlock.
The final two arguments are ignored and should be set to nil.
This function is deprecated. Use a CompressorHC instead.
CompressionLevelOption defines the compression level (default=Fast).
ConcurrencyOption sets the number of goroutines used for compression.
If n <= 0, then the output of runtime.GOMAXPROCS(0) is used.
LegacyOption provides support for writing LZ4 frames in the legacy format.
See https://github.com/lz4/lz4/blob/dev/doc/lz4_Frame_format.md#legacy-frame.
NB. compressed Linux kernel images use a tweaked LZ4 legacy format where
the compressed stream is followed by the original (uncompressed) size of
the kernel (https://events.static.linuxfound.org/sites/events/files/lcjpcojp13_klee.pdf).
This is also supported as a special case.
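A hedged sketch of writing a legacy frame by applying LegacyOption before the first Write; the magic-number comment reflects the legacy format's 0x184C2102 header described in the linked specification, not output verified here.

package main

import (
	"bytes"
	"fmt"

	"github.com/pierrec/lz4/v4"
)

func main() {
	var buf bytes.Buffer
	zw := lz4.NewWriter(&buf)

	// Switch to the legacy frame format before writing any data.
	if err := zw.Apply(lz4.LegacyOption(true)); err != nil {
		panic(err)
	}
	if _, err := zw.Write([]byte("legacy frame payload")); err != nil {
		panic(err)
	}
	if err := zw.Close(); err != nil {
		panic(err)
	}

	// The legacy format starts with the 0x184C2102 magic number (little-endian).
	fmt.Printf("% x\n", buf.Bytes()[:4])
}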
NewCompressingReader creates a reader which reads compressed data from a
raw stream. This makes it a logical opposite of a normal lz4.Reader.
We require an io.ReadCloser as an underlying source for compatibility
with Go's http.Request.
NewReader returns a new LZ4 frame decoder.
NewWriter returns a new LZ4 frame encoder.
OnBlockDoneOption sets a handler triggered when a block has been processed. For a Writer, it is when the block has been compressed;
for a Reader, when it has been uncompressed.
SizeOption sets the size of the original uncompressed data (default=0). It is useful to know the size of the
whole uncompressed data stream.
UncompressBlock uncompresses the source buffer into the destination one,
and returns the uncompressed size.
The destination buffer must be sized appropriately.
An error is returned if the source data is invalid or the destination buffer is too small.
UncompressBlockWithDict uncompresses the source buffer into the destination one using a
dictionary, and returns the uncompressed size.
The destination buffer must be sized appropriately.
An error is returned if the source data is invalid or the destination buffer is too small.
ValidFrameHeader returns a bool indicating whether the given byte slice matches an LZ4 frame header.
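A small sketch of frame detection. It assumes ValidFrameHeader has the signature func(in []byte) (bool, error); the page above only describes the boolean result, so treat the error return as an assumption.

package main

import (
	"bytes"
	"fmt"

	"github.com/pierrec/lz4/v4"
)

func main() {
	var buf bytes.Buffer
	zw := lz4.NewWriter(&buf)
	_, _ = zw.Write([]byte("probe"))
	_ = zw.Close()

	// A real frame should match; arbitrary bytes should not.
	ok, err := lz4.ValidFrameHeader(buf.Bytes())
	fmt.Println(ok, err)

	ok, err = lz4.ValidFrameHeader([]byte("definitely not an lz4 frame"))
	fmt.Println(ok, err)
}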
Package-Level Variables (total 3)
var DefaultBlockSizeOption
Default options.
var DefaultChecksumOption
Default options.
var DefaultConcurrency
Default options.
Package-Level Constants (total 25)
const Block1Mb BlockSize = 1048576
const Block256Kb BlockSize = 262144
const Block4Mb BlockSize = 4194304
const Block64Kb BlockSize = 65536
ErrInternalUnhandledState is an internal error.
ErrInvalidBlockChecksum is returned when reading a frame whose block checksum does not match.
ErrInvalidFrame is returned when reading an invalid LZ4 archive.
ErrInvalidFrameChecksum is returned when reading a frame whose frame checksum does not match.
ErrInvalidHeaderChecksum is returned when reading a frame whose header checksum does not match.
ErrInvalidSourceShortBuffer is returned by UncompressBlock or CompressBlock when a compressed
block is corrupted or the destination buffer is not large enough for the uncompressed data.
ErrOptionClosedOrError is returned when an option is applied to a closed or in error object.
ErrOptionInvalidBlockSize is returned when the supplied block size is invalid.
ErrOptionInvalidCompressionLevel is returned when the supplied compression level is invalid.
ErrOptionNotApplicable is returned when trying to apply an option to an object not supporting it.
ErrWriterNotClosed is returned when attempting to reset an unclosed writer.
const Fast CompressionLevel = 0
const Level1 CompressionLevel = 512
const Level2 CompressionLevel = 1024
const Level3 CompressionLevel = 2048
const Level4 CompressionLevel = 4096
const Level5 CompressionLevel = 8192
const Level6 CompressionLevel = 16384
const Level7 CompressionLevel = 32768
const Level8 CompressionLevel = 65536
const Level9 CompressionLevel = 131072