package ipc

Import Path
	github.com/apache/arrow-go/v18/arrow/ipc

Dependency Relation
	imports 26 packages and is imported by 4 packages


Package-Level Type Names (total 12)
type FileReader
	FileReader is an Arrow file reader.

	Close cleans up resources used by the File. Close does not close the underlying reader.
	(*FileReader) NumDictionaries() int
	(*FileReader) NumRecords() int
	Read reads the current record batch from the underlying stream, returning it and an error, if any. When the Reader reaches the end of the underlying stream, it returns (nil, io.EOF). The returned record batch value is valid until the next call to Read. Users need to call Retain on that RecordBatch to keep it valid for longer.
	ReadAt reads the i-th record batch from the underlying stream, returning it and an error, if any.
	Record returns the i-th record from the file. The returned value is valid until the next call to Record. Users need to call Retain on that Record to keep it valid for longer. Deprecated: Use [RecordBatch] instead.
	RecordAt returns the i-th record from the file. Ownership is transferred to the caller, which must call Release() to free the memory. This method is safe to call concurrently. Deprecated: Use [RecordBatchAt] instead.
	RecordBatch returns the i-th record batch from the file. The returned value is valid until the next call to RecordBatch. Users need to call Retain on that RecordBatch to keep it valid for longer.
	RecordBatchAt returns the i-th record batch from the file. Ownership is transferred to the caller, which must call Release() to free the memory. This method is safe to call concurrently.
	(*FileReader) Schema() *arrow.Schema
	(*FileReader) Version() MetadataVersion

	Implements:
		*FileReader : github.com/apache/arrow-go/v18/arrow/arrio.Reader
		*FileReader : github.com/apache/arrow-go/v18/arrow/arrio.ReaderAt
		*FileReader : github.com/prometheus/common/expfmt.Closer
		*FileReader : io.Closer

	As outputs of:
		func NewFileReader(r ReadAtSeeker, opts ...Option) (*FileReader, error)
		func NewMappedFileReader(data []byte, opts ...Option) (*FileReader, error)
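A minimal sketch of reading an Arrow file with FileReader. The path is hypothetical, and the sketch assumes RecordBatchAt has the shape RecordBatchAt(i int) (arrow.RecordBatch, error); the listing above confirms the method and its ownership semantics, not its exact signature.

	package main

	import (
		"fmt"
		"log"
		"os"

		"github.com/apache/arrow-go/v18/arrow/ipc"
		"github.com/apache/arrow-go/v18/arrow/memory"
	)

	func main() {
		// Any ReadAtSeeker works; *os.File is the common case.
		f, err := os.Open("data.arrow") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		rdr, err := ipc.NewFileReader(f, ipc.WithAllocator(memory.DefaultAllocator))
		if err != nil {
			log.Fatal(err)
		}
		defer rdr.Close() // does not close the underlying *os.File

		fmt.Println("schema:", rdr.Schema())
		for i := 0; i < rdr.NumRecords(); i++ {
			// RecordBatchAt transfers ownership to the caller,
			// so Release must be called when done.
			rec, err := rdr.RecordBatchAt(i)
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println("rows:", rec.NumRows())
			rec.Release()
		}
	}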
type FileWriter
	FileWriter is an Arrow file writer.

	(*FileWriter) Close() error
	(*FileWriter) Write(rec arrow.RecordBatch) error

	Implements:
		*FileWriter : github.com/apache/arrow-go/v18/arrow/arrio.Writer
		*FileWriter : github.com/prometheus/common/expfmt.Closer
		*FileWriter : io.Closer

	As outputs of:
		func NewFileWriter(w io.Writer, opts ...Option) (*FileWriter, error)
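A minimal sketch of writing a single record batch to an Arrow file; the output path, field name, and values are hypothetical.

	package main

	import (
		"log"
		"os"

		"github.com/apache/arrow-go/v18/arrow"
		"github.com/apache/arrow-go/v18/arrow/array"
		"github.com/apache/arrow-go/v18/arrow/ipc"
		"github.com/apache/arrow-go/v18/arrow/memory"
	)

	func main() {
		mem := memory.DefaultAllocator
		schema := arrow.NewSchema([]arrow.Field{
			{Name: "id", Type: arrow.PrimitiveTypes.Int64}, // hypothetical field
		}, nil)

		// Build a small record batch to write.
		bldr := array.NewRecordBuilder(mem, schema)
		defer bldr.Release()
		bldr.Field(0).(*array.Int64Builder).AppendValues([]int64{1, 2, 3}, nil)
		rec := bldr.NewRecord()
		defer rec.Release()

		f, err := os.Create("data.arrow") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		w, err := ipc.NewFileWriter(f, ipc.WithSchema(schema), ipc.WithAllocator(mem))
		if err != nil {
			log.Fatal(err)
		}
		if err := w.Write(rec); err != nil {
			log.Fatal(err)
		}
		// Close finalizes the file footer; it does not close f.
		if err := w.Close(); err != nil {
			log.Fatal(err)
		}
	}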
type Message
	Message is an IPC message, including metadata and body.

	(*Message) BodyLen() int64
	Release decreases the reference count by 1. Release may be called simultaneously from multiple goroutines. When the reference count goes to zero, the memory is freed.
	Retain increases the reference count by 1. Retain may be called simultaneously from multiple goroutines.
	(*Message) Type() MessageType
	(*Message) Version() MetadataVersion

	Implements:
		*Message : github.com/apache/arrow-go/v18/arrow/scalar.Releasable

	As outputs of:
		func NewMessage(meta, body *memory.Buffer) *Message
		func MessageReader.Message() (*Message, error)
type MessageReader (interface)
	(MessageReader) Message() (*Message, error)
	(MessageReader) Release()
	(MessageReader) Retain()

	Implements:
		MessageReader : github.com/apache/arrow-go/v18/arrow/scalar.Releasable

	As outputs of:
		func NewMessageReader(r io.Reader, opts ...Option) MessageReader

	As inputs of:
		func NewReaderFromMessageReader(r MessageReader, opts ...Option) (reader *Reader, err error)
type MessageType
	MessageType represents the type of Message in an Arrow format.

	(MessageType) String() string

	Implements:
		MessageType : expvar.Var
		MessageType : fmt.Stringer

	As outputs of:
		func (*Message).Type() MessageType

	Related constants:
		const MessageDictionaryBatch
		const MessageNone
		const MessageRecordBatch
		const MessageSchema
		const MessageSparseTensor
		const MessageTensor
type MetadataVersion
	MetadataVersion represents the Arrow metadata version.

	(MetadataVersion) String() string

	Implements:
		MetadataVersion : expvar.Var
		MetadataVersion : fmt.Stringer

	As outputs of:
		func (*FileReader).Version() MetadataVersion
		func (*Message).Version() MetadataVersion

	Related constants:
		const MetadataV1
		const MetadataV2
		const MetadataV3
		const MetadataV4
		const MetadataV5
type Option
	Option is a functional option to configure opening or creating Arrow files and streams.

	As outputs of:
		func WithAllocator(mem memory.Allocator) Option
		func WithCompressConcurrency(n int) Option
		func WithDelayReadSchema(v bool) Option
		func WithDictionaryDeltas(v bool) Option
		func WithEnsureNativeEndian(v bool) Option
		func WithFooterOffset(offset int64) Option
		func WithLZ4() Option
		func WithMinSpaceSavings(savings float64) Option
		func WithSchema(schema *arrow.Schema) Option
		func WithZstd() Option

	As inputs of:
		func GetRecordBatchPayload(batch arrow.RecordBatch, opts ...Option) (Payload, error)
		func NewFileReader(r ReadAtSeeker, opts ...Option) (*FileReader, error)
		func NewFileWriter(w io.Writer, opts ...Option) (*FileWriter, error)
		func NewMappedFileReader(data []byte, opts ...Option) (*FileReader, error)
		func NewMessageReader(r io.Reader, opts ...Option) MessageReader
		func NewReader(r io.Reader, opts ...Option) (*Reader, error)
		func NewReaderFromMessageReader(r MessageReader, opts ...Option) (reader *Reader, err error)
		func NewWriter(w io.Writer, opts ...Option) *Writer
		func NewWriterWithPayloadWriter(pw PayloadWriter, opts ...Option) *Writer
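Since every constructor accepts a variadic ...Option, configuration composes at the call site. A minimal sketch; this particular combination of options is illustrative, not required.

	package main

	import (
		"bytes"
		"log"

		"github.com/apache/arrow-go/v18/arrow"
		"github.com/apache/arrow-go/v18/arrow/ipc"
		"github.com/apache/arrow-go/v18/arrow/memory"
	)

	func main() {
		schema := arrow.NewSchema([]arrow.Field{
			{Name: "v", Type: arrow.PrimitiveTypes.Float64}, // hypothetical field
		}, nil)

		var buf bytes.Buffer
		w := ipc.NewWriter(&buf,
			ipc.WithSchema(schema),
			ipc.WithAllocator(memory.DefaultAllocator),
			ipc.WithZstd(),                 // ZSTD-compress body buffers
			ipc.WithCompressConcurrency(4), // compress with up to 4 goroutines
			ipc.WithMinSpaceSavings(0.1),   // skip compression below 10% savings
		)
		if err := w.Close(); err != nil {
			log.Fatal(err)
		}
	}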
type Payload
	Payload is the underlying message object which is passed to the payload writer for actually writing out IPC messages.

	Meta returns the buffer containing the metadata for this payload; callers must call Release on the buffer.
	(*Payload) Release()
	SerializeBody serializes the body buffers and writes them to the provided writer.
	WritePayload serializes the payload in IPC format into the provided writer.

	As outputs of:
		func GetRecordBatchPayload(batch arrow.RecordBatch, opts ...Option) (Payload, error)
		func GetSchemaPayload(schema *arrow.Schema, mem memory.Allocator) Payload

	As inputs of:
		func PayloadWriter.WritePayload(Payload) error
type PayloadWriter (interface)
	PayloadWriter is an interface for injecting a different payload writer, allowing the Writer object to be reused in other scenarios, such as with Flight data.

	(PayloadWriter) Close() error
	(PayloadWriter) Start() error
	(PayloadWriter) WritePayload(Payload) error

	Implements:
		PayloadWriter : github.com/prometheus/common/expfmt.Closer
		PayloadWriter : io.Closer

	As inputs of:
		func NewWriterWithPayloadWriter(pw PayloadWriter, opts ...Option) *Writer
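A sketch of plugging in a custom PayloadWriter. The countingPayloadWriter type here is hypothetical and only counts payloads; a real implementation (such as Arrow Flight's) would encode each Payload into its transport.

	package ipcexample

	import "github.com/apache/arrow-go/v18/arrow/ipc"

	// countingPayloadWriter counts payloads instead of serializing them.
	type countingPayloadWriter struct {
		n int
	}

	func (c *countingPayloadWriter) Start() error { return nil }

	func (c *countingPayloadWriter) WritePayload(p ipc.Payload) error {
		c.n++
		return nil
	}

	func (c *countingPayloadWriter) Close() error { return nil }

	// Compile-time check that the interface is satisfied.
	var _ ipc.PayloadWriter = (*countingPayloadWriter)(nil)

Such a writer would then be passed to NewWriterWithPayloadWriter in place of the default stream payload writer.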
type ReadAtSeeker (interface)
	(ReadAtSeeker) Read(p []byte) (n int, err error)
	(ReadAtSeeker) ReadAt(p []byte, off int64) (n int, err error)
	(ReadAtSeeker) Seek(offset int64, whence int) (int64, error)

	Implemented by:
		github.com/apache/arrow-go/v18/internal/utils.Reader (interface)
		github.com/coreos/etcd/pkg/fileutil.LockedFile
		*github.com/klauspost/compress/s2.ReadSeeker
		*github.com/polarsignals/wal/fs.File
		*bytes.Reader
		*io.SectionReader
		mime/multipart.File (interface)
		*os.File
		*strings.Reader

	Implements:
		ReadAtSeeker : github.com/apache/arrow-go/v18/internal/utils.Reader
		ReadAtSeeker : github.com/apache/arrow-go/v18/parquet.ReaderAtSeeker
		ReadAtSeeker : io.Reader
		ReadAtSeeker : io.ReaderAt
		ReadAtSeeker : io.ReadSeeker
		ReadAtSeeker : io.Seeker

	As inputs of:
		func NewFileReader(r ReadAtSeeker, opts ...Option) (*FileReader, error)
type Reader
	Reader reads records from an io.Reader. Reader expects a schema (plus any dictionaries) as the first messages in the stream, followed by records.

	Err returns the last error encountered during the iteration over the underlying stream.
	Next returns whether a RecordBatch could be extracted from the underlying stream.
	Read reads the current record batch from the underlying stream, returning it and an error, if any. When the Reader reaches the end of the underlying stream, it returns (nil, io.EOF).
	Record returns the current record that has been extracted from the underlying stream. It is valid until the next call to Next. Deprecated: Use [RecordBatch] instead.
	RecordBatch returns the current record batch that has been extracted from the underlying stream. It is valid until the next call to Next.
	Release decreases the reference count by 1. When the reference count goes to zero, the memory is freed. Release may be called simultaneously from multiple goroutines.
	Retain increases the reference count by 1. Retain may be called simultaneously from multiple goroutines.
	(*Reader) Schema() *arrow.Schema

	Implements:
		*Reader : github.com/apache/arrow-go/v18/arrow/array.RecordReader
		*Reader : github.com/apache/arrow-go/v18/arrow/arrio.Reader
		*Reader : github.com/apache/arrow-go/v18/arrow/compute/exec.ArrayIter[bool]
		*Reader : github.com/apache/arrow-go/v18/arrow/scalar.Releasable

	As outputs of:
		func NewReader(r io.Reader, opts ...Option) (*Reader, error)
		func NewReaderFromMessageReader(r MessageReader, opts ...Option) (reader *Reader, err error)
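The usual consumption pattern is the Next/RecordBatch/Err loop. A minimal sketch, assuming RecordBatch() returns the current arrow.RecordBatch (the listing above confirms the method, not its exact signature).

	package ipcexample

	import (
		"fmt"
		"io"

		"github.com/apache/arrow-go/v18/arrow/ipc"
	)

	// readAll drains an Arrow IPC stream using the Next/RecordBatch/Err idiom.
	func readAll(r io.Reader) error {
		rdr, err := ipc.NewReader(r) // reads the schema message immediately by default
		if err != nil {
			return err
		}
		defer rdr.Release()

		for rdr.Next() {
			rec := rdr.RecordBatch() // valid only until the next call to Next
			fmt.Println("rows:", rec.NumRows())
			// Call rec.Retain() (and later Release) to keep it beyond Next.
		}
		return rdr.Err() // non-nil if iteration stopped on a real error
	}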
type Writer
	Writer is an Arrow stream writer.

	(*Writer) Close() error
	(*Writer) Write(rec arrow.RecordBatch) (err error)

	Implements:
		*Writer : github.com/apache/arrow-go/v18/arrow/arrio.Writer
		*Writer : github.com/prometheus/common/expfmt.Closer
		*Writer : io.Closer

	As outputs of:
		func NewWriter(w io.Writer, opts ...Option) *Writer
		func NewWriterWithPayloadWriter(pw PayloadWriter, opts ...Option) *Writer
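A minimal sketch of encoding record batches as an Arrow IPC stream in memory.

	package ipcexample

	import (
		"bytes"

		"github.com/apache/arrow-go/v18/arrow"
		"github.com/apache/arrow-go/v18/arrow/ipc"
	)

	// writeStream encodes the given record batches as an Arrow IPC stream.
	func writeStream(schema *arrow.Schema, recs []arrow.RecordBatch) ([]byte, error) {
		var buf bytes.Buffer
		w := ipc.NewWriter(&buf, ipc.WithSchema(schema))
		for _, rec := range recs {
			if err := w.Write(rec); err != nil {
				return nil, err
			}
		}
		// Close writes the end-of-stream marker; it does not close buf.
		if err := w.Close(); err != nil {
			return nil, err
		}
		return buf.Bytes(), nil
	}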
Package-Level Functions (total 21)
GetRecordBatchPayload produces the ipc payload for a given record batch. The resulting payload itself must be released by the caller via the Release method after it is no longer needed.
GetSchemaPayload produces the ipc payload for a given schema.
NewFileReader opens an Arrow file using the provided reader r.
NewFileWriter opens an Arrow file using the provided writer w.
NewMappedFileReader is like NewFileReader but instead of using a ReadAtSeeker, which will force copies through the Read/ReadAt methods, it uses a byte slice and pulls slices directly from the data. This is useful specifically when dealing with mmapped data so that you can lazily load the buffers and avoid extraneous copies. The slices used for the record column buffers will simply reference the existing data instead of performing copies via ReadAt/Read. For example, syscall.Mmap returns a byte slice which could be referencing a shared memory region or otherwise a memory-mapped file.
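A minimal sketch; os.ReadFile stands in for a real memory mapping (e.g. syscall.Mmap) only to keep the example portable, and the path is hypothetical.

	package main

	import (
		"fmt"
		"log"
		"os"

		"github.com/apache/arrow-go/v18/arrow/ipc"
	)

	func main() {
		// With a true mmap the record buffers would reference the mapped
		// region directly; os.ReadFile still exercises the same API.
		data, err := os.ReadFile("data.arrow") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}

		rdr, err := ipc.NewMappedFileReader(data)
		if err != nil {
			log.Fatal(err)
		}
		defer rdr.Close()

		fmt.Println("records in file:", rdr.NumRecords())
	}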
NewMessage creates a new message from the metadata and body buffers. NewMessage panics if any of these buffers is nil.
NewMessageReader returns a reader that reads messages from an input stream.
NewReader returns a reader that reads records from an input stream.
NewReaderFromMessageReader constructs a new Reader with the provided MessageReader, allowing messages to be read by means other than simple byte streaming, such as Arrow Flight, which receives messages as protobufs.
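A short sketch of the two-step construction; any MessageReader implementation could be substituted for the byte-stream-backed one.

	package ipcexample

	import (
		"io"

		"github.com/apache/arrow-go/v18/arrow/ipc"
	)

	// newStreamReader builds a Reader from an explicit MessageReader.
	// A Flight-backed MessageReader could be passed instead of the
	// byte-stream-backed one constructed here.
	func newStreamReader(r io.Reader) (*ipc.Reader, error) {
		mr := ipc.NewMessageReader(r)
		return ipc.NewReaderFromMessageReader(mr)
	}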
NewWriter returns a writer that writes records to the provided output stream.
NewWriterWithPayloadWriter constructs a writer with the provided payload writer instead of the default stream payload writer. This makes the Writer reusable in other contexts, such as by the Arrow Flight writer.
WithAllocator specifies the Arrow memory allocator used while building records.
WithCompressConcurrency specifies a number of goroutines to spin up for concurrent compression of the body buffers when writing compressed IPC records. If n <= 1 then compression will be done serially without goroutine parallelization. Default is 1.
WithDelayReadSchema alters the ipc.Reader behavior to delay attempting to read the schema from the stream until the first call to Next, instead of immediately attempting to read a schema from the stream when created.
WithDictionaryDeltas specifies whether or not to emit dictionary deltas.
WithEnsureNativeEndian specifies whether or not to automatically byte-swap buffers with endian-sensitive data if the schema's endianness is not the platform-native endianness. This includes all numeric types, temporal types, decimal types, as well as the offset buffers of variable-sized binary and list-like types. This is only relevant to ipc Reader objects, not to writers. This defaults to true.
WithFooterOffset specifies the Arrow footer position in bytes.
WithLZ4 tells the writer to use LZ4 Frame compression on the data buffers before writing. Requires Arrow >= 1.0.0 to read/decompress.
WithMinSpaceSavings specifies a percentage of space savings for compression to be applied to buffers. Space savings is calculated as (1.0 - compressedSize / uncompressedSize). For example, if minSpaceSavings = 0.1, a 100-byte body buffer won't undergo compression if its expected compressed size exceeds 90 bytes. If this option is unset, compression will be used indiscriminately. If no codec was supplied, this option is ignored. Values outside of the range [0,1] are handled as errors. Note that enabling this option may result in unreadable data for Arrow Go and C++ versions prior to 12.0.0.
WithSchema specifies the Arrow schema to be used for reading or writing.
WithZstd tells the writer to use ZSTD compression on the data buffers before writing. Requires Arrow >= 1.0.0 to read/decompress.
Package-Level Variables (only one)
var Magic
	Magic string identifying an Apache Arrow file.
Package-Level Constants (total 13)
const ExtensionMetadataKeyName = "ARROW:extension:metadata"
const ExtensionTypeKeyName = "ARROW:extension:name"
	Constants for the extension type metadata keys: the type name and any extension metadata to be passed to deserialize.
const MessageDictionaryBatch MessageType
const MessageNone MessageType
const MessageRecordBatch MessageType
const MessageSchema MessageType
const MessageSparseTensor MessageType
const MessageTensor MessageType
	Constants for the IPC message types; see the MessageType type above.
const MetadataV1 MetadataVersion = 0 // version for Arrow Format-0.1.0
const MetadataV2 MetadataVersion = 1 // version for Arrow Format-0.2.0
const MetadataV3 MetadataVersion = 2 // version for Arrow Format-0.3.0 to 0.7.1
const MetadataV4 MetadataVersion = 3 // version for >= Arrow Format-0.8.0
const MetadataV5 MetadataVersion = 4 // version for >= Arrow Format-1.0.0, backward compatible with v4