package storage

Import Path
	github.com/polarsignals/frostdb/storage (on go.dev)

Dependency Relation
	imports 20 packages, and is imported by one package

Involved Source Files
	bucket.go
	iceberg.go
Package-Level Type Names (total 5)
Bucket is an objstore.Bucket that also supports reading files via an io.ReaderAt interface.
	Methods:
		Attributes: returns information about the specified object.
		(Bucket) Close() error
		Delete: removes the object with the given name. If the object does not exist at the moment of deletion, Delete should return an error.
		Exists: checks if the given object exists in the bucket.
		Get: returns a reader for the given object name.
		GetRange: returns a new range reader for the given object name and range.
		(Bucket) GetReaderAt(ctx context.Context, name string) (io.ReaderAt, error)
		IsAccessDeniedErr: returns true if access to the object is denied.
		IsObjNotFoundErr: returns true if the error means the object was not found. Relevant to Get operations.
		Iter: calls f for each entry in the given directory (not recursive). The argument to f is the full object name, including the prefix of the inspected directory. Entries are passed to f in sorted order.
		Name: returns the bucket name for the provider.
		Upload: uploads the contents of the reader as an object into the bucket. Upload should be idempotent.
	Implemented by:
		*BucketReaderAt
		github.com/polarsignals/frostdb.DefaultObjstoreBucket
	Implements:
		Bucket : github.com/polarsignals/frostdb/query/logicalplan.Named
		Bucket : github.com/prometheus/common/expfmt.Closer
		Bucket : github.com/thanos-io/objstore.Bucket
		Bucket : github.com/thanos-io/objstore.BucketReader
		Bucket : io.Closer
	As an input of:
		func github.com/polarsignals/frostdb.NewDefaultBucket(b Bucket, options ...frostdb.DefaultObjstoreBucketOption) *frostdb.DefaultObjstoreBucket
BucketReaderAt implements the Bucket interface.
	Fields:
		Bucket objstore.Bucket
	Methods:
		Attributes: returns information about the specified object.
		(BucketReaderAt) Close() error
		Delete: removes the object with the given name. If the object does not exist at the moment of deletion, Delete should return an error.
		Exists: checks if the given object exists in the bucket.
		Get: returns a reader for the given object name.
		GetRange: returns a new range reader for the given object name and range.
		GetReaderAt: returns an io.ReaderAt for the given filename.
		IsAccessDeniedErr: returns true if access to the object is denied.
		IsObjNotFoundErr: returns true if the error means the object was not found. Relevant to Get operations.
		Iter: calls f for each entry in the given directory (not recursive). The argument to f is the full object name, including the prefix of the inspected directory. Entries are passed to f in sorted order.
		Name: returns the bucket name for the provider.
		Upload: uploads the contents of the reader as an object into the bucket. Upload should be idempotent.
	Implements:
		*BucketReaderAt : Bucket
		BucketReaderAt : github.com/polarsignals/frostdb/query/logicalplan.Named
		BucketReaderAt : github.com/prometheus/common/expfmt.Closer
		BucketReaderAt : github.com/thanos-io/objstore.Bucket
		BucketReaderAt : github.com/thanos-io/objstore.BucketReader
		BucketReaderAt : io.Closer
	As an output of:
		func NewBucketReaderAt(bucket objstore.Bucket) *BucketReaderAt
FileReaderAt is a wrapper around an objstore.Bucket that implements the io.ReaderAt interface.
	Fields:
		Bucket objstore.Bucket
	Methods:
		Attributes: returns information about the specified object.
		(FileReaderAt) Close() error
		Delete: removes the object with the given name. If the object does not exist at the moment of deletion, Delete should return an error.
		Exists: checks if the given object exists in the bucket.
		Get: returns a reader for the given object name.
		GetRange: returns a new range reader for the given object name and range.
		IsAccessDeniedErr: returns true if access to the object is denied.
		IsObjNotFoundErr: returns true if the error means the object was not found. Relevant to Get operations.
		Iter: calls f for each entry in the given directory (not recursive). The argument to f is the full object name, including the prefix of the inspected directory. Entries are passed to f in sorted order.
		Name: returns the bucket name for the provider.
		ReadAt: implements the io.ReaderAt interface.
		Upload: uploads the contents of the reader as an object into the bucket. Upload should be idempotent.
	Implements:
		FileReaderAt : github.com/polarsignals/frostdb/query/logicalplan.Named
		*FileReaderAt : github.com/polarsignals/wal/types.ReadableFile
		FileReaderAt : github.com/prometheus/common/expfmt.Closer
		FileReaderAt : github.com/thanos-io/objstore.Bucket
		FileReaderAt : github.com/thanos-io/objstore.BucketReader
		FileReaderAt : io.Closer
		*FileReaderAt : io.ReaderAt
Iceberg is an Apache Iceberg-backed DataSink/DataSource.
	Methods:
		(*Iceberg) Close() error
		(*Iceberg) Delete(_ context.Context, _ string) error
		(*Iceberg) Maintenance(ctx context.Context) error
		Prefixes: lists all the tables found in the warehouse for the given database (prefix).
		Scan: loads the latest Iceberg table. It filters out any manifests that do not contain useful data, then reads the manifests that may contain useful data and filters out the data files that do not contain useful data. Left with a set of data files that may contain useful data, it reads those files and applies the filter to each row group in each data file.
		(*Iceberg) String() string
		Upload: uploads a parquet file into the Iceberg table.
	Implements:
		*Iceberg : github.com/polarsignals/frostdb.DataSink
		*Iceberg : github.com/polarsignals/frostdb.DataSinkSource
		*Iceberg : github.com/polarsignals/frostdb.DataSource
		*Iceberg : github.com/prometheus/common/expfmt.Closer
		*Iceberg : expvar.Var
		*Iceberg : fmt.Stringer
		*Iceberg : io.Closer
	As an output of:
		func NewIceberg(uri string, ctlg catalog.Catalog, bucket objstore.Bucket, options ...IcebergOption) (*Iceberg, error)
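Scan's documented flow is a three-level pruning pipeline: manifests, then data files, then row groups, each skipped when its metadata bounds cannot match the filter. A generic, stdlib-only sketch of that pipeline (all types and the overlap predicate below are illustrative stand-ins, not the package's actual API):

```go
package main

import "fmt"

// Illustrative stand-ins for Iceberg metadata: each level carries
// min/max bounds that a filter can prune against.
type rowGroup struct{ min, max int }
type dataFile struct {
	min, max  int
	rowGroups []rowGroup
}
type manifest struct {
	min, max  int
	dataFiles []dataFile
}

// overlaps reports whether [min,max] may contain values in [lo,hi].
func overlaps(min, max, lo, hi int) bool { return max >= lo && min <= hi }

// scan mirrors the documented flow: prune manifests, then data
// files, then row groups, returning only row groups that may match.
func scan(manifests []manifest, lo, hi int) []rowGroup {
	var out []rowGroup
	for _, m := range manifests {
		if !overlaps(m.min, m.max, lo, hi) {
			continue // skip the whole manifest
		}
		for _, df := range m.dataFiles {
			if !overlaps(df.min, df.max, lo, hi) {
				continue // skip the whole data file
			}
			for _, rg := range df.rowGroups {
				if overlaps(rg.min, rg.max, lo, hi) {
					out = append(out, rg)
				}
			}
		}
	}
	return out
}

func main() {
	ms := []manifest{
		{min: 0, max: 9, dataFiles: []dataFile{
			{min: 0, max: 4, rowGroups: []rowGroup{{0, 2}, {3, 4}}},
			{min: 5, max: 9, rowGroups: []rowGroup{{5, 7}, {8, 9}}},
		}},
		{min: 100, max: 200}, // pruned without reading its data files
	}
	fmt.Println(len(scan(ms, 6, 8))) // 2
}
```

The payoff is that a filter like `6 <= x <= 8` never touches the second manifest or the first data file: only metadata is read for pruned levels.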
IcebergOption is a function that configures an Iceberg DataSink/DataSource.
	As an output of:
		func WithDataFileExpiry(maxAge time.Duration) IcebergOption
		func WithIcebergPartitionSpec(spec iceberg.PartitionSpec) IcebergOption
		func WithLogger(l log.Logger) IcebergOption
		func WithMaintenanceSchedule(schedule time.Duration) IcebergOption
	As an input of:
		func NewIceberg(uri string, ctlg catalog.Catalog, bucket objstore.Bucket, options ...IcebergOption) (*Iceberg, error)
Package-Level Functions (total 6)
NewBucketReaderAt returns a new BucketReaderAt wrapping the given objstore.Bucket, satisfying the Bucket interface.
NewIceberg creates a new Iceberg DataSink/DataSource. You must provide the URI of the warehouse and the objstore.Bucket that points to that warehouse.
WithDataFileExpiry sets a maximum age for data files. Data files older than the maximum age are deleted from the table periodically, according to the maintenance schedule.
WithIcebergPartitionSpec sets the partition spec for the Iceberg table. This is useful for pruning manifests during scans. Note that at this time the Iceberg storage engine does not write data in a partitioned fashion, so this is only useful for setting the upper/lower bounds of columns in the manifest data.
WithLogger sets the logger used by the Iceberg DataSink/DataSource.
WithMaintenanceSchedule sets the schedule for maintenance of the Iceberg table. This spawns a goroutine that periodically expires data files (if WithDataFileExpiry is set) and deletes orphaned files from the table.
Package-Level Constants (only one)
const DefaultOrphanedFileAge time.Duration = 86400000000000 // 24h0m0s