package log
Import Path
go.opentelemetry.io/otel/sdk/log (on go.dev)
Dependency Relation
imports 24 packages, and is imported by one package
Involved Source Files
batch.go
Package log provides the OpenTelemetry Logs SDK.
See https://opentelemetry.io/docs/concepts/signals/logs/ for information
about the concept of OpenTelemetry Logs and
https://opentelemetry.io/docs/concepts/components/ for more information
about OpenTelemetry SDKs.
The entry point for the log package is [NewLoggerProvider].
[LoggerProvider] is the object that all Bridge API calls use to create
Loggers, and ultimately emit log records.
It is also the object that should be used to
control the life-cycle (start, flush, and shutdown) of the Logs SDK.
A LoggerProvider needs to be configured to process the log records; this is
done by configuring it with a [Processor] implementation using [WithProcessor].
The log package provides the [BatchProcessor] and [SimpleProcessor]
that are configured with an [Exporter] implementation which
exports the log records to a given destination. See
[go.opentelemetry.io/otel/exporters] for exporters that can be used with these
Processors.
The data generated by a LoggerProvider needs to include information about its
origin. A LoggerProvider needs to be configured with a Resource, by using
[WithResource], to include this information. This Resource
should be used to describe the unique runtime environment instrumented code
is being run on. That way, when multiple instances of the code are collected
at a single endpoint, their origin is decipherable.
See [go.opentelemetry.io/otel/sdk/log/internal/x] for information about
the experimental features.
See [go.opentelemetry.io/otel/log] for more information about
the OpenTelemetry Logs API.
exporter.go
filter_processor.go
logger.go
processor.go
provider.go
record.go
ring.go
setting.go
simple.go
Code Examples
package main
import (
"context"
"fmt"
"go.opentelemetry.io/otel/log/global"
"go.opentelemetry.io/otel/sdk/log"
)
func main() {
// Create an exporter that will emit log records.
// E.g. use go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp
// to send logs using OTLP over HTTP:
// exporter, err := otlploghttp.New(ctx)
var exporter log.Exporter
// Create a log record processor pipeline.
processor := log.NewBatchProcessor(exporter)
// Create a logger provider.
// You can pass this instance directly when creating a log bridge.
provider := log.NewLoggerProvider(
log.WithProcessor(processor),
)
// Handle shutdown properly so that nothing leaks.
defer func() {
err := provider.Shutdown(context.Background())
if err != nil {
fmt.Println(err)
}
}()
// Register as the global logger provider so that it can be used via global.Logger
// and accessed using global.GetLoggerProvider.
// Most log bridges use the global logger provider as default.
// If the global logger provider is not set then a no-op implementation
// is used, which fails to generate data.
global.SetLoggerProvider(provider)
// Use a bridge so that you can emit logs using your Go logging library of preference.
// E.g. use go.opentelemetry.io/contrib/bridges/otelslog so that you can use log/slog:
// slog.SetDefault(otelslog.NewLogger("my/pkg/name", otelslog.WithLoggerProvider(provider)))
}
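The package documentation above also recommends configuring a Resource via [WithResource] so the origin of the records is identifiable; the examples below omit it. A minimal sketch of that configuration follows; the service name value is illustrative and a nil exporter is used only to keep the sketch compact.
package main

import (
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/sdk/log"
	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	// Describe the unique runtime environment producing the telemetry.
	// The attribute value below is illustrative.
	res := resource.NewSchemaless(
		attribute.String("service.name", "my-service"),
	)

	// Wire in a real exporter as shown in the example above.
	_ = log.NewLoggerProvider(
		log.WithResource(res),
		log.WithProcessor(log.NewBatchProcessor(nil)),
	)
}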
package main
import (
"context"
"sync"
"go.opentelemetry.io/otel/sdk/log"
)
func main() {
// Existing processor that emits telemetry.
var processor log.Processor = log.NewBatchProcessor(nil)
// Wrap the processor so that it ignores processing log records
// when a context deriving from WithIgnoreLogs is passed
// to the logging methods.
processor = &ContextFilterProcessor{Processor: processor}
// The created processor can then be registered with
// the OpenTelemetry Logs SDK using the WithProcessor option.
_ = log.NewLoggerProvider(
log.WithProcessor(processor),
)
}
type key struct{}
var ignoreLogsKey key
// ContextFilterProcessor filters out logs when a context deriving from
// [WithIgnoreLogs] is passed to its methods.
type ContextFilterProcessor struct {
log.Processor
lazyFilter sync.Once
filter log.FilterProcessor
}
func (p *ContextFilterProcessor) OnEmit(ctx context.Context, record *log.Record) error {
if ignoreLogs(ctx) {
return nil
}
return p.Processor.OnEmit(ctx, record)
}
func (p *ContextFilterProcessor) Enabled(ctx context.Context, param log.EnabledParameters) bool {
p.lazyFilter.Do(func() {
if f, ok := p.Processor.(log.FilterProcessor); ok {
p.filter = f
}
})
return !ignoreLogs(ctx) && (p.filter == nil || p.filter.Enabled(ctx, param))
}
func ignoreLogs(ctx context.Context) bool {
_, ok := ctx.Value(ignoreLogsKey).(bool)
return ok
}
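// WithIgnoreLogs returns a context that instructs [ContextFilterProcessor]
// to drop log records emitted with it. This is the helper referenced in the
// comments above but missing from the listing; only the presence of the
// stored value is checked by ignoreLogs.
func WithIgnoreLogs(ctx context.Context) context.Context {
	return context.WithValue(ctx, ignoreLogsKey, true)
}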
package main
import (
"context"
logapi "go.opentelemetry.io/otel/log"
"go.opentelemetry.io/otel/sdk/log"
)
func main() {
// Existing processor that emits telemetry.
var processor log.Processor = log.NewBatchProcessor(nil)
// Add a processor so that it sets EventName on log records.
eventNameProcessor := &EventNameProcessor{}
// The created processor can then be registered with
// the OpenTelemetry Logs SDK using the WithProcessor option.
_ = log.NewLoggerProvider(
// Order is important here. The EventName must be set before the record
// is handed to the exporting processor.
log.WithProcessor(eventNameProcessor),
log.WithProcessor(processor),
)
}
// EventNameProcessor is a [log.Processor] that sets the EventName
// on log records having "event.name" string attribute.
// It is useful for logging libraries that do not support
// setting the event name on log records,
// but do support attributes.
type EventNameProcessor struct{}
// OnEmit sets the EventName on log records having an "event.name" string attribute.
// The original attribute is not removed.
func (*EventNameProcessor) OnEmit(_ context.Context, record *log.Record) error {
record.WalkAttributes(func(kv logapi.KeyValue) bool {
if kv.Key == "event.name" && kv.Value.Kind() == logapi.KindString {
record.SetEventName(kv.Value.AsString())
}
return true
})
return nil
}
// Shutdown returns nil.
func (*EventNameProcessor) Shutdown(context.Context) error {
return nil
}
// ForceFlush returns nil.
func (*EventNameProcessor) ForceFlush(context.Context) error {
return nil
}
package main
import (
"context"
"strings"
logapi "go.opentelemetry.io/otel/log"
"go.opentelemetry.io/otel/sdk/log"
)
func main() {
// Existing processor that emits telemetry.
var processor log.Processor = log.NewBatchProcessor(nil)
// Add a processor so that it redacts values from token attributes.
redactProcessor := &RedactTokensProcessor{}
// The created processor can then be registered with
// the OpenTelemetry Logs SDK using the WithProcessor option.
_ = log.NewLoggerProvider(
// Order is important here. Token values must be redacted before the record
// is handed to the exporting processor.
log.WithProcessor(redactProcessor),
log.WithProcessor(processor),
)
}
// RedactTokensProcessor is a [log.Processor] decorator that redacts values
// from attributes containing "token" in the key.
type RedactTokensProcessor struct{}
// OnEmit redacts values from attributes containing "token" in the key
// by replacing them with a REDACTED value.
func (*RedactTokensProcessor) OnEmit(_ context.Context, record *log.Record) error {
record.WalkAttributes(func(kv logapi.KeyValue) bool {
if strings.Contains(strings.ToLower(kv.Key), "token") {
record.AddAttributes(logapi.String(kv.Key, "REDACTED"))
}
return true
})
return nil
}
// Shutdown returns nil.
func (*RedactTokensProcessor) Shutdown(context.Context) error {
return nil
}
// ForceFlush returns nil.
func (*RedactTokensProcessor) ForceFlush(context.Context) error {
return nil
}
Package-Level Type Names (total 11)
BatchProcessor is a processor that exports batches of log records.
Use [NewBatchProcessor] to create a BatchProcessor. An empty BatchProcessor
is shut down by default; no records will be batched or exported.
ForceFlush flushes queued log records and flushes the decorated exporter.
OnEmit batches the provided log record.
Shutdown flushes queued log records and shuts down the decorated exporter.
*BatchProcessor : Processor
func NewBatchProcessor(exporter Exporter, opts ...BatchProcessorOption) *BatchProcessor
BatchProcessorOption applies a configuration to a [BatchProcessor].
func WithExportBufferSize(size int) BatchProcessorOption
func WithExportInterval(d time.Duration) BatchProcessorOption
func WithExportMaxBatchSize(size int) BatchProcessorOption
func WithExportTimeout(d time.Duration) BatchProcessorOption
func WithMaxQueueSize(size int) BatchProcessorOption
func NewBatchProcessor(exporter Exporter, opts ...BatchProcessorOption) *BatchProcessor
EnabledParameters represents the payload for [FilterProcessor]'s Enabled method.
EventName string
InstrumentationScope instrumentation.Scope
Severity log.Severity
func FilterProcessor.Enabled(ctx context.Context, param EnabledParameters) bool
Exporter handles the delivery of log records to external receivers.
Export transmits log records to a receiver.
The deadline or cancellation of the passed context must be honored. An
appropriate error should be returned in these situations.
All retry logic must be contained in this function. The SDK does not
implement any retry logic. All errors returned by this function are
considered unrecoverable and will be reported to a configured error
Handler.
Implementations must not retain the records slice.
Before modifying a Record, the implementation must use Record.Clone
to create a copy that shares no state with the original.
Export should never be called concurrently with other Export calls.
However, it may be called concurrently with other methods.
ForceFlush exports log records to the configured Exporter that have not yet
been exported.
The deadline or cancellation of the passed context must be honored. An
appropriate error should be returned in these situations.
ForceFlush may be called concurrently with itself or with other methods.
Shutdown is called when the SDK shuts down. Any cleanup or release of
resources held by the exporter should be done in this call.
The deadline or cancellation of the passed context must be honored. An
appropriate error should be returned in these situations.
After Shutdown is called, calls to Export, Shutdown, or ForceFlush
should perform no operation and return nil error.
Shutdown may be called concurrently with itself or with other methods.
func NewBatchProcessor(exporter Exporter, opts ...BatchProcessorOption) *BatchProcessor
func NewSimpleProcessor(exporter Exporter, _ ...SimpleProcessorOption) *SimpleProcessor
func github.com/pancsta/asyncmachine-go/pkg/telemetry.NewOtelLoggerProvider(exporter Exporter) *LoggerProvider
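The contract above can be satisfied by a small custom exporter. Below is a minimal, hedged sketch that writes records to standard error; the type name and output format are illustrative and not part of the SDK.
package main

import (
	"context"
	"fmt"
	"os"

	"go.opentelemetry.io/otel/sdk/log"
)

// stderrExporter is an illustrative Exporter that prints record severities and bodies.
type stderrExporter struct{}

// Export writes each record to standard error. It does not retain the records slice.
func (stderrExporter) Export(ctx context.Context, records []log.Record) error {
	for _, r := range records {
		if err := ctx.Err(); err != nil {
			return err // honor cancellation of the passed context
		}
		fmt.Fprintf(os.Stderr, "%v %v\n", r.Severity(), r.Body())
	}
	return nil
}

// ForceFlush is a no-op; records are written as they are exported.
func (stderrExporter) ForceFlush(context.Context) error { return nil }

// Shutdown is a no-op; the exporter holds no resources.
func (stderrExporter) Shutdown(context.Context) error { return nil }

func main() {
	// Register the exporter behind a processor as usual.
	_ = log.NewLoggerProvider(
		log.WithProcessor(log.NewBatchProcessor(stderrExporter{})),
	)
}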
FilterProcessor is a [Processor] that knows, and can identify, what [Record]
it will process or drop when it is passed to [Processor.OnEmit].
This is useful for users that want to know if a [log.Record]
will be processed or dropped before they perform complex operations to
construct the [log.Record].
The SDK's Logger.Enabled returns false
if all the registered Processors implement FilterProcessor
and they all return false.
Processor implementations that choose to support this by satisfying this
interface are expected to re-evaluate the [Record] passed to [Processor.OnEmit];
it is not expected that the caller of OnEmit will use the functionality
from this interface prior to calling OnEmit.
See [go.opentelemetry.io/contrib/processors/minsev] for an example use case.
It provides a Processor used to filter out [Record]s
that have a [log.Severity] below a threshold.
Enabled reports whether the Processor will process for the given context
and param.
The passed param is likely to contain only partial record information
(e.g. a param with only the Severity set).
If a Processor needs more information than is provided, it
is said to be in an indeterminate state (see below).
The returned value is true when the Processor will process records for the
provided context and param, and false when it will not. The returned value
may be true or false in an indeterminate state.
An implementation should default to returning true for an indeterminate
state, but may return false if valid reasons in particular circumstances
exist (e.g. performance, correctness).
The param should not be held by the implementation. A copy should be
made if the param needs to be held after the call returns.
Implementations of this method need to be safe for a user to call
concurrently.
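As a hedged illustration of this contract, the sketch below wraps an existing Processor with a minimum-severity filter. It is a simplified stand-in for [go.opentelemetry.io/contrib/processors/minsev]; the type and field names are illustrative.
package main

import (
	"context"

	logapi "go.opentelemetry.io/otel/log"
	"go.opentelemetry.io/otel/sdk/log"
)

// MinSeverityProcessor drops records below Min. Shutdown and ForceFlush
// are provided by the embedded Processor.
type MinSeverityProcessor struct {
	log.Processor
	Min logapi.Severity
}

// OnEmit re-evaluates the record, as required of FilterProcessor
// implementations, and forwards it only when it meets the threshold.
func (p *MinSeverityProcessor) OnEmit(ctx context.Context, r *log.Record) error {
	if s := r.Severity(); s != 0 && s < p.Min {
		return nil
	}
	return p.Processor.OnEmit(ctx, r)
}

// Enabled returns false only when the severity is known to be below the
// threshold; an unset severity is an indeterminate state, so it defaults to true.
func (p *MinSeverityProcessor) Enabled(_ context.Context, param log.EnabledParameters) bool {
	return param.Severity == 0 || param.Severity >= p.Min
}

func main() {
	_ = log.NewLoggerProvider(log.WithProcessor(&MinSeverityProcessor{
		Processor: log.NewBatchProcessor(nil),
		Min:       logapi.SeverityInfo,
	}))
}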
LoggerProvider handles the creation and coordination of Loggers. All Loggers
created by a LoggerProvider will be associated with the same Resource.
LoggerProvider embedded.LoggerProvider
ForceFlush flushes all processors.
This method can be called concurrently.
Logger returns a new [log.Logger] with the provided name and configuration.
If p is shut down, a [noop.Logger] instance is returned.
This method can be called concurrently.
Shutdown shuts down the provider and all processors.
This method can be called concurrently.
*LoggerProvider : go.opentelemetry.io/otel/log.LoggerProvider
LoggerProvider : go.opentelemetry.io/otel/log/embedded.LoggerProvider
func NewLoggerProvider(opts ...LoggerProviderOption) *LoggerProvider
func github.com/pancsta/asyncmachine-go/pkg/telemetry.NewOtelLoggerProvider(exporter Exporter) *LoggerProvider
func github.com/pancsta/asyncmachine-go/pkg/telemetry.BindOtelLogger(mach am.Api, provider *LoggerProvider, service string)
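A brief, hedged sketch of using a LoggerProvider directly with the [go.opentelemetry.io/otel/log] API; most applications would go through a log bridge instead, and the nil exporter here is only to keep the sketch compact.
package main

import (
	"context"

	logapi "go.opentelemetry.io/otel/log"
	"go.opentelemetry.io/otel/sdk/log"
)

func main() {
	provider := log.NewLoggerProvider(
		log.WithProcessor(log.NewBatchProcessor(nil)), // use a real exporter in practice
	)
	defer func() { _ = provider.Shutdown(context.Background()) }()

	// Logger returns a log.Logger scoped to the given instrumentation name.
	logger := provider.Logger("my/pkg/name")

	// Build and emit an API record; the SDK turns it into an sdk/log Record.
	var rec logapi.Record
	rec.SetSeverity(logapi.SeverityInfo)
	rec.SetBody(logapi.StringValue("hello from the Logs SDK"))
	logger.Emit(context.Background(), rec)
}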
LoggerProviderOption applies a configuration option value to a LoggerProvider.
func WithAllowKeyDuplication() LoggerProviderOption
func WithAttributeCountLimit(limit int) LoggerProviderOption
func WithAttributeValueLengthLimit(limit int) LoggerProviderOption
func WithProcessor(processor Processor) LoggerProviderOption
func WithResource(res *resource.Resource) LoggerProviderOption
func NewLoggerProvider(opts ...LoggerProviderOption) *LoggerProvider
Processor handles the processing of log records.
Any of the Processor's methods may be called concurrently with itself
or with other methods. It is the responsibility of the Processor to manage
this concurrency.
See [FilterProcessor] for information about how a Processor can support filtering.
ForceFlush exports log records to the configured Exporter that have not yet
been exported.
The deadline or cancellation of the passed context must be honored. An
appropriate error should be returned in these situations.
OnEmit is called when a Record is emitted.
OnEmit will be called independent of Enabled. Implementations need to
validate the arguments themselves before processing.
Implementations should not interrupt record processing
if the context is canceled.
All retry logic must be contained in this function. The SDK does not
implement any retry logic. All errors returned by this function are
considered unrecoverable and will be reported to a configured error
Handler.
The SDK invokes the processors sequentially in the same order as
they were registered using WithProcessor.
Implementations may synchronously modify the record so that the changes
are visible in the next registered processor.
Notice that Record is not concurrent safe. Therefore, asynchronous
processing may cause race conditions. Use Record.Clone
to create a copy that shares no state with the original.
Shutdown is called when the SDK shuts down. Any cleanup or release of
resources held by the processor should be done in this call.
The deadline or cancellation of the passed context must be honored. An
appropriate error should be returned in these situations.
After Shutdown is called, calls to OnEmit, Shutdown, or ForceFlush
should perform no operation and return nil error.
*BatchProcessor
*SimpleProcessor
func WithProcessor(processor Processor) LoggerProviderOption
Record is a log record emitted by the Logger.
A log record with non-empty event name is interpreted as an event record.
Do not create instances of Record on your own in production code.
You can use [go.opentelemetry.io/otel/sdk/log/logtest.RecordFactory]
for testing purposes.
AddAttributes adds attributes to the log record.
Attributes in attrs will overwrite any attribute already added to r with the same key.
AttributesLen returns the number of attributes in the log record.
Body returns the body of the log record.
Clone returns a copy of the record with no shared state. The original record
and the clone can both be modified without interfering with each other.
DroppedAttributes returns the number of attributes dropped due to limits
being reached.
EventName returns the event name.
A log record with non-empty event name is interpreted as an event record.
InstrumentationScope returns the scope that the Logger was created with.
ObservedTimestamp returns the time when the log record was observed.
Resource returns the entity that collected the log.
SetAttributes sets (and overrides) attributes to the log record.
SetBody sets the body of the log record.
SetEventName sets the event name.
A log record with non-empty event name is interpreted as an event record.
SetObservedTimestamp sets the time when the log record was observed.
SetSeverity sets the severity level of the log record.
SetSeverityText sets severity (also known as log level) text. This is the
original string representation of the severity as it is known at the source.
SetSpanID sets the span ID.
SetTimestamp sets the time when the log record occurred.
SetTraceFlags sets the trace flags.
SetTraceID sets the trace ID.
Severity returns the severity of the log record.
SeverityText returns severity (also known as log level) text. This is the
original string representation of the severity as it is known at the source.
SpanID returns the span ID or empty array.
Timestamp returns the time when the log record occurred.
TraceFlags returns the trace flags.
TraceID returns the trace ID or empty array.
WalkAttributes walks all attributes the log record holds by calling f
on each [log.KeyValue] in the [Record]. Iteration stops if f returns false.
func (*Record).Clone() Record
func (*BatchProcessor).OnEmit(_ context.Context, r *Record) error
func Exporter.Export(ctx context.Context, records []Record) error
func Processor.OnEmit(ctx context.Context, record *Record) error
func (*SimpleProcessor).OnEmit(ctx context.Context, r *Record) error
SimpleProcessor is a processor that synchronously exports log records.
Use [NewSimpleProcessor] to create a SimpleProcessor.
ForceFlush flushes the exporter.
OnEmit synchronously exports the provided log record.
Shutdown shuts down the exporter.
*SimpleProcessor : Processor
func NewSimpleProcessor(exporter Exporter, _ ...SimpleProcessorOption) *SimpleProcessor
SimpleProcessorOption applies a configuration to a [SimpleProcessor].
func NewSimpleProcessor(exporter Exporter, _ ...SimpleProcessorOption) *SimpleProcessor
Package-Level Functions (total 13)
NewBatchProcessor decorates the provided exporter
so that the log records are batched before exporting.
All of the exporter's methods are called synchronously.
NewLoggerProvider returns a new and configured LoggerProvider.
By default, the returned LoggerProvider is configured with the default
Resource and no Processors. Processors cannot be added after a LoggerProvider is
created. This means a LoggerProvider returned without any configured
Processors will perform no operations.
NewSimpleProcessor returns a Processor that synchronously passes each log record to the provided exporter.
This Processor is not recommended for production use due to its synchronous
nature, which makes it suitable for testing, debugging, or demonstrating
other features, but can lead to slow performance and high computational
overhead. For production environments, it is recommended to use
[NewBatchProcessor] instead. However, there may be exceptions where certain
[Exporter] implementations perform better with this Processor.
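For instance, a testing or debugging setup might pair the simple processor with a console exporter. A hedged sketch, assuming the go.opentelemetry.io/otel/exporters/stdout/stdoutlog exporter is available:
package main

import (
	"context"

	"go.opentelemetry.io/otel/exporters/stdout/stdoutlog"
	"go.opentelemetry.io/otel/sdk/log"
)

func main() {
	// stdoutlog writes human-readable log records to standard output.
	exporter, err := stdoutlog.New()
	if err != nil {
		panic(err)
	}

	// SimpleProcessor exports each record synchronously: convenient for
	// tests and debugging, too slow for production.
	provider := log.NewLoggerProvider(
		log.WithProcessor(log.NewSimpleProcessor(exporter)),
	)
	defer func() { _ = provider.Shutdown(context.Background()) }()
}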
WithAllowKeyDuplication sets whether deduplication is skipped for log attributes or other key-value collections.
By default, the key-value collections within a log record are deduplicated to comply with the OpenTelemetry Specification.
Deduplication means that if multiple key–value pairs with the same key are present, only a single pair
is retained and others are discarded.
Disabling deduplication with this option can improve performance, e.g. when adding attributes to a log record.
Note that if you disable deduplication, you are responsible for ensuring that duplicate
key-value pairs within a single collection are not emitted,
or that the telemetry receiver can handle such duplicates.
WithAttributeCountLimit sets the maximum allowed log record attribute count.
Any attribute added to a log record once this limit is reached will be dropped.
Setting this to zero means no attributes will be recorded.
Setting this to a negative value means no limit is applied.
If the OTEL_LOGRECORD_ATTRIBUTE_COUNT_LIMIT environment variable is set,
and this option is not passed, that variable value will be used.
By default, if an environment variable is not set, and this option is not
passed, 128 will be used.
WithAttributeValueLengthLimit sets the maximum allowed attribute value length.
This limit only applies to string and string slice attribute values.
Any string longer than this value will be truncated to this length.
Setting this to a negative value means no limit is applied.
If the OTEL_LOGRECORD_ATTRIBUTE_VALUE_LENGTH_LIMIT environment variable is set,
and this option is not passed, that variable value will be used.
By default, if an environment variable is not set, and this option is not
passed, no limit (-1) will be used.
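A hedged sketch combining the attribute options above; the values are illustrative, and passing an option overrides the corresponding environment variable.
package main

import "go.opentelemetry.io/otel/sdk/log"

func main() {
	// A nil exporter keeps the sketch compact; use a real exporter in practice.
	_ = log.NewLoggerProvider(
		log.WithProcessor(log.NewBatchProcessor(nil)),
		log.WithAttributeCountLimit(64),        // drop attributes past the 64th
		log.WithAttributeValueLengthLimit(256), // truncate long string values
		log.WithAllowKeyDuplication(),          // skip deduplication for speed
	)
}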
WithExportBufferSize sets the batch buffer size.
Batches will be temporarily kept in a memory buffer until they are exported.
By default, a value of 1 will be used.
The default value is also used when the provided value is less than one.
WithExportInterval sets the maximum duration between batched exports.
If the OTEL_BLRP_SCHEDULE_DELAY environment variable is set,
and this option is not passed, that variable value will be used.
By default, if an environment variable is not set, and this option is not
passed, 1s will be used.
The default value is also used when the provided value is less than one.
WithExportMaxBatchSize sets the maximum batch size of every export.
A batch will be split into multiple exports to not exceed this size.
If the OTEL_BLRP_MAX_EXPORT_BATCH_SIZE environment variable is set,
and this option is not passed, that variable value will be used.
By default, if an environment variable is not set, and this option is not
passed, 512 will be used.
The default value is also used when the provided value is less than one.
WithExportTimeout sets the duration after which a batched export is canceled.
If the OTEL_BLRP_EXPORT_TIMEOUT environment variable is set,
and this option is not passed, that variable value will be used.
By default, if an environment variable is not set, and this option is not
passed, 30s will be used.
The default value is also used when the provided value is less than one.
WithMaxQueueSize sets the maximum queue size used by the Batcher.
After the size is reached log records are dropped.
If the OTEL_BLRP_MAX_QUEUE_SIZE environment variable is set,
and this option is not passed, that variable value will be used.
By default, if an environment variable is not set, and this option is not
passed, 2048 will be used.
The default value is also used when the provided value is less than one.
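A hedged sketch configuring the batching options above; the values are illustrative, and the OTEL_BLRP_* environment variables apply only when an option is not passed.
package main

import (
	"time"

	"go.opentelemetry.io/otel/sdk/log"
)

func main() {
	// A nil exporter keeps the sketch compact; use a real exporter in practice.
	processor := log.NewBatchProcessor(nil,
		log.WithMaxQueueSize(4096),            // queue size before records are dropped
		log.WithExportMaxBatchSize(512),       // split larger batches across exports
		log.WithExportInterval(2*time.Second), // maximum delay between exports
		log.WithExportTimeout(10*time.Second), // cancel exports that take too long
		log.WithExportBufferSize(2),           // batches buffered while exporting
	)

	_ = log.NewLoggerProvider(log.WithProcessor(processor))
}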
WithProcessor associates Processor with a LoggerProvider.
By default, if this option is not used, the LoggerProvider will perform no
operations; no data will be exported without a processor.
The SDK invokes the processors sequentially in the same order as they were
registered.
For production, use [NewBatchProcessor] to batch log records before they are exported.
For testing and debugging, use [NewSimpleProcessor] to synchronously export log records.
See [FilterProcessor] for information about how a Processor can support filtering.
WithResource associates a Resource with a LoggerProvider. This Resource
represents the entity producing telemetry and is associated with all Loggers
the LoggerProvider will create.
By default, if this Option is not used, the default Resource from the
go.opentelemetry.io/otel/sdk/resource package will be used.