
StatsD receiver not releasing unix socket #44866

@brandocomando


Component(s)

receiver/statsd

What happened?

Description

When deploying the OpenTelemetry Collector Contrib with the statsd receiver as a DaemonSet in Kubernetes, with a hostPath volume mounted to expose a Unix socket to other pods on the node, the new pod occasionally fails to come up after the DaemonSet is restarted: it cannot listen on the Unix socket because the address is reported as already in use.
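This seems consistent with how Go handles datagram Unix sockets: closing a `unixgram` listener does not unlink the socket file, so a file left behind by a killed pod blocks the next bind. A minimal sketch (the path is made up for the demo) that appears to reproduce the error outside Kubernetes:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"strings"
)

func main() {
	path := "/tmp/statsd-demo.sock" // made-up path for the demo
	os.Remove(path)                 // start from a clean slate

	addr := &net.UnixAddr{Name: path, Net: "unixgram"}

	// First bind succeeds and creates the socket file.
	conn, err := net.ListenUnixgram("unixgram", addr)
	if err != nil {
		panic(err)
	}
	// Closing a unixgram conn closes the fd but does NOT unlink the file,
	// which models a pod being killed with the socket on a hostPath volume.
	conn.Close()

	// Rebinding to the leftover path fails with EADDRINUSE.
	_, err = net.ListenUnixgram("unixgram", addr)
	if err == nil || !strings.Contains(err.Error(), "address already in use") {
		panic(fmt.Sprintf("expected EADDRINUSE, got: %v", err))
	}
	fmt.Println("stale file blocks rebind:", err)

	// Removing the stale file first lets the bind succeed again.
	os.Remove(path)
	conn, err = net.ListenUnixgram("unixgram", addr)
	if err != nil {
		panic(err)
	}
	conn.Close()
	os.Remove(path)
	fmt.Println("rebind after cleanup succeeded")
}
```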

Steps to Reproduce

Deploy to k8s:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector-test
  namespace: otel
spec:
  selector:
    matchLabels:
      app: otel-collector-test
  template:
    metadata:
      labels:
        app: otel-collector-test
    spec:
      containers:
        - args:
            - '--config=/conf/relay.yaml'
          image: otel/opentelemetry-collector-contrib:0.138.0
          name: gateway
          volumeMounts:
            - mountPath: /conf
              name: gateway-configmap
            - mountPath: /var/run/otel
              name: otel-sockets
      volumes:
        - configMap:
            defaultMode: 420
            items:
              - key: relay
                path: relay.yaml
            name: otel-collector-gateway
          name: gateway-configmap
        - hostPath:
            path: /var/run/otel
            type: DirectoryOrCreate
          name: otel-sockets
---
apiVersion: v1
data:
  relay: |
    receivers:
      nop:      
      statsd:
        transport: unixgram
        endpoint: /var/run/otel/statsd.sock
        aggregation_interval: 30s
        enable_metric_type: true
        enable_ip_only_aggregation: true
        enable_simple_tags: true
        is_monotonic_counter: true
        timer_histogram_mapping:
          - statsd_type: "timing"
            observer_type: "histogram"
            histogram:
              max_size: 50
          - statsd_type: "histogram"
            observer_type: "histogram"
            histogram:
              max_size: 50
          - statsd_type: "distribution"
            observer_type: "histogram"
            histogram:
              max_size: 50

    exporters:
      debug/nop:
        verbosity: detailed
        use_internal_logger: true
        sampling_initial: 0
        sampling_thereafter: 0

    service:
      pipelines:
        traces/nop:
          receivers: [nop]
          processors: []
          exporters: [debug/nop]
      
        metrics/nop:
          receivers: [nop]
          processors: []
          exporters: [debug/nop]
kind: ConfigMap
metadata:
  name: otel-collector-gateway
  namespace: otel

Restart the DaemonSet a few times.
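Until the receiver cleans up after itself, a possible workaround is an initContainer that deletes the stale socket file before the collector starts. A sketch matching the manifest above (the busybox image and container name are assumptions; this is only safe because a DaemonSet runs at most one collector per node against that hostPath):

```yaml
      initContainers:
        - name: cleanup-stale-socket
          image: busybox:1.36
          command: ['sh', '-c', 'rm -f /var/run/otel/statsd.sock']
          volumeMounts:
            - mountPath: /var/run/otel
              name: otel-sockets
```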

Expected Result

The container comes up every time

Actual Result

Occasionally the new pod fails to start with this error:

2025-12-09T23:03:08.058Z	error	graph/graph.go:439	Failed to start component	{"resource": {"service.instance.id": "619ab3ef-1669-4d9d-b823-9816c4f48da9", "service.name": "otelcol-contrib", "service.version": "0.138.0"}, "error": "starting to listen unixgram socket: listen unixgram /var/run/otel/statsd.sock: bind: address already in use", "type": "Receiver", "id": "statsd"}
2025-12-09T23:03:08.058Z	info	healthcheck/handler.go:131	Health Check state change	{"resource": {"service.instance.id": "619ab3ef-1669-4d9d-b823-9816c4f48da9", "service.name": "otelcol-contrib", "service.version": "0.138.0"}, "otelcol.component.id": "health_check", "otelcol.component.kind": "extension", "status": "unavailable"}
2025-12-09T23:03:08.059Z	info	agentcomponents/zaplogger.go:39	Exiting concentrator, computing remaining stats	{"resource": {"service.instance.id": "619ab3ef-1669-4d9d-b823-9816c4f48da9", "service.name": "otelcol-contrib", "service.version": "0.138.0"}, "otelcol.component.id": "datadog", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"}
2025-12-09T23:03:08.060Z	info	agentcomponents/zaplogger.go:39	Exiting concentrator, computing remaining stats	{"resource": {"service.instance.id": "619ab3ef-1669-4d9d-b823-9816c4f48da9", "service.name": "otelcol-contrib", "service.version": "0.138.0"}, "otelcol.component.id": "datadog", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"}
Error: cannot start pipelines: failed to start "statsd" receiver: starting to listen unixgram socket: listen unixgram /var/run/otel/statsd.sock: bind: address already in use
2025/12/09 23:03:08 collector server run finished with error: cannot start pipelines: failed to start "statsd" receiver: starting to listen unixgram socket: listen unixgram /var/run/otel/statsd.sock: bind: address already in use
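A hedged sketch of what a receiver-side fix might look like (this is not current statsd receiver behavior, and `cleanupStaleSocket` is a made-up helper): before binding, probe the existing socket and unlink it only when nothing answers, the way many Unix daemons recover from an unclean shutdown. Connecting to a stale `unixgram` path fails with ECONNREFUSED, while connecting to a live listener succeeds, so liveness can be distinguished from staleness:

```go
package main

import (
	"fmt"
	"net"
	"os"
)

// cleanupStaleSocket is a hypothetical helper: it removes path only when no
// live listener holds it. A dial to a stale unixgram path is refused, while
// a dial to a live listener succeeds.
func cleanupStaleSocket(path string) error {
	if _, err := os.Stat(path); err != nil {
		return nil // nothing at the path, nothing to do
	}
	addr := &net.UnixAddr{Name: path, Net: "unixgram"}
	if conn, err := net.DialUnix("unixgram", nil, addr); err == nil {
		conn.Close()
		return fmt.Errorf("%s is owned by a live process", path)
	}
	return os.Remove(path) // dial failed: the file is stale, unlink it
}

func main() {
	path := "/tmp/statsd-fix-demo.sock" // made-up path for the demo
	addr := &net.UnixAddr{Name: path, Net: "unixgram"}

	// Simulate an unclean shutdown: bind, then close without unlinking.
	conn, err := net.ListenUnixgram("unixgram", addr)
	if err != nil {
		panic(err)
	}
	conn.Close()

	// The stale file is detected and removed, so the rebind succeeds.
	if err := cleanupStaleSocket(path); err != nil {
		panic(err)
	}
	conn, err = net.ListenUnixgram("unixgram", addr)
	if err != nil {
		panic(err)
	}
	fmt.Println("rebind after cleanup succeeded")
	conn.Close()
	os.Remove(path)
}
```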

Collector version

0.138.0

Environment information

Environment

AWS EKS Cluster

OpenTelemetry Collector configuration

receivers:
  nop:      
  statsd:
    transport: unixgram
    endpoint: /var/run/otel/statsd.sock
    aggregation_interval: 30s
    enable_metric_type: true
    enable_ip_only_aggregation: true
    enable_simple_tags: true
    is_monotonic_counter: true
    timer_histogram_mapping:
      - statsd_type: "timing"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "histogram"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "distribution"
        observer_type: "histogram"
        histogram:
          max_size: 50

exporters:
  debug/nop:
    verbosity: detailed
    use_internal_logger: true
    sampling_initial: 0
    sampling_thereafter: 0

service:
  pipelines:
    traces/nop:
      receivers: [nop]
      processors: []
      exporters: [debug/nop]
  
    metrics/nop:
      receivers: [nop]
      processors: []
      exporters: [debug/nop]

Log output

2025-12-09T23:03:08.058Z	error	graph/graph.go:439	Failed to start component	{"resource": {"service.instance.id": "619ab3ef-1669-4d9d-b823-9816c4f48da9", "service.name": "otelcol-contrib", "service.version": "0.138.0"}, "error": "starting to listen unixgram socket: listen unixgram /var/run/otel/statsd.sock: bind: address already in use", "type": "Receiver", "id": "statsd"}
2025-12-09T23:03:08.058Z	info	healthcheck/handler.go:131	Health Check state change	{"resource": {"service.instance.id": "619ab3ef-1669-4d9d-b823-9816c4f48da9", "service.name": "otelcol-contrib", "service.version": "0.138.0"}, "otelcol.component.id": "health_check", "otelcol.component.kind": "extension", "status": "unavailable"}
2025-12-09T23:03:08.059Z	info	agentcomponents/zaplogger.go:39	Exiting concentrator, computing remaining stats	{"resource": {"service.instance.id": "619ab3ef-1669-4d9d-b823-9816c4f48da9", "service.name": "otelcol-contrib", "service.version": "0.138.0"}, "otelcol.component.id": "datadog", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"}
2025-12-09T23:03:08.060Z	info	agentcomponents/zaplogger.go:39	Exiting concentrator, computing remaining stats	{"resource": {"service.instance.id": "619ab3ef-1669-4d9d-b823-9816c4f48da9", "service.name": "otelcol-contrib", "service.version": "0.138.0"}, "otelcol.component.id": "datadog", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"}
Error: cannot start pipelines: failed to start "statsd" receiver: starting to listen unixgram socket: listen unixgram /var/run/otel/statsd.sock: bind: address already in use
2025/12/09 23:03:08 collector server run finished with error: cannot start pipelines: failed to start "statsd" receiver: starting to listen unixgram socket: listen unixgram /var/run/otel/statsd.sock: bind: address already in use

Additional context

No response



Labels

bug, receiver/statsd
