Overview
SQLite OpenTelemetry Collector is a lightweight, single-binary OpenTelemetry collector that stores telemetry data directly in an embedded SQLite database. It's designed for edge deployments, development environments, and situations where you need local telemetry storage without external dependencies.
Key Features
- Single Binary - No external dependencies, embedded SQLite database
- OTLP Support - Full support for traces, metrics, and logs via OTLP/HTTP
- Lightweight - Minimal resource footprint, suitable for edge devices
- Secure - Runs as non-root with systemd hardening
- Easy Deployment - Native packages, Docker images, and Kubernetes manifests
Installation
Binary Installation
# Download latest release
wget https://github.com/RedShiftVelocity/sqlite-otel/releases/latest/download/sqlite-otel-linux-amd64
chmod +x sqlite-otel-linux-amd64
sudo mv sqlite-otel-linux-amd64 /usr/local/bin/sqlite-otel
# Verify installation
sqlite-otel -version
Package Manager
# Ubuntu/Debian
wget https://github.com/RedShiftVelocity/sqlite-otel/releases/latest/download/sqlite-otel-collector_amd64.deb
sudo dpkg -i sqlite-otel-collector_amd64.deb
# RHEL/CentOS/Fedora
wget https://github.com/RedShiftVelocity/sqlite-otel/releases/latest/download/sqlite-otel-collector-amd64.rpm
sudo rpm -ivh sqlite-otel-collector-amd64.rpm
Docker
docker pull ghcr.io/redshiftvelocity/sqlite-otel:latest
Quick Start
Running the Collector
# Run with default settings
sqlite-otel
# Run with custom port and database
sqlite-otel -port 4318 -db-path ./my-data.db
# Run in Docker
docker run -d --name sqlite-otel -p 4318:4318 \
  ghcr.io/redshiftvelocity/sqlite-otel:latest
Sending Test Data
# Send a test trace
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{"resourceSpans": []}'
# Check collector status
curl http://localhost:4318/health
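The same smoke test can be scripted. This sketch uses only the Python standard library and assumes the default endpoint from the Quick Start above; it is an illustrative client, not part of the collector:

```python
import json
import urllib.request

OTLP_ENDPOINT = "http://localhost:4318"  # default OTLP/HTTP port

def make_empty_trace_payload() -> bytes:
    """Minimal OTLP/HTTP trace body, matching the curl example above."""
    return json.dumps({"resourceSpans": []}).encode()

def send_test_trace(endpoint: str = OTLP_ENDPOINT) -> int:
    """POST an empty trace payload and return the HTTP status code."""
    req = urllib.request.Request(
        f"{endpoint}/v1/traces",
        data=make_empty_trace_payload(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(send_test_trace())
```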
Basic Configuration
The collector is configured entirely through command-line flags. All configuration is optional; the collector works with sensible defaults.
Database Location
By default, the database is stored in:
- User mode: ~/.local/share/sqlite-otel/otel-collector.db
- Service mode: /var/lib/sqlite-otel-collector/otel-collector.db
CLI Options
| Flag | Description | Default |
|---|---|---|
| -port | Port to listen on | 4318 (OTLP/HTTP standard) |
| -db-path | Path to SQLite database file | Auto-detected based on mode |
| -log-file | Path to log file | Auto-detected based on mode |
| -log-max-size | Maximum log file size in MB | 100 |
| -log-max-backups | Number of old log files to keep | 7 |
| -version | Show version information | - |
Service Mode
When running as a systemd service, the collector uses system paths and runs with enhanced security:
Default Service Paths
| Component | Path | Description |
|---|---|---|
| Database | /var/lib/sqlite-otel-collector/otel-collector.db | Persistent telemetry storage |
| Logs | /var/log/sqlite-otel-collector.log | Service logs with rotation |
| Config | /etc/sqlite-otel-collector/ | Configuration directory |
Service Management
# Start the service
sudo systemctl start sqlite-otel-collector
# Enable on boot
sudo systemctl enable sqlite-otel-collector
# Check status
sudo systemctl status sqlite-otel-collector
# View logs
sudo journalctl -u sqlite-otel-collector -f
# Restart service
sudo systemctl restart sqlite-otel-collector
Security Features
- Runs as a dedicated sqlite-otel user (non-root)
- Systemd hardening with namespace isolation
- Private /tmp and restricted system calls
- No new privileges after start
Sending Traces
Send distributed traces using the OTLP/HTTP protocol:
REST API Endpoint
POST http://localhost:4318/v1/traces
Content-Type: application/json
Example Trace
{
  "resourceSpans": [{
    "resource": {
      "attributes": [{
        "key": "service.name",
        "value": { "stringValue": "my-service" }
      }]
    },
    "scopeSpans": [{
      "scope": {
        "name": "my-instrumentation-library",
        "version": "1.0.0"
      },
      "spans": [{
        "traceId": "5b8efff798038103d269b633813fc60c",
        "spanId": "eee19b7ec3c1b173",
        "parentSpanId": "eee19b7ec3c1b174",
        "name": "HTTP GET /api/users",
        "startTimeUnixNano": "1544712660000000000",
        "endTimeUnixNano": "1544712661000000000",
        "kind": 2,
        "attributes": [{
          "key": "http.method",
          "value": { "stringValue": "GET" }
        }, {
          "key": "http.url",
          "value": { "stringValue": "/api/users" }
        }]
      }]
    }]
  }]
}
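Hand-writing OTLP/JSON is error-prone, so a small builder helps. The following sketch is illustrative (not part of the collector); the field names come from the example above, and `make_span`/`make_trace_payload` are hypothetical helper names:

```python
import json
import secrets

def make_span(name: str, start_ns: int, end_ns: int, attributes: dict) -> dict:
    """Build one OTLP/JSON span with random IDs (trace: 16 bytes, span: 8 bytes)."""
    return {
        "traceId": secrets.token_hex(16),   # 32 hex chars
        "spanId": secrets.token_hex(8),     # 16 hex chars
        "name": name,
        "startTimeUnixNano": str(start_ns),  # OTLP/JSON encodes 64-bit ints as strings
        "endTimeUnixNano": str(end_ns),
        "kind": 2,  # SPAN_KIND_SERVER
        "attributes": [
            {"key": k, "value": {"stringValue": str(v)}}
            for k, v in attributes.items()
        ],
    }

def make_trace_payload(service_name: str, spans: list) -> str:
    """Wrap spans in the resourceSpans/scopeSpans envelope shown above."""
    return json.dumps({
        "resourceSpans": [{
            "resource": {
                "attributes": [{
                    "key": "service.name",
                    "value": {"stringValue": service_name},
                }]
            },
            "scopeSpans": [{
                "scope": {"name": "manual", "version": "1.0.0"},
                "spans": spans,
            }],
        }]
    })
```

The resulting string can be POSTed to `/v1/traces` exactly like the curl example in the Quick Start.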
SDK Integration
# Python Example
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
# Configure the OTLP exporter
otlp_exporter = OTLPSpanExporter(
    endpoint="http://localhost:4318/v1/traces",
)

# Set up the tracer provider and attach the exporter via a span processor
trace.set_tracer_provider(TracerProvider())
span_processor = BatchSpanProcessor(otlp_exporter)
trace.get_tracer_provider().add_span_processor(span_processor)
tracer = trace.get_tracer(__name__)

# Create spans
with tracer.start_as_current_span("my-operation"):
    # Your code here
    pass
Sending Metrics
Send metrics data using the OTLP/HTTP protocol:
REST API Endpoint
POST http://localhost:4318/v1/metrics
Content-Type: application/json
Example Metrics
{
  "resourceMetrics": [{
    "resource": {
      "attributes": [{
        "key": "service.name",
        "value": { "stringValue": "my-service" }
      }]
    },
    "scopeMetrics": [{
      "scope": {
        "name": "my-instrumentation-library",
        "version": "1.0.0"
      },
      "metrics": [{
        "name": "http_server_duration",
        "description": "Duration of HTTP server requests",
        "unit": "ms",
        "histogram": {
          "dataPoints": [{
            "startTimeUnixNano": "1544712660000000000",
            "timeUnixNano": "1544712661000000000",
            "count": "100",
            "sum": 5000.0,
            "bucketCounts": ["10", "20", "30", "20", "10", "10", "0"],
            "explicitBounds": [0, 10, 25, 50, 100, 250]
          }]
        }
      }]
    }]
  }]
}
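The bucketCounts/explicitBounds relationship is easy to get wrong by hand: OTLP requires exactly one more count than bounds (the last bucket catches values above the highest bound), and buckets are upper-inclusive. This illustrative helper (`histogram_datapoint` is a hypothetical name, not a collector API) derives a data point from raw samples:

```python
import bisect

def histogram_datapoint(samples, bounds, start_ns, end_ns):
    """Aggregate raw samples into an OTLP explicit-bucket histogram point.

    len(bucketCounts) == len(explicitBounds) + 1; bucket i holds values v with
    bounds[i-1] < v <= bounds[i], and the final bucket holds v > bounds[-1].
    """
    counts = [0] * (len(bounds) + 1)
    for v in samples:
        # First bound >= v gives the upper-inclusive bucket index.
        counts[bisect.bisect_left(bounds, v)] += 1
    return {
        "startTimeUnixNano": str(start_ns),
        "timeUnixNano": str(end_ns),
        "count": str(len(samples)),            # 64-bit ints are JSON strings in OTLP
        "sum": float(sum(samples)),
        "bucketCounts": [str(c) for c in counts],
        "explicitBounds": list(bounds),
    }
```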
Prometheus Integration
# Configure Prometheus to remote write to SQLite OTEL
remote_write:
- url: http://localhost:4318/v1/metrics
remote_timeout: 30s
Sending Logs
Send structured logs using the OTLP/HTTP protocol:
REST API Endpoint
POST http://localhost:4318/v1/logs
Content-Type: application/json
Example Logs
{
  "resourceLogs": [{
    "resource": {
      "attributes": [{
        "key": "service.name",
        "value": { "stringValue": "my-service" }
      }]
    },
    "scopeLogs": [{
      "scope": {
        "name": "my-instrumentation-library",
        "version": "1.0.0"
      },
      "logRecords": [{
        "timeUnixNano": "1544712660000000000",
        "severityNumber": 9,
        "severityText": "INFO",
        "body": {
          "stringValue": "User login successful"
        },
        "attributes": [{
          "key": "user.id",
          "value": { "stringValue": "user123" }
        }, {
          "key": "action",
          "value": { "stringValue": "login" }
        }]
      }]
    }]
  }]
}
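The severityNumber values follow fixed OTLP ranges (TRACE = 1, DEBUG = 5, INFO = 9, WARN = 13, ERROR = 17, FATAL = 21, with each named level starting a range of four). A small illustrative builder (`make_log_record` is a hypothetical helper, not a collector API):

```python
# Base severityNumber for each named OTLP log level.
SEVERITY_NUMBER = {
    "TRACE": 1, "DEBUG": 5, "INFO": 9,
    "WARN": 13, "ERROR": 17, "FATAL": 21,
}

def make_log_record(body: str, severity: str, time_ns: int, attributes: dict) -> dict:
    """Build one OTLP/JSON log record matching the example above."""
    level = severity.upper()
    return {
        "timeUnixNano": str(time_ns),
        "severityNumber": SEVERITY_NUMBER[level],
        "severityText": level,
        "body": {"stringValue": body},
        "attributes": [
            {"key": k, "value": {"stringValue": str(v)}}
            for k, v in attributes.items()
        ],
    }
```

A list of such records slots into the `logRecords` array of the envelope above before POSTing to `/v1/logs`.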
Fluentd Integration
# Fluentd configuration
<match **>
@type http
endpoint http://localhost:4318/v1/logs
content_type application/json
json_array true
</match>
Querying Data
Access stored telemetry data directly from the SQLite database:
Database Location
# User mode
~/.local/share/sqlite-otel/otel-collector.db
# Service mode
/var/lib/sqlite-otel-collector/otel-collector.db
Query Examples
# Connect to database
sqlite3 /var/lib/sqlite-otel-collector/otel-collector.db
# View recent traces
SELECT * FROM traces
ORDER BY timestamp DESC
LIMIT 10;
# Count traces by service
SELECT service_name, COUNT(*) as count
FROM traces
GROUP BY service_name;
# Find slow operations
SELECT operation_name, duration_ms
FROM traces
WHERE duration_ms > 1000
ORDER BY duration_ms DESC;
# Export data as JSON
.mode json
.output traces.json
SELECT * FROM traces;
.output stdout
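The same queries can run programmatically through Python's built-in sqlite3 module. A minimal sketch; the traces columns used here (operation_name, duration_ms) are taken from the query examples above and may differ between collector versions:

```python
import sqlite3

def slow_operations(conn: sqlite3.Connection, threshold_ms: float = 1000):
    """Return (operation_name, duration_ms) rows slower than the threshold,
    mirroring the 'find slow operations' query above."""
    return conn.execute(
        "SELECT operation_name, duration_ms FROM traces"
        " WHERE duration_ms > ? ORDER BY duration_ms DESC",
        (threshold_ms,),
    ).fetchall()

# Usage (service-mode path; adjust for user mode):
# conn = sqlite3.connect("/var/lib/sqlite-otel-collector/otel-collector.db")
# for name, ms in slow_operations(conn):
#     print(name, ms)
```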
Database Schema
The collector stores data in three main tables:
- traces - Distributed trace spans
- metrics - Time-series metrics data
- logs - Structured log entries
Visualization Tools
You can visualize the data using:
- SQLite browser applications
- Custom dashboards with SQL queries
- Export to JSON for external tools
Docker
Deploy the collector using Docker for easy containerized operations:
Quick Start
# Run with Docker
docker run -d \
  --name sqlite-otel \
  -p 4318:4318 \
  -v sqlite-otel-data:/var/lib/sqlite-otel-collector \
  -v sqlite-otel-logs:/var/log \
  --restart unless-stopped \
  redshiftvelocity/sqlite-otel:latest
Docker Compose
version: '3.8'
services:
  sqlite-otel:
    image: redshiftvelocity/sqlite-otel:latest
    container_name: sqlite-otel
    ports:
      - "4318:4318"
    volumes:
      - sqlite-data:/var/lib/sqlite-otel-collector
      - sqlite-logs:/var/log
    environment:
      - LOG_MAX_SIZE=50
      - LOG_MAX_BACKUPS=5
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4318/health"]
      interval: 30s
      timeout: 10s
      retries: 3
volumes:
  sqlite-data:
  sqlite-logs:
Environment Variables
| Variable | Description | Default |
|---|---|---|
| SQLITE_OTEL_PORT | Port to listen on | 4318 |
| LOG_MAX_SIZE | Max log size in MB | 100 |
| LOG_MAX_BACKUPS | Number of log backups | 7 |
Kubernetes
Deploy the collector in Kubernetes clusters using various patterns:
DaemonSet Deployment
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sqlite-otel-collector
  namespace: observability
spec:
  selector:
    matchLabels:
      app: sqlite-otel-collector
  template:
    metadata:
      labels:
        app: sqlite-otel-collector
    spec:
      containers:
        - name: collector
          image: redshiftvelocity/sqlite-otel:latest
          ports:
            - containerPort: 4318
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /var/lib/sqlite-otel-collector
            - name: logs
              mountPath: /var/log
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
      volumes:
        - name: data
          hostPath:
            path: /var/lib/sqlite-otel
            type: DirectoryOrCreate
        - name: logs
          hostPath:
            path: /var/log/sqlite-otel
            type: DirectoryOrCreate
Service Definition
apiVersion: v1
kind: Service
metadata:
  name: sqlite-otel-collector
  namespace: observability
spec:
  selector:
    app: sqlite-otel-collector
  ports:
    - name: otlp-http
      port: 4318
      targetPort: 4318
      protocol: TCP
  type: ClusterIP
Sidecar Pattern
Add the collector as a sidecar to your application pods:
containers:
  - name: app
    image: myapp:latest
    # Your app configuration
  - name: sqlite-otel
    image: redshiftvelocity/sqlite-otel:latest
    ports:
      - containerPort: 4318
    volumeMounts:
      - name: telemetry-data
        mountPath: /var/lib/sqlite-otel-collector
Systemd Service
The collector includes a hardened systemd service configuration for production deployments:
Service Configuration
# /etc/systemd/system/sqlite-otel-collector.service
[Unit]
Description=SQLite OpenTelemetry Collector
Documentation=https://github.com/RedShiftVelocity/sqlite-otel
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=sqlite-otel
Group=sqlite-otel
ExecStart=/usr/bin/sqlite-otel-collector
Restart=always
RestartSec=5
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/sqlite-otel-collector /var/log
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictNamespaces=true
RestrictSUIDSGID=true
RemoveIPC=true
PrivateMounts=true
[Install]
WantedBy=multi-user.target
Managing the Service
# Enable and start
sudo systemctl enable --now sqlite-otel-collector
# Check status
sudo systemctl status sqlite-otel-collector
# View logs
sudo journalctl -u sqlite-otel-collector -f
# Restart
sudo systemctl restart sqlite-otel-collector
# Stop
sudo systemctl stop sqlite-otel-collector
Log Rotation
The service automatically rotates logs based on size and age:
- Maximum file size: 100MB (configurable)
- Keep 7 backup files (configurable)
- Compress old files to save space
- Delete files older than 30 days