Logging¶
Configuration¶
OnSphere modules allow their log levels to be defined in the module's configuration:
"loggingConfiguration": {
"moduleLogLevel": "INFO",
"scriptLogLevel": "DEBUG",
"externalLogLevel": "WARN"
}
moduleLogLevel: configures logs produced by the module itself.
scriptLogLevel: configures logs produced by scripts run inside the module.
externalLogLevel: configures logs produced by external dependencies used by the module.
The following log levels are supported:
TRACE
DEBUG
INFO
WARN
ERROR
Note
In modules written in C++ (osp-snmp-trap, osp-modbus), the externalLogLevel field is ignored, since most dependencies do not allow much control over their logging behavior.
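For example, to trace script execution while keeping the module itself and its dependencies quiet, only the script verbosity can be raised (the chosen levels here are illustrative; only the loggingConfiguration structure and its three fields come from the documentation above):

```json
{
  "loggingConfiguration": {
    "moduleLogLevel": "WARN",
    "scriptLogLevel": "TRACE",
    "externalLogLevel": "ERROR"
  }
}
```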
Service logs¶
All OnSphere services write their logs to stdout, as Docker recommends. This means any Docker log collector available for a Swarm can be used.
Service log configuration¶
Service logging is configured by appending the service.default-logger file to each service's configuration. This means the log size limit applies PER SERVICE. If a service needs a specific logger, use the ${{disable-generic-logger-conf}} keyword.
Note
External services do not use the default configuration from service.default-logger.
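As a sketch, a service that needs its own logger might opt out of the generic configuration and declare a dedicated logging section. The exact placement of the keyword inside the service definition is an assumption; only the ${{disable-generic-logger-conf}} keyword itself comes from this documentation, and max-size/max-file are standard json-file driver options:

```yaml
my-service:
  # Hypothetical placement: opt out of the generic logger configuration
  ${{disable-generic-logger-conf}}
  logging:
    driver: json-file
    options:
      max-size: "50m"
      max-file: "5"
```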
Default logging stack¶
By default, we provide an ELK logging stack composed of:
Filebeat: aggregates logs from all containers and enriches them with Docker metadata
Logstash: filters logs and/or extracts additional information
Elasticsearch: indexes logs based on the information extracted in the previous steps
Kibana: visualizes and filters logs
Stack service configuration¶
The default stack service configuration is provided in the stack.external-services file as follows:
elasticsearch:
  image: "docker.elastic.co/elasticsearch/elasticsearch:7.6.0"
  environment:
    - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    - "discovery.type=single-node"
  ports:
    - "9200:9200"
  volumes:
    - elasticsearch_data:/usr/share/elasticsearch/data
  logging:
    driver: none
kibana:
  image: "docker.elastic.co/kibana/kibana:7.6.0"
  ports:
    - "5601:5601"
  logging:
    driver: none
filebeat:
  image: "docker.elastic.co/beats/filebeat:7.6.0"
  user: root
  volumes:
    - /var/lib/docker:/var/lib/docker:ro
    - /var/run/docker.sock:/var/run/docker.sock
  configs:
    - source: filebeat.yml
      target: /usr/share/filebeat/filebeat.yml
      mode: 0644
  logging:
    driver: none
logstash:
  image: "docker.elastic.co/logstash/logstash:7.6.0"
  command: "logstash -f /usr/share/logstash/logstash.conf"
  configs:
    - source: logstash.conf
      target: /usr/share/logstash/logstash.conf
      mode: 444
  logging:
    driver: none
A volume for Elasticsearch is also defined in the stack.volumes file:
volumes:
  # other volumes ...
  elasticsearch_data:
Configuration files for Filebeat and Logstash are provided as Docker configs. They are declared in the stack.configs file:
configs:
  # other configs ...
  filebeat.yml:
    external: true
  logstash.conf:
    external: true
Configuration files are created during deployment with the following default configuration:
filebeat.yml:
filebeat.inputs:
  - type: container
    paths:
      - '/var/lib/docker/containers/*/*.log'
    multiline.pattern: '^\s+'
    multiline.negate: false
    multiline.match: after
processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"
  - decode_json_fields:
      fields: ["message"]
      target: "json"
      overwrite_keys: true
output.logstash:
  hosts: ["logstash:5044"]
logging.json: true
logging.metrics.enabled: false
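The multiline settings above join indented continuation lines (typically Java stack traces) to the preceding log line before shipping, so a stack trace arrives as a single event. A minimal Python sketch of that grouping logic (the helper name is ours, not a Filebeat API):

```python
import re

# Mirrors Filebeat's multiline settings (pattern '^\s+', negate: false,
# match: after): a line starting with whitespace joins the previous event.
CONTINUATION = re.compile(r'^\s+')

def group_multiline(lines):
    """Group raw log lines into events; indented lines join the previous event."""
    events = []
    for line in lines:
        if CONTINUATION.match(line) and events:
            events[-1] += "\n" + line
        else:
            events.append(line)
    return events

raw = [
    "ERROR something failed",
    "    at com.example.Foo.bar(Foo.java:42)",
    "    at com.example.Main.main(Main.java:7)",
    "INFO recovered",
]
print(group_multiline(raw))
```

With the input above, the four raw lines collapse into two events: the error with its two stack-trace lines attached, and the following info line.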
logstash.conf:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{GREEDYDATA:sdn_log_time} \[%{GREEDYDATA:sdn_log_thread}\] %{LOGLEVEL:sdn_log_level} \s*%{GREEDYDATA:sdn_log_message}" }
    add_field => { "sdn_log_type" => "JAVA" }
  }
  grok {
    match => { "message" => "\[%{TIMESTAMP_ISO8601:sdn_log_datetime}\] \[%{LOGLEVEL:sdn_log_level}\] %{GREEDYDATA:sdn_log_message}" }
    add_field => { "sdn_log_type" => "C++" }
  }
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:sdn_log_datetime} %{WORD:sdn_log_level} \s*%{GREEDYDATA:sdn_log_message}" }
    add_field => { "sdn_log_type" => "MONGO" }
  }
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:sdn_log_datetime} \[%{LOGLEVEL:sdn_log_level}\] %{GREEDYDATA:sdn_log_message}" }
    add_field => { "sdn_log_type" => "RABBIT" }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
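To make the four grok filters concrete, here are illustrative log lines each pattern would match (these are fabricated examples for demonstration, not lines captured from a real OnSphere deployment):

```
2024-01-01 12:00:00,123 [main] INFO  Module started             <- JAVA
[2024-01-01T12:00:00.123] [INFO] Connection established         <- C++
2024-01-01T12:00:00.123+0000 I  Connection accepted             <- MONGO
2024-01-01 12:00:00.123 [info] accepting AMQP connection        <- RABBIT
```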
The default ELK logging stack can be removed by simply deleting the four services above from the stack.external-services file.
Visualizing logs¶
Logs can be visualized by accessing Kibana at http://<YOUR_IP_ADDRESS>:5601. Kibana usage is outside the scope of this manual, but a guide is available in the Kibana documentation.
Alternative examples¶
Here are some examples of alternatives to the default ELK stack for log aggregation.
Rsyslog¶
A simple rsyslog aggregation stack can be configured as follows:
version: "3"
networks:
  logging:
services:
  logspout:
    image: gliderlabs/logspout:latest
    networks:
      - logging
    volumes:
      - /etc/hostname:/etc/host_hostname:ro
      - /var/run/docker.sock:/var/run/docker.sock
    command:
      # A poor man's rsyslog: sudo socat -u UDP-RECV:5014 STDOUT
      syslog://10.110.0.106:514
    deploy:
      mode: global
      resources:
        limits:
          cpus: '0.20'
          memory: 256M
        reservations:
          cpus: '0.10'
          memory: 128M
Remarks¶
Docker containers use the json-file logging driver by default, and the default logging stack is configured to take advantage of it to extract information from the JSON structure of messages. The driver can be configured for each container individually, but be aware that using any driver other than json-file will prevent further use of the `docker service logs` and `docker container logs` commands, as well as Portainer's log visualization.
When available disk space is too low (more than 95% of the disk used), Elasticsearch switches its indices to read-only mode, which can lead to errors in Kibana like FORBIDDEN/12/index read-only.
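If you keep the json-file driver but want to bound its disk usage, a per-service override can be sketched in compose syntax (the service name is hypothetical; max-size and max-file are standard options of Docker's json-file driver):

```yaml
services:
  my-service:
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate the log file after 10 MB
        max-file: "3"     # keep at most 3 rotated files
```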