The `awslogs` logging driver sends container logs to Amazon CloudWatch Logs. Log entries can be retrieved through the AWS Management Console or the AWS SDKs and Command Line Tools.
To use the `awslogs` driver as the default logging driver, set the `log-driver` and `log-opt` keys to appropriate values in the `daemon.json` file, which is located in `/etc/docker/` on Linux hosts or `C:\ProgramData\docker\config\daemon.json` on Windows Server. For more about configuring Docker using `daemon.json`, see daemon.json.

The following example sets the log driver to `awslogs` and sets the `awslogs-region` option.
```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1"
  }
}
```
Restart Docker for the changes to take effect.
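The same `daemon.json` change can be scripted. The following is a minimal Python sketch (not part of the Docker tooling) that merges the `awslogs` defaults into an existing `daemon.json` without clobbering other settings; the demo writes to a scratch path rather than the real config location:

```python
import json
import os
import tempfile

def set_awslogs_defaults(path, region):
    """Merge awslogs defaults into daemon.json, preserving other settings."""
    config = {}
    if os.path.exists(path):
        with open(path) as f:
            config = json.load(f)
    config["log-driver"] = "awslogs"
    config.setdefault("log-opts", {})["awslogs-region"] = region
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config

# Demonstrate against a scratch file; on a real host the path would be
# /etc/docker/daemon.json (Linux) or C:\ProgramData\docker\config\daemon.json.
demo_path = os.path.join(tempfile.mkdtemp(), "daemon.json")
cfg = set_awslogs_defaults(demo_path, "us-east-1")
print(json.dumps(cfg, indent=2))
```

Remember that the daemon still needs a restart after the file is rewritten.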
You can set the logging driver for a specific container by using the `--log-driver` option to `docker run`:

```console
$ docker run --log-driver=awslogs ...
```
If you are using Docker Compose, set `awslogs` using the following declaration example:

```yaml
myservice:
  logging:
    driver: awslogs
    options:
      awslogs-region: us-east-1
```
Amazon CloudWatch Logs options
You can add logging options to the `daemon.json` to set Docker-wide defaults, or use the `--log-opt NAME=VALUE` flag to specify Amazon CloudWatch Logs logging driver options when starting a container.
awslogs-region
The `awslogs` logging driver sends your Docker logs to a specific region. Use the `awslogs-region` log option or the `AWS_REGION` environment variable to set the region. By default, if your Docker daemon is running on an EC2 instance and no region is set, the driver uses the instance's region.

```console
$ docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 ...
```
awslogs-endpoint
By default, Docker uses either the `awslogs-region` log option or the detected region to construct the remote CloudWatch Logs API endpoint. Use the `awslogs-endpoint` log option to override the default endpoint with the provided endpoint.
Note

The `awslogs-region` log option or detected region controls the region used for signing. You may experience signature errors if the endpoint you've specified with `awslogs-endpoint` uses a different region.
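For reference, the default endpoint derived from a region follows the standard AWS endpoint pattern. This small Python sketch (an illustration, not driver code) shows the construction for the standard AWS partition; other partitions such as AWS GovCloud and China regions use different domain suffixes:

```python
def default_logs_endpoint(region):
    """Default CloudWatch Logs API endpoint for a region in the standard
    AWS partition (other partitions use different domain suffixes)."""
    return f"https://logs.{region}.amazonaws.com"

print(default_logs_endpoint("us-east-1"))  # https://logs.us-east-1.amazonaws.com
```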
awslogs-group
You must specify a log group for the `awslogs` logging driver. You can specify the log group with the `awslogs-group` log option:

```console
$ docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=myLogGroup ...
```
awslogs-stream
To configure which log stream should be used, you can specify the `awslogs-stream` log option. If not specified, the container ID is used as the log stream.
Note
Log streams within a given log group should only be used by one container at a time. Using the same log stream for multiple containers concurrently can cause reduced logging performance.
awslogs-create-group
The log driver returns an error by default if the log group doesn't exist. However, you can set `awslogs-create-group` to `true` to automatically create the log group as needed. The `awslogs-create-group` option defaults to `false`.

```console
$ docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myLogGroup \
  --log-opt awslogs-create-group=true \
  ...
```
Note
Your AWS IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group`.
awslogs-datetime-format
The `awslogs-datetime-format` option defines a multi-line start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. Thus the matched line is the delimiter between log messages.

One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
This option always takes precedence if both `awslogs-datetime-format` and `awslogs-multiline-pattern` are configured.
Note
Multi-line logging performs regular expression parsing and matching of all log messages, which may have a negative impact on logging performance.
Consider the following log stream, where new log messages start with a timestamp:

```
[May 01, 2017 19:00:01] A message was logged
[May 01, 2017 19:00:04] Another multi-line message was logged
Some random message
with some random words
[May 01, 2017 19:01:32] Another message was logged
```
The format can be expressed as a strftime expression of `[%b %d, %Y %H:%M:%S]`, and the `awslogs-datetime-format` value can be set to that expression:

```console
$ docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myLogGroup \
  --log-opt awslogs-datetime-format='\[%b %d, %Y %H:%M:%S\]' \
  ...
```
This parses the logs into the following CloudWatch log events:
```
# First event
[May 01, 2017 19:00:01] A message was logged

# Second event
[May 01, 2017 19:00:04] Another multi-line message was logged
Some random message
with some random words

# Third event
[May 01, 2017 19:01:32] Another message was logged
```
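The grouping behavior can be approximated in Python. The sketch below (illustrative only, not the driver's actual code, which converts the strftime expression into a regular expression) uses `datetime.strptime` to decide whether a line starts a new event, then folds continuation lines into the preceding event:

```python
from datetime import datetime

FMT = "[%b %d, %Y %H:%M:%S]"  # the strftime expression from the example

def starts_new_event(line, fmt=FMT, width=23):
    """True if the line begins with a timestamp matching the strftime format.
    width is the length of the formatted timestamp prefix."""
    try:
        datetime.strptime(line[:width], fmt)
        return True
    except ValueError:
        return False

def group_events(lines):
    """Fold raw log lines into multi-line events, mimicking the driver."""
    events = []
    for line in lines:
        if starts_new_event(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

stream = [
    "[May 01, 2017 19:00:01] A message was logged",
    "[May 01, 2017 19:00:04] Another multi-line message was logged",
    "Some random message",
    "with some random words",
    "[May 01, 2017 19:01:32] Another message was logged",
]
for event in group_events(stream):
    print(event)
    print("---")
```

Running this yields three events, with the two unmatched lines folded into the second one.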
The following `strftime` codes are supported:
| Code | Meaning | Example |
|------|---------|---------|
| `%a` | Weekday abbreviated name. | Mon |
| `%A` | Weekday full name. | Monday |
| `%w` | Weekday as a decimal number where 0 is Sunday and 6 is Saturday. | 0 |
| `%d` | Day of the month as a zero-padded decimal number. | 08 |
| `%b` | Month abbreviated name. | Feb |
| `%B` | Month full name. | February |
| `%m` | Month as a zero-padded decimal number. | 02 |
| `%Y` | Year with century as a decimal number. | 2008 |
| `%y` | Year without century as a zero-padded decimal number. | 08 |
| `%H` | Hour (24-hour clock) as a zero-padded decimal number. | 19 |
| `%I` | Hour (12-hour clock) as a zero-padded decimal number. | 07 |
| `%p` | AM or PM. | AM |
| `%M` | Minute as a zero-padded decimal number. | 57 |
| `%S` | Second as a zero-padded decimal number. | 04 |
| `%L` | Milliseconds as a zero-padded decimal number. | .123 |
| `%f` | Microseconds as a zero-padded decimal number. | 000345 |
| `%z` | UTC offset in the form +HHMM or -HHMM. | +1300 |
| `%Z` | Time zone name. | PST |
| `%j` | Day of the year as a zero-padded decimal number. | 363 |
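Most of these codes mirror Python's own `strftime` (though `%L` is driver-specific and has no Python equivalent). A quick Python check of a few of them against a fixed timestamp:

```python
from datetime import datetime, timezone

# A fixed timestamp so each code's output is predictable.
ts = datetime(2008, 2, 8, 19, 57, 4, 345, tzinfo=timezone.utc)

print(ts.strftime("%a"))        # abbreviated weekday, e.g. Fri
print(ts.strftime("%d"))        # zero-padded day of month: 08
print(ts.strftime("%b"))        # abbreviated month: Feb
print(ts.strftime("%Y"))        # year with century: 2008
print(ts.strftime("%H:%M:%S"))  # 24-hour time: 19:57:04
print(ts.strftime("%f"))        # microseconds, zero-padded: 000345
```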
awslogs-multiline-pattern
The `awslogs-multiline-pattern` option defines a multi-line start pattern using a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. Thus the matched line is the delimiter between log messages.
This option is ignored if `awslogs-datetime-format` is also configured.
Note
Multi-line logging performs regular expression parsing and matching of all log messages. This may have a negative impact on logging performance.
Consider the following log stream, where each log message should start with the pattern `INFO`:

```
INFO A message was logged
INFO Another multi-line message was logged
     Some random message
INFO Another message was logged
```
You can use the regular expression of `^INFO`:

```console
$ docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myLogGroup \
  --log-opt awslogs-multiline-pattern='^INFO' \
  ...
```
This parses the logs into the following CloudWatch log events:
```
# First event
INFO A message was logged

# Second event
INFO Another multi-line message was logged
     Some random message

# Third event
INFO Another message was logged
```
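The same grouping can be sketched in Python with the `re` module (illustrative only, not the driver's actual implementation):

```python
import re

PATTERN = re.compile(r"^INFO")  # the multi-line start pattern

def group_events(lines, pattern=PATTERN):
    """A line matching the pattern starts a new event; any other line
    is appended to the current event."""
    events = []
    for line in lines:
        if pattern.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

stream = [
    "INFO A message was logged",
    "INFO Another multi-line message was logged",
    "     Some random message",
    "INFO Another message was logged",
]
for event in group_events(stream):
    print(event)
    print("---")
```

As with `awslogs-datetime-format`, the unmatched line is folded into the event started by the preceding matching line.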
tag
Specify `tag` as an alternative to the `awslogs-stream` option. `tag` interprets Go template markup, such as `{{.ID}}`, `{{.FullID}}`, or `{{.Name}}`, for example `docker.{{.ID}}`. See the tag option documentation for details on supported template substitutions.
When both `awslogs-stream` and `tag` are specified, the value supplied for `awslogs-stream` overrides the template specified with `tag`.
If not specified, the container ID is used as the log stream.
Note

The CloudWatch log API doesn't support `:` in the log name. This can cause some issues when using `{{ .ImageName }}` as a tag, since a Docker image has a format of `IMAGE:TAG`, such as `alpine:latest`. Template markup can be used to get the proper format. To get the image name and the first 12 characters of the container ID, you can use:

```console
--log-opt tag='{{ with split .ImageName ":" }}{{join . "_"}}{{end}}-{{.ID}}'
```

The output is something like:

```
alpine_latest-bf0072049c76
```
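The effect of that template can be reproduced in Python, which may help when predicting stream names ahead of time (illustrative only; the image name and container ID below come from the example above):

```python
def safe_stream_name(image_name, container_id):
    """Mimic the Go template
    {{ with split .ImageName ":" }}{{join . "_"}}{{end}}-{{.ID}}:
    replace ':' in IMAGE:TAG with '_' and append the 12-character short
    container ID (in the template, {{.ID}} is already the short ID)."""
    return "_".join(image_name.split(":")) + "-" + container_id[:12]

print(safe_stream_name("alpine:latest", "bf0072049c76"))
# alpine_latest-bf0072049c76
```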
awslogs-force-flush-interval-seconds
The `awslogs` driver periodically flushes logs to CloudWatch. The `awslogs-force-flush-interval-seconds` option changes the log flush interval, in seconds. The default is 5 seconds.
awslogs-max-buffered-events
The `awslogs` driver buffers logs before sending them. The `awslogs-max-buffered-events` option changes the log buffer size. The default is 4K.
Credentials

You must provide AWS credentials to the Docker daemon to use the `awslogs` logging driver. You can provide these credentials with the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` environment variables, the default AWS shared credentials file (`~/.aws/credentials` of the root user), or, if you are running the Docker daemon on an Amazon EC2 instance, the Amazon EC2 instance profile.
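The shared credentials file uses an INI-style format that can be read with Python's `configparser`; the sketch below parses an in-memory sample using AWS's documented example credential values, not real keys:

```python
import configparser
import io

# A credentials file in the AWS shared-credentials format (INI-style).
# The values are AWS's documented placeholder examples.
sample = """\
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(sample))
creds = parser["default"]
print(creds["aws_access_key_id"])  # AKIAIOSFODNN7EXAMPLE
```

On a real host the daemon reads this from `~/.aws/credentials` of the root user.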
Credentials must have a policy applied that allows the `logs:CreateLogStream` and `logs:PutLogEvents` actions, as shown in the following example.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```