Docker – Best way to log to two different CloudWatch log streams from an ECS container

amazon-cloudwatch amazon-ecs amazon-web-services docker

We are running our services on AWS's ECS platform, and we send our logs to AWS CloudWatch.

We have two types of logs, and any container can produce either type:

  1. the usual application logs (access, error, whatnot); these must be easily viewable by devs and admins
  2. audit logs (human readable "who did what when" logs); access to these must be restricted

The audit logs are mandated by regulations, and in addition to stricter access control requirements, they have a longer retention time than the app logs, so putting the two in the same log stream is not really an option. So we use two log streams, one in a CloudWatch log group that has a strict access policy.
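For the record, the two log groups differ only in retention and in the access policy attached to them (the access restriction itself is plain IAM and not shown here). Setting them up with different retention is a one-off step along these lines; the group names and retention periods below are made up for illustration:

```python
import boto3

logs = boto3.client("logs", region_name="eu-west-1")

# Hypothetical group names and retention periods; the real values are
# whatever our conventions and the regulations dictate.
APP_GROUP = "/ecs/myservice/app"      # 30 days
AUDIT_GROUP = "/ecs/myservice/audit"  # ~10 years

for group, retention_days in ((APP_GROUP, 30), (AUDIT_GROUP, 3653)):
    try:
        logs.create_log_group(logGroupName=group)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass  # group already exists; just (re)apply the retention policy
    logs.put_retention_policy(logGroupName=group, retentionInDays=retention_days)
```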

Currently, we are writing the logs to separate disk files, from where a log agent sends the log entries off to CloudWatch. However, we'd like to switch to "The Docker Way" of logging, that is, write all logs to STDOUT or STDERR, and let a log driver take care of the rest. This sounds particularly attractive, because the log disks are (very nearly) the only disk mounts we are using, and getting rid of them would be Very Nice indeed. (Apart from the log disks, our containers are strictly read-only.)
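For context, "The Docker Way" on ECS means pointing the container's logConfiguration at the awslogs driver in the task definition. A minimal sketch (task name, image, region and group name are placeholders) looks something like this, and it also shows the crux of the problem: there is exactly one logConfiguration, and hence one log group, per container:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

# Hypothetical task and container names; the interesting part is the
# single logConfiguration block per container.
ecs.register_task_definition(
    family="myservice",
    containerDefinitions=[
        {
            "name": "myservice",
            "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/myservice:latest",
            "essential": True,
            "memory": 512,
            "readonlyRootFilesystem": True,  # no log volume mounts needed any more
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    # everything written to STDOUT/STDERR lands in this one group
                    "awslogs-group": "/ecs/myservice/app",
                    "awslogs-region": "eu-west-1",
                    "awslogs-stream-prefix": "myservice",
                },
            },
        }
    ],
)
```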

The problem is, we cannot figure out a sensible way to keep the log streams separate. The obvious thing to do is to somehow tag the log messages and separate them later, but every way we can think of doing that has a problem:

  • The sensible way would be to have the log driver route messages into different log streams based on the message tags. The awslogs log driver for Docker doesn't support this.
  • The "brute force" way would be to write to a single CloudWatch log stream, and reprocess that stream with a self-written filter that writes to two other log streams (sketched after this list). Since every log entry would then be ingested into CloudWatch twice, this would roughly double the costs, and is therefore out of the question.
  • We could also set up a log host and use another Docker log driver (e.g. syslog) to send all the logs there, then split the streams and forward them to CloudWatch. This would add a choke point and a SPOF to all logging, so it doesn't sound too good either.
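For illustration, here is a rough sketch of the brute-force splitter (the group names and the "AUDIT " tag convention are made up, and pagination and stream creation are glossed over). It mainly shows why everything gets paid for twice: each entry is read back out of the combined group and ingested again into one of the target groups:

```python
import boto3

logs = boto3.client("logs")

# Hypothetical names and tag convention: the application prefixes every
# audit line with "AUDIT "; everything else counts as an application log.
COMBINED_GROUP = "/ecs/myservice/combined"
APP_GROUP = "/ecs/myservice/app"
AUDIT_GROUP = "/ecs/myservice/audit"
STREAM = "splitter"  # assumed to already exist in both target groups


def split_once(start_time_ms: int) -> None:
    """Read everything newer than start_time_ms from the combined group and
    rewrite each event into the app or audit group. Every entry is ingested
    twice, which is what doubles the bill. Pagination (nextToken) omitted."""
    events = logs.filter_log_events(
        logGroupName=COMBINED_GROUP, startTime=start_time_ms
    )["events"]

    app, audit = [], []
    for event in events:
        target = audit if event["message"].startswith("AUDIT ") else app
        target.append({"timestamp": event["timestamp"], "message": event["message"]})

    for group, batch in ((APP_GROUP, app), (AUDIT_GROUP, audit)):
        if batch:
            batch.sort(key=lambda e: e["timestamp"])  # PutLogEvents wants chronological order
            logs.put_log_events(
                logGroupName=group, logStreamName=STREAM, logEvents=batch
            )
```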

Hopefully, we are missing something obvious, in which case we'd greatly appreciate the help.

If not, are there any workarounds (or proper solutions, even) to get this kind of thing working?

Best Answer

Regarding the audit log, would you like to share what you want to audit? In general, for this kind of purpose you may want to look at CloudTrail or GuardDuty.