How to enable docker execution stats for a job type


Confidence Level TBD  This article has not been reviewed for accuracy, timeliness, or completeness. Check that this information is valid before acting on it.


 

Jira tickets: (see https://hysds-core.atlassian.net/browse/HC-137, https://hysds-core.atlassian.net/browse/HC-138, https://hysds-core.atlassian.net/browse/HC-139)

Introduction

This guide will teach you how to enable and visualize docker execution stats for HySDS jobs.

By default, when a verdi worker executes a job (i.e. executes the docker command for the job/PGE), it tracks the wall clock run time of the docker command (taking a timestamp before and after to calculate the duration). In addition to this metric, verdi also tracks the number and size of input files that were localized and the number and size of output files that were published. Aside from these, no other resource utilization metrics are recorded.

However, there are cases when end users are interested in tracking the resource utilization of a job (docker container execution). In particular, they are interested in the docker container's total CPU utilization and maximum memory usage. The docker stats command is a feature of docker that allows end users to track the live resource utilization of any and all containers: https://docs.docker.com/engine/reference/commandline/stats/. For example:



$ docker stats --all --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}" fervent_panini 5acfcb1b4fd1 drunk_visvesvaraya big_heisenberg
CONTAINER            CPU %    MEM USAGE / LIMIT
fervent_panini       0.00%    56KiB / 15.57GiB
5acfcb1b4fd1         0.07%    32.86MiB / 15.57GiB
drunk_visvesvaraya   0.00%    0B / 0B
big_heisenberg       0.00%    0B / 0B

The problem with utilizing docker stats is that a container's statistics are destroyed upon completion of the docker command. To track the docker stats, the verdi worker would need to do the following (a rough shell sketch follows the list):

  1. Create a separate thread to run docker stats (or similar functionality using the ephemeral cgroups files) concurrent to the execution of the docker command.

  2. Record the stream of metrics being returned by #1.

  3. Aggregate and dump the metrics after the docker container execution is completed and destroyed.
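As a rough illustration of this approach (this is not the actual verdi implementation; the container name, image tag, output file, and polling interval below are made up for the sketch), a background polling loop could look like:

$ # run the job/PGE container in the background of the shell
$ docker run --name my_pge_run my-pge-image:latest &
$ # steps 1 and 2: poll docker stats and record the stream while the container is alive
$ while [ -n "$(docker ps -q --filter name=my_pge_run)" ]; do
>     docker stats --no-stream --format "{{.CPUPerc}}\t{{.MemUsage}}" my_pge_run >> stats_stream.tsv
>     sleep 1
> done
$ # step 3: aggregate stats_stream.tsv (e.g. max memory, mean CPU) after the container exits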

Per https://www.datadoghq.com/blog/how-to-collect-docker-metrics/, an alternative to extracting the docker statistics of a container's execution is to retrieve them from the pseudo-files available under /sys/fs/cgroup within the container. Utilizing this method, the docker container can itself dump out the metrics collected from the pseudo-files prior to exiting. To enable this, a shim must be installed in the docker container and utilized by the entrypoint to execute the docker command.
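For illustration, the cgroup pseudo-files can be read from inside any container. A minimal example, assuming a cgroup v1 host (the reported value will vary per run):

$ docker run --rm hysds/pge-base:latest cat /sys/fs/cgroup/memory/memory.max_usage_in_bytes
3862528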

As of HySDS framework v3.0.0-rc.6, the shim and its enabling docker entrypoint script are available within the hysds/pge-base and hysds/cuda-pge-base docker images. The following example shows how to invoke the shim via the entrypoint, along with the _docker_stats.json file it dumps containing the docker stats:



$ mkdir /tmp/test
$ cd /tmp/test
$ docker pull hysds/pge-base:latest
latest: Pulling from hysds/pge-base
Digest: sha256:b972b059185d1f2517754eaa87ecc70f6307ca3dec288c5625806d2b8953e87c
Status: Image is up to date for hysds/pge-base:latest
$ ls -al
total 4
drwxrwxr-x   2 hysdsops hysdsops    6 Dec  9 23:50 .
drwxrwxrwt. 12 root     root     4096 Dec  9 23:50 ..
$ docker run --rm -ti -u $ID:$(id -g) -v $(pwd):/home/ops/test -w /home/ops/test hysds/pge-base:latest sleep 5
$ ls -al
total 4
drwxrwxr-x   2 hysdsops hysdsops    6 Dec  9 23:50 .
drwxrwxrwt. 12 root     root     4096 Dec  9 23:50 ..
$ docker run --rm -ti -u $ID:$(id -g) -v $(pwd):/home/ops/test \
    -w /home/ops/test --entrypoint "/entrypoint-pge-with-stats.sh" \
    hysds/pge-base:latest sleep 5
$ ls -al
total 8
drwxrwxr-x   2 hysdsops hysdsops   32 Dec  9 23:50 .
drwxrwxrwt. 12 root     root     4096 Dec  9 23:50 ..
-rw-r--r--   1 root     hysdsops 3006 Dec  9 23:50 _docker_stats.json
$ cat _docker_stats.json
{
  "wall_time": 5001177284,
  "user_cpu_time": 846000,
  "sys_cpu_time": 0,
  "cgroups": {
    "cpu_stats": {
      "cpu_usage": {
        "total_usage": 162841171,
        "percpu_usage": [
          61628201,
          31858986,
          29914333,
          39453546
        ],
        "usage_in_kernelmode": 70000000,
        "usage_in_usermode": 60000000
      },
      "throttling_data": {}
    },
    "memory_stats": {
      "cache": 335872,
      "usage": {
        "usage": 1433600,
        "max_usage": 5177344,
        "failcnt": 0,
        "limit": 9223372036854771712
      },
      "swap_usage": {
        "usage": 1433600,
        "max_usage": 5177344,
        "failcnt": 0,
        "limit": 9223372036854771712
      },
      "kernel_usage": {
        "failcnt": 0,
        "limit": 9223372036854771712
      },
      "kernel_tcp_usage": {
        "failcnt": 0,
        "limit": 9223372036854771712
      },
      "stats": {
        "active_anon": 1019904,
        "active_file": 28672,
        "cache": 335872,
        "hierarchical_memory_limit": 9223372036854771712,
        "hierarchical_memsw_limit": 9223372036854771712,
        "inactive_anon": 0,
        "inactive_file": 307200,
        "mapped_file": 0,
        "pgfault": 19885,
        "pgmajfault": 0,
        "pgpgin": 5280,
        "pgpgout": 4944,
        "rss": 1040384,
        "rss_huge": 0,
        "swap": 0,
        "total_active_anon": 1019904,
        "total_active_file": 28672,
        "total_cache": 335872,
        "total_inactive_anon": 0,
        "total_inactive_file": 307200,
        "total_mapped_file": 0,
        "total_pgfault": 19885,
        "total_pgmajfault": 0,
        "total_pgpgin": 5280,
        "total_pgpgout": 4944,
        "total_rss": 1040384,
        "total_rss_huge": 0,
        "total_swap": 0,
        "total_unevictable": 0,
        "unevictable": 0
      }
    },
    "pids_stats": {
      "current": 7
    },
    "blkio_stats": {
      "io_service_bytes_recursive": [
        { "major": 202, "op": "Read" },
        { "major": 202, "op": "Write", "value": 461312 },
        { "major": 202, "op": "Sync", "value": 461312 },
        { "major": 202, "op": "Async" },
        { "major": 202, "op": "Total", "value": 461312 }
      ],
      "io_serviced_recursive": [
        { "major": 202, "op": "Read" },
        { "major": 202, "op": "Write", "value": 41 },
        { "major": 202, "op": "Sync", "value": 41 },
        { "major": 202, "op": "Async" },
        { "major": 202, "op": "Total", "value": 41 }
      ]
    },
    "hugetlb_stats": {
      "1GB": { "failcnt": 0 },
      "2MB": { "failcnt": 0 }
    }
  }
}



NOTE: the output of the _docker_stats.json file specifies CPU time metrics in terms of nanoseconds and memory usage in terms of bytes. Refer to the "Pseudo-files" section at https://www.datadoghq.com/blog/how-to-collect-docker-metrics/ (reproduced below under the Reference section) for more information.
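For example, assuming jq is available, the headline numbers from the dump above can be converted to more familiar units (the values shown correspond to the example output above):

$ jq '{wall_time_s: (.wall_time/1e9), total_cpu_s: (.cgroups.cpu_stats.cpu_usage.total_usage/1e9), max_mem_mib: (.cgroups.memory_stats.usage.max_usage/1048576)}' _docker_stats.json
{
  "wall_time_s": 5.001177284,
  "total_cpu_s": 0.162841171,
  "max_mem_mib": 4.9375
}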

Requirements

  1. HySDS framework v3.0.0-rc.6 or greater: the docker stats shim and its entrypoint script ship with the hysds/pge-base and hysds/cuda-pge-base images as of this release.

Setup

  1. To enable the dumping of the _docker_stats.json file for a set of HySDS jobs under a repo, edit the docker/Dockerfile so that

    1. the FROM image (base image) is either hysds/pge-base:latest or hysds/cuda-pge-base:latest

    2. the ENTRYPOINT is set to `/entrypoint-pge-with-stats.sh`

  2. The following example shows the docker/Dockerfile for the HySDS core repo lightweight-jobs

    FROM hysds/pge-base:latest
    MAINTAINER malarout "Namrata.Malarout@jpl.nasa.gov"
    LABEL description="Lightweight System Jobs"

    # provision lightweight-jobs PGE
    USER ops
    COPY . /home/ops/verdi/ops/lightweight-jobs

    # set entrypoint
    ENTRYPOINT ["/entrypoint-pge-with-stats.sh"]
    WORKDIR /home/ops
    CMD ["/bin/bash", "--login"]



  3. Rebuild and redeploy the docker container using your CI instance (a local sanity check is sketched after this list).

  4. You're done. Because your HySDS cluster was installed using v3.0.0-rc.6 or greater:

    1. the verdi job worker will detect the existence of _docker_stats.json files in the job work directory and publish the stats with the job status on mozart and the job metric info on metrics

    2. your Kibana job metrics dashboard will show visualizations of total CPU usage and max memory usage for each job type (example dashboard screenshot not reproduced here)
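If you want to sanity-check the rebuilt image locally before pushing it through CI (per step 3 above), you can build it, run a trivial command against it, and confirm that _docker_stats.json is produced. The lightweight-jobs:test tag below is just an illustrative local tag:

$ docker build --rm -t lightweight-jobs:test -f docker/Dockerfile .
$ docker run --rm -u $(id -u):$(id -g) -v $(pwd):/home/ops/test -w /home/ops/test lightweight-jobs:test sleep 5
$ ls -al _docker_stats.json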



Reference

Source: https://www.datadoghq.com/blog/how-to-collect-docker-metrics/

Docker exposes metrics via three mechanisms: pseudo-files in sysfs, the stats command, and the API. Metrics coverage across these three mechanisms is uneven (the comparison table from the original article is not reproduced here).

Pseudo-files

Docker metrics reported via pseudo-files in sysfs by default do not require privileged (root) access. They are also the fastest and most lightweight way to read metrics; if you are monitoring many containers per host, speed may become a requirement. However, you cannot collect all metrics from pseudo-files; in particular, there may be limitations on I/O and network metrics.



Pseudo-file location

This article assumes your metrics pseudo-files are located in /sys/fs/cgroup in the host OS. In some systems, they may be in /cgroup instead.

Your pseudo-file access path includes the long ID of your container. For illustration purposes this article assumes that you have set an env variable CONTAINER_ID to the long ID of the container you are monitoring. If you'd like to copy-paste commands from this article, you can set CONTAINER_ID at launch time:

CONTAINER_ID=$(docker run [OPTIONS] IMAGE [COMMAND] [ARG...])

or you can look it up after launching with docker ps --no-trunc and then copy-paste the long ID into an env variable:

CONTAINER_ID=<long ID>

CPU pseudo-files

CPU metrics are reported in cpu and cpuacct (CPU accounting).

OS-specific metric paths

In the commands below, we use the metric directory for standard Linux systems (/sys/fs/cgroup/cpuacct/docker/$CONTAINER_ID/).

Usage
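The command and its output were not reproduced in this copy of the article; assuming the standard cgroup v1 cpuacct.stat pseudo-file, the reading discussed below looks roughly like this (values chosen to match the figures quoted in the text):

$ cat /sys/fs/cgroup/cpuacct/docker/$CONTAINER_ID/cpuacct.stat
user 2451
system 966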

If you're using an x86 system, the times above are expressed in 10-millisecond increments, so the recently-booted container above has spent 24.51s running user processes, and 9.66s on system calls. (Technically the times are expressed in user jiffies.)

CPU Usage per core

Per-CPU usage can help you identify core imbalances, which can be caused by bad configuration.

If your container is using multiple CPU cores and you want a convenient total usage number, you can run:
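The original commands were not reproduced here; assuming the standard cgroup v1 cpuacct files, per-core usage and the total across cores (both in nanoseconds) can be read as follows (output values are illustrative):

$ cat /sys/fs/cgroup/cpuacct/docker/$CONTAINER_ID/cpuacct.usage_percpu
45094018900 48416810064 1091158238 1087799000
$ cat /sys/fs/cgroup/cpuacct/docker/$CONTAINER_ID/cpuacct.usage
95689786202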

Throttled CPU

If you set a limit on the CPU time available to a container with a CPU quota constraint, your container will be throttled when it attempts to exceed the limit.
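On cgroup v1 the throttling counters live in the cpu controller's cpu.stat pseudo-file (values below are illustrative):

$ cat /sys/fs/cgroup/cpu/docker/$CONTAINER_ID/cpu.stat
nr_periods 565
nr_throttled 559
throttled_time 12119585961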

Memory pseudo-files

The following command will print a lot of information about memory usage, probably more than you need. Note that the measures in the first half of the output have no prefix and exclude sub-cgroups; those in the second half are prefixed with "total_" and include sub-cgroups.
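The command itself was not reproduced in this copy; on cgroup v1 it is a read of the memory.stat pseudo-file:

$ cat /sys/fs/cgroup/memory/docker/$CONTAINER_ID/memory.stat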

You can get the most interesting memory metrics directly by reading specific pseudo-files in the /sys/fs/cgroup/memory/docker/$CONTAINER_ID/ directory:
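A reconstruction of those reads, assuming the standard cgroup v1 memory pseudo-files:

$ cd /sys/fs/cgroup/memory/docker/$CONTAINER_ID/
$ cat memory.usage_in_bytes      # current usage, including page cache
$ cat memory.max_usage_in_bytes  # peak usage
$ cat memory.limit_in_bytes      # memory limit, if one was set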

Note that if the final command returns a long garbage number like 18446744073709551615, you did not set the limit when you launched the container. To set a 500MB limit, for example:
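The limit is set with docker run's -m/--memory flag, for example:

$ docker run -m 500M IMAGE [COMMAND]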

Further information about the memory metrics can be found in the official documentation.

I/O pseudo-files

The path to I/O stats pseudo-files for most operating systems is: /sys/fs/cgroup/blkio/docker/$CONTAINER_ID/.

Depending on your system, you may have many metrics available from these pseudo-files: blkio.io_queued_recursive, blkio.io_service_time_recursive, blkio.io_wait_time_recursive and more.

On many systems, however, many of these pseudo-files only return zero values. In this case there are usually still two pseudo-files that work: blkio.throttle.io_service_bytes and blkio.throttle.io_serviced, which report total I/O bytes and operations, respectively. Contrary to their names, these numbers do not report throttled I/O but actual I/O bytes and ops.

The first two numbers reported by these pseudo-files are the major:minor device IDs, which uniquely identify a device. Example output from blkio.throttle.io_service_bytes:
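The example output was not reproduced in this copy; it takes the following form (the device IDs and byte counts below are illustrative):

$ cat /sys/fs/cgroup/blkio/docker/$CONTAINER_ID/blkio.throttle.io_service_bytes
253:0 Read 13750272
253:0 Write 180224
253:0 Sync 180224
253:0 Async 13750272
253:0 Total 13930496
Total 13930496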

Network pseudo-files

Docker version 1.6.1 and greater

In release 1.6.1, Docker fixed read/write /proc paths.
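The commands were not reproduced here; the usual approach is to resolve the container's PID on the host and read its network counters from /proc (a reconstruction using standard docker inspect and /proc paths):

$ CONTAINER_PID=$(docker inspect -f '{{ .State.Pid }}' $CONTAINER_ID)
$ cat /proc/$CONTAINER_PID/net/dev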

Older versions of Docker

You can get network metrics from ip netns, with some symlinking:
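A reconstruction of that approach (ip netns exec requires a named network namespace, hence the symlink under /var/run/netns):

$ CONTAINER_PID=$(docker inspect -f '{{ .State.Pid }}' $CONTAINER_ID)
$ mkdir -p /var/run/netns
$ ln -sf /proc/$CONTAINER_PID/ns/net /var/run/netns/$CONTAINER_ID
$ ip netns exec $CONTAINER_ID netstat -i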

Stats command

The docker stats command will continuously report a live stream of basic CPU, memory, and network metrics. As of version 1.9.0, docker stats also includes disk I/O metrics.

CPU stats

CPU is reported as % of total host capacity. So if you have two containers each using as much CPU as they can, each allocated the same CPU shares by docker, then the stats command for each would register 50% utilization, though in practice their CPU resources would be fully utilized.

Memory stats

If you do not explicitly set the memory limits for the container, then the memory usage limit will be the memory limit of the host machine. If the host is using memory for other processes, your container will run out of memory before it hits the limit reported by the stats command.

I/O stats

As of Docker version 1.9.0, docker stats now displays total bytes read and written.

Network stats

Displays total bytes received (RX) and transmitted (TX).

Requirements

  1. Docker version 1.5.0 (released February 2015) or higher

  2. Exec driver ‘libcontainer’, which has been the default since Docker 0.9.

API

Like the docker stats command, the API will continuously report a live stream of CPU, memory, I/O, and network metrics. The difference is that the API provides far more detail than the stats command.

The daemon listens on unix:///var/run/docker.sock to allow only local connections by the root user. When you launch Docker, however, you can bind it to another port or socket; see the Docker documentation for instructions and strong warnings. This article describes how to access the API on the default socket.

You can send commands to the socket with nc. All API calls will take this general form:
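A reconstruction of the general form, assuming a netcat build with Unix-socket support (-U); <endpoint> is filled in per call:

$ echo -e "GET /<endpoint> HTTP/1.1\r\n" | sudo nc -U /var/run/docker.sock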

To collect all metrics in a continuously updated live stream of JSON, run:
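For the live stats stream the endpoint is /containers/<id>/stats:

$ echo -e "GET /containers/$CONTAINER_ID/stats HTTP/1.1\r\n" | sudo nc -U /var/run/docker.sock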

The response will be long, live-streaming chunks of JSON with metrics about the container. Rather than print an entire example JSON object here, its parts are discussed individually below.

CPU

system_cpu_usage represents the host’s cumulative CPU usage in nanoseconds; this includes user, system, idle, etc. (the sum of the /proc/stat CPU line).

All other CPU metrics can also be accessed through pseudo-files, as described above, with a few differences:

  • usage_in_kernelmode is the same as system CPU usage reported by pseudo-files, although the API expresses this value in nanoseconds rather than 10-millisecond increments. As you can see, in the example reading in this article, both methods report the same number: 9.66s

  • usage_in_usermode is the same as user CPU usage reported by pseudo-files. As above, this number is reported in nanoseconds.

Memory

Most of the memory stats available through the API are also available through the pseudo-files described in that section above: usage is memory.usage_in_bytes, max_usage is memory.max_usage_in_bytes, stats is the memory.stat pseudo-file, and limit is the memory limit set on the container (memory.limit_in_bytes) if one is set; otherwise limit is the host memory limit from /proc/meminfo (MemTotal).

I/O

The API currently reports a count of read, write, sync, and async operations, plus a total count of operations in blkio_stats.io_serviced_recursive. The total bytes corresponding to those operations are reported in blkio_stats.io_service_bytes_recursive. Depending on your system, other I/O stats may also be reported, or may be disabled (empty). Major and minor IDs uniquely identify a device.

Network

The API is the easiest way to get network metrics for your container. (RX represents “received”, and TX represents “transmitted”.)

Selecting specific Docker metrics

By sending output from the API to grep to throw out non-JSON rows, and then to jq for JSON parsing, we can create a stream of selected metrics. Some examples are below.

CPU stats
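A reconstruction of this example (the grep pattern drops the HTTP response headers so only the JSON lines reach jq):

$ echo -e "GET /containers/$CONTAINER_ID/stats HTTP/1.1\r\n" | sudo nc -U /var/run/docker.sock | grep "^{" | jq '.cpu_stats'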

IO bytes written
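A sketch for cumulative bytes written; the jq filter selects the "Write" entry from io_service_bytes_recursive in the API schema described above:

$ echo -e "GET /containers/$CONTAINER_ID/stats HTTP/1.1\r\n" | sudo nc -U /var/run/docker.sock | grep "^{" | jq '.blkio_stats.io_service_bytes_recursive[] | select(.op == "Write") | .value'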

Network bytes received
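A sketch for received bytes; note that the field name varies by Docker version (older daemons report a single .network object, newer ones a .networks map keyed by interface). This assumes the older, flat form:

$ echo -e "GET /containers/$CONTAINER_ID/stats HTTP/1.1\r\n" | sudo nc -U /var/run/docker.sock | grep "^{" | jq '.network.rx_bytes'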

API requirements

Same as the stats command, above.

Additional API calls

Other useful Docker API calls are documented in the Docker Engine API reference. You can call them using nc as described above.







