Introduction to Journald and Structured Logging

To start, journald’s minimalist webpage describes it as:

[a] service that collects and stores logging data. It creates and maintains structured, indexed journals.

With one of the sources of logging data being:

Structured log messages

Before diving into journald, what’s structured logging?

Structured Logging

Imagine the following two hypothetical log outputs:

"INFO: On May 22nd 2017 5:30pm UTC, Nick Foobar accessed gooogle.com/#q=what is love in 52ms"

And

{
  "datetime": "2017-05-22T17:30:00Z",
  "level": "INFO",
  "user": "Nick Foobar",
  "url": "gooogle.com/#q=what is love",
  "responseTime": 52
}

The first example is unstructured (or simple, textual) logging. The date format makes it a tad contrived, but the log line contains everything in a rather human-readable format. From the line, I know when someone accessed what and how fast, all in a sentence-like structure. If I were tasked with reading log lines all day, I would prefer the first output.

The second output is an example of structured logging using JSON. Notice that it conveys the same information, but instead of a sentence, the output is a set of key-value pairs. Considering JSON support is ubiquitous, querying and retrieving values would be trivial in any programming language, whereas one would need to meticulously parse the first output to avoid ambiguity. For instance, not everyone has a given and last name, the response time’s units need to be parsed, the url is arbitrary text, and the timezone needs converting. There are far too many pitfalls to storing logs in the first format; it would be too hard to analyze them consistently.
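
To make the difference concrete, here is a rough sketch of pulling values out of the JSON line above in Python; the snippet is mine, not from any particular logging library, and any language with a JSON parser would look much the same.

import json

line = ('{"datetime": "2017-05-22T17:30:00Z", "level": "INFO", '
        '"user": "Nick Foobar", "url": "gooogle.com/#q=what is love", '
        '"responseTime": 52}')

entry = json.loads(line)

# Each field is addressed by name, and responseTime is already a number,
# so no unit stripping or string-to-int conversion is needed.
if entry["level"] == "INFO" and entry["responseTime"] > 50:
    print(entry["user"], "took", entry["responseTime"], "ms for", entry["url"])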

One could massage a textual log format into semi-structured output by using colons as field delimiters:

"INFO: 2017-05-22T17:30:00Z: Nick Foobar: gooogle.com/#q=what is love: 52"

This may be a happy medium for those unable or unwilling to adopt structured logging, but there are still pitfalls. For example, if I wanted to find all the log statements with a WARN level, I would have to remember to match against only the beginning of the log line, or I’d run the risk of matching WARN in the user name or the url. What if I wanted to find all of the searches by Ben Stiller? I’d need to be careful to exclude the lines where people are searching for “Who is Ben Stiller”. These examples are not artificial, either, as yours truly has fallen victim to several of these mistakes.

Let’s say that one does manage to gain some insight from the textual format using text manipulation. If the log format were ever to change (e.g. transposing response time and url, logging more data, etc.), the log parsing code would break. So if you’re planning on gaining insight from text logs, make sure you define a rigorous standard first!
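
To show how much care even the delimited format demands, here is an illustrative Python reader for that line (the helper names are my own, not from the article):

import re

def parse(line):
    # Fragile: relies on the exact field order and on the user and url never
    # containing ": " themselves; transposing fields or adding a new one
    # silently mis-assigns values or blows up on the int conversion.
    level, timestamp, user, url, response_time = line.split(": ", 4)
    return {"level": level, "datetime": timestamp, "user": user,
            "url": url, "responseTime": int(response_time)}

def is_warning(line):
    # Anchor the match to the start of the line; a plain substring search
    # for "WARN" would also match a user or url that happens to contain WARN.
    return re.match(r"WARN: ", line) is not None

print(parse("INFO: 2017-05-22T17:30:00Z: Nick Foobar: "
            "gooogle.com/#q=what is love: 52")["responseTime"])  # 52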

There is also the nice benefit of being able to work with types in structured logging. Instead of everything being a string, JSON also has a numeric type, so one doesn’t need a conversion step when analyzing.

Structured logging doesn’t need to be JSON, but JSON is a common format in log management suites like Graylog, Logstash, Fluentd, etc.

The only downsides that I’ve seen to structured logging (and specifically JSON structured logging) are that log files grow due to the keys added for disambiguation, and that the output is no longer a grammatically correct English sentence! These seem like minor downsides compared to the benefit of easier log analysis.

Journald

Now that we’ve established the case for structured logging, onto journald. Be warned, this is a much more controversial topic.

Journald is the logging component of systemd, which was a rethinking of Linux’s boot and process management. A lot of feathers were ruffled, and are still ruffled, because of the movement towards systemd (1, 2, 3, 4, 5). Wow, that’s a multitude of complaints. There must be several redeeming qualities to systemd, though, because most distros are converging on it. I won’t be talking about systemd itself, but rather its logging component.

To put it simply, journald is a structured, binary log that is indexed and rotated. It was introduced in 2011.

Here’s how we would query the log for all messages written by sshd:

$  journalctl _COMM=sshd
-- Logs begin at Thu 2017-05-18 23:43:18 EDT, end at Mon 2017-05-22 16:05:29 EDT. --
May 19 16:57:31 vm-ubuntu sshd[19494]: syslogin_perform_logout: logout() returned an error
May 19 16:57:31 vm-ubuntu sshd[19494]: pam_unix(sshd:session): session closed for user nick
May 22 09:03:40 vm-ubuntu sshd[5311]: Accepted password for nick from 192.168.137.1 port 56618 ssh2
May 22 09:03:40 vm-ubuntu sshd[5311]: pam_unix(sshd:session): session opened for user nick by (uid=0)

For all sshd messages since yesterday:

$  journalctl -S yesterday  _COMM=sshd
-- Logs begin at Thu 2017-05-18 23:43:18 EDT, end at Mon 2017-05-22 16:10:59 EDT. --
May 22 09:03:40 vm-ubuntu sshd[5311]: Accepted password for nick from 192.168.137.1 port 56618 ssh2
May 22 09:03:40 vm-ubuntu sshd[5311]: pam_unix(sshd:session): session opened for user nick by (uid=0)

To view properties for autossh and sshd messages since yesterday (the + between the two match expressions means “OR”; without it, entries would need to match both expressions). Output is truncated to the first event:

$  journalctl -o verbose -S yesterday  _COMM=sshd + _COMM=autossh
-- Logs begin at Thu 2017-05-18 23:43:18 EDT, end at Mon 2017-05-22 16:12:45 EDT. --
Mon 2017-05-22 07:01:20.894720 EDT
    PRIORITY=6
    _UID=1000
    _GID=1000
    _CAP_EFFECTIVE=0
    _SYSTEMD_OWNER_UID=1000
    _SYSTEMD_SLICE=user-1000.slice
    _BOOT_ID=...
    _MACHINE_ID=...
    _HOSTNAME=vm-ubuntu
    _TRANSPORT=syslog
    _AUDIT_LOGINUID=1000
    SYSLOG_FACILITY=1
    SYSLOG_IDENTIFIER=autossh
    SYSLOG_PID=46342
    MESSAGE=timeout polling to accept read connection
    _PID=46342
    _COMM=autossh
    _EXE=/usr/lib/autossh/autossh
    _CMDLINE=/usr/lib/autossh/autossh <IP>
    _AUDIT_SESSION=2
    _SYSTEMD_CGROUP=/user.slice/user-1000.slice/session-2.scope
    _SYSTEMD_SESSION=2
    _SYSTEMD_UNIT=session-2.scope
    _SOURCE_REALTIME_TIMESTAMP=...
...

To find all events logged through the journal API for autossh (there are none, since autossh logs through the syslog transport, as seen in the verbose output above):

$ journalctl _TRANSPORT=journal _COMM=autossh
-- No entries --

Find all possible values written for a given field:

$  journalctl --field=_TRANSPORT
syslog
journal
stdout

What I think about journald

I want journald to be the next big thing. Having one place on your server where all logs are sent sounds like a dream come true: no longer do I have to look up where log files are stored.

Journald has nice size-based log rotation, meaning I no longer have to be woken up at night because a rogue log grew unbounded and degraded other services.

Gone are the days of arguing about what format logs should be in; those arguments would be replaced with discussions about what metadata to expose.

With journald I can cut down on the number of external services that each service talks to. Instead of having every service write metrics to carbon directly, metrics would be written to journald. This way applications don’t need to jump through the hoops of proper connection management: should they re-connect on every metric sent, hold a single persistent connection, or use some sort of hybrid? By logging to journald, carbon or the log forwarder can be down, but metrics will still be written to the local filesystem. There is very little that would cause absolute data loss.
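
As a sketch of what that could look like, assuming the python-systemd bindings are installed (the METRIC_* field names here are made up for the example, not an established convention):

from systemd import journal  # provided by the python-systemd package

def emit_metric(name, value):
    # Write the metric as a structured journal entry. Custom journal fields
    # must be uppercase; a forwarder can pick these entries up later and
    # relay them to carbon, even if carbon happens to be down right now.
    journal.send("metric {}={}".format(name, value),
                 PRIORITY="6",  # 6 = informational
                 METRIC_NAME=name,
                 METRIC_VALUE=str(value))

emit_metric("response_ms", 52)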

People can use the tools that they are most familiar with: some can use journalctl with the indexes on the local box, and others will want to see the bigger picture once the same logs are aggregated into another system.

Complaints Against journald

  • Journald can’t be used outside of systemd, which limits it to only newer distros that have adopted systemd. I have CentOS 6 servers, so it’s a hard no to use journald on those systems.
  • Journald writes to a binary file that one can’t use standard unix tools to dissect, resulting in difficulty if the log becomes corrupt. If the log is not corrupt, one can pipe the output of journalctl to the standard tools.
  • There’s not a great story for centralizing journald files. The introduction mentioned copying the files to another server. People have found a way using journalctl -o json and sending the output to their favorite log aggregation service.
  • A lot of third party plugins for journald ingestion for log management suites don’t appear well maintained.
  • It invented another logging service instead of working with pre-existing tools. Considering syslog can work with structured data, that’s one less reason to switch to journald.
  • The data format is not standardized or well documented.
  • Will not support encryption other than file-system encryption. If a user has access to the file system and has permission to read the log file, all logs will be available.
  • No way to exclude sensitive information from the log (like passwords on the commandline) – though you’re probably doing something wrong if this is an issue.
  • The best way to communicate with journald programmatically seems to be either through the C API or journalctl; a sketch of the journalctl route follows this list.
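
For what it’s worth, the journalctl route is workable from nearly any language, and it doubles as the journalctl -o json approach mentioned in the centralization complaint. Here is a minimal sketch in Python; the flags are the same ones used earlier, and only the wrapper code is my own:

import json
import subprocess

# Ask journalctl for structured JSON output (one object per line) and
# filter it like any other structured log.
proc = subprocess.run(
    ["journalctl", "-o", "json", "-S", "yesterday", "_COMM=sshd"],
    capture_output=True, text=True)

for raw in proc.stdout.splitlines():
    entry = json.loads(raw)
    print(entry.get("_PID"), entry.get("MESSAGE"))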

With all these complaints, it may be a wonder why I lean towards advocating journald. By putting structured data first, journald is setting the tone for the logging ecosystem. Yes, I know that journald is far from the first, but the simplicity of having a single, queryable, structured log baked into the machine is admirable.
