
In a logging deployment, container and infrastructure logs are forwarded to the internal log store defined in the ClusterLogging custom resource (CR) by default.

Audit logs are not forwarded to the internal log store by default because the internal log store does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured.

If this default configuration meets your needs, you do not need to configure a ClusterLogForwarder CR. If a ClusterLogForwarder CR exists, logs are not forwarded to the internal log store unless a pipeline is defined that contains the default output.

About forwarding logs to third-party systems

To send logs to specific endpoints inside and outside your Red Hat OpenShift Service on AWS cluster, you specify a combination of outputs and pipelines in a ClusterLogForwarder custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes Secret object.

pipeline

Defines simple routing from one log type to one or more outputs; in other words, which logs you want to send where. The log types are one of the following:

  • application. Container logs generated by user applications running in the cluster, except infrastructure container applications.

  • infrastructure. Container logs from pods that run in the openshift*, kube*, or default projects, and journal logs sourced from the node file system.

  • audit. Audit logs generated by the node audit system, auditd, Kubernetes API server, OpenShift API server, and OVN network.

You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message.

input

Forwards the application logs associated with a specific project to a pipeline.

In the pipeline, you define which log types to forward using an inputRef parameter and where to forward the logs to using an outputRef parameter.

Secret

A key:value map that contains confidential data such as user credentials.

Note the following:

  • If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped.

  • You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.

The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-project project to the internal Elasticsearch instance.

Sample log forwarding outputs and pipelines
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name> (1)
  namespace: <log_forwarder_namespace> (2)
spec:
  serviceAccountName: <service_account_name> (3)
  outputs:
   - name: elasticsearch-secure (4)
     type: "elasticsearch"
     url: https://elasticsearch.secure.com:9200
     secret:
        name: elasticsearch
   - name: elasticsearch-insecure (5)
     type: "elasticsearch"
     url: http://elasticsearch.insecure.com:9200
   - name: kafka-app (6)
     type: "kafka"
     url: tls://kafka.secure.com:9093/app-topic
  inputs: (7)
   - name: my-app-logs
     application:
        namespaces:
        - my-project
  pipelines:
   - name: audit-logs (8)
     inputRefs:
      - audit
     outputRefs:
      - elasticsearch-secure
      - default
     labels:
       secure: "true" (9)
       datacenter: "east"
   - name: infrastructure-logs (10)
     inputRefs:
      - infrastructure
     outputRefs:
      - elasticsearch-insecure
     labels:
       datacenter: "west"
   - name: my-app (11)
     inputRefs:
      - my-app-logs
     outputRefs:
      - default
   - inputRefs: (12)
      - application
     outputRefs:
      - kafka-app
     labels:
       datacenter: "south"
1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
4 Configuration for a secure Elasticsearch output using a secret with a secure URL.
  • A name to describe the output.

  • The type of output: elasticsearch.

  • The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.

  • The secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project.

5 Configuration for an insecure Elasticsearch output:
  • A name to describe the output.

  • The type of output: elasticsearch.

  • The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.

6 Configuration for a Kafka output using client-authenticated TLS communication over a secure URL:
  • A name to describe the output.

  • The type of output: kafka.

  • Specify the URL and port of the Kafka broker as a valid absolute URL, including the prefix.

7 Configuration for an input to filter application logs from the my-project namespace.
8 Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance:
  • A name to describe the pipeline.

  • The inputRefs is the log type, in this example audit.

  • The outputRefs is the name of the output to use, in this example elasticsearch-secure to forward to the secure Elasticsearch instance and default to forward to the internal Elasticsearch instance.

  • Optional: Labels to add to the logs.

9 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
10 Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
11 Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance.
  • A name to describe the pipeline.

  • The inputRefs is a specific input: my-app-logs.

  • The outputRefs is default.

  • Optional: String. One or more labels to add to the logs.

12 Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
  • The inputRefs is the log type, in this example application.

  • The outputRefs is the name of the output to use.

  • Optional: String. One or more labels to add to the logs.

Fluentd log handling when the external log aggregator is unavailable

If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. Red Hat OpenShift Service on AWS rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.

Supported Authorization Keys

Common key types are provided here. Some output types support additional specialized keys, which are documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging does not attempt to verify mismatched combinations of authorization keys.

Transport Layer Security (TLS)

Using a TLS URL (https://... or ssl://...) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields:

  • tls.crt: (string) File name containing a client certificate. Enables mutual TLS (mTLS) authentication. Requires tls.key.

  • tls.key: (string) File name containing the private key to unlock the client certificate. Requires tls.crt.

  • passphrase: (string) Passphrase to decode an encoded TLS private key. Requires tls.key.

  • ca-bundle.crt: (string) File name of a customer CA for server authentication.

Username and Password
  • username: (string) Authentication user name. Requires password.

  • password: (string) Authentication password. Requires username.

Simple Authentication Security Layer (SASL)
  • sasl.enable: (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other sasl. keys are set.

  • sasl.mechanisms: (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used.

  • sasl.allow-insecure: (boolean) Allow mechanisms that send clear-text passwords. Defaults to false.

Creating a Secret

You can create a secret in the directory that contains your certificate and key files by using the following command:

$ oc create secret generic -n <namespace> <secret_name> \
  --from-file=ca-bundle.crt=<your_bundle_file> \
  --from-literal=username=<your_username> \
  --from-literal=password=<your_password>

Generic or opaque secrets are recommended for best results.
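If you prefer to manage the secret as a manifest, the following is a minimal sketch of an equivalent Secret object. The name my-output-secret is a hypothetical value that you reference from the secret.name field of the corresponding output:

apiVersion: v1
kind: Secret
metadata:
  name: my-output-secret
  namespace: openshift-logging
type: Opaque
stringData:                     # plain-text values; the API server stores them base64 encoded under data
  ca-bundle.crt: <your_ca_certificate_pem>
  username: <your_username>
  password: <your_password>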

Creating a log forwarder

To create a log forwarder, you must create a ClusterLogForwarder CR that specifies the log input types that the service account can collect. You can also specify which outputs the logs can be forwarded to. If you are using the multi log forwarder feature, you must also reference the service account in the ClusterLogForwarder CR.

If you are using the multi log forwarder feature on your cluster, you can create ClusterLogForwarder custom resources (CRs) in any namespace, using any name. If you are using a legacy implementation, the ClusterLogForwarder CR must be named instance, and must be created in the openshift-logging namespace.

You need administrator permissions for the namespace where you create the ClusterLogForwarder CR.

ClusterLogForwarder resource example
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name> (1)
  namespace: <log_forwarder_namespace> (2)
spec:
  serviceAccountName: <service_account_name> (3)
  pipelines:
   - inputRefs:
     - <log_type> (4)
     outputRefs:
     - <output_name> (5)
  outputs:
  - name: <output_name> (6)
    type: <output_type> (5)
    url: <log_output_url> (7)
# ...
1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
4 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application.
5 The type of output that you want to forward logs to. The value of this field can be default, loki, kafka, elasticsearch, fluentdForward, syslog, or cloudwatch.

The default output type is not supported in multi log forwarder implementations.

6 A name for the output that you want to forward logs to.
7 The URL of the output that you want to forward logs to.

Tuning log payloads and delivery

In logging 5.9 and newer versions, the tuning spec in the ClusterLogForwarder custom resource (CR) provides a means of configuring your deployment to prioritize either throughput or durability of logs.

For example, if you need to reduce the possibility of log loss when the collector restarts, or you require collected log messages to survive a collector restart to support regulatory mandates, you can tune your deployment to prioritize log durability. If you use outputs that have hard limitations on the size of batches they can receive, you may want to tune your deployment to prioritize log throughput.

To use this feature, your logging deployment must be configured to use the Vector collector. The tuning spec in the ClusterLogForwarder CR is not supported when using the Fluentd collector.

The following example shows the ClusterLogForwarder CR options that you can modify to tune log forwarder outputs:

Example ClusterLogForwarder CR tuning options
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  tuning:
    delivery: AtLeastOnce (1)
    compression: none (2)
    maxWrite: <integer> (3)
    minRetryDuration: 1s (4)
    maxRetryDuration: 1s (5)
# ...
1 Specify the delivery mode for log forwarding.
  • AtLeastOnce delivery means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.

  • AtMostOnce delivery means that the log forwarder makes no effort to recover logs lost during a crash. This mode gives better throughput, but may result in greater log loss.

2 Specifying a compression configuration causes data to be compressed before it is sent over the network. Note that not all output types support compression, and if the specified compression type is not supported by the output, this results in an error. The possible values for this configuration are none for no compression, gzip, snappy, zlib, or zstd. lz4 compression is also available if you are using a Kafka output. See the table "Supported compression types for tuning outputs" for more information.
3 Specifies a limit for the maximum payload of a single send operation to the output.
4 Specifies a minimum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (ms), seconds (s), or minutes (m).
5 Specifies a maximum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (ms), seconds (s), or minutes (m).
Table 1. Supported compression types for tuning outputs
Compression algorithm | Splunk | Amazon CloudWatch | Elasticsearch 8 | LokiStack | Apache Kafka | HTTP | Syslog | Google Cloud | Microsoft Azure Monitoring
gzip | X | X | X | X |  | X |  |  |
snappy |  | X |  | X | X | X |  |  |
zlib |  | X | X |  |  | X |  |  |
zstd |  | X |  |  | X | X |  |  |
lz4 |  |  |  |  | X |  |  |  |

Enabling multi-line exception detection

Enables multi-line error detection of container logs.

Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.

Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.

Example java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
    at testjava.Main.handle(Main.java:47)
    at testjava.Main.printMe(Main.java:19)
    at testjava.Main.main(Main.java:10)
  • To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field, with a value of true.

Example ClusterLogForwarder CR
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: my-app-logs
      inputRefs:
        - application
      outputRefs:
        - default
      detectMultilineErrors: true

Details

When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.

Table 2. Supported languages per collector
Multi-line exception detection is available for the following languages: Java, JS, Ruby, Python, Golang, PHP, and Dart.

Troubleshooting

When this feature is enabled, the collector configuration includes a new section with type: detect_exceptions.

Example vector configuration section
[transforms.detect_exceptions_app-logs]
 type = "detect_exceptions"
 inputs = ["application"]
 languages = ["All"]
 group_by = ["kubernetes.namespace_name","kubernetes.pod_name","kubernetes.container_name"]
 expire_after_ms = 2000
 multiline_flush_interval_ms = 1000
Example fluentd config section
<label @MULTILINE_APP_LOGS>
  <match kubernetes.**>
    @type detect_exceptions
    remove_tag_prefix 'kubernetes'
    message message
    force_line_breaks true
    multiline_flush_interval .2
  </match>
</label>

Forwarding logs to Splunk

You can forward logs to the Splunk HTTP Event Collector (HEC) in addition to, or instead of, the internal default Red Hat OpenShift Service on AWS log store.

Using this feature with Fluentd is not supported.

Prerequisites
  • Red Hat OpenShift Logging Operator 5.6 or later

  • A ClusterLogging instance with vector specified as the collector

  • Base64 encoded Splunk HEC token

Procedure
  1. Create a secret using your Base64 encoded Splunk HEC token.

    $ oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>
  2. Create or edit the ClusterLogForwarder Custom Resource (CR) using the template below:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> (1)
      namespace: <log_forwarder_namespace> (2)
    spec:
      serviceAccountName: <service_account_name> (3)
      outputs:
        - name: splunk-receiver (4)
          secret:
            name: vector-splunk-secret (5)
          type: splunk (6)
          url: <http://your.splunk.hec.url:8088> (7)
      pipelines: (8)
        - inputRefs:
            - application
            - infrastructure
          name: (9)
          outputRefs:
            - splunk-receiver (10)
    1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4 Specify a name for the output.
    5 Specify the name of the secret that contains your HEC token.
    6 Specify the output type as splunk.
    7 Specify the URL (including port) of your Splunk HEC.
    8 Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    9 Optional: Specify a name for the pipeline.
    10 Specify the name of the output to use when forwarding logs with this pipeline.

Forwarding logs over HTTP

Forwarding logs over HTTP is supported for both the Fluentd and Vector log collectors. To enable, specify http as the output type in the ClusterLogForwarder custom resource (CR).

Procedure
  • Create or edit the ClusterLogForwarder CR using the template below:

    Example ClusterLogForwarder CR
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> (1)
      namespace: <log_forwarder_namespace> (2)
    spec:
      serviceAccountName: <service_account_name> (3)
      outputs:
        - name: httpout-app
          type: http
          url: (4)
          http:
            headers: (5)
              h1: v1
              h2: v2
            method: POST
          secret:
            name: (6)
          tls:
            insecureSkipVerify: (7)
      pipelines:
        - name:
          inputRefs:
            - application
          outputRefs:
            - (8)
    1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4 Destination address for logs.
    5 Additional headers to send with the log record.
    6 Secret name for destination credentials.
    7 Values are either true or false.
    8 This value should be the same as the output name.
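    For illustration, a filled-in version of this template might look like the following sketch. The URL, header, secret name, and pipeline name are hypothetical values:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputs:
        - name: httpout-app
          type: http
          url: https://http-receiver.example.com:8443/logs
          http:
            headers:
              x-cluster: my-cluster
            method: POST
          secret:
            name: http-receiver-secret
          tls:
            insecureSkipVerify: false
      pipelines:
        - name: http-app-logs
          inputRefs:
            - application
          outputRefs:
            - httpout-app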

Forwarding to Azure Monitor Logs

With logging 5.9 and later, you can forward logs to Azure Monitor Logs in addition to, or instead of, the default log store. This functionality is provided by the Vector Azure Monitor Logs sink.

Prerequisites
  • You are familiar with how to administer and create a ClusterLogging custom resource (CR) instance.

  • You are familiar with how to administer and create a ClusterLogForwarder CR instance.

  • You understand the ClusterLogForwarder CR specifications.

  • You have basic familiarity with Azure services.

  • You have an Azure account configured for Azure Portal or Azure CLI access.

  • You have obtained your Azure Monitor Logs primary or the secondary security key.

  • You have determined which log types to forward.

To enable log forwarding to Azure Monitor Logs via the HTTP Data Collector API:

Create a secret with your shared key:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: openshift-logging
type: Opaque
data:
  shared_key: <your_shared_key> (1)
1 Must contain a primary or secondary key for the Log Analytics workspace making the request.

To obtain a shared key, you can use the following Azure PowerShell command:

Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName "<resource_name>" -Name "<workspace_name>"
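Alternatively, you can create the secret directly from the command line. The oc command base64 encodes literal values for you; the key value shown is a placeholder:

$ oc create secret generic my-secret -n openshift-logging \
  --from-literal=shared_key=<your_shared_key>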

Create or edit your ClusterLogForwarder CR using the template matching your log selection.

Forward all logs
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: azure-monitor
    type: azureMonitor
    azureMonitor:
      customerId: my-customer-id (1)
      logType: my_log_type (2)
    secret:
       name: my-secret
  pipelines:
  - name: app-pipeline
    inputRefs:
    - application
    outputRefs:
    - azure-monitor
1 Unique identifier for the Log Analytics workspace. Required field.
2 Azure record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters.
Forward application and infrastructure logs
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: azure-monitor-app
    type: azureMonitor
    azureMonitor:
      customerId: my-customer-id
      logType: application_log (1)
    secret:
      name: my-secret
  - name: azure-monitor-infra
    type: azureMonitor
    azureMonitor:
      customerId: my-customer-id
      logType: infra_log
    secret:
      name: my-secret
  pipelines:
    - name: app-pipeline
      inputRefs:
      - application
      outputRefs:
      - azure-monitor-app
    - name: infra-pipeline
      inputRefs:
      - infrastructure
      outputRefs:
      - azure-monitor-infra
1 Azure record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters.
Advanced configuration options
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: azure-monitor
    type: azureMonitor
    azureMonitor:
      customerId: my-customer-id
      logType: my_log_type
      azureResourceId: "/subscriptions/111111111" (1)
      host: "ods.opinsights.azure.com" (2)
    secret:
       name: my-secret
  pipelines:
  - name: app-pipeline
    inputRefs:
    - application
    outputRefs:
    - azure-monitor
1 Resource ID of the Azure resource the data should be associated with. Optional field.
2 Alternative host for dedicated Azure regions. Optional field. Default value is ods.opinsights.azure.com. Default value for Azure Government is ods.opinsights.azure.us.

Forwarding application logs from specific projects

You can forward a copy of the application logs from specific projects to an external log aggregator, in addition to, or instead of, using the internal log store. You must also configure the external log aggregator to receive log data from Red Hat OpenShift Service on AWS.

To configure forwarding application logs from a project, you must create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.

Prerequisites
  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure
  1. Create or edit a YAML file that defines the ClusterLogForwarder CR:

    Example ClusterLogForwarder CR
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance (1)
      namespace: openshift-logging (2)
    spec:
      outputs:
       - name: fluentd-server-secure (3)
         type: fluentdForward (4)
         url: 'tls://fluentdserver.security.example.com:24224' (5)
         secret: (6)
            name: fluentd-secret
       - name: fluentd-server-insecure
         type: fluentdForward
         url: 'tcp://fluentdserver.home.example.com:24224'
      inputs: (7)
       - name: my-app-logs
         application:
            namespaces:
            - my-project (8)
      pipelines:
       - name: forward-to-fluentd-insecure (9)
         inputRefs: (10)
         - my-app-logs
         outputRefs: (11)
         - fluentd-server-insecure
         labels:
           project: "my-project" (12)
       - name: forward-to-fluentd-secure (13)
         inputRefs:
         - application (14)
         - audit
         - infrastructure
         outputRefs:
         - fluentd-server-secure
         - default
         labels:
           clusterId: "C1234"
    1 The name of the ClusterLogForwarder CR must be instance.
    2 The namespace for the ClusterLogForwarder CR must be openshift-logging.
    3 The name of the output.
    4 The output type: elasticsearch, fluentdForward, syslog, or kafka.
    5 The URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    6 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt, tls.key, and ca-bundle.crt keys that each point to the certificates they represent.
    7 The configuration for an input to filter application logs from the specified projects.
    8 If no namespace is specified, logs are collected from all namespaces.
    9 The pipeline configuration directs logs from a named input to a named output. In this example, a pipeline named forward-to-fluentd-insecure forwards logs from an input named my-app-logs to an output named fluentd-server-insecure.
    10 A list of inputs.
    11 The name of the output to use.
    12 Optional: String. One or more labels to add to the logs.
    13 Configuration for a pipeline to send logs to other log aggregators.
    • Optional: Specify a name for the pipeline.

    • Specify which log types to forward by using the pipeline: application, infrastructure, or audit.

    • Specify the name of the output to use when forwarding logs with this pipeline.

    • Optional: Specify the default output to forward logs to the default log store.

    • Optional: String. One or more labels to add to the logs.

    14 Note that application logs from all namespaces are collected when using this configuration.
  2. Apply the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml

Forwarding application logs from specific pods

As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.

Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.

To specify the pod labels, you use one or more matchLabels key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected.

Procedure
  1. Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the pod labels using simple equality-based selectors under inputs[].name.application.selector.matchLabels, as shown in the following example.

    Example ClusterLogForwarder CR YAML file
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> (1)
      namespace: <log_forwarder_namespace> (2)
    spec:
      pipelines:
        - inputRefs: [ myAppLogData ] (3)
          outputRefs: [ default ] (4)
      inputs: (5)
        - name: myAppLogData
          application:
            selector:
              matchLabels: (6)
                environment: production
                app: nginx
            namespaces: (7)
            - app1
            - app2
      outputs: (8)
        - <output_name>
        ...
    1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3 Specify one or more comma-separated values from inputs[].name.
    4 Specify one or more comma-separated values from outputs[].
    5 Define a unique inputs[].name for each application that has a unique set of pod labels.
    6 Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
    7 Optional: Specify one or more namespaces.
    8 Specify one or more outputs to forward your log data to.
  2. Optional: To restrict the gathering of log data to specific namespaces, use inputs[].name.application.namespaces, as shown in the preceding example.

  3. Optional: You can send log data from additional applications that have different pod labels to the same pipeline.

    1. For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown.

    2. Update the selectors to match the pod labels of this application.

    3. Add the new inputs[].name value to inputRefs. For example:

      - inputRefs: [ myAppLogData, myOtherAppLogData ]
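      Putting these pieces together, the relevant parts of the spec might look like the following sketch, where myOtherAppLogData and its labels are hypothetical values:

      spec:
        inputs:
          - name: myAppLogData
            application:
              selector:
                matchLabels:
                  environment: production
                  app: nginx
          - name: myOtherAppLogData
            application:
              selector:
                matchLabels:
                  environment: production
                  app: billing
        pipelines:
          - inputRefs: [ myAppLogData, myOtherAppLogData ]
            outputRefs: [ default ]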
  4. Create the CR object:

    $ oc create -f <file-name>.yaml
Additional resources

Overview of API audit filter

OpenShift API servers generate audit events for each API call, detailing the request, the response, and the identity of the requester, which leads to large volumes of data. The API Audit filter uses rules to exclude non-essential events and reduce event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. How much data is included in an event is determined by the value of the level field:

  • None: The event is dropped.

  • Metadata: Audit metadata is included; request and response bodies are removed.

  • Request: Audit metadata and the request body are included; the response body is removed.

  • RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster.

You can use this feature only if the Vector collector is set up in your logging deployment.

In logging 5.8 and later, the ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions:

Wildcards

Names of users, groups, namespaces, and resources can have a leading or trailing asterisk (*) character. For example, the namespace openshift-* matches openshift-apiserver or openshift-authentication, and the resource */status matches Pod/status or Deployment/status.
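For example, a rule that uses wildcards to drop read-only events from OpenShift system namespaces, and another that logs status subresources at the Metadata level, might look like the following sketch (the verbs and levels shown are illustrative):

rules:
  - level: None
    namespaces: ["openshift-*"]     # any namespace that starts with openshift-
    verbs: ["get", "list", "watch"]
  - level: Metadata
    resources:
    - group: ""
      resources: ["*/status"]       # the status subresource of any core resource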

Default Rules

Events that do not match any rule in the policy are filtered as follows:

  • Read-only system events such as get, list, watch are dropped.

  • Service account write events that occur within the same namespace as the service account are dropped.

  • All other events are forwarded, subject to any configured rate limits.

To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule.
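For example, ending the rules list with a rule that has only a level field forwards all otherwise-unmatched events at that level instead of applying the default filtering:

rules:
  # ... more specific rules ...
  - level: Metadata   # catch-all rule; overrides the default filtering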

Omit Response Codes

You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which is a list of HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], no status codes are omitted.
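A minimal sketch of a filter that narrows the omitted status codes, assuming the field is set under the kubeAPIAudit filter spec as omitResponseCodes:

filters:
  - name: my-policy
    type: kubeAPIAudit
    kubeAPIAudit:
      omitResponseCodes: [404, 409]   # do not create events for responses with these status codes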

The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Service on AWS audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards, and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store, and a less detailed stream to a remote site.

The example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.

Example audit policy
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: my-pipeline
      inputRefs: [ audit ] (1)
      filterRefs: [ my-policy ] (2)
      outputRefs: [ default ]
  filters:
    - name: my-policy
      type: kubeAPIAudit
      kubeAPIAudit:
        # Don't generate audit events for all requests in RequestReceived stage.
        omitStages:
          - "RequestReceived"

        rules:
          # Log pod changes at RequestResponse level
          - level: RequestResponse
            resources:
            - group: ""
              resources: ["pods"]

          # Log "pods/log", "pods/status" at Metadata level
          - level: Metadata
            resources:
            - group: ""
              resources: ["pods/log", "pods/status"]

          # Don't log requests to a configmap called "controller-leader"
          - level: None
            resources:
            - group: ""
              resources: ["configmaps"]
              resourceNames: ["controller-leader"]

          # Don't log watch requests by the "system:kube-proxy" on endpoints or services
          - level: None
            users: ["system:kube-proxy"]
            verbs: ["watch"]
            resources:
            - group: "" # core API group
              resources: ["endpoints", "services"]

          # Don't log authenticated requests to certain non-resource URL paths.
          - level: None
            userGroups: ["system:authenticated"]
            nonResourceURLs:
            - "/api*" # Wildcard matching.
            - "/version"

          # Log the request body of configmap changes in kube-system.
          - level: Request
            resources:
            - group: "" # core API group
              resources: ["configmaps"]
            # This rule only applies to resources in the "kube-system" namespace.
            # The empty string "" can be used to select non-namespaced resources.
            namespaces: ["kube-system"]

          # Log configmap and secret changes in all other namespaces at the Metadata level.
          - level: Metadata
            resources:
            - group: "" # core API group
              resources: ["secrets", "configmaps"]

          # Log all other resources in core and extensions at the Request level.
          - level: Request
            resources:
            - group: "" # core API group
            - group: "extensions" # Version of group should NOT be included.

          # A catch-all rule to log all other requests at the Metadata level.
          - level: Metadata
1 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application.
2 The name of your audit policy.

Forwarding logs to an external Loki logging system

You can forward logs to an external Loki logging system in addition to, or instead of, the default log store.

To configure log forwarding to Loki, you must create a ClusterLogForwarder custom resource (CR) with an output to Loki, and a pipeline that uses the output. The output to Loki can use the HTTP (insecure) or HTTPS (secure HTTP) connection.

Prerequisites
  • You must have a Loki logging system running at the URL you specify with the url field in the CR.

Procedure
  1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> (1)
      namespace: <log_forwarder_namespace> (2)
    spec:
      serviceAccountName: <service_account_name> (3)
      outputs:
      - name: loki-insecure (4)
        type: "loki" (5)
        url: http://loki.insecure.com:3100 (6)
        loki:
          tenantKey: kubernetes.namespace_name
          labelKeys:
          - kubernetes.labels.foo
      - name: loki-secure (7)
        type: "loki"
        url: https://loki.secure.com:3100
        secret:
          name: loki-secret (8)
        loki:
          tenantKey: kubernetes.namespace_name (9)
          labelKeys:
          - kubernetes.labels.foo (10)
      pipelines:
      - name: application-logs (11)
        inputRefs:  (12)
        - application
        - audit
        outputRefs: (13)
        - loki-secure
    1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4 Specify a name for the output.
    5 Specify the type as "loki".
    6 Specify the URL and port of the Loki system as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address. Loki’s default port for HTTP(S) communication is 3100.
    7 For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret.
    8 For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificates it represents. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in the openshift-logging project. For more information, see the following "Example: Setting a secret that contains a username and password."
    9 Optional: Specify a metadata key field to generate values for the TenantID field in Loki. For example, setting tenantKey: kubernetes.namespace_name uses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section.
    10 Optional: Specify a list of metadata field keys to replace the default Loki labels. Loki label names must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*. Illegal characters in metadata keys are replaced with _ to form the label name. For example, the kubernetes.labels.foo metadata key becomes Loki label kubernetes_labels_foo. If you do not set labelKeys, the default value is: [log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host]. Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config. You can still query based on any log record field using query filters.
    11 Optional: Specify a name for the pipeline.
    12 Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    13 Specify the name of the output to use when forwarding logs with this pipeline.

    Because Loki requires log streams to be correctly ordered by timestamp, labelKeys always includes the kubernetes_host label set, even if you do not specify it. This inclusion ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts.

  2. Apply the ClusterLogForwarder CR object by running the following command:

    $ oc apply -f <filename>.yaml
Additional resources

Forwarding logs to an external Elasticsearch instance

You can forward logs to an external Elasticsearch instance in addition to, or instead of, the internal log store. You are responsible for configuring the external log aggregator to receive log data from Red Hat OpenShift Service on AWS.

To configure log forwarding to an external Elasticsearch instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection.

To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default output to forward logs to the internal instance.

If you only want to forward logs to an internal Elasticsearch instance, you do not need to create a ClusterLogForwarder CR.

Prerequisites
  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure
  1. Create or edit a YAML file that defines the ClusterLogForwarder CR:

    Example ClusterLogForwarder CR
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> (1)
      namespace: <log_forwarder_namespace> (2)
    spec:
      serviceAccountName: <service_account_name> (3)
      outputs:
       - name: elasticsearch-example (4)
         type: elasticsearch (5)
         elasticsearch:
           version: 8 (6)
         url: http://elasticsearch.example.com:9200 (7)
         secret:
           name: es-secret (8)
      pipelines:
       - name: application-logs (9)
         inputRefs: (10)
         - application
         - audit
         outputRefs:
         - elasticsearch-example (11)
         - default (12)
         labels:
           myLabel: "myValue" (13)
    # ...
    1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4 Specify a name for the output.
    5 Specify the elasticsearch type.
    6 Specify the Elasticsearch version. This can be 6, 7, or 8.
    7 Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address.
    8 For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in the openshift-logging project. For more information, see the following "Example: Setting a secret that contains a username and password."
    9 Optional: Specify a name for the pipeline.
    10 Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    11 Specify the name of the output to use when forwarding logs with this pipeline.
    12 Optional: Specify the default output to send the logs to the internal Elasticsearch instance.
    13 Optional: String. One or more labels to add to the logs.
  2. Apply the ClusterLogForwarder CR:

    $ oc apply -f <filename>.yaml
Example: Setting a secret that contains a username and password

You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance.

For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password.

  1. Create a Secret YAML file similar to the following example. Use base64-encoded values for the username and password fields. The secret type is opaque by default.

    apiVersion: v1
    kind: Secret
    metadata:
      name: openshift-test-secret
    data:
      username: <username>
      password: <password>
    # ...
  2. Create the secret:

    $ oc create -f openshift-test-secret.yaml -n openshift-logging
  3. Specify the name of the secret in the ClusterLogForwarder CR:

    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputs:
       - name: elasticsearch
         type: "elasticsearch"
         url: https://elasticsearch.secure.com:9200
         secret:
            name: openshift-test-secret
    # ...

    In the value of the url field, the prefix can be http or https.

  4. Apply the CR object:

    $ oc apply -f <filename>.yaml

Forwarding logs using the Fluentd forward protocol

You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from Red Hat OpenShift Service on AWS.

To configure log forwarding using the forward protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the Fluentd servers, and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection.

Prerequisites
  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure
  1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance (1)
      namespace: openshift-logging (2)
    spec:
      outputs:
       - name: fluentd-server-secure (3)
         type: fluentdForward (4)
         url: 'tls://fluentdserver.security.example.com:24224' (5)
         secret: (6)
            name: fluentd-secret
       - name: fluentd-server-insecure
         type: fluentdForward
         url: 'tcp://fluentdserver.home.example.com:24224'
      pipelines:
       - name: forward-to-fluentd-secure (7)
         inputRefs:  (8)
         - application
         - audit
         outputRefs:
         - fluentd-server-secure (9)
         - default (10)
         labels:
           clusterId: "C1234" (11)
       - name: forward-to-fluentd-insecure (12)
         inputRefs:
         - infrastructure
         outputRefs:
         - fluentd-server-insecure
         labels:
           clusterId: "C1234"
    1 The name of the ClusterLogForwarder CR must be instance.
    2 The namespace for the ClusterLogForwarder CR must be openshift-logging.
    3 Specify a name for the output.
    4 Specify the fluentdForward type.
    5 Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    6 If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must contain a ca-bundle.crt key that points to the certificate it represents.
    7 Optional: Specify a name for the pipeline.
    8 Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    9 Specify the name of the output to use when forwarding logs with this pipeline.
    10 Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
    11 Optional: String. One or more labels to add to the logs.
    12 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    • A name to describe the pipeline.

    • The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.

    • The outputRefs is the name of the output to use.

    • Optional: String. One or more labels to add to the logs.

  2. Create the CR object:

    $ oc create -f <file-name>.yaml

Enabling nanosecond precision for Logstash to ingest data from fluentd

For Logstash to ingest log data from fluentd, you must enable nanosecond precision in the Logstash configuration file.

Procedure
  • In the Logstash configuration file, set nanosecond_precision to true.

Example Logstash configuration file
input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } }
filter { }
output { stdout { codec => rubydebug } }

Forwarding logs using the syslog protocol

You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from Red Hat OpenShift Service on AWS.

To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.

Prerequisites
  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure
  1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> (1)
      namespace: <log_forwarder_namespace> (2)
    spec:
      serviceAccountName: <service_account_name> (3)
      outputs:
       - name: rsyslog-east (4)
         type: syslog (5)
         syslog: (6)
           facility: local0
           rfc: RFC3164
           payloadKey: message
           severity: informational
         url: 'tls://rsyslogserver.east.example.com:514' (7)
         secret: (8)
            name: syslog-secret
       - name: rsyslog-west
         type: syslog
         syslog:
          appName: myapp
          facility: user
          msgID: mymsg
          procID: myproc
          rfc: RFC5424
          severity: debug
         url: 'tcp://rsyslogserver.west.example.com:514'
      pipelines:
       - name: syslog-east (9)
         inputRefs: (10)
         - audit
         - application
         outputRefs: (11)
         - rsyslog-east
         - default (12)
         labels:
           secure: "true" (13)
           syslog: "east"
       - name: syslog-west (14)
         inputRefs:
         - infrastructure
         outputRefs:
         - rsyslog-west
         - default
         labels:
           syslog: "west"
    1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4 Specify a name for the output.
    5 Specify the syslog type.
    6 Optional: Specify the syslog parameters, listed below.
    7 Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    8 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. In legacy implementations, the secret must exist in the openshift-logging project.
    9 Optional: Specify a name for the pipeline.
    10 Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    11 Specify the name of the output to use when forwarding logs with this pipeline.
    12 Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
    13 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
    14 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    • A name to describe the pipeline.

    • The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.

    • The outputRefs is the name of the output to use.

    • Optional: String. One or more labels to add to the logs.

  2. Create the CR object:

    $ oc create -f <filename>.yaml

Adding log source information to message output

You can add namespace_name, pod_name, and container_name elements to the message field of the record by adding the addLogSource field to the syslog output in your ClusterLogForwarder custom resource (CR).

  spec:
    outputs:
    - name: syslogout
      syslog:
        addLogSource: true
        facility: user
        payloadKey: message
        rfc: RFC3164
        severity: debug
        tag: mytag
      type: syslog
      url: tls://syslog-receiver.openshift-logging.svc:24224
    pipelines:
    - inputRefs:
      - application
      name: test-app
      outputRefs:
      - syslogout

This configuration is compatible with both RFC3164 and RFC5424.

Example syslog message output without AddLogSource
<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - -  {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56}
Example syslog message output with AddLogSource
<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - -  namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76}

Syslog parameters

You can configure the following for the syslog outputs. For more information, see the syslog RFC3164 or RFC5424 RFC.

  • facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:

    • 0 or kern for kernel messages

    • 1 or user for user-level messages, the default.

    • 2 or mail for the mail system

    • 3 or daemon for system daemons

    • 4 or auth for security/authentication messages

    • 5 or syslog for messages generated internally by syslogd

    • 6 or lpr for the line printer subsystem

    • 7 or news for the network news subsystem

    • 8 or uucp for the UUCP subsystem

    • 9 or cron for the clock daemon

    • 10 or authpriv for security authentication messages

    • 11 or ftp for the FTP daemon

    • 12 or ntp for the NTP subsystem

    • 13 or security for the syslog audit log

    • 14 or console for the syslog alert log

    • 15 or solaris-cron for the scheduling daemon

    • 16-23 or local0 through local7 for locally used facilities

  • Optional: payloadKey: The record field to use as payload for the syslog message.

    Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog.

  • rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.

  • severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:

    • 0 or Emergency for messages indicating the system is unusable

    • 1 or Alert for messages indicating action must be taken immediately

    • 2 or Critical for messages indicating critical conditions

    • 3 or Error for messages indicating error conditions

    • 4 or Warning for messages indicating warning conditions

    • 5 or Notice for messages indicating normal but significant conditions

    • 6 or Informational for messages indicating informational messages

    • 7 or Debug for messages indicating debug-level messages, the default

  • tag: Specifies a record field to use as a tag on the syslog message.

  • trimPrefix: Remove the specified prefix from the tag.
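For instance, the following minimal sketch combines several of these parameters in one syslog output. The output name, URL, secret name, and the tag and trimPrefix values are illustrative placeholders, not values required by the product:

  spec:
    outputs:
    - name: syslog-east            # hypothetical output name
      syslog:
        facility: local0           # keyword form; the decimal value 16 is equivalent
        severity: Informational
        rfc: RFC3164
        tag: tag_key               # record field whose value is used as the syslog tag
        trimPrefix: rec_           # remove this prefix from the tag value
      type: syslog
      url: tls://rsyslogserver.east.example.com:514
      secret:
        name: syslog-secret        # required because the URL uses the tls prefix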

Additional RFC5424 syslog parameters

The following parameters apply to RFC5424:

  • appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424.

  • msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424.

  • procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424.
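As a sketch only, an RFC5424 output that sets these fields might look like the following. The output name, URL, secret name, and the appName, msgID, and procID values are illustrative placeholders:

  spec:
    outputs:
    - name: syslog-rfc5424         # hypothetical output name
      syslog:
        rfc: RFC5424
        appName: myapp             # APP-NAME string identifying the sending application
        msgID: mymsg               # MSGID string identifying the message type
        procID: myproc             # PROCID string; a change in value indicates a reporting discontinuity
      type: syslog
      url: tls://rsyslogserver.example.com:514
      secret:
        name: syslog-secret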

Forwarding logs to a Kafka broker

You can forward logs to an external Kafka broker in addition to, or instead of, the default log store.

To configure log forwarding to an external Kafka instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection.

Procedure
  1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> (1)
      namespace: <log_forwarder_namespace> (2)
    spec:
      serviceAccountName: <service_account_name> (3)
      outputs:
       - name: app-logs (4)
         type: kafka (5)
         url: tls://kafka.example.devlab.com:9093/app-topic (6)
         secret:
           name: kafka-secret (7)
       - name: infra-logs
         type: kafka
         url: tcp://kafka.devlab2.example.com:9093/infra-topic (8)
       - name: audit-logs
         type: kafka
         url: tls://kafka.qelab.example.com:9093/audit-topic
         secret:
            name: kafka-secret-qe
      pipelines:
       - name: app-topic (9)
         inputRefs: (10)
         - application
         outputRefs: (11)
         - app-logs
         labels:
           logType: "application" (12)
       - name: infra-topic (13)
         inputRefs:
         - infrastructure
         outputRefs:
         - infra-logs
         labels:
           logType: "infra"
       - name: audit-topic
         inputRefs:
         - audit
         outputRefs:
         - audit-logs
         labels:
           logType: "audit"
    1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4 Specify a name for the output.
    5 Specify the kafka type.
    6 Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    7 If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. In legacy implementations, the secret must exist in the openshift-logging project.
    8 Optional: To send an insecure output, use a tcp prefix in front of the URL. Also omit the secret key and its name from this output.
    9 Optional: Specify a name for the pipeline.
    10 Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    11 Specify the name of the output to use when forwarding logs with this pipeline.
    12 Optional: String. One or more labels to add to the logs.
    13 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    • A name to describe the pipeline.

    • The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.

    • The outputRefs is the name of the output to use.

    • Optional: String. One or more labels to add to the logs.

  2. Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example:

    # ...
    spec:
      outputs:
      - name: app-logs
        type: kafka
        secret:
          name: kafka-secret-dev
        kafka:  (1)
          brokers: (2)
            - tls://kafka-broker1.example.com:9093/
            - tls://kafka-broker2.example.com:9093/
          topic: app-topic (3)
    # ...
    1 Specify a kafka key that has a brokers and topic key.
    2 Use the brokers key to specify an array of one or more brokers.
    3 Use the topic key to specify the target topic that receives the logs.
  3. Apply the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml

Forwarding logs to Amazon CloudWatch

You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default log store.

To configure log forwarding to CloudWatch, you must create a ClusterLogForwarder custom resource (CR) with an output for CloudWatch, and a pipeline that uses the output.

Procedure
  1. Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: cw-secret
      namespace: openshift-logging
    data:
      aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK
      aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=
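    The values under data must be base64 encoded. As a minimal sketch, assuming your AWS credentials are available in the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, you can produce the encoded values like this:

    $ echo -n "$AWS_ACCESS_KEY_ID" | base64
    $ echo -n "$AWS_SECRET_ACCESS_KEY" | base64

    Alternatively, the oc create secret generic command with --from-literal options performs the encoding for you.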
  2. Create the secret. For example:

    $ oc apply -f cw-secret.yaml
  3. Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the name of the secret. For example:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> (1)
      namespace: <log_forwarder_namespace> (2)
    spec:
      serviceAccountName: <service_account_name> (3)
      outputs:
       - name: cw (4)
         type: cloudwatch (5)
         cloudwatch:
           groupBy: logType (6)
           groupPrefix: <group prefix> (7)
           region: us-east-2 (8)
         secret:
            name: cw-secret (9)
      pipelines:
        - name: infra-logs (10)
          inputRefs: (11)
            - infrastructure
            - audit
            - application
          outputRefs:
            - cw (12)
    1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4 Specify a name for the output.
    5 Specify the cloudwatch type.
    6 Optional: Specify how to group the logs:
    • logType creates log groups for each log type.

    • namespaceName creates a log group for each application namespace. It also creates separate log groups for infrastructure and audit logs.

    • namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.

    7 Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups.
    8 Specify the AWS region.
    9 Specify the name of the secret that contains your AWS credentials.
    10 Optional: Specify a name for the pipeline.
    11 Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    12 Specify the name of the output to use when forwarding logs with this pipeline.
  4. Create the CR object:

    $ oc create -f <file-name>.yaml
Example: Using ClusterLogForwarder with Amazon CloudWatch

Here, you see an example ClusterLogForwarder custom resource (CR) and the log data that it outputs to Amazon CloudWatch.

Suppose that you are running a ROSA cluster named mycluster. The following command returns the cluster’s infrastructureName, which you will use to compose aws commands later on:

$ oc get Infrastructure/cluster -ojson | jq .status.infrastructureName
"mycluster-7977k"

To generate log data for this example, you run a busybox pod in a namespace called app. The busybox pod writes a message to stdout every three seconds:

$ oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done'
$ oc logs -f busybox
My life is my message
My life is my message
My life is my message
...

You can look up the UUID of the app namespace where the busybox pod runs:

$ oc get ns/app -ojson | jq .metadata.uid
"794e1e1a-b9f5-4958-a190-e76a9b53d7bf"

In your ClusterLogForwarder custom resource (CR), you configure the infrastructure, audit, and application log types as inputs to the all-logs pipeline. You also connect this pipeline to the cw output, which forwards the logs to a CloudWatch instance in the us-east-2 region:

apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
   - name: cw
     type: cloudwatch
     cloudwatch:
       groupBy: logType
       region: us-east-2
     secret:
        name: cw-secret
  pipelines:
    - name: all-logs
      inputRefs:
        - infrastructure
        - audit
        - application
      outputRefs:
        - cw

Each region in CloudWatch contains three levels of objects:

  • log group

    • log stream

      • log event

With groupBy: logType in the ClusterLogForwarder CR, the three log types in the inputRefs produce three log groups in Amazon CloudWatch:

$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.application"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"

Each of the log groups contains log streams:

$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName
"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log"
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log"
...
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log"
...

Each log stream contains log events. To see a log event from the busybox Pod, you specify its log stream from the application log group:

$ aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log
{
    "events": [
        {
            "timestamp": 1629422704178,
            "message": "{\"docker\":{\"container_id\":\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\"},\"kubernetes\":{\"container_name\":\"busybox\",\"namespace_name\":\"app\",\"pod_name\":\"busybox\",\"container_image\":\"docker.io/library/busybox:latest\",\"container_image_id\":\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\",\"pod_id\":\"870be234-90a3-4258-b73f-4f4d6e2777c7\",\"host\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"labels\":{\"run\":\"busybox\"},\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\",\"namespace_labels\":{\"kubernetes_io/metadata_name\":\"app\"}},\"message\":\"My life is my message\",\"level\":\"unknown\",\"hostname\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"pipeline_metadata\":{\"collector\":{\"ipaddr4\":\"10.0.216.3\",\"inputname\":\"fluent-plugin-systemd\",\"name\":\"fluentd\",\"received_at\":\"2021-08-20T01:25:08.085760+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-20T01:25:04.178986+00:00\",\"viaq_index_name\":\"app-write\",\"viaq_msg_id\":\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\",\"log_type\":\"application\",\"time\":\"2021-08-20T01:25:04+00:00\"}",
            "ingestionTime": 1629422744016
        },
...
Example: Customizing the prefix in log group names

In the log group names, you can replace the default infrastructureName prefix, mycluster-7977k, with an arbitrary string like demo-group-prefix. To make this change, you update the groupPrefix field in the ClusterLogForwarder CR:

cloudwatch:
    groupBy: logType
    groupPrefix: demo-group-prefix
    region: us-east-2

The value of groupPrefix replaces the default infrastructureName prefix:

$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"demo-group-prefix.application"
"demo-group-prefix.audit"
"demo-group-prefix.infrastructure"
Example: Naming log groups after application namespace names

For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace.

If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before.

If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following "Example: Naming log groups after application namespace UUIDs" section instead.

To create application log groups whose names are based on the names of the application namespaces, you set the value of the groupBy field to namespaceName in the ClusterLogForwarder CR:

cloudwatch:
    groupBy: namespaceName
    region: us-east-2

Setting groupBy to namespaceName affects the application log group only. It does not affect the audit and infrastructure log groups.

In Amazon CloudWatch, the namespace name appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.app log group instead of mycluster-7977k.application:

$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.app"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"

If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace.

Example: Naming log groups after application namespace UUIDs

For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace.

If you delete an application namespace object and create a new one, CloudWatch creates a new log group.

If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding "Example: Naming log groups after application namespace names" section instead.

To name log groups after application namespace UUIDs, you set the value of the groupBy field to namespaceUUID in the ClusterLogForwarder CR:

cloudwatch:
    groupBy: namespaceUUID
    region: us-east-2

In Amazon CloudWatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf log group instead of mycluster-7977k.application:

$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"

The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups.

Creating a secret for AWS CloudWatch with an existing AWS role

If you have an existing role for AWS, you can create a secret for AWS with STS using the oc create secret --from-literal command.

Procedure
  • In the CLI, enter the following to generate a secret for AWS:

    $ oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions
    Example Secret
    apiVersion: v1
    kind: Secret
    metadata:
      namespace: openshift-logging
      name: my-secret-name
    stringData:
      role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions

Forwarding logs to Amazon CloudWatch from STS enabled clusters

For clusters with AWS Security Token Service (STS) enabled, you must create the AWS IAM roles and policies that enable log forwarding, and a ClusterLogForwarder custom resource (CR) with an output for CloudWatch.

Prerequisites
  • Logging for Red Hat OpenShift: 5.5 and later

Procedure
  1. Prepare the AWS account:

    1. Create an IAM policy JSON file with the following content:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:DescribeLogGroups",
              "logs:DescribeLogStreams",
              "logs:PutLogEvents",
              "logs:PutRetentionPolicy"
            ],
            "Resource": "arn:aws:logs:*:*:*"
          }
        ]
      }
    2. Create an IAM trust JSON file with the following content:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Federated": "arn:aws:iam::<your_aws_account_id>:oidc-provider/<openshift_oidc_provider>" (1)
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
              "StringEquals": {
                "<openshift_oidc_provider>:sub": "system:serviceaccount:openshift-logging:logcollector" (2)
              }
            }
          }
        ]
      }
      1 Specify your AWS account ID and the OpenShift OIDC provider endpoint. Obtain the endpoint by running the following command:
      $ rosa describe cluster \
        -c $(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}') \
        -o yaml | awk '/oidc_endpoint_url/ {print $2}' | cut -d '/' -f 3,4
      2 Specify the OpenShift OIDC endpoint again.
    3. Create the IAM role:

      $ aws iam create-role \
        --role-name "<your_rosa_cluster_name>-RosaCloudWatch" \
        --assume-role-policy-document file://<your_trust_file_name>.json \
        --query Role.Arn \
        --output text

      Save the output. You will use it in the next steps.

    4. Create the IAM policy:

      $ aws iam create-policy \
      --policy-name "RosaCloudWatch" \
      --policy-document file://<your_policy_file_name>.json \
      --query Policy.Arn \
      --output text

      Save the output. You will use it in the next steps.
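
      If you are working in a shell, one option is to capture the ARN in a variable when you create the policy so that it is available for the next step. This is a sketch only; the POLICY_ARN variable name is illustrative:

        $ POLICY_ARN=$(aws iam create-policy \
          --policy-name "RosaCloudWatch" \
          --policy-document file://<your_policy_file_name>.json \
          --query Policy.Arn \
          --output text)

      You can then pass "$POLICY_ARN" to the --policy-arn option in the next step.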

    5. Attach the IAM policy to the IAM role:

      $ aws iam attach-role-policy \
       --role-name "<your_rosa_cluster_name>-RosaCloudWatch" \
       --policy-arn <policy_ARN> (1)
      1 Replace policy_ARN with the output you saved while creating the policy.
  2. Create a Secret YAML file for the Red Hat OpenShift Logging Operator:

    apiVersion: v1
    kind: Secret
    metadata:
      name: cloudwatch-credentials
      namespace: openshift-logging
    stringData:
      credentials: |-
        [default]
        sts_regional_endpoints = regional
        role_arn = <role_ARN>  (1)
        web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
    1 Replace role_ARN with the output you saved while creating the role.
  3. Create the secret:

    $ oc apply -f cloudwatch-credentials.yaml
  4. Create or edit a ClusterLogForwarder custom resource:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> (1)
      namespace: <log_forwarder_namespace> (2)
    spec:
      serviceAccountName: <service_account_name> (3)
      outputs:
       - name: cw (4)
         type: cloudwatch (5)
         cloudwatch:
           groupBy: logType (6)
           groupPrefix: <group prefix> (7)
           region: us-east-2 (8)
         secret:
            name: <your_secret_name> (9)
      pipelines:
        - name: to-cloudwatch (10)
          inputRefs: (11)
            - infrastructure
            - audit
            - application
          outputRefs:
            - cw (12)
    1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4 Specify a name for the output.
    5 Specify the cloudwatch type.
    6 Optional: Specify how to group the logs:
    • logType creates log groups for each log type

    • namespaceName creates a log group for each application namespace. Infrastructure and audit logs are unaffected, remaining grouped by logType.

    • namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.

    7 Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups.
    8 Specify the AWS region.
    9 Specify the name of the secret you created previously.
    10 Optional: Specify a name for the pipeline.
    11 Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    12 Specify the name of the output to use when forwarding logs with this pipeline.
Additional resources