Platform Logs - there are three types of auditable activities: Data Access, Admin Activity, and System Event. Each activity type has a corresponding log type.

Platform Data Access Logs

  • Disabled by default
  • The IAM role roles/resourcemanager.organizationAdmin is required to enable them at the organization level
  • Will be enabled at the organization level to ensure all logging is captured (see the sketch after this list)
  • Will be enabled in the default configuration setting
  • Once enabled, these logs cannot be disabled without the appropriate permissions.
  • Attempts to disable them are themselves logged (in Cloud Monitoring, formerly Stackdriver Monitoring)
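
A minimal sketch of enabling Data Access logs for all services at the organization level. The organization ID 123456789 and the file name policy.json are placeholders; the audit configuration is applied by editing the organization's IAM policy:

# Export the current organization IAM policy (org ID is a placeholder)
gcloud organizations get-iam-policy 123456789 --format=json > policy.json

# Edit policy.json to add an auditConfigs block, for example:
# "auditConfigs": [
#   {
#     "service": "allServices",
#     "auditLogConfigs": [
#       { "logType": "ADMIN_READ" },
#       { "logType": "DATA_READ" },
#       { "logType": "DATA_WRITE" }
#     ]
#   }
# ]

# Write the updated policy back to the organization
gcloud organizations set-iam-policy 123456789 policy.json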

Platform Admin Activity Logs

  • Always enabled
  • Cannot be disabled or reconfigured
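
Because they are always on, Admin Activity logs can be read immediately. A quick sketch (my-project is a placeholder):

# Read the five most recent Admin Activity audit log entries
gcloud logging read \
       'logName:"cloudaudit.googleapis.com%2Factivity"' \
       --limit=5 --project=my-project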

Platform System Event Logs

  • Generated by Google systems and NOT by end user activity
  • Cannot be disabled or reconfigured
  • Only exist for Google Compute Engine
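
System Event logs can be read the same way; only the log name changes (my-project is again a placeholder):

# Read the five most recent System Event audit log entries
gcloud logging read \
       'logName:"cloudaudit.googleapis.com%2Fsystem_event"' \
       --limit=5 --project=my-project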

To retain any of these logs for analysis, create a dedicated log bucket with Log Analytics enabled:

# Create a new logging bucket with Log Analytics enabled
gcloud logging buckets create audit-folder-logs --location=global \
       --enable-analytics \
       --retention-days=90 \
       --project=my-audit-logs

# Optional - link a BigQuery dataset
gcloud logging links create audit_folder_logs --bucket audit-folder-logs \
       --location=global \
       --project=my-audit-logs
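
Once the link exists, the logs can be queried from BigQuery through the linked dataset's _AllLogs view. A sketch, assuming the Log Analytics snake_case schema (timestamp, severity, text_payload):

# Query the linked dataset created by the link above
bq query --use_legacy_sql=false \
       'SELECT timestamp, severity, text_payload
        FROM `my-audit-logs.audit_folder_logs._AllLogs`
        ORDER BY timestamp DESC
        LIMIT 10'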

The Cloud Logging to Pub/Sub Topic to Cloud Function Deadly Combo

Logs can be routed via a sink to a Pub/Sub topic, and you can filter which log entries are sent. A Cloud Function subscribes to the topic and fires when a message is received. The function can then call the Compute Engine API to shut down an instance, or perform any other task. A sketch of the gcloud wiring appears after the list below.

  • Filter and export Cloud Logging messages to Cloud Pub/Sub
  • Trigger Cloud Functions from Pub/Sub
  • Write a Cloud Function to do simple processing
  • Call the Compute Engine API to update a VM's metadata

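A minimal sketch of that wiring. The names vm-stop-topic, vm-audit-sink, stop-vm, handle_log_entry, and my-project are all placeholders, and the log filter is only an example:

# 1. Create the Pub/Sub topic
gcloud pubsub topics create vm-stop-topic --project=my-project

# 2. Create a sink that routes matching log entries to the topic
gcloud logging sinks create vm-audit-sink \
       pubsub.googleapis.com/projects/my-project/topics/vm-stop-topic \
       --log-filter='resource.type="gce_instance" AND severity>=WARNING' \
       --project=my-project

# 3. Grant the sink's writer identity permission to publish
#    (the writer identity is printed when the sink is created)
gcloud pubsub topics add-iam-policy-binding vm-stop-topic \
       --member="serviceAccount:SINK_WRITER_IDENTITY" \
       --role="roles/pubsub.publisher" \
       --project=my-project

# 4. Deploy a Cloud Function that fires on each message;
#    handle_log_entry is defined in main.py in the current directory
gcloud functions deploy stop-vm \
       --runtime=python312 \
       --trigger-topic=vm-stop-topic \
       --entry-point=handle_log_entry \
       --source=. \
       --project=my-project
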
Visualize Log Exports in BigQuery and Looker Studio

  1. Create a sink from filtered logs (e.g. all Cloud Functions logs) to a BigQuery dataset, as sketched after this list.
  2. The schema is automatically generated for you.
  3. Now, in BigQuery, you will have a 'Visualize in Looker Studio' option.
  4. From here, you will be able to pick the columns. Note that a single column may contain the entire payload - look for 'textPayload' or something similar.
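
A sketch of step 1. The sink name functions-to-bq, project my-project, and dataset functions_logs are placeholders; the dataset must already exist:

# Route all Cloud Functions logs to a BigQuery dataset
gcloud logging sinks create functions-to-bq \
       bigquery.googleapis.com/projects/my-project/datasets/functions_logs \
       --log-filter='resource.type="cloud_function"' \
       --project=my-project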

Old Way of Routing Logs - Log Router

Sinks control how Cloud Logging routes logs. Using sinks, you can route some or all of your logs to supported destinations. Some of the reasons that you might want to control how your logs are routed include the following:

  • To store logs that are unlikely to be read but that must be retained for compliance purposes.
  • To organize your logs in buckets in a format that is useful to you.
  • To use big-data analysis tools on your logs.
  • To stream your logs to other applications, other repositories, or third parties. For example, if you want to export your logs from Google Cloud so that you can view them on a third-party platform, then configure a sink to route your log entries to Pub/Sub.

Sinks belong to a given Google Cloud resource: Google Cloud projects, billing accounts, folders, and organizations. When the resource receives a log entry, it routes the log entry according to the sinks contained by that resource and, if enabled, any ancestral sinks (sinks belonging to parent resources in the resource hierarchy). The log entry is sent to the destination associated with each matching sink.
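
For example, an aggregated sink at the organization level routes logs from every child folder and project. A sketch (the organization ID, bucket name, and filter are placeholders):

# Aggregated sink: route all audit logs in the org to a Cloud Storage bucket
gcloud logging sinks create org-audit-sink \
       storage.googleapis.com/my-org-audit-bucket \
       --organization=123456789 \
       --include-children \
       --log-filter='logName:"cloudaudit.googleapis.com"'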

Cloud Logging provides two predefined sinks for each Google Cloud project, billing account, folder, and organization: _Required and _Default. All logs that are generated in a resource are automatically processed through these two sinks and then are stored either in the correspondingly named _Required or _Default buckets.
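
You can inspect these predefined sinks directly (my-project is a placeholder):

# List all sinks in a project, including _Required and _Default
gcloud logging sinks list --project=my-project

# Show the filter and destination of the _Default sink
gcloud logging sinks describe _Default --project=my-project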