Client-side telemetry: Deploying a Typescript Lambda function with CDK

From writing a Lambda to compiling it and adding it to our stack.

Graeme Zinck

Senior software engineer at LVL Wellbeing

This is the 3rd article in a 5-part series:

Rolling your own client-side telemetry solution using AWS CDK

A step-by-step walkthrough on deploying a client-side telemetry stack using AWS CDK, Lambda, API Gateway, and CloudWatch.

  1. Client-side telemetry: Series overview
  2. Client-side telemetry: Setting up a new CDK project
  3. Client-side telemetry: Deploying a Typescript Lambda function with CDK
  4. Client-side telemetry: Lambda permissions and APIs in CDK
  5. Client-side telemetry: Alarms

It's Lambda time! If you've been following along, you'll know we've already set up our CDK project and deployed a CloudWatch Log Group and an SNS topic. This article is where things get spicy: we'll be deploying a Typescript Lambda function to our stack in just two commands (hint: it'll look a lot like npm run build && npm run deploy 😉).

Writing a Lambda in Typescript

If we wanted to write the lambda in Javascript, it would be pretty easy: we could add a Javascript file to the repository and reference it in the CDK code. However, writing in Javascript means no type safety. Also, without types, you need to track down the ever-elusive AWS documentation for what goes in and out of Lambda functions. Better to use Typescript!

Since AWS Lambda does not support Typescript natively, we need to compile it before importing it into our CDK code.

We'll start by creating our folder structure!

# Folder with the lambda functions we are adding
mkdir lambda

# Folder with the lambda for logging errors
mkdir lambda/error-logger

# Folder with the build scripts
mkdir scripts

Adding an error logger Typescript package

We can add a simple error-logging Typescript package that accepts an error, dumps it into CloudWatch Logs, and increments a metric in CloudWatch Metrics (which will trigger alarms in prod).

We'll start with the package.json. Note that the main field points to the compiled index.js file instead of a Typescript file. We'll talk about compilation later.

// lambda/error-logger/package.json
{
  "name": "error-logger",
  "description": "Logs errors to CloudWatch",
  "version": "1.0.0",
  "main": "dist/index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "tsc",
    "watch": "tsc -w"
  },
  "keywords": [],
  "dependencies": {
    "@aws-sdk/client-cloudwatch": "^3.577.0",
    "@aws-sdk/client-cloudwatch-logs": "^3.577.0"
  },
  "devDependencies": {
    "@types/aws-lambda": "^8.10.138"
  }
}

To help with code completion, head into the folder and install the dependencies.

cd lambda/error-logger
npm install

Writing an error logger

To get started, we'll create a simple error logger in index.ts. The following code accepts a request, parses the JSON body, and validates the parameters.

// lambda/error-logger/src/index.ts
import { Handler, APIGatewayEvent } from "aws-lambda";

const createResponse = (statusCode: number, message: string) => {
  return {
    statusCode,
    headers: {
      "Content-Type": "text/json",
      "Access-Control-Allow-Origin": "*",
    },
    body: JSON.stringify({ message }),
  };
};

interface ErrorBody {
  severity?: number;
  errorCode?: string;
}

export const handler: Handler<APIGatewayEvent> = async (event) => {
  let errBody: ErrorBody = {};
  try {
    errBody = JSON.parse(event.body || "{}");
  } catch (error) {
    console.log("Error parsing error body", error);
    return createResponse(400, "Bad request: invalid JSON body");
  }

  const severity = Number(errBody.severity) || 0;
  const errorCode = errBody.errorCode;
  if (severity > 5 || severity < 1 || !errorCode) {
    return createResponse(400, "Bad request: missing required parameters");
  }

  try {
    // Log the error
  } catch (error) {
    console.log("Error logging error", error);
    return createResponse(500, "Error logging failed");
  }

  return createResponse(200, "Error logged successfully");
};
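
If you want to poke at the validation logic before deploying anything, you can invoke the handler directly from a throwaway script. The sketch below is purely illustrative: the local-test.ts file name, running it with ts-node, and the fake event are all assumptions, not part of the stack we're deploying.

// lambda/error-logger/src/local-test.ts (hypothetical scratch file; don't ship it)
import { handler } from "./index";

const main = async () => {
  // Our handler only reads `body`, so a minimal fake event is enough here
  const event = { body: JSON.stringify({ severity: 5, errorCode: "ERR_UNCAUGHT" }) };

  const response = await Promise.resolve(handler(event as any, {} as any, () => {}));
  console.log(response); // expect a 200; an invalid or incomplete body returns a 400
};

main();

Remove errorCode from the fake body and you should see the 400 response instead.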

Now, we need to log the error to CloudWatch Logs! To do this, we'll create a separate client file for sending data to the service. It's pretty much all boilerplate!

// lambda/error-logger/src/clients/cloudwatchLogsClient.ts
import {
  CloudWatchLogsClient,
  PutLogEventsCommand,
} from "@aws-sdk/client-cloudwatch-logs";

const region = process.env.AWS_REGION;
const client = new CloudWatchLogsClient({ region });

interface LogErrorProps {
  body: object;
  logGroupName?: string;
  logStreamName?: string;
}

export const logError = async ({
  body,
  logGroupName,
  logStreamName,
}: LogErrorProps) => {
  const logData = new PutLogEventsCommand({
    logGroupName,
    logStreamName,
    logEvents: [
      {
        message: JSON.stringify(body),
        timestamp: Date.now(),
      },
    ],
  });

  await client.send(logData);
};

Now, we can go back to index.ts and use the client!

We're pulling in a few environment variables that will determine where the logs go inside CloudWatch. We'll be setting those up in CDK shortly!

// lambda/error-logger/src/index.ts
// import { Handler, APIGatewayEvent } from 'aws-lambda';
import { logError } from './clients/cloudwatchLogsClient';

const logGroupName = process.env.LOG_GROUP_NAME;
const logStreamName = process.env.LOG_STREAM_NAME;

// ...

// export const handler: Handler<APIGatewayEvent> = async (event) => {
//   ...
//   try {
       await logError({
         body: { ...errBody, errorCode, severity },
         logGroupName,
         logStreamName,
       });
//   } catch (error) {
//     return createResponse(500, 'Error logging failed');
//   }
//   ...
// }

Unfortunately, as of this writing, you can't trigger alarms directly from CloudWatch Logs, so we need another step. The easiest way is to use CloudWatch Metrics. Metrics also make it really easy to build graphs illustrating what types of errors are occurring.

Essentially, we want to tell CloudWatch Metrics "Hey, I'm logging an error in the staging environment with severity 5 and error code ERR_UNCAUGHT! Please add it to some pretty graphs and trigger alarms as needed."

The logistics are a little gross: we need to set up a MetricDatum with dimensions for the attributes we're tracking. In the code below, we actually emit two MetricDatum objects:

  1. The first does not have the error type. This allows us to easily alarm on the count of errors (of any error type).
  2. The second does have the error type. This is useful to create graphs that show the count of errors of a specific type.

If we wanted to create alarms using only the second metric, we would have to limit the number of possible error codes, since each error code would require a separate alarm. That's a lot of alarms (and work, and possibly cost), which is why we'll use the first metric for alarming!

// lambda/error-logger/src/clients/cloudwatchClient.ts
import {
  CloudWatchClient,
  MetricDatum,
  PutMetricDataCommand,
} from "@aws-sdk/client-cloudwatch";

const region = process.env.AWS_REGION;
const client = new CloudWatchClient({ region });

interface LogErrorMetricProps {
  errorCode: string;
  severity: string;
  metricNamespace?: string;
  environment?: string;
}

export const logErrorMetric = async ({
  errorCode,
  severity,
  metricNamespace,
  environment,
}: LogErrorMetricProps) => {
  const severityMetric: MetricDatum = {
    MetricName: "client.error",
    Dimensions: [
      {
        Name: "error.severity",
        Value: severity,
      },
      {
        Name: "environment",
        Value: environment,
      },
    ],
    Unit: "Count",
    Value: 1,
  };

  // Send two metrics: one with the error code and one without
  // This is required to alarm on the severity level for all errors,
  // while providing some extra data on the CloudWatch console for
  // specific errors
  const metricData = new PutMetricDataCommand({
    MetricData: [
      severityMetric,
      {
        ...severityMetric,
        Dimensions: [
          ...(severityMetric.Dimensions || []),
          {
            Name: "error.type",
            Value: errorCode,
          },
        ],
      },
    ],
    Namespace: `${metricNamespace}/Errors`,
  });
  await client.send(metricData);
};
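
To make the two-data-point idea concrete, here's roughly what one call produces. This is purely illustrative: ERR_UNCAUGHT, my-app, and staging are stand-ins for whatever your app actually sends and for the configuration we'll wire up below.

// Illustrative only (imagine a hypothetical scratch file next to index.ts)
import { logErrorMetric } from "./clients/cloudwatchClient";

const example = async () => {
  // Reporting a severity-5 uncaught error from the staging environment...
  await logErrorMetric({
    errorCode: "ERR_UNCAUGHT",
    severity: "5",
    metricNamespace: "my-app",
    environment: "staging",
  });
  // ...publishes two data points to the "my-app/Errors" namespace:
  //   1. client.error with dimensions error.severity=5, environment=staging
  //   2. client.error with dimensions error.severity=5, environment=staging, error.type=ERR_UNCAUGHT
};

example().catch(console.error);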

We're ready to add this to our index.ts file! Once again, there are a few more environment variables I'm pulling in; we'll set those up soon.

// lambda/error-logger/src/index.ts
// import { Handler, APIGatewayEvent } from 'aws-lambda';
// import { logError } from './clients/cloudwatchLogsClient';
import { logErrorMetric } from './clients/cloudwatchClient';

// const logGroupName = process.env.LOG_GROUP_NAME;
// const logStreamName = process.env.LOG_STREAM_NAME;
const environment = process.env.ENVIRONMENT;
const metricNamespace = process.env.ERROR_METRIC_NAMESPACE;

// ...

// export const handler: Handler<APIGatewayEvent> = async (event) => {
//   ...
//   try {
//     await logError({
//       body: { ...errBody, errorCode, severity },
//       logGroupName,
//       logStreamName,
//     });
       await logErrorMetric({
         errorCode,
         severity: severity.toString(),
         metricNamespace,
         environment,
       });
//   } catch (error) {
//     return createResponse(500, 'Error logging failed');
//   }
//   ...
// }

We've built a fully-functional Lambda! All that's left is to compile it and add it to our stack.

Compiling the Lambda

First, we need to set up a tsconfig.json file to put all the compiled code in one place.

// lambda/error-logger/tsconfig.json
{
  "extends": "../../tsconfig.json",
  "compilerOptions": {
    "outDir": "./dist"
  },
  "include": ["src/**/*"]
}

To avoid checking in compiled code, add the following to the .gitignore:

# .gitignore
dist/

We can now compile the lambda with this command:

cd lambda/error-logger
npm run build

This will create a dist folder with the compiled index.js file, which is already referenced in the package.json.

Compiling when you have multiple lambdas in one repository

For now, we're only adding one lambda to our stack (the error-logger). However, we could easily add more lambdas! For instance, we might add a lambda to record the latency for each client-side request, or the time to paint the screen. These metrics would let us monitor performance and alert us to potential issues.

Ideally, we should be able to compile all the lambdas in one go. We can do this easily with a script!

#!/bin/bash
# scripts/compile-lambdas.sh

shopt -s nullglob

for dir in lambda/*/; do
    (
        cd "$dir" || exit 1
        PACKAGE_NAME=$(npm pkg get name | tr -d '"')
        echo "Installing packages for '$PACKAGE_NAME'"

        if ! npm install; then
            echo "Install failed for '$PACKAGE_NAME'"
            exit 1
        fi

        if ! npm run build; then
            echo "Build failed for '$PACKAGE_NAME'"
            exit 1
        fi
    )
    # If any subshell exits with error, stop the script
    [ $? -eq 0 ] || exit 1
done

After adding the script, make sure to make it executable:

chmod u+x scripts/compile-lambdas.sh

Now, update the root package.json to add the build script:

// package.json
{
  // ...
  "scripts": {
    "build": "scripts/compile-lambdas.sh"
    // ...
  }
  // ...
}

And now it's time to compile! Head back to the root of the repository and run:

npm run build

Deploying the Lambda

Now that we have a compiled lambda, we can add it to our stack for deployment.

Here, we're going to use a CDK "construct" to create our lambda. Essentially, a construct is a building block in a stack, keeping our resources organized and reusable. Constructs must always inherit from the AWS Construct class. There are some really handy reasons why, but for now, just take my word for it! 😉

We'll start by creating Api.ts, which will be responsible for creating our lambda function.

Most of the code is boilerplate, but there are a few critical pieces:

  1. We add a lambda.Function() construct to create our lambda function.
  2. We pass the path to our compiled lambda function into the code parameter.
  3. We pass in the environment variables required by the lambda function.

// lib/constructs/Api.ts
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as apigateway from "aws-cdk-lib/aws-apigateway";

interface ApiProps {
  environment: string;
  serviceName: string;
  metricNamespace: string;
  logGroupName: string;
  logStreamName: string;
}

export class Api extends Construct {
  public readonly errorLoggerFn: lambda.IFunction;

  constructor(scope: Construct, id: string, props: ApiProps) {
    super(scope, id);

    // Spoiler alert: we're missing a few things here...

    this.errorLoggerFn = new lambda.Function(this, "ErrorLoggerFunction", {
      runtime: lambda.Runtime.NODEJS_20_X,
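      // The handler path is relative to the asset root: dist/index.js, which exports `handler`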
      handler: "dist/index.handler",
      code: lambda.Code.fromAsset("lambda/error-logger"),
      environment: {
        ENVIRONMENT: props.environment,
        LOG_GROUP_NAME: props.logGroupName,
        ERROR_METRIC_NAMESPACE: props.metricNamespace,
        LOG_STREAM_NAME: props.logStreamName,
      },
    });
  }
}

Before the lambda is officially part of the stack, we need to add the Api construct to infra-fe-telemetry-stack.ts!

// lib/stacks/infra-fe-telemetry-stack.ts
// ...
import { Api } from '../constructs/Api';

export interface InfraFETelemetryStackProps extends cdk.StackProps {
  // ...
  /**
   * The namespace for errors. Typically the name of the service that _sends_ the errors.
   *
   * This is probably going to be different from the service name, which is the name of the
   * service that _receives_ the errors.
   */
  readonly metricNamespace?: string;
}

export class InfraFETelemetryStack extends cdk.Stack {
// constructor(
//   scope: Construct,
//   id: string,
//   {
//    domainName,
//    subdomain = 'fe-telemetry',
//    environment = 'production',
//    serviceName = 'fe-telemetry',
      metricNamespace = 'my-app',
//    ...props
//   }: InfraFETelemetryStackProps
// ) {
//    ...
      new Api(this, 'Api', {
        environment,
        serviceName,
        metricNamespace,
        logGroupName,
        logStreamName: LOG_STREAM_NAME,
      });
// }
}

You'll probably want to define the metricNamespace in your bin/infra-fe-telemetry.ts file. It should be the name of the service that sends the errors.
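
For reference, that might look something like the sketch below. The surrounding props come from the earlier articles in the series, the stack id is whatever you chose when you created the project, and my-web-app is just a placeholder for the name of the client app that sends the errors.

// bin/infra-fe-telemetry.ts (sketch)
import * as cdk from 'aws-cdk-lib';
import { InfraFETelemetryStack } from '../lib/stacks/infra-fe-telemetry-stack';

const app = new cdk.App();
new InfraFETelemetryStack(app, 'InfraFETelemetryStack', {
  // ...domainName, environment, serviceName, and the other props from earlier...
  metricNamespace: 'my-web-app', // the service that _sends_ the errors
});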

With that out of the way, we can deploy the stack and see the changes in CloudFormation:

npm run build
npx cdk deploy

Done! We just set up a Lambda in Typescript, compiled it, and added it to our stack.

However, we're still missing some critical pieces. How do requests from the internet get to our lambda function? Does the lambda function even have permissions to send logs to CloudWatch? (Spoiler: it doesn't!) And how can we set up alarms so we can monitor our errors?

We'll be diving into those topics in the next two articles. 🚀