
Deploying a Laravel App with AWS CDK + ecspresso

shogogg

What we'll do

  • Set up Laravel
  • Get the Laravel application running and viewable locally with Docker Compose
  • Deploy the Laravel application to an AWS ECS environment with AWS CDK + ecspresso

What we won't do

  • Building out the Laravel application itself

First things first, run the composer create-project command to set up a Laravel application.

$ composer create-project laravel/laravel aws-cdk-ecspresso-laravel-example-2024

Once setup finishes, cd into the newly created directory and try php artisan serve for a start.

$ cd aws-cdk-ecspresso-laravel-example-2024/
$ php artisan serve

   INFO  Server running on [http://127.0.0.1:8000].

  Press Ctrl+C to stop the server

Open the displayed URL and Laravel's welcome page appears. The recent ones sure look slick.

Laravel, right after setup


From here, prepare the following so the local development environment runs with Docker Compose instead of php artisan serve:

  • an nginx container image (Dockerfile)
  • a php-fpm container image (Dockerfile)
  • docker-compose.yaml

First, prepare the nginx container image. Create a new infra directory at the project root and add the following files under it.

infra/docker/nginx
├── Dockerfile
└── assets
    ├── app.conf.template
    ├── nginx.conf
    └── nginx.gzip.conf

Starting with the Dockerfile. I could use the stock nginx image as-is, but I want an unprivileged and distroless image, so it ends up looking like the following. The configuration files are also provided separately, so the ones bundled with the base image are stripped out.

infra/docker/nginx/Dockerfile
#
# nginx-resources
#
FROM nginxinc/nginx-unprivileged:1.27-bookworm AS nginx-resources

ARG TIMEZONE="Asia/Tokyo"

USER root

RUN mkdir -p /opt/var/cache/nginx && \
    cp -a --parents /usr/lib/nginx /opt && \
    cp -a --parents /usr/share/nginx /opt && \
    cp -a --parents /var/log/nginx /opt && \
    cp -aL --parents /var/run /opt && \
    cp -a --parents /etc/nginx /opt && \
    cp -a --parents /etc/passwd /opt && \
    cp -a --parents /etc/group /opt && \
    cp -a --parents /usr/sbin/nginx /opt && \
    cp -a --parents /usr/sbin/nginx-debug /opt && \
    cp -a --parents /lib/$(uname -m)-linux-gnu/ld-* /opt && \
    cp -a --parents /lib/$(uname -m)-linux-gnu/libz.so.* /opt && \
    cp -a --parents /lib/$(uname -m)-linux-gnu/libc* /opt && \
    cp -a --parents /lib/$(uname -m)-linux-gnu/libdl* /opt && \
    cp -a --parents /lib/$(uname -m)-linux-gnu/libpthread* /opt && \
    cp -a --parents /lib/$(uname -m)-linux-gnu/libcrypt* /opt && \
    cp -a --parents /usr/lib/$(uname -m)-linux-gnu/libssl.so.* /opt && \
    cp -a --parents /usr/lib/$(uname -m)-linux-gnu/libcrypto.so.* /opt && \
    cp -a --parents /usr/lib/$(uname -m)-linux-gnu/libpcre2-8.so.* /opt && \
    cp -a /etc/passwd /opt/etc/passwd && \
    cp -a /etc/group /opt/etc/group && \
    cp -a /usr/share/zoneinfo/${TIMEZONE} /opt/etc/localtime && \
    rm -rf /opt/etc/nginx/*.conf && \
    rm -rf /opt/etc/nginx/conf.d/*.conf

#
# nginx-config
#
FROM nginx-resources AS nginx-config

ARG APP_DOMAIN_NAME="localhost"

COPY infra/docker/nginx/assets/app.conf.template /tmp/

RUN echo "$APP_DOMAIN_NAME" && \
    envsubst '$APP_DOMAIN_NAME' < /tmp/app.conf.template > /opt/etc/nginx/conf.d/app.conf

#
# nginx-base
#
FROM gcr.io/distroless/base-debian12:nonroot AS nginx-base

COPY --from=nginx-resources /opt /
COPY infra/docker/nginx/assets/*.conf /etc/nginx/

USER www-data

EXPOSE 8080 8443

CMD ["nginx"]

#
# nginx-dev
#
FROM nginx-base AS nginx-dev

COPY --from=nginx-config /opt/etc/nginx/conf.d/ /etc/nginx/conf.d
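The long `cp -a --parents` chain in the nginx-resources stage relies on GNU cp rebuilding each source path under the destination directory; that is how a minimal root filesystem gets assembled under /opt. A quick local sketch (GNU coreutils assumed; the /tmp/cp-demo path is just for illustration):

```shell
# GNU cp's --parents flag recreates the full relative source path under
# the destination, instead of copying only the final path component.
mkdir -p /tmp/cp-demo/etc/nginx /tmp/cp-demo/opt
echo 'user nginx;' > /tmp/cp-demo/etc/nginx/nginx.conf
cd /tmp/cp-demo
cp -a --parents etc/nginx /tmp/cp-demo/opt
ls /tmp/cp-demo/opt/etc/nginx
```

The file lands at /tmp/cp-demo/opt/etc/nginx/nginx.conf, mirroring its original path, so COPYing /opt into / in a later stage restores everything in place.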

Next come the nginx configuration files. Create nginx.conf under infra/docker/nginx/assets as follows.

infra/docker/nginx/assets/nginx.conf
daemon off;
worker_processes auto;

error_log /dev/stderr notice;
pid /tmp/nginx.pid;

events {
  worker_connections 1024;
}

http {
  proxy_temp_path /tmp/proxy_temp;
  client_body_temp_path /tmp/client_temp;
  fastcgi_temp_path /tmp/fastcgi_temp;
  uwsgi_temp_path /tmp/uwsgi_temp;
  scgi_temp_path /tmp/scgi_temp;

  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /dev/stdout main;

  sendfile on;
  keepalive_timeout 65;

  include /etc/nginx/nginx.gzip.conf;
  include /etc/nginx/conf.d/*.conf;
}

Next, create nginx.gzip.conf, which holds the gzip-related settings.

infra/docker/nginx/assets/nginx.gzip.conf
gzip on;
gzip_buffers 16 8k;
gzip_comp_level 6;
gzip_disable "msie6" "Mozilla/4";
gzip_http_version 1.0;
gzip_min_length 2048;
gzip_proxied any;
gzip_static always;
gzip_vary on;
gzip_types
  text/css
  text/plain
  text/javascript
  application/javascript
  application/json
  application/x-javascript
  application/xml
  application/xml+rss
  application/xhtml+xml
  application/x-font-ttf
  application/x-font-opentype
  application/vnd.ms-fontobject
  image/svg+xml
  image/x-icon
  application/rss+xml
  application/atom+xml;

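Not nginx-specific, but the trade-off behind gzip_comp_level is easy to feel with the gzip CLI: higher levels spend more CPU for a smaller payload. A rough sketch comparing levels 1 and 9 on the same input (sizes will vary with the input):

```shell
# Compare gzip output sizes at compression levels 1 and 9.
seq 1 20000 > /tmp/sample.txt
l1=$(gzip -1 -c /tmp/sample.txt | wc -c)
l9=$(gzip -9 -c /tmp/sample.txt | wc -c)
echo "level 1: ${l1} bytes, level 9: ${l9} bytes"
```

Level 6, used above, is nginx's common middle ground between the two.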
Finally, prepare app.conf.template, the (template for the) server configuration file.

infra/docker/nginx/assets/app.conf.template
server {
  listen 8080;
  server_name $APP_DOMAIN_NAME;

  charset utf-8;
  index index.php;
  root /app/public;

  add_header Referrer-Policy no-referrer always;
  add_header Strict-Transport-Security 'max-age=63072000; includeSubDomains; preload';
  add_header X-Content-Type-Options nosniff;
  add_header X-Frame-Options SAMEORIGIN;
  add_header X-XSS-Protection "1; mode=block";

  error_page 404 /index.php;

  location / {
    try_files $uri $uri/ /index.php?$query_string;
  }
  location = /favicon.ico {
    access_log off;
    log_not_found off;
  }
  location = /robots.txt {
    access_log off;
    log_not_found off;
  }
  location ~ \.php$ {
      fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
      fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
      include fastcgi_params;
  }
  location ~ /\.(?!well-known).* {
      deny all;
  }
}

That takes care of the nginx side.


Next, prepare the php-fpm container image. The file layout looks like this.

infra/docker/app
├── Dockerfile
└── assets
    ├── app-base
    │   └── app.ini
    ├── app-cli-base
    │   └── app-cli.ini
    ├── app-cli-dev
    │   └── app-cli-dev.ini
    ├── app-server-base
    │   ├── app-server.ini
    │   └── www.conf
    └── app-server-dev
        └── app-server-dev.ini

Starting with the Dockerfile.

infra/docker/app/Dockerfile
#
# app-base
#
FROM php:8.3.12-fpm-bookworm AS app-base

ARG TIMEZONE="Asia/Tokyo"

ENV TZ=${TIMEZONE} \
    LANG=ja_JP.UTF-8 \
    LANGUAGE=ja_JP:jp \
    LC_ALL=ja_JP.UTF-8

RUN set -eux; \
    apt-get update; \
    ### Install Runtime Dependencies
    apt-get install -y --no-install-recommends \
        libfreetype6 \
        libjpeg62-turbo \
        libpng16-16 \
        libzip4 \
        locales \
        zlib1g \
        ; \
    ### Install Build Dependencies
    savedAptMark="$(apt-mark showmanual)"; \
    apt-get install -y --no-install-recommends \
        libfreetype6-dev \
        libjpeg-dev \
        libpng-dev \
        libzip-dev \
        zlib1g-dev \
        ; \
    ### Install PHP Extensions
    docker-php-ext-configure gd --with-freetype --with-jpeg; \
    docker-php-ext-install -j$(nproc) \
        gd \
        opcache \
        pcntl \
        pdo_mysql \
        zip \
        ; \
    yes "" | pecl install apcu; docker-php-ext-enable apcu; \
    ### Setup Locales
    locale-gen ja_JP.UTF-8; \
    localedef -f UTF-8 -i ja_JP ja_JP; \
    ### Cleanup
    apt-mark auto '.*' > /dev/null; \
    [ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; \
    apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
    apt-get clean; \
    rm -rf /var/lib/apt/lists/*; \
    rm -rf /usr/local/etc/php-fpm.d/zz-docker.conf

COPY infra/docker/app/assets/app-base/app.ini /usr/local/etc/php/conf.d/

WORKDIR /app

#
# app-cli-base
#
FROM app-base AS app-cli-base

COPY infra/docker/app/assets/app-cli-base/app-cli.ini /usr/local/etc/php/conf.d/

ENTRYPOINT ["docker-php-entrypoint"]

CMD ["php"]

#
# app-server-base
#
FROM app-base AS app-server-base

COPY infra/docker/app/assets/app-server-base/app-server.ini /usr/local/etc/php/conf.d/
COPY infra/docker/app/assets/app-server-base/www.conf /usr/local/etc/php-fpm.d/

VOLUME /var/run/php-fpm

CMD ["php-fpm"]

#
# app-cli-dev
#
FROM app-cli-base AS app-cli-dev

COPY infra/docker/app/assets/app-cli-dev/app-cli-dev.ini /usr/local/etc/php/conf.d/

RUN set -eux; yes "" | pecl install xdebug; docker-php-ext-enable xdebug

#
# app-server-dev
#
FROM app-server-base AS app-server-dev

COPY infra/docker/app/assets/app-server-dev/app-server-dev.ini /usr/local/etc/php/conf.d/

RUN set -eux; yes "" | pecl install xdebug; docker-php-ext-enable xdebug

Next, add app.ini, which holds settings common to all the images.

infra/docker/app/assets/app-base/app.ini
expose_php = Off
max_file_uploads = 100
post_max_size = 20M

[date]
date.timezone = Asia/Tokyo

Then add app-cli.ini with the CLI-oriented settings.

infra/docker/app/assets/app-cli-base/app-cli.ini
memory_limit = 2G

Then add app-cli-dev.ini with the Xdebug settings for the CLI in the development environment.

infra/docker/app/assets/app-cli-dev/app-cli-dev.ini
[xdebug]
xdebug.mode = debug
xdebug.start_with_request = yes
xdebug.client_host = host.docker.internal
xdebug.client_port = 9000
xdebug.idekey = aws-cdk-ecspresso-laravel-example-2024

For the server side, add app-server.ini (the PHP settings) and www.conf (the PHP-FPM settings).

infra/docker/app/assets/app-server-base/app-server.ini
memory_limit = 512M

infra/docker/app/assets/app-server-base/www.conf
[global]
daemonize = no
error_log = /proc/self/fd/2
events.mechanism = epoll

[www]
listen = /var/run/php-fpm/php-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660

access.log = /dev/null
clear_env = no
catch_workers_output = yes
decorate_workers_output = no

user = www-data
group = www-data

pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.process_idle_timeout = 10s
pm.max_requests = 0
pm.status_path = /status

Finally, add app-server-dev.ini, the server-side settings for the development environment. As with the CLI, it contains the Xdebug settings.

infra/docker/app/assets/app-server-dev/app-server-dev.ini
[xdebug]
xdebug.mode = debug
xdebug.start_with_request = yes
xdebug.client_host = host.docker.internal
xdebug.client_port = 9000
xdebug.idekey = aws-cdk-ecspresso-laravel-example-2024

That completes the php-fpm preparation as well.


Create docker-compose.yaml so that nginx and php-fpm can be started with docker compose.

docker-compose.yaml
---
volumes:
  app-data:

services:
  app:
    build:
      context: .
      dockerfile: infra/docker/app/Dockerfile
      target: app-server-dev
    image: aws-cdk-ecspresso-laravel-example-2024/app-server-dev:latest
    restart: always
    volumes:
      - .:/app
      - app-data:/var/run/php-fpm
  nginx:
    build:
      context: .
      dockerfile: infra/docker/nginx/Dockerfile
      target: nginx-dev
    image: aws-cdk-ecspresso-laravel-example-2024/nginx-dev:latest
    restart: always
    ports:
      - "80:8080"
    volumes:
      - .:/app
      - app-data:/var/run/php-fpm

Two points to note:

  • The project root is mounted at /app in both containers so they can access the development files.
  • Both containers share the app-data volume holding PHP-FPM's UNIX socket, so nginx and php-fpm communicate over a UNIX socket instead of TCP/IP.

Lastly, prepare the .env file.

infra/aws/.env
APP_AWS_ACCOUNT=************
APP_AWS_REGION=ap-northeast-1

APP_STACK_NAME=ExampleLaravelAppStack
APP_HOSTED_ZONE_NAME=example.com
APP_DOMAIN_NAME=app.example.com
APP_LOG_BUCKET_NAME=example-laravel-app-alb-log-storage

APP_ECS_CLUSTER_NAME=example-laravel-app-cluster
APP_ECS_SERVICE_NAME=example-laravel-app-service

I want to run docker compose with the .env loaded, so I use Task. Create Taskfile.yaml as follows.

Taskfile.yaml
version: '3'

dotenv:
  - .env
  - infra/aws/.env

tasks:
  #
  # Docker tasks
  #
  docker:compose:build:
    cmds:
      - docker compose build
  docker:compose:up:
    cmds:
      - docker compose up -d
  docker:compose:down:
    cmds:
      - docker compose down

At this point, running task docker:compose:up brings up the containers, and browsing to http://localhost should show Laravel's welcome page.


Next, prepare the infrastructure with AWS CDK. First, add an AWS CDK project. There are various ways to do this, but this time I take the approach of using npm's workspaces feature to add the AWS CDK project under the project root.

First, create the infra/aws directory and run cdk init inside it.

$ mkdir infra/aws
$ cd infra/aws
$ npx cdk init --language typescript
(output omitted)

cdk init runs npm install along the way, creating an infra/aws/node_modules directory; we delete it and switch to npm's workspaces feature instead. Start by adding the following line to the package.json in the root directory.

package.json
     "type": "module",
+    "workspaces": ["infra/aws"],
     "scripts": {

Next, delete infra/aws/node_modules and run npm install again from the root directory.

$ cd ../../
$ rm -rf infra/aws/node_modules
$ npm install
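For a feel of what workspaces do, here is a minimal, self-contained sketch (the /tmp/ws-demo layout and the "infra-aws" package name are hypothetical stand-ins): npm links each workspace into the root node_modules and hoists dependencies there, so a single root-level install covers everything.

```shell
# Minimal npm workspaces layout: the root package.json lists the
# workspace, and `npm install` links it into the root node_modules.
mkdir -p /tmp/ws-demo/infra/aws
cd /tmp/ws-demo
printf '{ "name": "root", "private": true, "workspaces": ["infra/aws"] }\n' > package.json
printf '{ "name": "infra-aws", "version": "1.0.0" }\n' > infra/aws/package.json
npm install --no-audit --no-fund >/dev/null 2>&1
ls node_modules
```

After the install, node_modules/infra-aws is a symlink to the workspace directory, which is also why `-w infra/aws` works for targeting installs at that package.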

Next, as in the previous post, bring in vitest, biome, rimraf, and so on.

$ npm uninstall jest ts-jest @types/jest -w infra/aws
$ npm install --save-dev husky lint-staged rimraf vitest
$ npm install --save-dev --save-exact @biomejs/biome
$ npm install --save dotenv -w infra/aws
$ npx husky init
$ npx biome init --jsonc

Do the various bits of configuration by following the previous post.

.husky/pre-commit
npx lint-staged
biome.jsonc
{
  "$schema": "https://biomejs.dev/schemas/1.9.3/schema.json",
  "vcs": {
    // Enable Biome's VCS integration
    "enabled": true,
    "clientKind": "git",
    // Ignore files listed in .gitignore
    "useIgnoreFile": true
  },
  "files": {
    "include": ["infra/aws/bin/**/*.ts", "infra/aws/lib/**/*.ts", "infra/aws/test/**/*.ts", "*.json", "*.jsonc"],
    "ignore": ["**/*.d.ts"]
  },
  "linter": {
    "enabled": true,
    "rules": {
      "recommended": true
    }
  },
  "formatter": {
    "enabled": true,
    "indentStyle": "space",
    "indentWidth": 2,
    "lineWidth": 120
  },
  "organizeImports": {
    "enabled": true
  },
  "javascript": {
    "formatter": {
      "quoteStyle": "single",
      "semicolons": "asNeeded"
    }
  }
}
infra/aws/package.json
   "scripts": {
     "build": "tsc",
-    "watch": "tsc -w",
-    "test": "jest",
-    "cdk": "cdk"
+    "cdk": "cdk",
+    "clean": "rimraf {bin,lib}/*.{d.ts,js}",
+    "test": "vitest run",
+    "test:watch": "vitest watch",
+    "watch": "tsc -w"
   },
package.json
   "scripts": {
-    "dev": "vite",
     "build": "vite build",
+    "check": "biome check --write",
+    "check:dry-run": "biome check",
+    "dev": "vite",
     "prepare": "husky"
   },

The AWS CDK setup is now complete.


Define the stack using AWS CDK. The key points:

  • The ECS task and service are deployed with ecspresso, so the AWS CDK side stops at defining the ECS cluster.
  • The fiddly ALB-related resources are bundled into a custom Construct.
infra/aws/lib/constructs/application-load-balancer.ts
import * as cdk from 'aws-cdk-lib'
import * as certificationManager from 'aws-cdk-lib/aws-certificatemanager'
import * as ec2 from 'aws-cdk-lib/aws-ec2'
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2'
import * as route53 from 'aws-cdk-lib/aws-route53'
import * as targets from 'aws-cdk-lib/aws-route53-targets'
import type * as s3 from 'aws-cdk-lib/aws-s3'
import { Construct } from 'constructs'

const HTTP = 80
const HTTPS = 443

export interface ApplicationLoadBalancerProps {
  domainName: string
  hostedZone: route53.IHostedZone
  logBucket: s3.IBucket
  vpc: ec2.IVpc
}

export class ApplicationLoadBalancer extends Construct {
  public readonly targetGroupArn: string
  public readonly securityGroup: ec2.ISecurityGroup

  constructor(scope: Construct, id: string, props: ApplicationLoadBalancerProps) {
    super(scope, id)
    const { domainName, hostedZone, vpc } = props

    // ACM: Certificate
    const certificate = new certificationManager.Certificate(this, 'Certificate', {
      domainName,
      validation: certificationManager.CertificateValidation.fromDns(hostedZone),
    })

    // Security Group
    const securityGroup = new ec2.SecurityGroup(this, 'SecurityGroup', { vpc })
    securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(HTTP))
    securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(HTTPS))

    // Application Load Balancer
    const alb = new elbv2.ApplicationLoadBalancer(this, 'Alb', {
      internetFacing: true,
      securityGroup,
      vpc,
    })
    alb.logAccessLogs(props.logBucket)

    // ALB: Target Group
    const targetGroup = new elbv2.ApplicationTargetGroup(this, 'TargetGroup', {
      port: HTTP,
      stickinessCookieDuration: cdk.Duration.days(1),
      targetType: elbv2.TargetType.IP,
      vpc,
    })

    // ALB: HTTPS Listener
    alb.addListener('HttpsListener', {
      certificates: [certificate],
      defaultTargetGroups: [targetGroup],
      port: HTTPS,
      protocol: elbv2.ApplicationProtocol.HTTPS,
    })

    // ALB: HTTP Listener
    const httpListener = alb.addListener('HttpListener', {
      defaultTargetGroups: [targetGroup],
      port: HTTP,
      protocol: elbv2.ApplicationProtocol.HTTP,
    })

    // ALB: HTTP Listener Rule - Redirect to HTTPS
    new elbv2.ApplicationListenerRule(this, 'HttpListenerRule', {
      action: elbv2.ListenerAction.redirect({
        permanent: true,
        port: HTTPS.toString(),
        protocol: elbv2.ApplicationProtocol.HTTPS,
      }),
      conditions: [elbv2.ListenerCondition.pathPatterns(['*'])],
      listener: httpListener,
      priority: 1,
    })

    // Route 53: A/AAAA Record to ALB
    const recordProps: route53.ARecordProps & route53.AaaaRecordProps = {
      recordName: props.domainName,
      target: route53.RecordTarget.fromAlias(new targets.LoadBalancerTarget(alb)),
      zone: hostedZone,
    }
    new route53.ARecord(this, 'ARecord', recordProps)
    new route53.AaaaRecord(this, 'AaaaRecord', recordProps)

    // Assign to class properties
    this.targetGroupArn = targetGroup.targetGroupArn
    this.securityGroup = securityGroup
  }
}

Then, using the Construct we prepared, define LaravelAppStack as follows. The article I referenced passes ARNs and the like to ecspresso via SSM Parameters, but CloudFormation has an Outputs feature, so I used that instead.

infra/aws/lib/laravel-app-stack.ts
import * as cdk from 'aws-cdk-lib'
import * as ec2 from 'aws-cdk-lib/aws-ec2'
import * as ecr from 'aws-cdk-lib/aws-ecr'
import * as ecs from 'aws-cdk-lib/aws-ecs'
import * as iam from 'aws-cdk-lib/aws-iam'
import * as logs from 'aws-cdk-lib/aws-logs'
import * as route53 from 'aws-cdk-lib/aws-route53'
import * as s3 from 'aws-cdk-lib/aws-s3'
import type { Construct } from 'constructs'
import { ApplicationLoadBalancer } from './constructs/application-load-balancer'

interface LaravelAppStackProps extends cdk.StackProps {
  domainName: string
  hostedZoneName: string
  ecr: {
    repositories: Array<{
      id: string
      repositoryName: string
    }>
  }
  ecs: {
    clusterName: string
    serviceName: string
  }
  s3: {
    logBucketName: string
  }
  vpc: {
    cidr: string
  }
}

export class LaravelAppStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: LaravelAppStackProps) {
    super(scope, id, props)

    // Route 53: Hosted Zone
    const hostedZone = route53.HostedZone.fromLookup(this, 'HostedZone', {
      domainName: props.hostedZoneName,
    })

    // S3: Bucket for ALB access logs
    const logBucket = new s3.Bucket(this, 'LogBucket', {
      bucketName: props.s3.logBucketName,
      lifecycleRules: [{ expiration: cdk.Duration.days(365) }],
    })

    // VPC
    const vpc = new ec2.Vpc(this, 'Vpc', {
      ipAddresses: ec2.IpAddresses.cidr(props.vpc.cidr),
      maxAzs: 2,
      natGateways: 1,
      subnetConfiguration: [
        {
          cidrMask: 24,
          name: 'Public',
          subnetType: ec2.SubnetType.PUBLIC,
        },
        {
          cidrMask: 24,
          name: 'Private',
          subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
        },
      ],
    })
    const vpcSubnetIds = vpc.privateSubnets.map(subnet => subnet.subnetId)
    if (vpcSubnetIds.length !== 2) {
      throw new Error('Unexpected number of private subnets')
    }

    // ALB
    const alb = new ApplicationLoadBalancer(this, 'Alb', {
      domainName: props.domainName,
      hostedZone,
      logBucket,
      vpc,
    })

    // ECR: Repositories
    for (const { id, repositoryName } of props.ecr.repositories) {
      new ecr.Repository(this, `Ecr${id}`, {
        repositoryName,
        lifecycleRules: [
          {
            description: 'hold 10 images',
            maxImageCount: 10,
          },
        ],
      })
    }

    // ECS: Cluster
    new ecs.Cluster(this, 'EcsCluster', {
      clusterName: props.ecs.clusterName,
      vpc,
    })

    // IAM: Role for ECS
    const ecsTaskExecutionRole = new iam.Role(this, 'EcsTaskExecutionRole', {
      assumedBy: new iam.ServicePrincipal('ecs-tasks.amazonaws.com'),
      managedPolicies: [iam.ManagedPolicy.fromAwsManagedPolicyName('PowerUserAccess')],
    })

    // CloudWatch: Log Groups for ECS
    new logs.LogGroup(this, 'EcsNginxLogGroup', {
      logGroupName: `/ecs/${props.ecs.serviceName}/nginx`,
      retention: logs.RetentionDays.TEN_YEARS,
    })
    new logs.LogGroup(this, 'EcsAppServerLogGroup', {
      logGroupName: `/ecs/${props.ecs.serviceName}/app-server`,
      retention: logs.RetentionDays.TEN_YEARS,
    })
    new logs.LogGroup(this, 'EcsAppBatchLogGroup', {
      logGroupName: `/ecs/${props.ecs.serviceName}/app-batch`,
      retention: logs.RetentionDays.TEN_YEARS,
    })

    // Security Group: for ECS
    const ecsSecurityGroup = new ec2.SecurityGroup(this, 'EcsSecurityGroup', { vpc })
    ecsSecurityGroup.addIngressRule(alb.securityGroup, ec2.Port.tcp(8080))

    // Cloudformation Outputs
    new cdk.CfnOutput(this, 'PrivateSubnetAz1', {
      value: vpcSubnetIds[0],
    })
    new cdk.CfnOutput(this, 'PrivateSubnetAz2', {
      value: vpcSubnetIds[1],
    })
    new cdk.CfnOutput(this, 'EcsSecurityGroupId', {
      value: ecsSecurityGroup.securityGroupId,
    })
    new cdk.CfnOutput(this, 'AlbTargetGroupArn', {
      value: alb.targetGroupArn,
    })
    new cdk.CfnOutput(this, 'EcsTaskExecutionRoleArn', {
      value: ecsTaskExecutionRole.roleArn,
    })
  }
}

The code got too long and hit Zenn's character limit, so it continues in the next comment.


Continuing from the previous comment. Starting with the tests.

infra/aws/test/laravel-app-stack.test.ts
import * as cdk from 'aws-cdk-lib'
import { Match, Template } from 'aws-cdk-lib/assertions'
import { beforeAll, describe, it } from 'vitest'
import { LaravelAppStack } from '../lib/laravel-app-stack'

describe('LaravelAppStack', () => {
  let template: Template

  beforeAll(() => {
    const app = new cdk.App()
    const stack = new LaravelAppStack(app, 'TestStack', {
      env: {
        account: '123456789012',
        region: 'ap-northeast-1',
      },
      domainName: 'app.example.org',
      hostedZoneName: 'example.org',
      ecr: {
        repositories: [
          { id: 'foo', repositoryName: 'laravel-app/foo' },
          { id: 'bar', repositoryName: 'laravel-app/bar' },
          { id: 'baz', repositoryName: 'laravel-app/baz' },
        ],
      },
      ecs: {
        clusterName: 'test-laravel-app-cluster',
        serviceName: 'test-laravel-app-service',
      },
      s3: {
        logBucketName: 'example-log-storage',
      },
      vpc: {
        cidr: '192.168.0.0/16',
      },
    })
    template = Template.fromStack(stack)
  })

  it('has a S3 bucket for ALB access logs', () => {
    template.resourceCountIs('AWS::S3::Bucket', 1)
  })

  it('has a S3 bucket with the specified name', () => {
    template.hasResourceProperties('AWS::S3::Bucket', {
      BucketName: 'example-log-storage',
    })
  })

  it('has a VPC', () => {
    template.resourceCountIs('AWS::EC2::VPC', 1)
  })

  it('has a VPC with the specified CIDR', () => {
    template.hasResourceProperties('AWS::EC2::VPC', {
      CidrBlock: '192.168.0.0/16',
    })
  })

  describe('VPC', () => {
    it('has a NAT gateway', () => {
      template.resourceCountIs('AWS::EC2::NatGateway', 1)
    })

    it('has a Internet Gateway', () => {
      template.resourceCountIs('AWS::EC2::InternetGateway', 1)
    })

    it('has 4 subnets', () => {
      template.resourceCountIs('AWS::EC2::Subnet', 4)
    })

    describe('Availability Zone A', () => {
      it('has a public subnet', () => {
        template.hasResourceProperties('AWS::EC2::Subnet', {
          AvailabilityZone: 'dummy1a',
          MapPublicIpOnLaunch: true,
        })
      })

      it('has a private subnet', () => {
        template.hasResourceProperties('AWS::EC2::Subnet', {
          AvailabilityZone: 'dummy1a',
          MapPublicIpOnLaunch: false,
        })
      })
    })

    describe('Availability Zone B', () => {
      it('has a public subnet', () => {
        template.hasResourceProperties('AWS::EC2::Subnet', {
          AvailabilityZone: 'dummy1b',
          MapPublicIpOnLaunch: true,
        })
      })

      it('has a private subnet', () => {
        template.hasResourceProperties('AWS::EC2::Subnet', {
          AvailabilityZone: 'dummy1b',
          MapPublicIpOnLaunch: false,
        })
      })
    })
  })

  it('has 2 security groups', () => {
    template.resourceCountIs('AWS::EC2::SecurityGroup', 2)
  })

  it('has a security group for the Application Load Balancer', () => {
    const [vpcId] = Object.keys(template.findResources('AWS::EC2::VPC'))
    template.hasResourceProperties('AWS::EC2::SecurityGroup', {
      GroupDescription: 'TestStack/Alb/SecurityGroup',
      SecurityGroupIngress: [
        Match.objectLike({
          FromPort: 80,
          IpProtocol: 'tcp',
          ToPort: 80,
        }),
        Match.objectLike({
          FromPort: 443,
          IpProtocol: 'tcp',
          ToPort: 443,
        }),
      ],
      VpcId: { Ref: vpcId },
    })
  })

  it('has a security group for the ECS', () => {
    const [vpcId] = Object.keys(template.findResources('AWS::EC2::VPC'))
    template.hasResourceProperties('AWS::EC2::SecurityGroup', {
      GroupDescription: 'TestStack/EcsSecurityGroup',
      SecurityGroupEgress: [
        Match.objectLike({
          CidrIp: '0.0.0.0/0',
        }),
      ],
      VpcId: { Ref: vpcId },
    })
  })

  it('should allow traffic from the Application Load Balancer to the ECS', () => {
    const securityGroups = template.findResources('AWS::EC2::SecurityGroup')
    const [albSecurityGroupId] = Object.keys(securityGroups).filter(
      ref => securityGroups[ref].Properties.GroupDescription === 'TestStack/Alb/SecurityGroup',
    )
    const [ecsSecurityGroupId] = Object.keys(securityGroups).filter(
      ref => securityGroups[ref].Properties.GroupDescription === 'TestStack/EcsSecurityGroup',
    )
    template.hasResourceProperties('AWS::EC2::SecurityGroupIngress', {
      FromPort: 8080,
      ToPort: 8080,
      GroupId: { 'Fn::GetAtt': [ecsSecurityGroupId, 'GroupId'] },
      SourceSecurityGroupId: { 'Fn::GetAtt': [albSecurityGroupId, 'GroupId'] },
    })
  })

  it('has an ACM certificate for the Application Load Balancer', () => {
    template.resourceCountIs('AWS::CertificateManager::Certificate', 1)
  })

  it('has an ACM certificate with the specified domain name', () => {
    template.hasResourceProperties('AWS::CertificateManager::Certificate', {
      DomainName: 'app.example.org',
    })
  })

  it('has an Application Load Balancer', () => {
    template.resourceCountIs('AWS::ElasticLoadBalancingV2::LoadBalancer', 1)
  })

  it('has an Application Load Balancer that is internet-facing', () => {
    template.hasResourceProperties('AWS::ElasticLoadBalancingV2::LoadBalancer', {
      Scheme: 'internet-facing',
    })
  })

  it('has an Application Load Balancer with the public subnets', () => {
    const subnets = template.findResources('AWS::EC2::Subnet', {
      Properties: {
        MapPublicIpOnLaunch: true,
      },
    })
    const subnetIds = Object.keys(subnets).map(ref => ({ Ref: ref }))
    template.hasResourceProperties('AWS::ElasticLoadBalancingV2::LoadBalancer', {
      Subnets: subnetIds,
    })
  })

  it('has an Application Load Balancer with the security group', () => {
    const securityGroup = template.findResources('AWS::EC2::SecurityGroup', {
      Properties: {
        GroupDescription: 'TestStack/Alb/SecurityGroup',
      },
    })
    const [securityGroupId] = Object.keys(securityGroup)
    template.hasResourceProperties('AWS::ElasticLoadBalancingV2::LoadBalancer', {
      SecurityGroups: [{ 'Fn::GetAtt': [securityGroupId, 'GroupId'] }],
    })
  })

  it('has a target group', () => {
    template.resourceCountIs('AWS::ElasticLoadBalancingV2::TargetGroup', 1)
  })

  it('has a target group with the specified properties', () => {
    template.hasResourceProperties('AWS::ElasticLoadBalancingV2::TargetGroup', {
      Port: 80,
      Protocol: 'HTTP',
      TargetType: 'ip',
    })
  })

  it('has 2 Listeners', () => {
    template.resourceCountIs('AWS::ElasticLoadBalancingV2::Listener', 2)
  })

  it('has an HTTPS Listener', () => {
    const certificates = template.findResources('AWS::CertificateManager::Certificate')
    const [certificateArn] = Object.keys(certificates)
    template.hasResourceProperties('AWS::ElasticLoadBalancingV2::Listener', {
      Certificates: [{ CertificateArn: { Ref: certificateArn } }],
      Port: 443,
      Protocol: 'HTTPS',
    })
  })

  it('has an HTTP Listener', () => {
    template.hasResourceProperties('AWS::ElasticLoadBalancingV2::Listener', {
      Port: 80,
      Protocol: 'HTTP',
    })
  })

  it('should redirect HTTP to HTTPS', () => {
    const listeners = template.findResources('AWS::ElasticLoadBalancingV2::Listener', {
      Properties: {
        Protocol: 'HTTP',
      },
    })
    const [listenerArn] = Object.keys(listeners)
    template.hasResourceProperties('AWS::ElasticLoadBalancingV2::ListenerRule', {
      Actions: [
        {
          RedirectConfig: {
            Protocol: 'HTTPS',
            StatusCode: 'HTTP_301',
          },
          Type: 'redirect',
        },
      ],
      Conditions: [
        {
          Field: 'path-pattern',
          PathPatternConfig: {
            Values: ['*'],
          },
        },
      ],
      ListenerArn: { Ref: listenerArn },
      Priority: 1,
    })
  })

  it('has 2 RecordSets', () => {
    template.resourceCountIs('AWS::Route53::RecordSet', 2)
  })

  it('has an A RecordSet', () => {
    template.hasResourceProperties('AWS::Route53::RecordSet', {
      Name: 'app.example.org.',
      Type: 'A',
    })
  })

  it('has an AAAA RecordSet', () => {
    template.hasResourceProperties('AWS::Route53::RecordSet', {
      Name: 'app.example.org.',
      Type: 'AAAA',
    })
  })

  it('has the number of ECR repositories that is specified by the props', () => {
    template.resourceCountIs('AWS::ECR::Repository', 3)
  })

  it.each([['laravel-app/foo'], ['laravel-app/bar'], ['laravel-app/baz']])(
    'has an ECR repository with the specified name (%s)',
    repositoryName => {
      template.hasResourceProperties('AWS::ECR::Repository', {
        RepositoryName: repositoryName,
      })
    },
  )

  it('has an ECS Cluster', () => {
    template.resourceCountIs('AWS::ECS::Cluster', 1)
  })

  it('has an ECS Cluster with the specified name', () => {
    template.hasResourceProperties('AWS::ECS::Cluster', {
      ClusterName: 'test-laravel-app-cluster',
    })
  })

  it('has an IAM Role', () => {
    template.resourceCountIs('AWS::IAM::Role', 1)
  })

  it('has an IAM Role for ECS Task Execution', () => {
    template.hasResourceProperties('AWS::IAM::Role', {
      AssumeRolePolicyDocument: Match.objectLike({
        Statement: [
          {
            Action: 'sts:AssumeRole',
            Effect: 'Allow',
            Principal: { Service: 'ecs-tasks.amazonaws.com' },
          },
        ],
      }),
      ManagedPolicyArns: [
        Match.objectLike({
          'Fn::Join': ['', ['arn:', { Ref: 'AWS::Partition' }, ':iam::aws:policy/PowerUserAccess']],
        }),
      ],
    })
  })

  it('has 3 Log Groups', () => {
    template.resourceCountIs('AWS::Logs::LogGroup', 3)
  })

  it('has a Log Group for nginx', () => {
    template.hasResourceProperties('AWS::Logs::LogGroup', {
      LogGroupName: '/ecs/test-laravel-app-service/nginx',
      RetentionInDays: 3653,
    })
  })

  it('has a Log Group for app-server', () => {
    template.hasResourceProperties('AWS::Logs::LogGroup', {
      LogGroupName: '/ecs/test-laravel-app-service/app-server',
      RetentionInDays: 3653,
    })
  })

  it('has a Log Group for app-batch', () => {
    template.hasResourceProperties('AWS::Logs::LogGroup', {
      LogGroupName: '/ecs/test-laravel-app-service/app-batch',
      RetentionInDays: 3653,
    })
  })

  it('has an Output for the first subnet ID', () => {
    const subnets = template.findResources('AWS::EC2::Subnet', {
      Properties: {
        MapPublicIpOnLaunch: false,
      },
    })
    const subnetIds = Object.keys(subnets)
    template.hasOutput('PrivateSubnetAz1', {
      Value: { Ref: subnetIds[0] },
    })
  })

  it('has an Output for the second subnet ID', () => {
    const subnets = template.findResources('AWS::EC2::Subnet', {
      Properties: {
        MapPublicIpOnLaunch: false,
      },
    })
    const subnetIds = Object.keys(subnets)
    template.hasOutput('PrivateSubnetAz2', {
      Value: { Ref: subnetIds[1] },
    })
  })

  it('has an Output for the security group ID', () => {
    const securityGroups = template.findResources('AWS::EC2::SecurityGroup', {
      Properties: {
        GroupDescription: 'TestStack/EcsSecurityGroup',
      },
    })
    const [securityGroupId] = Object.keys(securityGroups)
    template.hasOutput('EcsSecurityGroupId', {
      Value: { 'Fn::GetAtt': [securityGroupId, 'GroupId'] },
    })
  })

  it('has an Output for the target group ARN', () => {
    const targetGroups = template.findResources('AWS::ElasticLoadBalancingV2::TargetGroup')
    const [targetGroupArn] = Object.keys(targetGroups)
    template.hasOutput('AlbTargetGroupArn', {
      Value: { Ref: targetGroupArn },
    })
  })

  it('has an Output for the ECS Task Execution Role ARN', () => {
    const roles = template.findResources('AWS::IAM::Role')
    const [roleArn] = Object.keys(roles)
    template.hasOutput('EcsTaskExecutionRoleArn', {
      Value: { 'Fn::GetAtt': [roleArn, 'Arn'] },
    })
  })
})

Next, create infra/aws/bin/laravel-app.ts as follows.

infra/aws/bin/laravel-app.ts
#!/usr/bin/env node
import 'source-map-support/register'
import * as cdk from 'aws-cdk-lib'
import * as dotenv from 'dotenv'
import { LaravelAppStack } from '../lib/laravel-app-stack'

// Load environment variables from .env file
dotenv.config()

const account = process.env.APP_AWS_ACCOUNT ?? process.env.CDK_DEFAULT_ACCOUNT
const region = process.env.APP_AWS_REGION ?? process.env.CDK_DEFAULT_REGION

const stackName = process.env.APP_STACK_NAME ?? 'ExampleLaravelAppStack'

const app = new cdk.App()
new LaravelAppStack(app, stackName, {
  env: {
    account,
    region,
  },
  hostedZoneName: process.env.APP_HOSTED_ZONE_NAME ?? 'example.com',
  domainName: process.env.APP_DOMAIN_NAME ?? 'app.example.com',
  ecr: {
    repositories: [
      { id: 'Nginx', repositoryName: 'aws-cdk-ecspresso-laravel-example-2024/nginx-prod' },
      { id: 'AppCli', repositoryName: 'aws-cdk-ecspresso-laravel-example-2024/app-cli-prod' },
      { id: 'AppServer', repositoryName: 'aws-cdk-ecspresso-laravel-example-2024/app-server-prod' },
    ],
  },
  ecs: {
    clusterName: process.env.APP_ECS_CLUSTER_NAME ?? 'example-laravel-app-cluster',
    serviceName: process.env.APP_ECS_SERVICE_NAME ?? 'example-laravel-app-service',
  },
  s3: {
    logBucketName: process.env.APP_LOG_BUCKET_NAME ?? 'example-log-bucket',
  },
  vpc: {
    cidr: '192.168.0.0/16',
  },
})
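The bin script above reads its configuration from environment variables via dotenv. For reference, a hypothetical infra/aws/.env might look like the following — every value is a placeholder matching the defaults in the script, not real account data:

```shell
# Hypothetical infra/aws/.env — all values below are placeholders.
APP_AWS_ACCOUNT=123456789012
APP_AWS_REGION=ap-northeast-1
APP_STACK_NAME=ExampleLaravelAppStack
APP_HOSTED_ZONE_NAME=example.com
APP_DOMAIN_NAME=app.example.com
APP_ECS_CLUSTER_NAME=example-laravel-app-cluster
APP_ECS_SERVICE_NAME=example-laravel-app-service
APP_LOG_BUCKET_NAME=example-log-bucket
```

Note that this same file is also referenced later by the Taskfile (`dotenv: - infra/aws/.env`), so keeping all deployment settings here avoids duplicating them.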

Don't forget to update cdk.json as well.

infra/aws/cdk.json
-  "app": "npx ts-node --prefer-ts-exts bin/aws.ts",
+  "app": "npx ts-node --prefer-ts-exts bin/laravel-app.ts",

Add cdk-related scripts to the AWS CDK project's package.json.

infra/aws/package.json
     "cdk": "cdk",
+    "cdk:deploy": "cdk deploy",
+    "cdk:destroy": "cdk destroy",
+    "cdk:diff": "cdk diff",
+    "cdk:synth": "cdk synth",
     "clean": "rimraf {bin,lib}/*.{d.ts,js} {bin,lib}/**/*.{d.ts,js}",

Finally, edit Taskfile.yaml to add the AWS CDK tasks.

Taskfile.yaml
   docker:compose:down:
     cmds:
       - docker compose down
+  #
+  # AWS CDK tasks
+  #
+  cdk:deploy:
+    cmds:
+      - npm run cdk:deploy -w infra/aws
+  cdk:destroy:
+    cmds:
+      - npm run cdk:destroy -w infra/aws
+  cdk:diff:
+    cmds:
+      - npm run cdk:diff -w infra/aws
+  cdk:synth:
+    cmds:
+      - npm run cdk:synth -w infra/aws

With that, the CDK setup is complete.

Article referenced:

https://qiita.com/yoyoyo_pg/items/5921801e7f674f4e1023

shogogg

For ECS to launch containers, it naturally needs to be able to pull the container images we prepared.

We could push them to Docker Hub, but this time we'll push to the ECR repositories created with AWS CDK.

First, extend each Dockerfile as follows so that the production images can be built.

infra/docker/nginx/Dockerfile
 #
 # nginx-dev
 #
 FROM nginx-base AS nginx-dev

 COPY --from=nginx-config /opt/etc/nginx/conf.d/ /etc/nginx/conf.d/

+
+#
+# nginx-prod
+#
+FROM nginx-base AS nginx-prod
+
+COPY --from=nginx-config /opt/etc/nginx/conf.d/ /etc/nginx/conf.d/
+
+COPY public/ /app/public/
infra/docker/app/Dockerfile
 #
 # app-server-dev
 #
 FROM app-server-base AS app-server-dev
 
 COPY infra/docker/app/assets/app-server-dev/app-server-dev.ini /usr/local/etc/php/conf.d/

 RUN set -eux; yes "" | pecl install xdebug; docker-php-ext-enable xdebug
+
+#
+# app-cli-prod
+#
+FROM app-cli-base AS app-cli-prod
+
+RUN mkdir -p /app
+RUN mv /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini
+
+COPY --from=composer:latest /usr/bin/composer /usr/bin/
+COPY composer.* /app/
+RUN composer install --no-dev --no-autoloader --no-progress --no-scripts
+
+COPY . /app
+RUN composer dump-autoload --no-dev --optimize
+
+RUN chown -R www-data:www-data /app
+
+#
+# app-server-prod
+#
+FROM app-server-base AS app-server-prod
+
+RUN mkdir -p /app
+RUN mv /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini
+
+COPY --from=composer:latest /usr/bin/composer /usr/bin/
+COPY composer.* /app/
+RUN composer install --no-dev --no-autoloader --no-progress --no-scripts
+
+COPY . /app
+RUN composer dump-autoload --no-dev --optimize
+
+RUN chown -R www-data:www-data /app

We could write a long build command for these by hand, but creating a docker-compose.yaml-style file and building via docker compose build is more readable and easier to maintain.

Here we'll create a dedicated docker-compose-build.yaml for building.

docker-compose-build.yaml
---
services:
  app-cli:
    build:
      context: .
      dockerfile: infra/docker/app/Dockerfile
      target: app-cli-prod
    image: aws-cdk-ecspresso-laravel-example-2024/app-cli-prod:latest
    platform: linux/amd64
  app-server:
    build:
      context: .
      dockerfile: infra/docker/app/Dockerfile
      target: app-server-prod
    image: aws-cdk-ecspresso-laravel-example-2024/app-server-prod:latest
    platform: linux/amd64
  nginx:
    build:
      context: .
      dockerfile: infra/docker/nginx/Dockerfile
      target: nginx-prod
      args:
        - TIMEZONE=Asia/Tokyo
        - APP_DOMAIN_NAME=${APP_DOMAIN_NAME}
    image: aws-cdk-ecspresso-laravel-example-2024/nginx-prod:latest
    platform: linux/amd64

Edit Taskfile.yaml so that task docker:build builds the production images.

Taskfile.yaml
 tasks:
   #
   # Docker tasks
   #
+  docker:build:
+    cmds:
+      - docker compose -f docker-compose-build.yaml --env-file=infra/aws/.env build
   docker:compose:build:
     cmds:
       - docker compose build

To be continued.

shogogg

Edit Taskfile.yaml so that images can be pushed to ECR.

Taskfile.yaml
 dotenv:
   - .env
   - infra/aws/.env
+
+vars:
+  AWS_REGION: '{{.APP_AWS_REGION}}'
+  AWS_ECR_HOST: '{{.APP_AWS_ACCOUNT}}.dkr.ecr.{{.APP_AWS_REGION}}.amazonaws.com'
+  DOCKER_IMAGE_PREFIX: aws-cdk-ecspresso-laravel-example-2024
+  DOCKER_IMAGES:
+    - app-cli-prod
+    - app-server-prod
+    - nginx-prod
+  DATE:
+    sh: date '+%Y%m%d%H%M%S'
+  REVISION:
+    sh: git rev-parse --short HEAD
+  TAG: '{{.DATE}}-{{.REVISION}}'
Taskfile.yaml
   cdk:synth:
     cmds:
       - npm run cdk:synth -w infra/aws
+  #
+  # AWS ECR tasks
+  #
+  aws:ecr:login:
+    cmds:
+      - aws ecr get-login-password --region {{.APP_AWS_REGION}} | docker login --username AWS --password-stdin {{.AWS_ECR_HOST}}
+  aws:ecr:push:
+    deps:
+      - docker:build
+    cmds:
+      - task: aws:ecr:login
+      - for: { var: DOCKER_IMAGES }
+        cmd: docker tag {{.DOCKER_IMAGE_PREFIX}}/{{.ITEM}}:latest {{.AWS_ECR_HOST}}/{{.DOCKER_IMAGE_PREFIX}}/{{.ITEM}}:{{.TAG}}
+      - for: { var: DOCKER_IMAGES }
+        cmd: docker tag {{.DOCKER_IMAGE_PREFIX}}/{{.ITEM}}:latest {{.AWS_ECR_HOST}}/{{.DOCKER_IMAGE_PREFIX}}/{{.ITEM}}:latest
+      - for: { var: DOCKER_IMAGES }
+        cmd: docker push {{.AWS_ECR_HOST}}/{{.DOCKER_IMAGE_PREFIX}}/{{.ITEM}}:{{.TAG}}
+      - for: { var: DOCKER_IMAGES }
+        cmd: docker push {{.AWS_ECR_HOST}}/{{.DOCKER_IMAGE_PREFIX}}/{{.ITEM}}:latest

Running task aws:ecr:push now does the following:

  1. Generate a tag from the current datetime plus the short Git revision.
  2. Run the docker:build task to build the Docker images.
  3. Run the aws:ecr:login task to log in to ECR.
  4. Tag each image for ECR with both the tag from step 1 and latest.
  5. Push the tagged images to ECR.
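As a quick sketch of step 1, the tag the Taskfile composes is just the timestamp and the short revision joined with a hyphen. The revision below is a hypothetical stand-in for the output of `git rev-parse --short HEAD`:

```shell
# Compose a TAG the same way the Taskfile's DATE/REVISION/TAG vars do.
DATE=$(date '+%Y%m%d%H%M%S')   # e.g. 20240101123045
REVISION=abc1234               # stand-in for: git rev-parse --short HEAD
TAG="${DATE}-${REVISION}"
echo "${TAG}"                  # e.g. 20240101123045-abc1234
```

Because the timestamp leads, tags sort chronologically in ECR, which makes it easy to see which image is newest while the revision part still pins each image to an exact commit.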

That completes the ECR setup (in effect, the container-image preparation). Next, we'll configure ecspresso.