
Trying to deploy a Remix app backed by a SQLite / LiteFS database to fly.io.


At some point a "LiteFS Cloud" section appeared in the fly.io dashboard. Following the guide, let's create one.

In the LiteFS creation dialog, enter the following and hit Create:

Name: mycluster
Region: Ashburn, Virginia (US) (iad)

A token is then displayed, so copy it to the clipboard and save it somewhere.


Create the app from my own Remix app boilerplate.

pnpm create remix --template coji/remix-app-boilerplate

It's set up for PostgreSQL, so edit it as follows to switch to SQLite.

prisma/schema.prisma
datasource db {
-  provider = "postgresql"
+  provider = "sqlite"
  url      = env("DATABASE_URL")
}

After copying .env.example to .env:

.env
- DATABASE_URL="postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable"
+ DATABASE_URL=file:../data/data.db?connection_limit=1

Also delete the migrations that had been created for PostgreSQL.

rm -R prisma/migrations

First, let's check that it runs locally.
Run the migration to create the DB:

$ pnpm prisma migrate dev
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": SQLite database "data.db" at "file:../data/data.db?connection_limit=1"

SQLite database data.db created at file:../data/data.db?connection_limit=1

✔ Enter a name for the new migration: … init
Applying migration `20230822125029_init`

The following migration(s) have been created and applied from new schema changes:

migrations/
  └─ 20230822125029_init/
    └─ migration.sql

Your database is now in sync with your schema.

✔ Generated Prisma Client (5.1.1 | library) to ./node_modules/.pnpm/@prisma+cl
ient@5.1.1_prisma@5.1.1/node_modules/@prisma/client in 31ms


Running seed command `tsx prisma/seed.ts` ...
Database has been seeded. 🌱

🌱  The seed command has been executed.

That worked, so start the remix dev server and try accessing it.

$ pnpm dev

> remix-app@0.1.0 dev /Users/coji/progs/spike/LiteFS/litefs-speedrun
> remix dev


 💿  remix dev

 info  building...
 info  built (1.2s)
Remix App Server started at http://localhost:3000 (http://192.168.0.14:3000)

Open localhost:3000 in the browser and confirm the page renders.


Following the steps in the original article, copy-paste a litefs.yml into the project root and edit it to match the app.

litefs.yml

proxy:
  addr: ':8080'
- target: 'localhost:8081'
+ target: 'localhost:3000'

exec:
-  - cmd: 'litefs-example -addr :8081 -dsn /litefs/db'
+  - cmd: 'pnpm start'
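
For context, the rest of the file is taken as-is from the example. The whole thing ends up looking roughly like this; this is a sketch modeled on the superfly/litefs-example config, so everything other than the two lines edited above is an assumption rather than something verified against this repo.

litefs.yml
# directory where the FUSE mount exposes the SQLite database to the app
fuse:
  dir: '/litefs'

# LiteFS's own internal data, kept on the attached Fly volume
data:
  dir: '/var/lib/litefs'

# use Consul (attached later with `fly consul attach`) to elect the primary node
lease:
  type: 'consul'
  candidate: true   # assumption: every node may become primary
  promote: true

# HTTP proxy in front of the app; write requests are forwarded to the primary
proxy:
  addr: ':8080'
  target: 'localhost:3000'
  db: 'data.db'     # assumption: SQLite filename inside the mount, must match DATABASE_URL

# command LiteFS starts once the mount is ready
exec:
  - cmd: 'pnpm start'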


Edit the Dockerfile.

Add the dependencies LiteFS needs to run.

Dockerfile
# Install openssl for Prisma
RUN apt-get update \
-  && apt-get install --no-install-recommends -y openssl procps vim-tiny sqlite3 \
+  && apt-get install --no-install-recommends -y openssl procps vim-tiny sqlite3 ca-certificates fuse3 \
  && apt-get clean \
  && npm i -g pnpm@${PNPM_VERSION} \
  && rm -rf /var/lib/apt/lists/*

At the end of the Dockerfile, add a COPY for the LiteFS binary and change CMD so that LiteFS is what gets started.

Dockerfile
+COPY --from=flyio/litefs:0.5 /usr/local/bin/litefs /usr/local/bin/litefs

-CMD ["pnpm", "start"]
+CMD ["litefs", "mount"]

Set up and launch on fly.io.

First, create the app. The name can be anything, but it has to be unique, so pick something plausible.

$ fly apps create coji-litefs-speedrun
New app created: coji-litefs-speedrun

Update fly.toml to match the app name just created.

fly.toml
-app = "coji-app-1"
+app = "coji-litefs-speedrun"

Create a volume. Go with the minimum size of 1GB for now.

$ fly volumes create litefs -a coji-litefs-speedrun -s 1 -r nrt -y
        ID: vol_krklz3nd6nwxqlwv
      Name: litefs
       App: coji-litefs-speedrun
    Region: nrt
      Zone: 6082
   Size GB: 1
 Encrypted: true
Created at: 22 Aug 23 13:12 UTC

Apparently this creates something called Consul. I guess that's the part responsible for switching which node is primary?

$ fly consul attach
Secrets are staged for the first deployment

Set the LiteFS Cloud token saved earlier as a secret.

fly secrets set "LITEFS_CLOUD_TOKEN=yoursecrettoken"
Secrets are staged for the first deployment

Also set the environment variables the Remix app uses. DATABASE_URL should point to a location inside the litefs mount point. The others are production values, so change them as appropriate.

$ fly secrets import
DATABASE_URL=file:/litefs/data/data.db?connection_limit=1
GOOGLE_CLIENT_ID=your-google-client-id
GOOGLE_CLIENT_SECRET=your-google-client-secret
SESSION_SECRET=reallysecuresecret
^D
Secrets are staged for the first deployment

Let's try deploying.

$ fly deploy
==> Verifying app config
Validating /Users/coji/progs/spike/LiteFS/litefs-speedrun/fly.toml
Platform: machines
✓ Configuration is valid
--> Verified app config
==> Building image
Remote builder fly-builder-throbbing-bird-614 ready
==> Building image with Docker
--> docker host: 20.10.12 linux x86_64
[+] Building 2.1s (26/26) FINISHED                                                              
 => [internal] load build definition from Dockerfile                                       0.0s
 => => transferring dockerfile: 69B                                                        0.0s
 => [internal] load .dockerignore                                                          0.0s
 => => transferring context: 72B                                                           0.0s
 => [internal] load metadata for docker.io/library/node:18-slim                            1.5s
 => FROM docker.io/flyio/litefs:0.5                                                        0.6s
 => => resolve docker.io/flyio/litefs:0.5                                                  0.6s
 => [internal] load build context                                                          0.2s
 => => transferring context: 8.78kB                                                        0.2s
 => [base 1/2] FROM docker.io/library/node:18-slim@sha256:fea6dbb8697baddc6c58f29337bbd3b  0.0s
 => CACHED [base 2/2] RUN apt-get update   && apt-get install --no-install-recommends -y   0.0s
 => CACHED [build 1/8] WORKDIR /app                                                        0.0s
 => CACHED [deps 2/3] COPY pnpm-lock.yaml ./                                               0.0s
 => CACHED [deps 3/3] RUN pnpm fetch                                                       0.0s
 => CACHED [build 2/8] COPY --from=deps /app/node_modules /app/node_modules                0.0s
 => CACHED [build 3/8] COPY package.json pnpm-lock.yaml ./                                 0.0s
 => CACHED [production-deps 4/4] RUN pnpm install --offline --frozen-lockfile --prod       0.0s
 => CACHED [stage-4 2/8] COPY --from=production-deps /app/package.json /app/package.json   0.0s
 => CACHED [build 4/8] RUN pnpm install --offline --frozen-lockfile                        0.0s
 => CACHED [build 5/8] COPY prisma .                                                       0.0s
 => CACHED [build 6/8] RUN pnpm exec prisma generate                                       0.0s
 => CACHED [build 7/8] COPY . .                                                            0.0s
 => CACHED [build 8/8] RUN pnpm run build                                                  0.0s
 => CACHED [stage-4 3/8] COPY --from=build /app/node_modules /app/node_modules             0.0s
 => CACHED [stage-4 4/8] COPY --from=build /app/tsconfig.json /app/tsconfig.json           0.0s
 => CACHED [stage-4 5/8] COPY --from=build /app/prisma /app/prisma                         0.0s
 => CACHED [stage-4 6/8] COPY --from=build /app/build /app/build                           0.0s
 => CACHED [stage-4 7/8] COPY --from=build /app/public /app/public                         0.0s
 => CACHED [stage-4 8/8] COPY --from=flyio/litefs:0.5 /usr/local/bin/litefs /usr/local/bi  0.0s
 => exporting to image                                                                     0.0s
 => => exporting layers                                                                    0.0s
 => => writing image sha256:1dff966946531005a89978f97eead3fe0b81d45d315f365560cf6ca51183c  0.0s
 => => naming to registry.fly.io/coji-litefs-speedrun:deployment-01H8EQ3HQ119CSA6C7S57BTE  0.0s
--> Building image done
==> Pushing image to fly
The push refers to repository [registry.fly.io/coji-litefs-speedrun]
9cc2fc0ced8e: Layer already exists 
6ab3dfdd2f90: Layer already exists 
2e9752edc891: Layer already exists 
97f4864f77b1: Layer already exists 
323ef8439a0d: Layer already exists 
cc49c685adc4: Layer already exists 
cf385fbb2bbf: Layer already exists 
5c7386aac7ef: Layer already exists 
6e6316583a7a: Layer already exists 
303bf1bc2bed: Layer already exists 
5669c57a8456: Layer already exists 
aecafe007cb9: Layer already exists 
8ff1ae88ec82: Layer already exists 
511780f88f80: Layer already exists 
deployment-01H8EQ3HQ119CSA6C7S57BTEK7: digest: sha256:ad96b501e7d8a763aee42ecb5e1cddba27aac62e495f7ca329a819e444791536 size: 3252
--> Pushing image done
image: registry.fly.io/coji-litefs-speedrun:deployment-01H8EQ3HQ119CSA6C7S57BTEK7
image size: 682 MB

Watch your deployment at https://fly.io/apps/coji-litefs-speedrun/monitoring

Running coji-litefs-speedrun release_command: npx prisma migrate deploy
  release_command 3d8d11db19de89 completed successfully
Process groups have changed. This will:
 * create 1 "app" machine

No machines in group app, launching a new machine
Error: error creating a new machine: failed to launch VM: aborted: could not reserve resource for machine: insufficient memory available to fulfill request

Huh, that failed.


Looking at Monitoring in the fly.io dashboard, it's crashing with the error below.

INFO Preparing to run: `docker-entrypoint.sh litefs mount` as root
INFO [fly api proxy] listening at /.fly/api
2023/08/22 13:51:27 listening on [fdaa:2:5a0e:a7b:fc:b98c:75f3:2]:22 (DNS: [fdaa::3]:53)
nrt [info] ERROR: config file not found

Now, why would that be...


Right, of course: the Dockerfile never copies litefs.yml into the image. Checking the sample repository's Dockerfile to see where it should go, /etc/litefs.yml looks like the right place.
https://github.com/superfly/litefs-example/blob/main/Dockerfile

So, add the following just before the final CMD in the Dockerfile:

COPY litefs.yml /etc/litefs.yml
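
Putting the edits together, the tail of the Dockerfile now looks roughly like this (assembled from the snippets above; a sketch, not the exact file):

Dockerfile
# LiteFS binary from the official image, plus its config
COPY --from=flyio/litefs:0.5 /usr/local/bin/litefs /usr/local/bin/litefs
COPY litefs.yml /etc/litefs.yml

# litefs mounts the FUSE filesystem, then launches the app via the exec section of litefs.yml
CMD ["litefs", "mount"]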

With that in place, deploy again.


It still won't start. The logs look like this.

nrt [info] 2023/08/22 14:00:57 listening on [fdaa:2:5a0e:a7b:fc:b98c:75f3:2]:22 (DNS: [fdaa::3]:53)
nrt [info] config file read from /etc/litefs.yml
nrt [info] LiteFS v0.5.4, commit=9173accf2f0c0e5288383c2706cf8d132ad27f2d
nrt [info] level=INFO msg="host environment detected" type=fly.io
nrt [info] level=INFO msg="litefs cloud backup client configured: https://litefs.fly.io"
nrt [info] level=INFO msg="Using Consul to determine primary"
nrt [info] level=INFO msg="initializing consul: key=litefs/coji-litefs-speedrun url=https://:a605a50a-ddbb-0258-a325-e34e8a5d1a4d@consul-syd-5.fly-shared.net/coji-litefs-speedrun-jpon17vz7kk1dgr4/ hostname=5683d3d0b12048 advertise-url=http://5683d3d0b12048.vm.coji-litefs-speedrun.internal:20202"
nrt [info] level=INFO msg="LiteFS mounted to: /litefs"
nrt [info] level=INFO msg="http server listening on: http://localhost:20202"
nrt [info] level=INFO msg="waiting to connect to cluster"
nrt [info] level=INFO msg="FE46FD0B3B82EB00: primary lease acquired, advertising as http://5683d3d0b12048.vm.coji-litefs-speedrun.internal:20202"
nrt [info] level=INFO msg="connected to cluster, ready"
nrt [info] level=INFO msg="node is a candidate, automatically promoting to primary"
nrt [info] level=INFO msg="node is already primary, skipping promotion"
nrt [info] level=INFO msg="proxy server listening on: http://localhost:8080"
nrt [info] level=INFO msg="starting background subprocess: pnpm [start]"
nrt [info] level=INFO msg="begin primary backup stream: url=https://litefs.fly.io"
nrt [info] waiting for signal or subprocess to exit
nrt [info] > remix-app@0.1.0 start /app
nrt [info] > remix-serve build
nrt [info] level=INFO msg="begin streaming backup" full-sync-interval=10s
nrt [info] Error: listen EADDRINUSE: address already in use :::8080
nrt [info] at Server.setupListenHandle [as _listen2] (node:net:1751:16)
nrt [info] at listenInCluster (node:net:1799:12)
nrt [info] at Server.listen (node:net:1887:7)
nrt [info] at Function.listen (/app/node_modules/.pnpm/express@4.18.2/node_modules/express/lib/application.js:635:24)
nrt [info] at Object.<anonymous> (/app/node_modules/.pnpm/@remix-run+serve@1.19.3/node_modules/@remix-run/serve/dist/cli.js:52:84)
nrt [info] at Module._compile (node:internal/modules/cjs/loader:1256:14)
nrt [info] at Object.Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
nrt [info] at Module.load (node:internal/modules/cjs/loader:1119:32)
nrt [info] at Function.Module._load (node:internal/modules/cjs/loader:960:12)
nrt [info] at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
nrt [info]  ELIFECYCLE  Command failed with exit code 1.
nrt [info] subprocess exited with error code 1, litefs shutting down

This is the one. I'm pretty sure this port was set in litefs.yml.

nrt [info] Error: listen EADDRINUSE: address already in use :::8080

The Remix app's listen port configured in fly.toml was 8080, which collided with the LiteFS proxy.
So fix it. Make it 3000 for now.

fly.toml
[env]
-  PORT = "8080"
+  PORT = "3000"

[http_service]
-  internal_port = 8080
+  internal_port = 3000

It's up. Yay!

https://coji-litefs-speedrun.fly.dev/

The deploy log:

$ fly deploy
==> Verifying app config
Validating /Users/coji/progs/spike/LiteFS/litefs-speedrun/fly.toml
Platform: machines
✓ Configuration is valid
--> Verified app config
==> Building image
Remote builder fly-builder-throbbing-bird-614 ready
==> Building image with Docker
--> docker host: 20.10.12 linux x86_64
[+] Building 2.3s (27/27) FINISHED                                                         
 => [internal] load build definition from Dockerfile                                  0.0s
 => => transferring dockerfile: 69B                                                   0.0s
 => [internal] load .dockerignore                                                     0.0s
 => => transferring context: 72B                                                      0.0s
 => [internal] load metadata for docker.io/library/node:18-slim                       1.7s
 => FROM docker.io/flyio/litefs:0.5                                                   0.5s
 => => resolve docker.io/flyio/litefs:0.5                                             0.5s
 => [internal] load build context                                                     0.1s
 => => transferring context: 8.78kB                                                   0.1s
 => [base 1/2] FROM docker.io/library/node:18-slim@sha256:fea6dbb8697baddc6c58f29337  0.0s
 => CACHED [base 2/2] RUN apt-get update   && apt-get install --no-install-recommend  0.0s
 => CACHED [build 1/8] WORKDIR /app                                                   0.0s
 => CACHED [deps 2/3] COPY pnpm-lock.yaml ./                                          0.0s
 => CACHED [deps 3/3] RUN pnpm fetch                                                  0.0s
 => CACHED [build 2/8] COPY --from=deps /app/node_modules /app/node_modules           0.0s
 => CACHED [build 3/8] COPY package.json pnpm-lock.yaml ./                            0.0s
 => CACHED [production-deps 4/4] RUN pnpm install --offline --frozen-lockfile --prod  0.0s
 => CACHED [stage-4 2/9] COPY --from=production-deps /app/package.json /app/package.  0.0s
 => CACHED [build 4/8] RUN pnpm install --offline --frozen-lockfile                   0.0s
 => CACHED [build 5/8] COPY prisma .                                                  0.0s
 => CACHED [build 6/8] RUN pnpm exec prisma generate                                  0.0s
 => CACHED [build 7/8] COPY . .                                                       0.0s
 => CACHED [build 8/8] RUN pnpm run build                                             0.0s
 => CACHED [stage-4 3/9] COPY --from=build /app/node_modules /app/node_modules        0.0s
 => CACHED [stage-4 4/9] COPY --from=build /app/tsconfig.json /app/tsconfig.json      0.0s
 => CACHED [stage-4 5/9] COPY --from=build /app/prisma /app/prisma                    0.0s
 => CACHED [stage-4 6/9] COPY --from=build /app/build /app/build                      0.0s
 => CACHED [stage-4 7/9] COPY --from=build /app/public /app/public                    0.0s
 => CACHED [stage-4 8/9] COPY --from=flyio/litefs:0.5 /usr/local/bin/litefs /usr/loc  0.0s
 => CACHED [stage-4 9/9] COPY litefs.yml /etc/litefs.yml                              0.0s
 => exporting to image                                                                0.0s
 => => exporting layers                                                               0.0s
 => => writing image sha256:a698a69de73bbbfd6bfeb733fc72762048cf84c7ad47ff4582fc2502  0.0s
 => => naming to registry.fly.io/coji-litefs-speedrun:deployment-01H8ES4PEF7BM7C2WTK  0.0s
--> Building image done
==> Pushing image to fly
The push refers to repository [registry.fly.io/coji-litefs-speedrun]
bae1cb45e3c8: Layer already exists 
9cc2fc0ced8e: Layer already exists 
6ab3dfdd2f90: Layer already exists 
2e9752edc891: Layer already exists 
97f4864f77b1: Layer already exists 
323ef8439a0d: Layer already exists 
cc49c685adc4: Layer already exists 
cf385fbb2bbf: Layer already exists 
5c7386aac7ef: Layer already exists 
6e6316583a7a: Layer already exists 
303bf1bc2bed: Layer already exists 
5669c57a8456: Layer already exists 
aecafe007cb9: Layer already exists 
8ff1ae88ec82: Layer already exists 
511780f88f80: Layer already exists 
deployment-01H8ES4PEF7BM7C2WTKVBV6YGV: digest: sha256:1cf0cce7fbe1960d3e527049970c1debd66e8332ff69980f6de3c022e37ca6a2 size: 3460
--> Pushing image done
image: registry.fly.io/coji-litefs-speedrun:deployment-01H8ES4PEF7BM7C2WTKVBV6YGV
image size: 682 MB

Watch your deployment at https://fly.io/apps/coji-litefs-speedrun/monitoring

Running coji-litefs-speedrun release_command: npx prisma migrate deploy
  release_command 4d89440be0d487 completed successfully
Updating existing machines in 'coji-litefs-speedrun' with rolling strategy
  [1/1] Machine 5683d3d0b12048 [app] update finished: success
  Finished deploying

Visit your newly deployed app at https://coji-litefs-speedrun.fly.dev/

Does this now mean that when multiple Machines run in the same region, writes can go to any of them, multi-master style? Testing that is for tomorrow.


The db migration was running before litefs started (it runs as the release_command), so after startup no db file existed under /litefs.
I need to create a start.sh that runs the migration and then starts the Remix server.

  1. So that the migration runs after litefs has started, create a start.sh script and have litefs launch it (via the exec section of litefs.yml; see the sketch after this list). prisma migrate can run out of memory here, so configure swap with swapon, following the Indie Stack.
start.sh
#!/bin/sh

# allocate swap space
fallocate -l 512M /swapfile
chmod 0600 /swapfile
mkswap /swapfile
echo 10 > /proc/sys/vm/swappiness
swapon /swapfile
echo 1 > /proc/sys/vm/overcommit_memory

# run the migration against the LiteFS-mounted database, then start the Remix server
npx prisma migrate deploy
pnpm run start
  2. When prisma migrate deploy creates the database file for the first time, it errors out if it tries to create a directory under the LiteFS-managed directory, so change DATABASE_URL to put the file directly at a path like /litefs/data.db.
  3. Following the Epic Stack, set the various environment variables in the Dockerfile and use them from litefs.yml.
Dockerfile
ENV FLY="true"
ENV LITEFS_DIR="/litefs"
ENV DATABASE_FILENAME="data.db"
ENV DATABASE_PATH="$LITEFS_DIR/$DATABASE_FILENAME"
ENV DATABASE_URL="file:$DATABASE_PATH"
ENV INTERNAL_PORT="8080"
ENV PORT="3000"
ENV NODE_ENV="production"
litefs.yml
fuse:
  dir: '${LITEFS_DIR}'

proxy:
  addr: ':${INTERNAL_PORT}'
  target: 'localhost:${PORT}'
  db: '${DATABASE_FILENAME}'

Also deleted auth and other processing that isn't needed here.
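
As referenced in item 1, the handoff to start.sh happens in the exec section of litefs.yml; a minimal sketch (the exact cmd/path in the repo may differ):

litefs.yml
exec:
  # run migrations and boot the Remix server only after /litefs is mounted
  - cmd: 'sh ./start.sh'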


Let's scale it up to 10 machines and see.

fly scale count 10 --with-new-volumes
App 'coji-litefs-speedrun' is going to be scaled according to this plan:
  +9 machines for group 'app' on region 'nrt' of size 'shared-cpu-1x'
  +9 volumes  for group 'app' in region 'nrt'
? Scale app coji-litefs-speedrun? (y/N)
Executing scale plan
  Creating volume litefs region:nrt size:1GiB
  Created e784773ce62683 group:app region:nrt size:shared-cpu-1x volume:vol_5vgl3x3wn7ne9gjr
  Creating volume litefs region:nrt size:1GiB
  Created 32874ddda303e8 group:app region:nrt size:shared-cpu-1x volume:vol_krklyxy285xw8dkv
  Creating volume litefs region:nrt size:1GiB
  Created 148e441f166789 group:app region:nrt size:shared-cpu-1x volume:vol_54573g32opgw6k1v
  Creating volume litefs region:nrt size:1GiB
  Created 1781774a22e489 group:app region:nrt size:shared-cpu-1x volume:vol_p4m5lx8nnlpk88dr
  Creating volume litefs region:nrt size:1GiB
  Created 17812deb5e5189 group:app region:nrt size:shared-cpu-1x volume:vol_q4qop3gp63ednqwr
  Creating volume litefs region:nrt size:1GiB
Error: failed to launch VM: aborted: could not reserve resource for machine: insufficient memory available to fulfill request

It fell over.


One more time.

fly scale count 10 --with-new-volumes
App 'coji-litefs-speedrun' is going to be scaled according to this plan:
  +4 machines for group 'app' on region 'nrt' of size 'shared-cpu-1x'
  +4 volumes  for group 'app' in region 'nrt'
? Scale app coji-litefs-speedrun? Yes
Executing scale plan
  Creating volume litefs region:nrt size:1GiB
  Created 3d8d114f003089 group:app region:nrt size:shared-cpu-1x volume:vol_zrenmg8991zo291r
  Creating volume litefs region:nrt size:1GiB
  Created 9080e445a1e928 group:app region:nrt size:shared-cpu-1x volume:vol_9vw023j79lm2qoq4
  Creating volume litefs region:nrt size:1GiB
  Created e784e666fe2e58 group:app region:nrt size:shared-cpu-1x volume:vol_8r6op23k3xqnyejr
  Creating volume litefs region:nrt size:1GiB
  Created 3287332a626385 group:app region:nr

Done.


Reads are fine, but trying to write throws an error.

2023-08-23T07:06:39.432 app[3d8d114f003089] nrt [info] level=INFO msg="fuse: create(): cannot create journal: read only replica"
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] PrismaClientUnknownRequestError:
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] Invalid `prisma.message.create()` invocation:
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] Error occurred during query execution:
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(SqliteFailure(Error { code: ReadOnly, extended_code: 1544 }, Some("attempt to write a readonly database"))), transient: false })
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] at Hr.handleRequestError (/app/build/index.js:48404:379)
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] at Hr.handleAndLogRequestError (/app/build/index.js:48391:16)
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] at Hr.request (/app/build/index.js:48382:16)
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] at l2 (/app/build/index.js:48737:22)
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] at addMessage (/app/build/index.js:55782:101)
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] at action (/app/build/index.js:55793:31)
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] at Object.callRouteActionRR (/app/node_modules/.pnpm/@remix-run+server-runtime@1.19.3/node_modules/@remix-run/server-runtime/dist/data.js:35:16)
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] at callLoaderOrAction (/app/node_modules/.pnpm/@remix-run+router@1.7.2/node_modules/@remix-run/router/router.ts:3671:16)
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] at submit (/app/node_modules/.pnpm/@remix-run+router@1.7.2/node_modules/@remix-run/router/router.ts:2829:16)
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] at queryImpl (/app/node_modules/.pnpm/@remix-run+router@1.7.2/node_modules/@remix-run/router/router.ts:2764:22) {
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] clientVersion: '5.1.1'
2023-08-23T07:06:39.575 app[3d8d114f003089] nrt [info] }
2023-08-23T07:06:39.591 app[3d8d114f003089] nrt [info] POST /?index=&_data=routes%2F_index 500 - - 162.386 ms

Looks like LiteFS's internal proxying of POST requests isn't working right?


For now, scale back down to 1 machine so it doesn't rack up costs.

$ fly scale count 1
App 'coji-litefs-speedrun' is going to be scaled according to this plan:
  -9 machines for group 'app' on region 'nrt' of size 'shared-cpu-1x'
? Scale app coji-litefs-speedrun? Yes
Executing scale plan
  Destroyed 5683d3d0b12048 group:app region:nrt size:shared-cpu-1x
  Destroyed 32874ddda303e8 group:app region:nrt size:shared-cpu-1x
  Destroyed e784e666fe2e58 group:app region:nrt size:shared-cpu-1x
  Destroyed 17812deb5e5189 group:app region:nrt size:shared-cpu-1x
  Destroyed 1781774a22e489 group:app region:nrt size:shared-cpu-1x
  Destroyed 9080e445a1e928 group:app region:nrt size:shared-cpu-1x
  Destroyed 3d8d114f003089 group:app region:nrt size:shared-cpu-1x
  Destroyed e784773ce62683 group:app region:nrt size:shared-cpu-1x
  Destroyed 3287332a626385 group:app region:nrt size:shared-cpu-1x

I wanted to check the contents of each machine's database in a 2-machine setup,
but I can't figure out how to pick a specific machine with fly ssh console.


Scale to a 2-machine setup and first confirm that replicated reads are working.

fly scale count 2 --with-new-volumes

First, connect to the first machine and check the current state with a SELECT count:

fly ssh console -s
? Select VM: nrt: 5683779b6e968e fdaa:2:5a0e:a7b:db52:cce9:6fc2:2 hidden-forest-1659
Connecting to fdaa:2:5a0e:a7b:db52:cce9:6fc2:2... complete
root@5683779b6e968e:/app# sqlite3 /litefs/data.db
SQLite version 3.40.1 2022-12-28 14:03:47
Enter ".help" for usage hints.
sqlite> select count(*) from "Message";
8

Now post one message from the app. (Retry if an error comes up.)
https://coji-litefs-speedrun.fly.dev/?index

Check the same machine again:

fly ssh console -s
? Select VM: nrt: 148e441f166789 fdaa:2:5a0e:a7b:b4f1:74f3:be6f:2 long-resonance-9509
Connecting to fdaa:2:5a0e:a7b:b4f1:74f3:be6f:2... complete
root@148e441f166789:/app# sqlite3 /litefs/data.db
SQLite version 3.40.1 2022-12-28 14:03:47
Enter ".help" for usage hints.
sqlite> select count(*) from "Message";
9

8 => 9, up by one. OK.

Next, check the other machine:

fly ssh console -s
? Select VM: nrt: 148e441f166789 fdaa:2:5a0e:a7b:b4f1:74f3:be6f:2 long-resonance-9509
Connecting to fdaa:2:5a0e:a7b:b4f1:74f3:be6f:2... complete
root@148e441f166789:/app# sqlite3 /litefs/data.db
SQLite version 3.40.1 2022-12-28 14:03:47
Enter ".help" for usage hints.
sqlite> select count(*) from "Message";
9

This one is also at 9, so OK!


Found the cause.
The PORT environment variable set in the Dockerfile right before the server starts didn't match the value of [env] PORT = set in fly.toml.

The port exposed to the fly proxy (= internal_port in fly.toml) is the one given by proxy: addr: in litefs.yml, and the app's own port has to match the one given by proxy: target: in litefs.yml.

It ends up like this:

Dockerfile
ENV INTERNAL_PORT="8080"
ENV PORT="3000"
fly.toml
# Not needed, since the env var is already set in the Dockerfile. Setting it in both places invites trouble.
- [env]
-  PORT = "8080"

[http_service]
  internal_port = 8080
/etc/litefs.yml
proxy:
  addr: ':${INTERNAL_PORT}'
  target: 'localhost:${PORT}'
  db: '${DATABASE_FILENAME}'

In the end, with fly scale count 2 I've got a 2-machine setup where writes are automatically proxied to the primary DB and reads come from each machine's local DB. It can also autoscale with load (down to a zero-machine minimum, i.e. zero cost), and since it's SQLite, reads are super fast!
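
For reference, the scale-to-zero behavior comes from the http_service settings in fly.toml; roughly like this (a sketch with assumed values, not necessarily what this repo ships):

fly.toml
[http_service]
  internal_port = 8080
  force_https = true
  # the fly proxy stops idle machines and starts them again on demand
  auto_stop_machines = true
  auto_start_machines = true
  # allow scaling all the way down to zero running machines
  min_machines_running = 0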

https://coji-litefs-speedrun.fly.dev/

Source code is here.
https://github.com/coji/coji-litefs-speedrun

Incidentally, with the LiteFS Cloud built into fly.io (currently free), backups covering the past 30 days are taken automatically, and you can easily restore to any point in time at 5-minute granularity or download a snapshot. Fantastic!

Next, I want another machine running Python to be able to read this DB file read-only. I want to turn the data into embeddings with a sentence transformer. So, continued tomorrow.