Running Your Own Web Speed Hackathon 2022
I thought the mechanism of CyberAgent's Web Speed Hackathon 2022 was impressive (especially the Leaderboard part automated with GitHub Actions), so I'm writing down how to play with it in your own environment.
What is Web Speed Hackathon?
It is, so to speak, a front-end version of ISUCON.
Participants submit a GitHub Issue with a URL they have deployed to Heroku or elsewhere, and compete for scores calculated based on Google Lighthouse results returned by a bot.
Similar to ISUCON, the web application is filled with anti-patterns to slow it down. I thought it was great that the challenge is designed so that it can be deployed on Heroku's free tier and that scores can basically be improved within the scope of a front-end engineer's skills (though it is also possible to optimize infrastructure and back-end implementation).
How the Leaderboard Works
- When a submission is made via a GitHub Issue, the Request workflow runs.
- Processing is handed over to Scoring, which performs visual regression testing (VRT) and scoring.
- The results are posted as a comment on the Issue.
- The Leaderboard is updated automatically.
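The "results posted as a comment" step can be sketched roughly as below. Note that `ScoringResult` and `formatScoreComment` are hypothetical names of mine; the actual comment format lives in the leaderboard repository's workflows.

```typescript
// Hypothetical sketch of the comment body the Scoring workflow posts back to
// the Issue. The fields and wording here are assumptions, not the repo's code.
type ScoringResult = {
  score: number;
  vrtPassed: boolean;
};

function formatScoreComment(result: ScoringResult): string {
  const lines = [
    `**Score: ${result.score.toFixed(2)}**`,
    result.vrtPassed
      ? ':white_check_mark: VRT passed'
      : ':x: VRT failed (score not counted)',
  ];
  return lines.join('\n');
}
```

In the real workflow a string like this would be passed to the GitHub API's create-comment endpoint from within actions/github-script.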

Setup
First, fork CyberAgentHack/web-speed-hackathon-2022-leaderboard.
As it is, the scorer won't run because we're outside the event period, so we strip out the date check and make it always return 'opened'.
--- a/.github/workflows/scoring.yml
+++ b/.github/workflows/scoring.yml
@@ -39,15 +39,7 @@ jobs:
         with:
           result-encoding: string
           script: |
-            const payload = require('/tmp/payload.json');
-
-            const startAt = new Date('2022-08-04T18:00:00.000+09:00');
-            const endAt = new Date('2022-08-05T20:00:00.000+09:00');
-            const requestedAt = new Date(payload.request_time);
-
-            if (requestedAt < startAt || endAt < requestedAt) {
-              return 'closed';
-            }
+            // force opened for debugging
             return 'opened';
   vrt:
     runs-on: ubuntu-20.04
Specifying Measurement URLs
Set a secret in /settings/secrets/actions under the name WSH_SCORING_TARGET_PATHS so that the Action can read the list of paths for which to generate Lighthouse scores.
["/2022-08-03","/races/00554e5d-24bb-4839-a9b0-9295c6026ff8/race-card","/races/7c6f9e84-c59d-4210-87c2-151e866cee43/odds","/races/4ec52cb6-9e1d-4f2b-8efe-4c267c6ce4da/odds","/races/931e5cdc-43b8-4545-9479-a10881334331/result"]
I obtained this from the existing Action execution results.
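Since the secret must be a JSON array of path strings, a quick local sanity check catches formatting mistakes before the Action fails at runtime. This checker is my own sketch, not part of the leaderboard repo:

```typescript
// Validate a WSH_SCORING_TARGET_PATHS value before saving it as a secret.
// This helper is an assumption of mine, not code from the leaderboard repo.
function parseTargetPaths(raw: string): string[] {
  const parsed: unknown = JSON.parse(raw);
  if (!Array.isArray(parsed)) {
    throw new Error('WSH_SCORING_TARGET_PATHS must be a JSON array');
  }
  return parsed.map((p) => {
    if (typeof p !== 'string' || !p.startsWith('/')) {
      throw new Error(`invalid path entry: ${JSON.stringify(p)}`);
    }
    return p;
  });
}
```

For example, `parseTargetPaths('["/2022-08-03"]')` returns `['/2022-08-03']`, while a bare string or a path missing its leading slash throws.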
Modifying the Application
Get the source code from the repository above and deploy it to Heroku yourself.
Calculating the Score
When you create an issue in your forked web-speed-hackathon-2022-leaderboard repository, an issue form lets you enter the URL of your own deployment.

Creating the issue triggers the GitHub Action, and a comment is added once scoring is successful.
From there, it's just a matter of improving the score.
Tips: Running VRT Locally
Since running the visual regression test (VRT) on GitHub Actions every time is time-consuming, I thought that running it locally would speed up the feedback loop.
The VRT mechanism works by capturing screens with Puppeteer and calculating diffs with reg-cli. Therefore, you can run the same check against the application you're developing locally by executing:
yarn vrt:capture --url http://localhost:3000/ && yarn vrt:detect
Handling Environment Differences
However, there were differences between the screenshots taken in the GitHub Actions Linux environment and those taken in my local macOS environment, even without any changes.
Since some differences are inevitable given variations in system fonts and rendering APIs, I decided to capture a "correct" baseline on macOS once and overwrite the expected-value files with it:
yarn vrt:capture --url http://localhost:3000/
cp -a scripts/vrt/tmp/actual scripts/vrt/expected
The downside is that I might not notice failures that occur only in GitHub Actions, but so far that hasn't happened in my experiments.
Shortening Execution Time
Completion time can be shortened by kicking off the capture for each page without waiting for the previous one to finish.
--- a/scripts/vrt/src/index.ts
+++ b/scripts/vrt/src/index.ts
@@ -35,7 +35,7 @@ async function main() {
   await fs.ensureDir(exportPath);
 
   for (const viewport of viewportList) {
-    for (const page of pageList) {
+    pageList.forEach(async (page) => {
       const url = new URL(page.path, baseUrl).href;
       const buffer = await captureScreenshot({
         url,
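One caveat with `forEach(async ...)` is that it fires the captures but gives the caller no way to wait for them all to finish. An alternative that keeps the parallelism while still awaiting completion is `Promise.all`, sketched below with a stub `captureScreenshot`, since the real one lives in the repo's VRT scripts:

```typescript
// Sketch: start every page capture at once, then await all of them.
// captureScreenshot here is a stand-in for the repo's Puppeteer helper.
type Page = { name: string; path: string };

async function captureScreenshot(url: string): Promise<string> {
  return `captured:${url}`; // stub; the real helper drives Puppeteer
}

async function captureAll(baseUrl: string, pageList: Page[]): Promise<string[]> {
  return Promise.all(
    pageList.map((page) => captureScreenshot(new URL(page.path, baseUrl).href)),
  );
}
```

Whether this helps in practice depends on the trade-off noted below: concurrent Puppeteer work costs memory.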
Presumably because the Puppeteer captures now overlap instead of running one at a time, memory usage during the run has increased.