Investigation and Experimentation with OpenTestFactory: Toward a Standard Test Execution Mechanism
Purpose
This document describes the results of an investigation and experiments regarding OpenTestFactory, which aims to be a standard mechanism for planning, executing, and publishing test results.
The OpenTestFactory test execution environment built for these experiments is available here:
https://github.com/mima3/research_tms/tree/main/annex/opentestfactory
Overview of OpenTestFactory
In OpenTestFactory, test execution jobs are defined in a PEaC (Planned Execution as Code) file written in YAML or JSON. A PEaC file describes the jobs and the commands to run within them, as shown below:
metadata:
  name: test-agent
jobs:
  unit_test:
    runs-on: [linux,pytest]
    steps:
      - name: Clone source code
        uses: "actions/checkout@v2"
        with:
          repository: "https://github.com/mima3/test_asyncio"
          ref: "main"
      - name: Initial setup
        run: pip install --no-input -q aiofiles
      - name: Run tests
        run: |
          cd test_asyncio/py313
          pytest -q --junitxml=junit.xml -o junit_family=xunit2 -o junit_duration_report=call test
      - name: Upload results
        run: |
          echo "::upload type=application/xml,name=junit.xml::`pwd`/test_asyncio/py313/junit.xml"
For the detailed syntax, refer to the Workflow syntax documentation.
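The `::upload ...::` line in the last step above is a workflow command that tells the agent to attach a file to the results. As a rough illustration of its shape (command, comma-separated parameters, then a value after the second `::`), here is a hypothetical parser; the authoritative grammar is in the Workflow syntax documentation, so treat this as a sketch only:

```python
import re

# Illustrative only: rough structure of a "::cmd params::value" workflow-command
# line as seen in the upload step above. Not the official grammar.
COMMAND_RE = re.compile(r"^::(?P<cmd>\w+)(?: (?P<params>[^:]*))?::(?P<value>.*)$")

def parse_workflow_command(line: str):
    """Split a workflow-command line into command, parameters, and value."""
    m = COMMAND_RE.match(line.strip())
    if not m:
        return None
    params = {}
    if m.group("params"):
        for pair in m.group("params").split(","):
            key, _, val = pair.partition("=")
            params[key.strip()] = val.strip()
    return {"command": m.group("cmd"), "params": params, "value": m.group("value")}

line = "::upload type=application/xml,name=junit.xml::/tmp/test_asyncio/py313/junit.xml"
print(parse_workflow_command(line))
```

For the example line, this yields the command `upload`, the parameters `type` and `name`, and the absolute path of the file to attach.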
The following diagram illustrates how PEaC is executed.

The PEaC file is sent to the Receptionist service within the OpenTestFactory Orchestrator.
The Receptionist service accepts the PEaC, and the Arranger selects the execution target Agent based on runs-on, tags, and namespaces.
After executing the job content, the Agent sends the results to the Results publishers.
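The selection step can be pictured as a tag-matching rule: a job goes to an agent whose tags cover everything listed in the job's runs-on (within a matching namespace). The sketch below is illustrative only; the function and field names are hypothetical, not the orchestrator's actual API:

```python
# Illustrative sketch of the Arranger's selection rule described above:
# an agent qualifies if its tags cover every runs-on tag of the job
# and it serves the job's namespace. Names here are hypothetical.
def select_agents(agents, runs_on, namespace="default"):
    required = set(runs_on)
    return [
        a for a in agents
        if required <= set(a["tags"]) and namespace in a.get("namespaces", ["default"])
    ]

agents = [
    {"name": "pytest-agent", "tags": ["linux", "pytest"]},
    {"name": "playwright-agent", "tags": ["linux", "playwright"]},
]
# A job with runs-on: [linux, pytest] matches only pytest-agent.
print([a["name"] for a in select_agents(agents, ["linux", "pytest"])])  # -> ['pytest-agent']
```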
For more details, refer to the official OpenTestFactory documentation.
OpenTestFactory Experiments
The following describes an experiment conducted by setting up an OpenTestFactory environment in Docker.
https://github.com/mima3/research_tms/tree/main/annex/opentestfactory/docker
Environment Creation
In this experiment, the following containers were started:
- orchestrator (opentestfactory/allinone:latest)
  - 7774 receptionist: A service that accepts workflows
  - 7775 observer: A service that monitors workflow progress
  - 7776 killswitch: A service that stops workflows
  - 7796 insightcollector: A service that aggregates execution events to create summaries
  - 38368 eventbus: The foundation for event distribution (pub/sub) between services
  - 34537 localstore: Storage for attachments uploaded by Agents
  - 24368 agentchannel: The hub between the Arranger and Agents. It passes jobs to Agents that match runs-on, tags, and namespaces.
- pytest-agent
  - An agent with Python and pytest installed
- playwright-agent
  - An environment with Python and Playwright (Node.js) installed
The agent executes the following command upon startup to register itself with the orchestrator:
opentf-agent --tags linux,pytest --host http://orchestrator --port 24368 \
  --token "$TOKEN" --verify false --script_path /tmp
Installation and Configuration of opentf-ctl
opentf-ctl is the command-line tool for OpenTestFactory that performs workflow registration, result checking, and attachment retrieval for a specified OpenTestFactory Orchestrator.
First, prepare a configuration file. This contains connection and token information.
#
# Generated opentfconfig
# (generated by opentf-ctl version 0.54.0)
#
apiVersion: opentestfactory.org/v1alpha1
contexts:
- context:
    orchestrator: default
    user: default
  name: default
current-context: default
kind: CtlConfig
orchestrators:
- name: default
  orchestrator:
    insecure-skip-tls-verify: false
    server: http://localhost
    services:
      agentchannel:
        port: 24368
      eventbus:
        port: 38368
      insightcollector:
        port: 7796
      killswitch:
        port: 7776
      localstore:
        port: 34537
      observer:
        port: 7775
      qualitygate:
        port: 12312
      receptionist:
        port: 7774
users:
- name: default
  user:
    token: eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJ5b3VyIGNvbXBhbnkiLCJzdWIiOiJ5b3VyIG5hbWUifQ.FL_1O3Z6X5oDwl9cs9njKu6xnL6wkjQ1-BnZZoDDr1OExBWZZR0BrfUrim9mel08WaKQz0cZBaEMdikSQ3GkqJKtTpMxUPHvMtkk4RAa3DgiuC1oH-hidhHPe8mVRfWipP33L1mWuRfjdcX0vMAsKU1v0hRMjUyl-l_1Gj4EcSsXeZZmuFK1HDDsL0PtQXt5b0gxpZXbZOZa2tTajC80uvbCJ0uQZImEPNhvabh90T-qMw_tkgQmEPMyFsiZ-tw7oCl7tUoeukgQIt1uR2ul77cMZhVzrVnWL2l7w70cyH8M0S7sCQjrZbT2O_IA5-GD9swummDq70J_oWuGR5NhynQi2lZh0vCoRlp1m9ULXFhQ1-60l2SgCne2FV1pTl3Etr9Qw0hqVGmeANTI78XpMTtgHJrUUCLfLLg6WcCtoPAxfwNeAkJX62GKQU8rzQLSLhHEIqjCwyIxJCQaVAIEeB9cPXWdAo_5iFosxBI5VApDMOTBBffxbaJuVErhkcxUa_nPfjF7FPDlVZESHON1XGmjakFQhupkRzMhsf9UDeHcZCI63mDvT9h_cB7POAHrdCkeC0ufVQMv5j4NutZPsFOkzkvbIIeD1ol0LH0M2I1ZjQd7Gpzha51NOTFAVkXGc6p4blVrxKnrKjXgogvPNBvzVVguof-oAqX_eLSg1j8
The TOKEN specified here is created from the trusted_key.pem and trusted_key.pub used by the orchestrator. For the actual creation method, refer to make_token.py.
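The token is a standard JWT: three base64url-encoded parts (header, payload, signature) signed with the orchestrator's trusted key. A stdlib-only sketch can decode the unsigned parts for inspection; note that this does NOT verify the RS512 signature (a JWT library is needed for that):

```python
import base64
import json

# Inspect the unsigned parts of a JWT. This does NOT verify the signature.
def decode_jwt_claims(token):
    header_b64, payload_b64, _signature = token.split(".")
    def b64url_json(part):
        padded = part + "=" * (-len(part) % 4)  # restore stripped base64 padding
        return json.loads(base64.urlsafe_b64decode(padded))
    return b64url_json(header_b64), b64url_json(payload_b64)

# The token in the config above carries only issuer and subject claims:
token = ("eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCJ9."
         "eyJpc3MiOiJ5b3VyIGNvbXBhbnkiLCJzdWIiOiJ5b3VyIG5hbWUifQ."
         "sig")
header, claims = decode_jwt_claims(token)
print(header)  # {'alg': 'RS512', 'typ': 'JWT'}
print(claims)  # {'iss': 'your company', 'sub': 'your name'}
```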
For example, the command to execute a workflow would be as follows:
opentf-ctl --opentfconfig ./config run workflow workflow/hello.yml
Simple Workflow Execution Example
The following shows an example of executing a simple workflow.
metadata:
  name: hello-from-agent
jobs:
  echo1:
    runs-on: [linux]
    steps:
      - run: echo "Hello from OpenTestFactory agent 1"
      - run: uname -a
  echo2:
    runs-on: [linux]
    steps:
      - run: echo "Hello from OpenTestFactory agent 2"
      - run: uname -a
This workflow consists of two jobs that simply run echo and uname -a on linux-tagged agents. To run it, use the run workflow command.
% pipenv shell # Enter the python virtual environment
% opentf-ctl --opentfconfig ./config run workflow workflow/hello.yml
Workflow 7916bfb1-11c9-41d3-98c3-886abc425e5f is running.
To check the status of the workflow execution, use the following commands. First, to see a list of recently executed workflows, run the get workflows command.
% opentf-ctl --opentfconfig ./config get workflows
WORKFLOW_ID                           STATUS  NAME
a6dce077-4b68-4511-a823-ef7278c7a00b  DONE    test-agent
fdc9f83c-4502-4336-b475-cb9c918eac4d  DONE    hello-from-agent
57088f1f-e5cb-4eb1-bfd7-f313e51c100c  DONE    Playwright (Node) with provider demo
7916bfb1-11c9-41d3-98c3-886abc425e5f  DONE    hello-from-agent
You can confirm that 7916bfb1-11c9-41d3-98c3-886abc425e5f has been executed. To retrieve the execution logs, run get workflow <WORKFLOW_ID>.
% opentf-ctl --opentfconfig ./config get workflow 7916bfb1-11c9-41d3-98c3-886abc425e5f
Workflow hello-from-agent
(running in namespace "default")
[2025-09-21T16:31:25] [Job 61d2eba7-ed93-4278-8a43-d3bd2d4faaeb] Requesting execution environment providing "linux" in namespace "default" for job "echo1"
[2025-09-21T16:31:25] [Job 96e2a315-2715-45aa-ba78-0022f4f8f43b] Requesting execution environment providing "linux" in namespace "default" for job "echo2"
[2025-09-21T16:31:25] [Job 61d2eba7-ed93-4278-8a43-d3bd2d4faaeb] Running command: echo "Hello fro...
[2025-09-21T16:31:25] [Job 61d2eba7-ed93-4278-8a43-d3bd2d4faaeb] Hello from OpenTestFactory agent 1
[2025-09-21T16:31:25] [Job 61d2eba7-ed93-4278-8a43-d3bd2d4faaeb] Running command: uname -a
[2025-09-21T16:31:25] [Job 61d2eba7-ed93-4278-8a43-d3bd2d4faaeb] Linux 9889c29332ef 6.10.14-linuxkit #1 SMP PREEMPT_DYNAMIC Wed Sep 3 15:37:39 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
[2025-09-21T16:31:25] [Job 61d2eba7-ed93-4278-8a43-d3bd2d4faaeb] Releasing execution environment for job "echo1"
[2025-09-21T16:31:25] [Job 96e2a315-2715-45aa-ba78-0022f4f8f43b] Running command: echo "Hello fro...
[2025-09-21T16:31:30] [Job 96e2a315-2715-45aa-ba78-0022f4f8f43b] Hello from OpenTestFactory agent 2
[2025-09-21T16:31:30] [Job 96e2a315-2715-45aa-ba78-0022f4f8f43b] Running command: uname -a
[2025-09-21T16:31:30] [Job 96e2a315-2715-45aa-ba78-0022f4f8f43b] Linux 9889c29332ef 6.10.14-linuxkit #1 SMP PREEMPT_DYNAMIC Wed Sep 3 15:37:39 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
[2025-09-21T16:31:30] [Job 96e2a315-2715-45aa-ba78-0022f4f8f43b] Releasing execution environment for job "echo2"
Workflow completed successfully.
You can confirm that the commands specified in the jobs are being executed.
Playwright Test Execution Example
Here is an example of executing tests with Playwright. First, create a workflow like the following:
apiVersion: opentestfactory.org/v1
kind: Workflow
metadata:
  name: "Playwright (Node) with provider demo"
jobs:
  e2e:
    runs-on: [linux,playwright]
    steps:
      - name: Clone source code
        uses: "actions/checkout@v2"
        with:
          repository: "https://github.com/mima3/research_tms"
          ref: "main"
      - name: Install deps
        variables:
          PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD: "0"
        run: |
          cd research_tms/annex/opentestfactory/docker/playwright/workspace
          ls -lt
          pwd
          echo ${OPENTF_WORKSPACE}
          npm ci || npm i
          npx playwright install --with-deps
      # Run with official Playwright provider
      - uses: playwright/npx@v1
        with:
          test: ${OPENTF_WORKSPACE}/research_tms/annex/opentestfactory/docker/playwright/workspace/tests/*.spec.ts
          reporters: ['junit','html'] # Automatically attach pw_junit_report.xml and index.html
          working-directory: ${OPENTF_WORKSPACE}/research_tms/annex/opentestfactory/docker/playwright/workspace
In this example, the Actions provider retrieves the code from GitHub, and the Playwright provider runs the tests.
To execute the workflow, use the run workflow command:
% opentf-ctl --opentfconfig ./config run workflow workflow/playwright.yml
Workflow e7676b27-bff3-4f68-91e4-ccb9b0d3360e is running.
To check the progress, execute the get workflow <WORKFLOW_ID> command:
% opentf-ctl --opentfconfig ./config get workflow e7676b27-bff3-4f68-91e4-ccb9b0d3360e
This test takes time because of tasks like browser installation. Attachments are created during the run. To copy them to your local machine, run the cp "<WORKFLOW_ID>:<PATTERN>" <LOCAL_FOLDER> command:
% opentf-ctl --opentfconfig ./config cp "e7676b27-bff3-4f68-91e4-ccb9b0d3360e:*" ./tmp
Attachment index.html (389d5947-1c77-426a-b214-116b54508add) is downloaded at tmp/e2e/3_*.spec.ts/index.html.
Attachment pw_junit_report.xml (df49af20-3ae9-4b0d-84f1-6d9cc663ef0c) is downloaded at tmp/e2e/3_*.spec.ts/pw_junit_report.xml.
Attachment executionlog.txt (ce790a7e-bc3c-4818-b7be-9e829a217451) is downloaded at tmp/executionlog.txt.
Attachment executionreport.html (b678c2c2-ba29-425b-a480-a73f1208e3ff) is downloaded at tmp/executionreport.html.
Attachment executionreport.xml (8f9cdaf4-1cae-4ce0-83ab-587085e08e8e) is downloaded at tmp/executionreport.xml.
Several files will be downloaded, and among them, executionreport.html provides a clear visualization of the workflow's test results using graphs and other elements.

pytest Test Execution Example
You can also run tests using pytest as follows.
metadata:
  name: test-agent
jobs:
  unit_test:
    runs-on: [linux,pytest]
    steps:
      - name: Clone source code
        uses: "actions/checkout@v2"
        with:
          repository: "https://github.com/mima3/test_asyncio"
          ref: "main"
      - name: Initial setup
        run: pip install --no-input -q aiofiles
      - name: Run tests
        run: |
          cd test_asyncio/py313
          pytest -q --junitxml=junit.xml -o junit_family=xunit2 -o junit_duration_report=call test
      - name: Upload results
        run: |
          echo "::upload type=application/xml,name=junit.xml::`pwd`/test_asyncio/py313/junit.xml"
The tests run and the results are uploaded in JUnit format, but no executionreport.html is generated. This is because no pytest provider exists: when no provider supports a framework, the tests themselves still execute, but the corresponding report is not produced. You would therefore either have to write a provider yourself or do without the report.
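Since the uploaded junit.xml is plain JUnit XML, one stopgap is to summarize it locally after downloading it with cp. A minimal sketch using only the standard library (the sample file and its counts are made up for illustration; real reports carry more fields):

```python
import xml.etree.ElementTree as ET

# Aggregate test counts from a JUnit XML file (stdlib only).
def summarize_junit(path):
    root = ET.parse(path).getroot()
    # The root may be a single <testsuite> or a <testsuites> wrapper.
    suites = [root] if root.tag == "testsuite" else root.iter("testsuite")
    totals = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0}
    for suite in suites:
        for key in totals:
            totals[key] += int(suite.get(key, 0))
    return totals

# Hypothetical sample report for demonstration:
sample = """<?xml version="1.0"?>
<testsuites>
  <testsuite name="py313" tests="3" failures="1" errors="0" skipped="0"/>
</testsuites>"""
with open("junit_sample.xml", "w") as f:
    f.write(sample)
print(summarize_junit("junit_sample.xml"))  # -> {'tests': 3, 'failures': 1, 'errors': 0, 'skipped': 0}
```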
Summary
In this article, I summarized the results of investigating and experimenting with OpenTestFactory. The experiments confirmed that it works as a test execution platform, but I could not find a significant advantage over established CI/CD tools.
Moreover, because the project is not widely known, gathering information and troubleshooting problems was quite difficult, so adoption should be decided cautiously.
That said, it remains worth considering for use cases that prioritize integration with the Squash TM test management tool or minimizing the startup cost of test execution.