
An Editor-Agnostic Tool for In-Editor Test Results


I have built an LSP server and its peripheral tools that can display test results directly in the editor.

https://github.com/kbwo/testing-language-server

⚠️ Notice

The usage of the tools described in this article is outdated and has not kept up with recent updates. If you plan to use them, refer to the repository's README or open an issue as needed.

Motivation

I saw this article a few months ago.

Improving Frontend Development Efficiency with Wallaby.js - Findy Tech Blog https://tech.findy.co.jp/entry/2024/04/15/100523

A development experience where test results are reflected in the editor in real time looked very attractive. Since I usually work in Neovim, I immediately wanted something similar there, but Wallaby.js did not appear to offer a Neovim extension, so it was ruled out at that point. Looking into it further, Wallaby.js seems to be finely tuned for JavaScript testing frameworks.
A tool tuned for a specific ecosystem can deliver excellent performance and usability when the conditions are right.
However, I wanted something more versatile: a framework that can be used or extended without being tied to a specific language or editor.

There is an existing tool that brings test results into the editor in a similar way:
https://github.com/nvim-neotest/neotest
It satisfies my wish for something versatile that works with any language and can be extended.
However, it has one property I didn't like: it depends on Neovim. Neovim is my main editor and I rarely use anything else, but I don't want to be locked into it; I'd like to be free to try other editors when the opportunity arises. Just as Wallaby.js is tied to VS Code, neotest being tied to Neovim did not fit my philosophy.

Therefore, I decided to create a tool that is not tied to an editor.
When I thought about implementing something standardized and editor-agnostic, LSP was the first thing that came to mind; there are already several LSP server implementations with the kind of versatility I was aiming for.

An LSP server can provide its functionality without being tied to any particular editor, so nearly identical features can be realized in both Neovim and VS Code. I also figured that displaying test failures as LSP diagnostics would be convenient enough, so I tried building it.
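As a sketch of that idea, a failing test could reach the editor as a standard textDocument/publishDiagnostics notification. The message shape below follows the LSP specification; the file path, range, and message are made-up values for illustration:

```json
{
  "jsonrpc": "2.0",
  "method": "textDocument/publishDiagnostics",
  "params": {
    "uri": "file:///path/to/project/src/lib.rs",
    "diagnostics": [
      {
        "range": {
          "start": { "line": 12, "character": 4 },
          "end": { "line": 12, "character": 30 }
        },
        "severity": 1,
        "source": "testing-language-server",
        "message": "assertion failed: left == right"
      }
    ]
  }
}
```

Because any LSP client already knows how to render this notification, every editor with LSP support gets in-editor test failures for free.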

What I Made

VS Code

coc.nvim

As shown above, when you save a file after editing, the server runs the relevant tests and, if any fail, reports the failures as diagnostics.

Currently, it supports the following test frameworks:

  • cargo test
  • cargo nextest
  • jest
  • deno test
  • go test
  • phpunit
  • vitest

How to Install

Installing the LSP Server

Binary distribution is not yet available, so you need to install with cargo install.
If the cargo command is not available, install rustup first to get it.

cargo install testing-language-server
cargo install testing-ls-adapter

Editor-specific Settings

VS Code

Install the extension from https://marketplace.visualstudio.com/items?itemName=kbwo.testing-language-server.

coc.nvim

Install the extension with :CocInstall coc-testing-ls.

Neovim builtin LSP (nvim-lspconfig)

Write a configuration like the one below. I eventually plan to submit a PR to nvim-lspconfig so that this configuration won't be necessary. Reference: https://github.com/kbwo/testing-language-server/tree/main/demo#readme
Note: support for the Neovim builtin LSP is currently the weakest of the three, offering only diagnostics from real-time test execution. As mentioned in the "Future Plans" section, I plan to provide a plugin that enables the other convenient commands.

local lspconfig = require('lspconfig')
local configs = require('lspconfig.configs')
local util = require('lspconfig.util')

configs.testing_ls = {
  default_config = {
    cmd = { "testing-language-server" },
    filetypes = { "rust" },
    root_dir = util.root_pattern(".git", "Cargo.toml"),
    init_options = {
      enable = true,
      fileTypes = { "rust" },
      adapterCommand = {
        -- Refer to "Configuring tests for each project";
        -- the settings here assume a Rust project
        rust = {
          {
            path = "testing-ls-adapter",
            extra_arg = { "--test-kind=cargo-test", "--workspace" },
            include = { "/demo/**/src/**/*.rs" },
            exclude = { "/**/target/**" },
          },
        },
      },
      enableWorkspaceDiagnostics = true,
      trace = {
        server = "verbose",
      },
    },
  },
  docs = {
    description = [[
      https://github.com/kbwo/testing-language-server

      Language Server for real-time testing.
    ]],
  },
}

lspconfig.testing_ls.setup({})

Configuring tests for each project

By placing project-specific settings in each editor environment, such as .vscode/settings.json or .vim/coc-settings.json, you can control how tests are run. This LSP server will not work as expected unless these settings are configured correctly.

It is faster to look at actual examples, so please refer to the following:
VS Code configuration example:
https://github.com/kbwo/testing-language-server/blob/main/demo/.vscode/settings.json

coc.nvim configuration examples:
The first is an example of using testing-language-server directly, and the second is an example of using it via coc-testing-ls.
https://github.com/kbwo/testing-language-server/blob/main/demo/.vim/coc-settings.json
https://github.com/kbwo/testing-language-server/blob/main/.vim/coc-settings.json

In this LSP server, the adapterCommand setting is the key, as shown below.

"adapterCommand": {
  "cargo-test": [
    {
      "path": "testing-ls-adapter",
      "extra_arg": ["--test-kind=cargo-test"],
      "include": ["/**/src/**/*.rs"],
      "exclude": ["/**/target/**"]
    }
  ],
  "cargo-nextest": [
    {
      "path": "testing-ls-adapter",
      "extra_arg": ["--test-kind=cargo-nextest"],
      "include": ["/**/src/**/*.rs"],
      "exclude": ["/**/target/**"]
    }
  ],
  "jest": [
    {
      "path": "testing-ls-adapter",
      "extra_arg": ["--test-kind=jest"],
      "include": ["/jest/*.js"],
      "exclude": ["/jest/**/node_modules/**/*"]
    }
  ],
  "vitest": [
    {
      "path": "testing-ls-adapter",
      "extra_arg": ["--test-kind=vitest"],
      "include": ["/vitest/*.test.ts", "/vitest/config/**/*.test.ts"],
      "exclude": ["/vitest/**/node_modules/**/*"]
    }
  ],
  "deno": [
    {
      "path": "testing-ls-adapter",
      "extra_arg": ["--test-kind=deno"],
      "include": ["/deno/*.ts"],
      "exclude": []
    }
  ],
  "go": [
    {
      "path": "testing-ls-adapter",
      "extra_arg": ["--test-kind=go-test"],
      "include": ["/**/*.go"],
      "exclude": []
    }
  ],
  "phpunit": [
    {
      "path": "testing-ls-adapter",
      "extra_arg": ["--test-kind=phpunit"],
      "include": ["/**/*Test.php"],
      "exclude": ["/phpunit/vendor/**/*.php"]
    }
  ]
}

This defines how to run tests for various file types within the project.
Each key (the cargo-test and jest parts) is an arbitrary identifier for a testing method, so you can name them however you like.
Let's look at the properties of the configuration object:

  • path: The path to the adapter's executable.
  • extra_arg: An array of additional arguments to be passed to the adapter.
    • What is important here is --test-kind=<runner>, which tells the adapter what type of test to handle.
      • Currently, only the types listed in the example above are supported.
    • This might not be necessary if you use another adapter instead of testing-ls-adapter.
      • For more details, refer to the Design section.
  • include: An array of glob patterns specifying files to be included as test targets.
  • exclude: An array of glob patterns specifying files or directories to be excluded from test targets.
    • exclude always takes precedence over include.
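As a minimal illustration of these properties and the precedence rule, consider a hypothetical cargo-test entry with broad patterns (the patterns here are chosen for the example, not taken from a real project):

```json
"cargo-test": [
  {
    "path": "testing-ls-adapter",
    "extra_arg": ["--test-kind=cargo-test"],
    "include": ["/**/*.rs"],
    "exclude": ["/**/target/**"]
  }
]
```

With this entry, a file like /src/lib.rs matches only include and becomes a test target, while /target/debug/main.rs matches both patterns and is skipped, because exclude always wins.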

Design

Tool Configuration

The configuration of the tool I created is divided into three parts: the LSP server (testing-language-server), the adapter (testing-ls-adapter), and extensions for each editor (such as VS Code or coc.nvim). I will explain the role each one plays.

testing-language-server:

  • The main server that implements the Language Server Protocol (LSP).
  • Responsible for communicating with editors and IDEs; it sends diagnostics and test-related editor commands to the editor.

testing-ls-adapter:

  • This is an adapter that acts as a bridge between specific test frameworks/tools and the testing-language-server.
  • A CLI that provides implementations specialized for each test framework (e.g., Cargo Test, Jest, Vitest, Deno, Go Test, PHPUnit).

Extensions for each editor:

  • LSP clients responsible for communicating with the testing-language-server.
  • In addition to providing diagnostics, they offer convenient commands for using testing-language-server.
    • Manual execution of tests at the file or workspace level, clearing unnecessary diagnostics, etc.

Why wasn't the adapter implementation included in the LSP server?

If I just wanted to implement the features provided by this tool, I could have built the specialized implementations for each test framework directly into the LSP server and had it operate as a single unit. However, the reason I separated the implementation is that I was influenced by the design of neotest, which I referred to during the conceptual stage of this tool, as mentioned in the "Motivation" section.

In neotest, the core functionality and the support for each test framework are also kept separate. An adapter can support any test framework as long as it implements the interface defined by the core.

In this tool, I achieve the same design by treating the adapter as a CLI. Any adapter that implements a specific interface can be configured via adapterCommand. (There is also documentation on the interface that must be implemented, although it is not yet fully detailed.)

By separating the adapter and designing it as a CLI, anyone can implement an adapter using their preferred language or tool. While I currently handle support for several test frameworks through testing-ls-adapter, you aren't strictly required to use it; you are free to write your own adapter. I'm not sure if this level of extensibility was truly necessary since I don't know if anyone else will use this tool, but at least for me, it's beneficial to be able to choose libraries freely without being tied to a specific language when implementing an adapter.

Incidentally, when thinking about how to make the adapter implementation independent from the LSP server and language-agnostic, I also considered approaches using RPC or Wasm. After careful consideration, I concluded that a CLI was the simplest way to achieve what I wanted.
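To make the CLI-adapter idea concrete, here is a purely hypothetical sketch of the kind of JSON an adapter might print to stdout after a test run. The actual interface is described in the repository's documentation; every field name below is invented for illustration only:

```json
{
  "results": [
    {
      "path": "/project/src/lib.rs",
      "test_id": "tests::it_adds",
      "status": "failed",
      "line": 12,
      "message": "assertion failed: left == right"
    }
  ]
}
```

Because the contract is just "run a process, read structured output," an adapter can be written in any language that can print JSON, which is exactly the extensibility described above.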

Future Plans

I will continue dogfooding and improving areas that I find unsatisfactory.

The improvements planned for now are as follows:

  • Support for more test frameworks
  • Enhancing extensions for each editor
  • More efficient test execution and caching features
  • Enhancing documentation
    • Actually, you can pass environment variables when executing test commands like jest via testing-ls-adapter, but this is an undocumented feature.
    • I often forget what can be configured myself, so I will definitely document it.

Update

I have received more feedback than expected. Thank you very much.
Since this LSP server and its peripheral tools are not yet what you could call "well-tested," I would appreciate it if you would actively open issues for any bugs or inconveniences you encounter.
