Simplifying PyTorch Environment Setup with uv: A Unified Approach for Cross-Platform and CPU/GPU Support
3-line Summary
- PyTorch environment setup procedures vary by OS (Mac/Linux) and GPU presence, making them hard to unify across a team.
- By combining uv's `optional-dependencies` with a `Makefile`, we built a system that automatically detects environmental differences and switches the installation source.
- With a single command, `make install`, the optimal PyTorch build (CPU or GPU version) is installed automatically on macOS/Linux and in any CPU/GPU environment, simplifying environment setup.
The sample code for this article is available in the following repository.
1. Introduction: Common Issues in PyTorch Environment Setup
In machine learning projects, especially development using PyTorch, it's not uncommon to run into trouble with environment setup.
- Development members' OSs are scattered (Mac, Linux), making it impossible to unify procedures.
- The PyTorch version to install varies by environment—for example, the local PC is the CPU version, while the experimental server is the GPU version.
- As a result, environment-specific instruction manuals are required, leading to longer setup times and a higher likelihood of errors.
In this article, I will introduce a method to solve these issues using the Python package management tool uv, which is becoming the modern de facto standard.
Target Audience
- Those developing machine learning projects using PyTorch.
- Those who want to unify environment setup across a team.
- Those who know the basic usage of `uv`.
*Note: Please refer to the official documentation for instructions on how to install uv itself.
Prerequisites
To execute the steps in this article, you must have the following installed:
- `uv` (≥ 0.5.3)
- `make` command (pre-installed on Linux/macOS)
- (For GPU environments) NVIDIA CUDA Toolkit 12.6
Goal of this Article
The goal of this article is to create a state where simply executing a single command, make install, automatically installs the optimal PyTorch (CPU or GPU version) regardless of the environment.
Time required: Approx. 15 minutes (excluding uv installation)
For example, in team development, "Person A's macOS/CPU machine" and "Person B's Linux/GPU machine" can use the exact same repository and command to build a PyTorch environment optimized for their respective machines.
2. Overview of Automating PyTorch Installation with uv
The overall architecture of the system we are building is shown in the following flow diagram.
By combining uv and Makefile, the following processes are automated:
1. The user executes the `make install` command.
2. The Makefile determines whether an NVIDIA GPU (CUDA) exists in the execution environment.
3. Based on the result, it instructs `uv` to use either the CPU version (`--extra=cpu`) or the GPU version (`--extra=gpu`).
4. Based on the current OS (Mac/Linux) and the specified extra, `uv` selects the optimal PyTorch download source (index) configured in `pyproject.toml`.
5. The appropriate PyTorch build is installed from the selected source.
The core of this mechanism lies in the definition of dependencies within pyproject.toml. The next section will explain the specific configuration method.
3. Why use uv?
uv is a Python package management tool, similar to pip, poetry, or pipenv. Its main feature is processing speed, thanks to its Rust implementation, and it improves the development experience while maintaining compatibility with existing pip and pip-tools workflows.
The reason for using uv in this article is that while traditional tools like pipenv required splitting complex dependencies into multiple files, uv allows managing them in a single file.
Challenges with package management in pipenv
For packages like PyTorch, where the download source (index URL) differs between CPU and GPU versions, pipenv faced significant challenges. Since pipenv cannot describe settings to switch download sources based on the OS within a Pipfile, workarounds like managing separate Pipfile.lock files for Linux and macOS were necessary. This leads to increased repository complexity and management costs.
Solution with uv
On the other hand, uv allows you to write settings in pyproject.toml that dynamically switch download sources per OS or environment using markers (PEP 508 environment markers such as `platform_system`).
This makes it possible to include dependency information for all environments—including macOS/Linux and CPU/GPU—within a single uv.lock file.
This capability is why uv was chosen for this article.
4. Cross-platform PyTorch Installation Procedure
From here, I will explain the specific configuration method in three steps.
Step 1: Define Dependencies in pyproject.toml
You will describe the settings in pyproject.toml to instruct uv on "which package to download from where for which environment." This is a crucial setting for achieving cross-platform installation.
The overall configuration is shown below, followed by a detailed explanation of each section.
```toml
[project]
name = "blog-install-pytorch-cpu-gpu"
version = "0.1.0"
description = "Sample code for a blog explaining how to install CPU/GPU PyTorch"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "numpy~=2.2.5",
    "loguru~=0.7.3"
]

# --- Main part of this article starts here ---
[project.optional-dependencies]
cpu = ["torch==2.7.0"]
gpu = ["torch==2.7.0"]

[tool.uv]
# Prevents both "cpu" and "gpu" from being specified at the same time
conflicts = [[{ extra = "cpu" }, { extra = "gpu" }]]

[tool.uv.sources]
torch = [
    # macOS (CPU version) is obtained from PyPI
    { index = "pytorch-cpu-mac", extra = "cpu", marker = "platform_system == 'Darwin'" },
    # Linux (CPU version) is obtained from the PyTorch-specific index
    { index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
    # GPU version is obtained from the dedicated index for all OSs
    { index = "pytorch-gpu", extra = "gpu" },
]

[[tool.uv.index]]
name = "pytorch-cpu-mac"
url = "https://pypi.python.org/simple"
explicit = true  # This index is used only when explicitly specified

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-gpu"
url = "https://download.pytorch.org/whl/cu126"  # Modify if using a different CUDA version
explicit = true
```
Explanation: The Role of Each Section

- `[project.optional-dependencies]`
  Defines two additional dependency groups ("extras"): `cpu` and `gpu`. While `torch` itself is a required library, deliberately placing it in `optional-dependencies` allows choosing between `cpu` and `gpu` at install time.
- `[tool.uv.sources]`
  Specifies where to download the `torch` package based on conditions:
  - If `extra = "cpu"` and `platform_system == 'Darwin'` (specified by `marker`), the index named `pytorch-cpu-mac` is used.
  - If `extra = "cpu"` and the OS is not macOS, the index named `pytorch-cpu` is used.
  - If `extra = "gpu"`, the index named `pytorch-gpu` is used.

  In this way, the combination of `extra` and `marker` switches the download source according to the environment.
- `[[tool.uv.index]]`
  Defines the concrete URL for each index referenced in `[tool.uv.sources]`:
  - `pytorch-cpu-mac`: standard PyPI (https://pypi.python.org/simple)
  - `pytorch-cpu`: PyTorch's dedicated repository for CPU builds
  - `pytorch-gpu`: PyTorch's dedicated repository for GPU (CUDA 12.6) builds

  `explicit = true` ensures that `uv` does not search these indexes automatically; each is used only when explicitly referenced in `[tool.uv.sources]`.

Why put torch in optional-dependencies?
Although `torch` is a required library, writing it in `optional-dependencies` lets us dynamically select which build to obtain from where at install time. If it were placed in the standard `dependencies`, this flexibility would be lost.
With this configuration, uv will automatically select the CPU index if --extra=cpu is specified in the installation command, or the GPU index if --extra=gpu is specified, taking the OS environment into account.
Step 2: Create Lock Files for All Environments with uv lock
Once pyproject.toml is ready, run the following command in your terminal:
```shell
# Execute this command in the project root directory
uv lock
```
When you run this command, uv analyzes the settings in pyproject.toml and writes dependency information corresponding to all combinations of CPU/GPU and macOS/Linux into a file called uv.lock.
This uv.lock file acts as a "blueprint" to guarantee the same installation results in any environment. Project members can share this file via a Git repository so that everyone uses the same versions of the packages.
Step 3: Automate Installation with a Makefile
Finally, write the command executed by the user and the logic to determine the presence of a GPU in a Makefile. This allows users to use a unified command, make install, without having to worry about their environment.
Create a file named Makefile in the root directory of your project with the following content:
```makefile
# Note: Use tab characters for indentation in Makefiles
# Set HAS_CUDA to 1 if the `nvcc` command exists, otherwise 0
HAS_CUDA := $(shell command -v nvcc > /dev/null 2>&1 && echo 1 || echo 0)

.PHONY: install
install: ## Project setup
	@if [ $(HAS_CUDA) -eq 1 ]; then \
		echo "✅ GPU (CUDA) environment detected. Installing GPU version of PyTorch."; \
		uv sync --all-groups --extra=gpu; \
	else \
		echo "✅ CPU environment detected. Installing CPU version of PyTorch."; \
		uv sync --all-groups --extra=cpu; \
	fi
```
Explanation: The Role of the Makefile
This Makefile performs the following processes when the make install command is executed:
- `HAS_CUDA := ...`
  First, it checks whether the `nvcc` command is executable (i.e., on the `PATH`). `nvcc` is NVIDIA's CUDA compiler command; if it exists, the environment can be assumed capable of using a GPU (CUDA). The result is stored in the `HAS_CUDA` variable as `1` (exists) or `0` (does not exist).
- `install:`
  The body of `make install`. The conditional `if [ $(HAS_CUDA) -eq 1 ]` switches the command to execute based on the value of `HAS_CUDA`:
  - If a GPU exists: runs `uv sync --all-groups --extra=gpu`. Because `--extra=gpu` is specified, `uv` includes the `gpu` group (the GPU build of `torch`) defined in `[project.optional-dependencies]` of `pyproject.toml` in the installation targets.
  - If no GPU exists: runs `uv sync --all-groups --extra=cpu`, which installs the CPU build of `torch` in the same way.

`uv sync` installs (synchronizes) packages into the environment according to the contents of the `uv.lock` file.
Now, simply running make install will automatically call the appropriate uv sync command for the environment.
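For environments without `make`, the same detection logic can be sketched in plain Python. This is a hypothetical helper, not part of the sample repository; it reproduces the Makefile's `command -v nvcc` check with `shutil.which`:

```python
import shutil


def install_command() -> list[str]:
    """Build the uv sync invocation that the Makefile would run."""
    # Same check as `command -v nvcc` in the Makefile
    has_cuda = shutil.which("nvcc") is not None
    extra = "gpu" if has_cuda else "cpu"
    return ["uv", "sync", "--all-groups", f"--extra={extra}"]


print(install_command())
```

To actually perform the installation, pass the result to `subprocess.run(install_command(), check=True)`.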
5. How to Use: Just Run make install
Everything is now ready.
Project setup can be completed in the following two steps.
1. Generate the lock file (first time only)

   If you edit `pyproject.toml` or have just cloned the project, first run `uv lock` to generate or update the `uv.lock` file.

   ```shell
   uv lock
   ```

2. Build the environment

   After that, simply run the following command in any environment (macOS/CPU, Linux/GPU, etc.). The Makefile automatically determines the presence of a GPU and installs the PyTorch build best suited to your environment. When a new member joins the team, they can start developing immediately just by cloning the repository and running `make install`.

   ```shell
   make install
   ```
Verifying the Installation
You can check if the environment was built correctly with the following command:
```shell
uv run python -c "import torch; print(f'PyTorch {torch.__version__}'); print(f'CUDA available: {torch.cuda.is_available()}')"
```
Expected output:
- CPU environment: `CUDA available: False`
- GPU environment: `CUDA available: True`
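A slightly more defensive check, shown here as a hypothetical helper (not part of the repository), also covers the case where `torch` is missing entirely, which is useful in CI scripts:

```python
import importlib.util


def torch_backend() -> str:
    """Return 'missing', 'cpu', or 'gpu' depending on the installed torch build."""
    if importlib.util.find_spec("torch") is None:
        return "missing"
    import torch  # imported lazily so the check also works without torch installed

    return "gpu" if torch.cuda.is_available() else "cpu"


print(torch_backend())
```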
6. Troubleshooting
Here are some anticipated issues and their solutions.
Error: No matching distribution found for torch
If this error occurs during uv sync or uv lock, there are several possible causes.
- Cause: incorrect index settings in `pyproject.toml`, or network issues.
- Solutions:
  - Try running `uv lock --refresh` to regenerate the lock file.
  - Verify that the URLs configured in `[[tool.uv.index]]` of `pyproject.toml` are correct (especially the CUDA version).
  - Check for network connection issues, such as firewalls.
CPU version is installed on a GPU machine
- Cause: the `Makefile` cannot correctly identify the GPU environment. This happens when the `nvcc` command is not on your `PATH`.
- Solutions:
  - Run `which nvcc` (macOS/Linux) in your terminal to check the location of the `nvcc` command.
  - If the command is not found, check that the CUDA Toolkit is correctly installed.
  - If it is installed but still not found, review the `PATH` environment variable settings in files like `.bashrc` or `.zshrc`.
```shell
# Example: Add to .bashrc
export PATH=/usr/local/cuda/bin:$PATH
```
7. Appendix: Comparison with the --torch-backend=auto Option
uv also features a preview capability: uv pip install torch --torch-backend=auto.
This feature is extremely convenient because it automatically detects the local environment's CUDA version and installs the optimal PyTorch. Users do not need to worry about specific CUDA versions (e.g., 12.1, 12.6).
However, this method is an approach that "discovers" the optimal package at installation time, rather than locking the dependencies for all environments in advance via uv lock. Consequently, using this feature means you cannot achieve the goal of this article: "supporting all OS, CPU, and GPU environments with a single lock file."
If you need to guarantee strictly identical dependencies across all members and environments in team development, the method of defining detailed settings in pyproject.toml as introduced in this article is more suitable.
8. Summary
In this article, we explained how to use uv to abstract environmental differences such as OS and the presence of a GPU, and automate PyTorch installation with a single command.
The benefits of this approach are as follows:
- Simplified environment setup: anyone can set up the environment with a single `make install` command.
- Centralized configuration management: dependencies for all environments are managed via `pyproject.toml` and `uv.lock`.
- CI/CD compatibility: the workflow becomes simpler, improving maintainability.