Apache Polaris (release/1.1.x) build log — Rancher Desktop (containerd/nerdctl)

Note:
If you can use Docker Desktop or a standard Docker environment, please follow the official quickstart guide.
The official assets are well-maintained and should work out-of-the-box.
The steps below are an adapted version for environments where Docker Desktop cannot be used (e.g., Rancher Desktop with nerdctl
or other containerd-based setups).
My machine falls into this category, so I prepared these modified assets for local testing.
Environment
- Rancher Desktop (containerd / nerdctl)
- JDK 21
- Polaris branch: release/1.1.x
Repository setup
# Clone if not already present
git clone https://github.com/apache/polaris.git
cd polaris
# Switch to release branch
git fetch origin
git checkout release/1.1.x
Build steps
# Quarkus fast-jar build
./gradlew :runtime:server:quarkusBuild -Dquarkus.package.type=fast-jar
./gradlew :runtime:admin:quarkusBuild -Dquarkus.package.type=fast-jar
# Build images with nerdctl (instead of Docker)
nerdctl build -t polaris-server:local \
-f runtime/server/src/main/docker/Dockerfile.jvm \
runtime/server
nerdctl build -t polaris-admin:local \
-f runtime/admin/src/main/docker/Dockerfile.jvm \
runtime/admin
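The image builds copy the Quarkus fast-jar output, so it can help to confirm the layout exists first. A small sketch (paths follow the Gradle commands above; the status messages are illustrative):

```shell
#!/bin/sh
# Sketch: confirm the Quarkus fast-jar layout exists before building images.
# Prints one status line per application.
for app in server admin; do
  jar="runtime/$app/build/quarkus-app/quarkus-run.jar"
  if [ -f "$jar" ]; then
    echo "$app: fast-jar ok"
  else
    echo "$app: fast-jar missing ($jar)"
  fi
done
```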
Artifacts
- polaris-server:local
- polaris-admin:local
Verification
nerdctl images | grep polaris

Run Polaris (standalone mode)
We have two main ways to start Polaris in standalone mode after building it:
1. Using Gradle directly
This may be convenient for development and testing:
./gradlew run -Dpolaris.bootstrap.credentials=POLARIS,root,secret
- Runs the server in Quarkus dev mode
- Supports hot reload for code changes (to be confirmed)
2. Running the packaged JAR
This is closer to "production style":
java -Dpolaris.bootstrap.credentials=POLARIS,root,secret \
  -jar runtime/server/build/quarkus-app/quarkus-run.jar
- Runs the packaged Quarkus "fast-jar"
- No Gradle wrapper needed at runtime
After startup, Polaris uses the following default ports in both modes:
- API: http://localhost:8181
- Admin / Health: http://localhost:8182
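Either way, you can wait for the server from a shell before issuing requests. A sketch using the Quarkus health route on the management port (URL and retry count are illustrative defaults):

```shell
#!/bin/sh
# Sketch: poll the management health endpoint until the server answers.
# URL and ATTEMPTS are illustrative defaults; override via the environment.
URL=${URL:-http://localhost:8182/q/health}
ATTEMPTS=${ATTEMPTS:-3}
up=0
i=0
while [ "$i" -lt "$ATTEMPTS" ]; do
  if curl -fsS "$URL" >/dev/null 2>&1; then
    up=1
    break
  fi
  i=$((i + 1))
  sleep 1
done
if [ "$up" -eq 1 ]; then
  echo "polaris is up at $URL"
else
  echo "polaris not reachable at $URL after $ATTEMPTS attempts"
fi
```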

Run Polaris (containerized)
Tag with officially expected name
Before running the containers, tag your locally built images with the names that the official docker-compose
assets expect:
nerdctl tag polaris-server:local apache/polaris:latest
nerdctl tag polaris-admin:local apache/polaris-admin-tool:latest
This ensures that your local build is compatible with the compose files without further modification.
Prepare assets
Copy the official getting-started assets into your working directory (polaris-test in this example).
Some files may need slight modification for your environment.
cd polaris-test
cp ${polaris-repo-dir}/getting-started/assets/postgres/docker-compose-postgres.yml .
cp ${polaris-repo-dir}/getting-started/assets/postgres/postgresql.conf .
cp ${polaris-repo-dir}/getting-started/jdbc/docker-compose-bootstrap-db.yml .
cp ${polaris-repo-dir}/getting-started/jdbc/docker-compose.yml .
cp ${polaris-repo-dir}/getting-started/assets/polaris/create-catalog.sh .
cp ${polaris-repo-dir}/getting-started/assets/polaris/obtain-token.sh .
docker-compose-postgres.yml
Defines the PostgreSQL service used by Polaris for persistence.
- Uses Postgres 17.6.
- Configured with POLARIS as the default database.
- Mounts a local postgresql.conf for custom tuning.
- Provides a health check (pg_isready).
services:
  postgres:
    image: postgres:17.6
    ports:
      - "5432:5432"
    # set shared memory limit when using docker-compose
    shm_size: 128mb
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: POLARIS
      POSTGRES_INITDB_ARGS: "--encoding UTF8 --data-checksums"
    volumes:
      # Bind local conf file to a convenient location in the container
      - type: bind
        source: ./postgresql.conf # to be modified for your test environment
        target: /etc/postgresql/postgresql.conf
    command:
      - "postgres"
      - "-c"
      - "config_file=/etc/postgresql/postgresql.conf"
    healthcheck:
      test: "pg_isready -U postgres"
      interval: 5s
      timeout: 2s
      retries: 15
docker-compose-bootstrap-db.yml
Runs the Polaris admin tool once to initialize the database schema and create the first realm and credentials.
services:
  polaris-bootstrap:
    image: ${POLARIS_ADMIN_IMAGE:-apache/polaris-admin-tool:latest}
    environment:
      - POLARIS_PERSISTENCE_TYPE=relational-jdbc
      - QUARKUS_DATASOURCE_JDBC_URL=${QUARKUS_DATASOURCE_JDBC_URL}
      - QUARKUS_DATASOURCE_USERNAME=${QUARKUS_DATASOURCE_USERNAME}
      - QUARKUS_DATASOURCE_PASSWORD=${QUARKUS_DATASOURCE_PASSWORD}
    command:
      - "bootstrap"
      - "--realm=POLARIS"
      - "--credential=POLARIS,${CLIENT_ID},${CLIENT_SECRET}"
This ensures that the POLARIS realm and the root client (CLIENT_ID/CLIENT_SECRET) exist before starting the Polaris server.
docker-compose-polaris.yml
Defines the Polaris server itself.
- Runs on ports 8181 (API), 8182 (management/health), and optionally 5005 (debugger).
- Connects to the PostgreSQL database using JDBC.
- Configured with realm and bootstrap credentials.
- Development-friendly: allows insecure storage types (FILE/S3/GCS/Azure) and ignores readiness checks.
services:
  polaris:
    image: ${POLARIS_IMAGE:-apache/polaris:latest}
    ports:
      # API port
      - "8181:8181"
      # Management port (metrics and health checks)
      - "8182:8182"
      # Optional, allows attaching a debugger to the Polaris JVM
      - "5005:5005"
    environment:
      JAVA_DEBUG: true
      JAVA_DEBUG_PORT: "*:5005"
      POLARIS_PERSISTENCE_TYPE: relational-jdbc
      POLARIS_PERSISTENCE_RELATIONAL_JDBC_MAX_RETRIES: 5
      POLARIS_PERSISTENCE_RELATIONAL_JDBC_INITIAL_DELAY_IN_MS: 100
      POLARIS_PERSISTENCE_RELATIONAL_JDBC_MAX_DURATION_IN_MS: 5000
      QUARKUS_DATASOURCE_JDBC_URL: $QUARKUS_DATASOURCE_JDBC_URL
      QUARKUS_DATASOURCE_USERNAME: $QUARKUS_DATASOURCE_USERNAME
      QUARKUS_DATASOURCE_PASSWORD: $QUARKUS_DATASOURCE_PASSWORD
      POLARIS_REALM_CONTEXT_REALMS: POLARIS
      QUARKUS_OTEL_SDK_DISABLED: true
      POLARIS_BOOTSTRAP_CREDENTIALS: POLARIS,$CLIENT_ID,$CLIENT_SECRET
      polaris.features."ALLOW_INSECURE_STORAGE_TYPES": "true"
      polaris.features."SUPPORTED_CATALOG_STORAGE_TYPES": "[\"FILE\",\"S3\",\"GCS\",\"AZURE\"]"
      polaris.readiness.ignore-severe-issues: "true"
    healthcheck:
      test: ["CMD", "curl", "http://localhost:8182/q/health"]
      interval: 2s
      timeout: 10s
      retries: 10
      start_period: 10s

Run Polaris (containerized), continued
Example .env
# --- Required ---
QUARKUS_DATASOURCE_JDBC_URL=jdbc:postgresql://postgres:5432/POLARIS
QUARKUS_DATASOURCE_USERNAME=postgres
QUARKUS_DATASOURCE_PASSWORD=postgres
# Initial bootstrap credentials
CLIENT_ID=root
CLIENT_SECRET=s3cr3t
# Default storage location for the quickstart catalog
STORAGE_LOCATION=quickstart_catalog
# --- Optional overrides (use these if you want to test with local builds) ---
POLARIS_IMAGE=apache/polaris:latest
POLARIS_ADMIN_IMAGE=apache/polaris-admin-tool:latest
- The QUARKUS_DATASOURCE_* variables configure the JDBC connection to Postgres.
- CLIENT_ID and CLIENT_SECRET define the root credentials used during bootstrap.
- STORAGE_LOCATION is passed to the catalog creation script (e.g., file:///var/tmp/quickstart_catalog/).
- If you’ve built Polaris locally and tagged your images (as shown earlier), you can override the image names via POLARIS_IMAGE and POLARIS_ADMIN_IMAGE.
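Since every compose file references these variables, a quick sanity check of the .env file can save a confusing half-started stack. A sketch (demonstrated against a throwaway copy of the example .env; the variable list mirrors the required section above):

```shell
#!/bin/sh
# Sketch: verify a .env file defines every variable the compose files need.
# Demonstrated on a throwaway copy of the example .env from above.
tmp=$(mktemp -d)
cat > "$tmp/.env" <<'EOF'
QUARKUS_DATASOURCE_JDBC_URL=jdbc:postgresql://postgres:5432/POLARIS
QUARKUS_DATASOURCE_USERNAME=postgres
QUARKUS_DATASOURCE_PASSWORD=postgres
CLIENT_ID=root
CLIENT_SECRET=s3cr3t
EOF
missing=0
for v in QUARKUS_DATASOURCE_JDBC_URL QUARKUS_DATASOURCE_USERNAME \
         QUARKUS_DATASOURCE_PASSWORD CLIENT_ID CLIENT_SECRET; do
  grep -q "^$v=" "$tmp/.env" || { echo "missing: $v"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "env ok"
rm -rf "$tmp"
```

Point the `cat >` redirection at your real .env (or just the grep loop) to check the file you actually pass to `--env-file`.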

Run Polaris with Containerized Setup
This walkthrough shows how to start Polaris and bootstrap it with a Postgres backend, the Admin tool, and a sample catalog.
1. Start Postgres
nerdctl compose --env-file .env -f docker-compose-postgres.yml up -d
- Brings up a Postgres 17.6 container.
- Database POLARIS will be created with user postgres/postgres.
- Healthcheck ensures Postgres is ready before moving on.
Check: run nerdctl ps and verify port 5432 is listening.
You can also connect with:
psql -h localhost -U postgres -d POLARIS
2. Run Admin Tool (bootstrap database)
nerdctl compose --env-file .env -f docker-compose-bootstrap-db.yml up
- Runs apache/polaris-admin-tool:latest.
- Creates the required schema polaris_schema in Postgres.
- Bootstraps the POLARIS realm and initial credentials (CLIENT_ID / CLIENT_SECRET from .env).
Check:
psql -h localhost -U postgres -d POLARIS -c "\dt polaris_schema.*"
You should see tables like entities, grant_records, etc.
3. Start Polaris Server
nerdctl compose --env-file .env -f docker-compose-polaris.yml up -d
- Runs the apache/polaris:latest server.
- Connects to the Postgres backend.
- Exposes:
- API: http://localhost:8181
- Admin/health: http://localhost:8182
Check health:
curl http://localhost:8182/q/health
Expected output:
{
"status": "UP",
"checks": [
{
"name": "Database connections health check",
"status": "UP"
}
]
}
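For scripting, you can gate later steps on that response instead of eyeballing it. A sketch of the extraction step, shown with a canned response shaped like the expected output above (in practice set HEALTH from `curl -s http://localhost:8182/q/health`):

```shell
#!/bin/sh
# Sketch: extract the overall "status" field from a health response and
# gate on it. HEALTH is a canned response here; in practice use
#   HEALTH=$(curl -s http://localhost:8182/q/health)
HEALTH='{"status":"UP","checks":[{"name":"Database connections health check","status":"UP"}]}'
STATUS=$(printf '%s' "$HEALTH" | sed -n 's/^{"status":"\([A-Z]*\)".*/\1/p')
if [ "$STATUS" = "UP" ]; then
  echo "polaris healthy"
else
  echo "polaris not ready (status: $STATUS)"
fi
```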
4. Create a Catalog
Use the provided helper script:
bash create-catalog.sh
This script will:
- Obtain an OAuth token using the root client (CLIENT_ID, CLIENT_SECRET).
- POST a new catalog definition (quickstart_catalog) to Polaris.
⚠️ Note:
The helper scripts (create-catalog.sh, obtain-token.sh) provided in getting-started/assets/polaris assume that you are running them inside the same Docker Compose network, so they target the service by name (http://polaris:8181).
If you run them on the host machine (outside the container network), you need to modify the scripts to use http://localhost:8181 instead.
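Alternatively, you can request a token from the host yourself. The endpoint path in the comment below is an assumption based on the Iceberg REST OAuth2 convention; check obtain-token.sh for the exact path your version uses. The extraction step is shown with a canned response so it runs standalone:

```shell
#!/bin/sh
# Sketch: fetch a token from the host instead of editing the scripts.
# The endpoint path is an assumption; verify it against obtain-token.sh:
#   RESPONSE=$(curl -s -X POST http://localhost:8181/api/catalog/v1/oauth/tokens \
#     -d grant_type=client_credentials \
#     -d client_id=root -d client_secret=s3cr3t \
#     -d scope=PRINCIPAL_ROLE:ALL)
# Canned response so the access_token extraction is demonstrable offline:
RESPONSE='{"access_token":"dummy-token","token_type":"bearer","expires_in":3600}'
TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "$TOKEN"
```

With a real RESPONSE, export TOKEN and reuse it in the check below.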
Check:
curl -s -H "Authorization: Bearer $TOKEN" \
-H "Polaris-Realm: POLARIS" \
http://localhost:8181/api/management/v1/catalogs | jq .
Expected output (simplified):
{
"catalogs": [
{
"name": "quickstart_catalog",
"type": "INTERNAL",
"storageConfigInfo": {
"storageType": "FILE",
"allowedLocations": [
"file:///var/tmp/quickstart_catalog/"
]
}
}
]
}
✅ At this point, Polaris is fully running with:
- Postgres persistence
- Realm and credentials bootstrapped
- A working catalog (quickstart_catalog)

Trino with Polaris (CRUD Quickstart)
1. Architecture
Trino ──(OAuth2)──> Polaris REST Catalog
│
└─> Warehouse storage (local FS or S3/MinIO)
- Polaris handles the catalog and issues OAuth2-protected APIs.
- Trino connects to Polaris and writes data files (Parquet + metadata) to the warehouse location.
- In this example: warehouse = /var/tmp inside the Trino container (mounted as a volume).
2. docker-compose-trino.yml
services:
  trino:
    image: trinodb/trino:latest
    ports:
      - "8080:8080"
    environment:
      - CLIENT_ID=${CLIENT_ID}
      - CLIENT_SECRET=${CLIENT_SECRET}
    volumes:
      - ./trino-config/catalog:/etc/trino/catalog
      - whdata:/var/tmp
volumes:
  whdata:
- Use a named volume whdata so that warehouse data (Parquet + metadata) persists.
3. Trino Catalog Config
Create trino-config/catalog/iceberg.properties:
connector.name=iceberg
iceberg.catalog.type=rest
iceberg.rest-catalog.uri=http://polaris:8181/api/catalog
iceberg.rest-catalog.security=OAUTH2
iceberg.rest-catalog.oauth2.credential=${ENV:CLIENT_ID}:${ENV:CLIENT_SECRET}
iceberg.rest-catalog.oauth2.scope=PRINCIPAL_ROLE:ALL
iceberg.rest-catalog.warehouse=quickstart_catalog
# Enable Hadoop-style FS for local file system support
fs.hadoop.enabled=true
- warehouse must match the Polaris catalog name.
- If using S3/MinIO instead of FS, configure S3 credentials/endpoint here.
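When debugging the connector config, it can be easier to render the properties file with the credentials baked in rather than relying on Trino's ${ENV:...} substitution. A sketch (values mirror the example .env; the output directory is a throwaway and the file path is illustrative):

```shell
#!/bin/sh
# Sketch: render iceberg.properties with credentials inlined instead of
# using Trino's ${ENV:...} substitution. Values mirror the example .env.
CLIENT_ID=root
CLIENT_SECRET=s3cr3t
outdir=$(mktemp -d)
cat > "$outdir/iceberg.properties" <<EOF
connector.name=iceberg
iceberg.catalog.type=rest
iceberg.rest-catalog.uri=http://polaris:8181/api/catalog
iceberg.rest-catalog.security=OAUTH2
iceberg.rest-catalog.oauth2.credential=${CLIENT_ID}:${CLIENT_SECRET}
iceberg.rest-catalog.oauth2.scope=PRINCIPAL_ROLE:ALL
iceberg.rest-catalog.warehouse=quickstart_catalog
fs.hadoop.enabled=true
EOF
# Show the rendered credential line to confirm substitution happened
CRED_LINE=$(grep oauth2.credential "$outdir/iceberg.properties")
echo "$CRED_LINE"
rm -rf "$outdir"
```

Copy the rendered file into trino-config/catalog/ if you go this route; remember it then contains the secret in plain text.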
4. Start Trino CLI
# trino cli
curl -L https://repo1.maven.org/maven2/io/trino/trino-cli/437/trino-cli-437-executable.jar -o trino
chmod +x trino
Connect:
./trino --server http://localhost:8080 --catalog iceberg
5. CRUD demo
-- Create schema (namespace)
create schema iceberg.default;
-- Create table
create table iceberg.default.events (
id bigint,
ts timestamp,
message varchar
);
-- Insert data
insert into iceberg.default.events values (1, current_timestamp, 'hello');
insert into iceberg.default.events values (2, current_timestamp, 'world');
-- Query
select * from iceberg.default.events;
-- Show definition
show create table iceberg.default.events;
-- Confirms format = 'PARQUET', format_version = 2, and location = file:///var/tmp/....
-- Truncate
truncate table iceberg.default.events;
Notes
- Trino UI at http://localhost:8080 shows cluster overview and running queries.
- Never delete files manually in /var/tmp/quickstart_catalog/...; doing so breaks catalog consistency.
- Local FS (/var/tmp) works fine for demos but requires volume mounting to persist across container restarts.
- For production: use S3 or MinIO with proper credential vending.