The TrueNAS web interface is convenient — but if you manage dozens of datasets, regularly audit snapshots, or coordinate replication jobs across multiple sites, manual administration quickly becomes a bottleneck. The TrueNAS REST API enables full automation of all storage operations: programmatic, reproducible, and ready to integrate with your existing monitoring and orchestration systems.
Setting Up API Access
Creating an API Key
Token-based authentication is available in TrueNAS CORE 12 and later, as well as in all TrueNAS SCALE releases. An API key replaces username and password for API calls and can be revoked at any time.
Create the key via the web interface under Settings > API Keys > Add:
- Choose a descriptive name (e.g., monitoring-prod-01)
- Copy the generated key immediately — it is only displayed once
- Store the key in a password manager or vault
Alternatively via CLI on TrueNAS SCALE:
midclt call api_key.create '{"name": "monitoring-prod-01"}'
Base URL and Authentication
The API is available at https://<truenas-ip>/api/v2.0/. All endpoints expect the API key in the Authorization header:
curl -k -H "Authorization: Bearer <API-KEY>" \
https://192.168.1.50/api/v2.0/system/info
The response returns JSON with system information: version, hostname, uptime, and hardware details. The -k flag skips certificate verification — in production environments, use a valid TLS certificate.
Managing Datasets
Listing Datasets
curl -k -H "Authorization: Bearer <API-KEY>" \
https://192.168.1.50/api/v2.0/pool/dataset
The response includes all datasets with their properties: quota, compression, mountpoint, and current usage.
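In a script, the listing is usually reduced to the handful of properties you actually monitor. A minimal sketch, assuming a response shape where properties carry a `value`/`parsed` pair (the `sample` data below is a hypothetical excerpt, not a verbatim API response):

```python
def summarize_datasets(datasets):
    """Reduce a dataset listing to name -> {compression, used_bytes}.

    Assumes each entry exposes properties as {"value": ..., "parsed": ...};
    adjust the keys to the shape your TrueNAS version returns.
    """
    summary = {}
    for ds in datasets:
        summary[ds["name"]] = {
            "compression": ds["compression"]["value"],
            "used_bytes": ds["used"]["parsed"],
        }
    return summary

# Hypothetical excerpt of an API response:
sample = [{
    "name": "tank/backups",
    "compression": {"value": "ZSTD"},
    "used": {"parsed": 52428800},
}]
print(summarize_datasets(sample))
```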
Creating a Dataset
curl -k -X POST -H "Authorization: Bearer <API-KEY>" \
-H "Content-Type: application/json" \
-d '{
"name": "tank/backups/server-web-01",
"compression": "zstd",
"quota": 107374182400,
"comments": "Backup target for production web server"
}' \
https://192.168.1.50/api/v2.0/pool/dataset
The quota is specified in bytes — 107374182400 equals 100 GiB. The API automatically validates whether the pool has sufficient capacity.
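Byte arithmetic like this is easy to get wrong by a factor of 1000 vs. 1024; a one-line helper keeps it explicit:

```python
def gib_to_bytes(gib: int) -> int:
    """Convert GiB to bytes (1 GiB = 2**30 bytes)."""
    return gib * 2**30

print(gib_to_bytes(100))  # 107374182400, the 100 GiB quota used above
```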
Modifying Dataset Properties
curl -k -X PUT -H "Authorization: Bearer <API-KEY>" \
-H "Content-Type: application/json" \
-d '{"quota": 214748364800}' \
https://192.168.1.50/api/v2.0/pool/dataset/id/tank%2Fbackups%2Fserver-web-01
Important: The dataset path in the URL must be URL-encoded — forward slashes become %2F.
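Python's standard library handles this encoding; note that `safe=""` is required, because `quote` leaves `/` untouched by default:

```python
from urllib.parse import quote

dataset = "tank/backups/server-web-01"
# safe="" forces "/" to be encoded as %2F as well
print(quote(dataset, safe=""))  # tank%2Fbackups%2Fserver-web-01
```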
Automating Snapshots
Creating a Snapshot
curl -k -X POST -H "Authorization: Bearer <API-KEY>" \
-H "Content-Type: application/json" \
-d '{
"dataset": "tank/backups/server-web-01",
"name": "auto-2026-04-21-0200",
"recursive": true
}' \
https://192.168.1.50/api/v2.0/zfs/snapshot
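Names like auto-2026-04-21-0200 encode the creation time; in a script you would generate them from the clock. A small sketch (`snapshot_name` is a hypothetical helper, not part of the API):

```python
from datetime import datetime

def snapshot_name(prefix="auto", now=None):
    """Timestamped name matching the auto-YYYY-MM-DD-HHMM pattern above."""
    now = now or datetime.now()
    return f"{prefix}-{now:%Y-%m-%d-%H%M}"

print(snapshot_name(now=datetime(2026, 4, 21, 2, 0)))  # auto-2026-04-21-0200
```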
Listing and Filtering Snapshots
# All snapshots of a dataset
curl -k -H "Authorization: Bearer <API-KEY>" \
"https://192.168.1.50/api/v2.0/zfs/snapshot?query-filters=[[\
\"dataset\",\"=\",\"tank/backups/server-web-01\"]]"
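The shell escaping around query-filters is fragile. In Python, `json.dumps` produces the same strings without manual backslashes (`snapshot_query_params` is a hypothetical helper name):

```python
import json

def snapshot_query_params(dataset, select=("id", "name", "properties")):
    """Build the query-filters / query-options strings the endpoint expects."""
    return {
        "query-filters": json.dumps([["dataset", "=", dataset]]),
        "query-options": json.dumps({"select": list(select)}),
    }

params = snapshot_query_params("tank/backups/server-web-01")
print(params["query-filters"])
```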
Cleaning Up Old Snapshots
Snapshots can be deleted selectively via the API. Combined with a script, you can implement a custom retention policy:
import requests
from datetime import datetime, timedelta
from urllib.parse import quote

TRUENAS_URL = "https://192.168.1.50/api/v2.0"
API_KEY = "1-xxxxxxxxxxxxxxxxxxxx"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
RETENTION_DAYS = 30

response = requests.get(
    f"{TRUENAS_URL}/zfs/snapshot",
    headers=HEADERS,
    verify=False,
    params={
        "query-filters": '[["dataset","=","tank/backups/server-web-01"]]',
        "query-options": '{"select": ["id", "name", "properties"]}'
    }
)
response.raise_for_status()

cutoff = datetime.now() - timedelta(days=RETENTION_DAYS)
for snap in response.json():
    creation = datetime.fromtimestamp(
        int(snap["properties"]["creation"]["parsed"])
    )
    if creation < cutoff:
        # The snapshot ID contains "/" and "@", so it must be URL-encoded
        requests.delete(
            f"{TRUENAS_URL}/zfs/snapshot/id/{quote(snap['id'], safe='')}",
            headers=HEADERS,
            verify=False
        )
        print(f"Deleted: {snap['id']}")
Controlling Replication
The replication API allows you to create, trigger, and monitor replication jobs between TrueNAS systems.
Creating a Replication Job
curl -k -X POST -H "Authorization: Bearer <API-KEY>" \
-H "Content-Type: application/json" \
-d '{
"name": "offsite-backup-web",
"direction": "PUSH",
"transport": "SSH",
"ssh_credentials": 1,
"source_datasets": ["tank/backups/server-web-01"],
"target_dataset": "backup-pool/offsite/server-web-01",
"recursive": true,
"auto": true,
"retention_policy": "SOURCE",
"schedule": {
"minute": "0",
"hour": "3",
"dom": "*",
"month": "*",
"dow": "*"
}
}' \
https://192.168.1.50/api/v2.0/replication
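The payload is mostly static, so building it in Python keeps jobs reproducible across sites. A sketch with a hypothetical `replication_payload` helper; `ssh_credentials` must reference an SSH credential that already exists in TrueNAS:

```python
def replication_payload(name, source, target, ssh_credentials_id,
                        hour="3", minute="0"):
    """Assemble the JSON body from the curl example above."""
    return {
        "name": name,
        "direction": "PUSH",
        "transport": "SSH",
        "ssh_credentials": ssh_credentials_id,
        "source_datasets": [source],
        "target_dataset": target,
        "recursive": True,
        "auto": True,
        "retention_policy": "SOURCE",
        "schedule": {"minute": minute, "hour": hour,
                     "dom": "*", "month": "*", "dow": "*"},
    }

payload = replication_payload(
    "offsite-backup-web",
    "tank/backups/server-web-01",
    "backup-pool/offsite/server-web-01",
    ssh_credentials_id=1,
)
```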
Checking Replication Status
curl -k -H "Authorization: Bearer <API-KEY>" \
https://192.168.1.50/api/v2.0/replication/id/1
The response includes state, last_snapshot, job_progress, and errors — everything you need for automated monitoring.
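For monitoring, the job dict can be reduced to a single healthy/unhealthy verdict. A sketch, assuming the `state` field is a nested dict with its own `state` key as returned by recent SCALE versions; verify the exact shape against your version's response:

```python
def replication_healthy(job):
    """Classify a replication job dict from /replication/id/<n>.

    Assumption: job["state"] is a dict like {"state": "FINISHED", ...},
    where the inner state is one of FINISHED, RUNNING, ERROR, ...
    """
    state = job.get("state", {}).get("state")
    return state in ("FINISHED", "RUNNING")

# Hypothetical responses:
print(replication_healthy({"state": {"state": "FINISHED"}}))  # True
print(replication_healthy({"state": {"state": "ERROR"}}))     # False
```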
Alert System via API
TrueNAS generates alerts for critical events: pool degradation, SMART errors, update availability, or low storage. These alerts can be queried via the API and forwarded to external systems.
# Retrieve active alerts
curl -k -H "Authorization: Bearer <API-KEY>" \
https://192.168.1.50/api/v2.0/alert/list
# Dismiss an alert
curl -k -X POST -H "Authorization: Bearer <API-KEY>" \
-H "Content-Type: application/json" \
-d '{"uuid": "alert-uuid-here"}' \
https://192.168.1.50/api/v2.0/alert/dismiss
Python Wrapper for Recurring Tasks
For more complex automation, a Python wrapper with built-in error handling and logging is recommended:
import requests
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("truenas-api")

class TrueNASAPI:
    def __init__(self, host, api_key, verify_ssl=False):
        self.base_url = f"https://{host}/api/v2.0"
        self.session = requests.Session()
        self.session.headers.update({
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        })
        self.session.verify = verify_ssl

    def get_pools(self):
        resp = self.session.get(f"{self.base_url}/pool")
        resp.raise_for_status()
        return resp.json()

    def get_pool_health(self):
        pools = self.get_pools()
        results = {}
        for pool in pools:
            results[pool["name"]] = {
                "status": pool["status"],
                "healthy": pool["healthy"],
                "capacity": pool["size"],
                "allocated": pool["allocated"]
            }
        return results

    def create_snapshot(self, dataset, name, recursive=False):
        resp = self.session.post(
            f"{self.base_url}/zfs/snapshot",
            json={
                "dataset": dataset,
                "name": name,
                "recursive": recursive
            }
        )
        resp.raise_for_status()
        logger.info(f"Snapshot created: {dataset}@{name}")
        return resp.json()

    def get_alerts(self, dismissed=False):
        alerts = self.session.get(
            f"{self.base_url}/alert/list"
        ).json()
        if not dismissed:
            alerts = [a for a in alerts if not a["dismissed"]]
        return alerts
Webhook Integration
Alert data can be forwarded to Slack, Microsoft Teams, or any HTTP endpoint via webhooks:
import requests

def forward_alerts_to_slack(truenas_api, slack_webhook_url):
    alerts = truenas_api.get_alerts()
    for alert in alerts:
        if alert["level"] in ("CRITICAL", "ERROR"):
            payload = {
                "text": f":warning: *TrueNAS Alert*\n"
                        f"*Level:* {alert['level']}\n"
                        f"*Message:* {alert['formatted']}\n"
                        f"*Source:* {alert['source']}"
            }
            requests.post(slack_webhook_url, json=payload)
For Microsoft Teams, replace the payload format with an Adaptive Card. The logic remains identical.
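A minimal sketch of such a payload builder, using the documented Adaptive Card envelope for Teams incoming webhooks (the alert field names match the Slack example; `teams_alert_payload` is a hypothetical helper):

```python
def teams_alert_payload(alert):
    """Wrap a TrueNAS alert dict in an Adaptive Card message for Teams."""
    return {
        "type": "message",
        "attachments": [{
            "contentType": "application/vnd.microsoft.card.adaptive",
            "content": {
                "type": "AdaptiveCard",
                "version": "1.4",
                "body": [
                    # Bold headline with the alert level
                    {"type": "TextBlock", "weight": "Bolder",
                     "text": f"TrueNAS Alert: {alert['level']}"},
                    # Full alert message, wrapped across lines
                    {"type": "TextBlock", "wrap": True,
                     "text": alert["formatted"]},
                ],
            },
        }],
    }

payload = teams_alert_payload({"level": "CRITICAL",
                               "formatted": "Pool tank is DEGRADED"})
```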
Monitoring Integration: Zabbix
Zabbix can query the TrueNAS API directly via HTTP Agent items. Create a template with these items:
| Item | Endpoint | Type | Interval |
|---|---|---|---|
| Pool Status | /api/v2.0/pool | HTTP Agent | 5 min |
| Alert Count | /api/v2.0/alert/list | HTTP Agent | 1 min |
| Replication Status | /api/v2.0/replication | HTTP Agent | 5 min |
| System Info | /api/v2.0/system/info | HTTP Agent | 15 min |
JSON responses can be parsed using Zabbix preprocessing (JSONPath):
- $..[?(@.name=="tank")].status extracts the pool status
- $.length() counts active alerts
- $..[?(@.name=="offsite-backup-web")].state.state returns the replication state
Monitoring Integration: Prometheus
For Prometheus, the community-maintained truenas-exporter is available. Alternatively, you can build a custom exporter with just a few lines of Python:
from prometheus_client import start_http_server, Gauge
import time

# TrueNASAPI is the wrapper class from the previous section

pool_status = Gauge(
    "truenas_pool_healthy", "Pool health status", ["pool"]
)
pool_used = Gauge(
    "truenas_pool_used_bytes", "Pool used bytes", ["pool"]
)
alert_count = Gauge(
    "truenas_active_alerts", "Number of active alerts"
)

def collect_metrics(api):
    for name, health in api.get_pool_health().items():
        pool_status.labels(pool=name).set(1 if health["healthy"] else 0)
        pool_used.labels(pool=name).set(health["allocated"])
    alert_count.set(len(api.get_alerts()))

if __name__ == "__main__":
    api = TrueNASAPI("192.168.1.50", "1-xxxxxxxxxxxx")
    start_http_server(9150)
    while True:
        collect_metrics(api)
        time.sleep(60)
The metrics are then available at http://exporter:9150/metrics and can be visualized in Grafana.
Best Practices
- Rotate API keys: Create new keys regularly and revoke old ones
- Mind rate limiting: TrueNAS has no built-in rate limiting — add delays to batch scripts
- Error handling: Check HTTP status codes and implement retries with exponential backoff
- Use TLS: Replace the self-signed certificate with a valid certificate (Let’s Encrypt or internal CA)
- Audit trail: Log all API calls with timestamps and results
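The retry recommendation above can be sketched as a small generic helper (`with_backoff` is a hypothetical name; the injectable `sleep` parameter exists so the timing is testable):

```python
import time

def with_backoff(call, retries=4, base_delay=1.0,
                 retry_on=(Exception,), sleep=time.sleep):
    """Run call(); on failure wait base_delay * 2**attempt, then retry.

    With the defaults this yields delays of 1s, 2s, 4s before the
    final attempt; the last failure is re-raised to the caller.
    """
    for attempt in range(retries):
        try:
            return call()
        except retry_on:
            if attempt == retries - 1:
                raise
            sleep(base_delay * 2 ** attempt)
```

Wrap individual API calls, e.g. `with_backoff(lambda: session.get(url))`, rather than whole scripts, so a single flaky request does not restart the entire run.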
Frequently Asked Questions
Which TrueNAS versions support the REST API?
The v2.0 API has been available since TrueNAS CORE 12.0 and TrueNAS SCALE 22.02. Older FreeNAS versions use the v1.0 API with a different endpoint schema.
Can I expose the API over the internet?
We strongly advise against this. Use a VPN tunnel or a reverse proxy with authentication instead. The API provides full access to all storage operations — a compromised key can destroy the entire system.
How do I find the available endpoints?
TrueNAS provides interactive API documentation at https://<truenas-ip>/api/docs/. All endpoints with parameters, examples, and response formats are documented there.
Looking to automate your TrueNAS infrastructure? Contact us — we integrate TrueNAS into your existing monitoring and automation landscape.