Overview
Bitbucket Pipelines can automate the promotion of Validatar test content across environments and enforce data quality gates in your builds. This guide covers two use cases:
- CI/CD — Export/Import: Export tests and jobs from a source Validatar project and import them into a target project
- Orchestration — Execute and Gate: Run Validatar tests as a pipeline step, failing the build if quality checks don't pass
Prerequisites
- Validatar API token with Authoring and Execution scopes — see User Tokens
- Bitbucket repository with Pipelines enabled
- Source and target Validatar project IDs and data source IDs
- Python 3.x available in the pipeline image
Note: For Validatar Server behind a firewall, you'll need a self-hosted runner with network access to the Validatar API endpoint.
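Before wiring up the pipeline, it can help to confirm the token and URL from a local shell. This is an optional sketch, not part of the guide's scripts; it reuses the data-sources endpoint mentioned under Troubleshooting, and any cheap authenticated GET would work equally well.

```python
import os
import requests

def check_connection(api_url: str, token: str) -> bool:
    """Return True if the Validatar API accepts the token.

    Uses GET /core/api/v1/data-sources as a cheap authenticated call.
    """
    resp = requests.get(
        f"{api_url}/core/api/v1/data-sources",
        headers={"x-val-api-token": token},
        timeout=30,
    )
    return resp.status_code == 200

if __name__ == "__main__" and "VALIDATAR_API_TOKEN" in os.environ:
    ok = check_connection(
        os.environ["SOURCE_VALIDATAR_API_URL"],
        os.environ["VALIDATAR_API_TOKEN"],
    )
    print("Token OK" if ok else "Token rejected")
```

A 401 or 403 here means the token or its scopes are wrong; fix that before debugging the pipeline itself.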
Step 1: Configure Repository Variables
Store your Validatar credentials as secured repository variables under Repository settings > Pipelines > Repository variables:
| Variable Name | Secured | Value |
|---|---|---|
| VALIDATAR_API_TOKEN | Yes | Your Validatar API token |
| SOURCE_VALIDATAR_API_URL | No | Source instance URL (e.g., https://dev.cloud.validatar.com) |
| TARGET_VALIDATAR_API_URL | No | Target instance URL (e.g., https://prod.cloud.validatar.com) |
| SOURCE_PROJECT_ID | No | Source Validatar project ID |
| TARGET_PROJECT_ID | No | Target Validatar project ID |
| EXPORT_TEST_FOLDER_IDS | No | Comma-separated test folder IDs to export |
| EXPORT_JOB_FOLDER_IDS | No | Comma-separated job folder IDs to export |
| DATA_SOURCE_MAPPINGS | No | Data source mappings (e.g., 42:87,43:88) |
Tip: Mark the API token as Secured so it's masked in build logs. URLs and IDs can remain unsecured for easier debugging.
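The DATA_SOURCE_MAPPINGS value is easy to mistype. This sketch mirrors the parsing the promote script performs, so you can sanity-check the string locally before saving the variable:

```python
def parse_ds_mappings(raw: str) -> list[dict]:
    """Parse "source_id:target_id,..." into the payload shape the import API expects."""
    mappings = []
    for pair in raw.split(","):
        pair = pair.strip()
        if not pair:
            continue  # tolerate trailing or doubled commas
        src, tgt = pair.split(":")
        mappings.append({"sourceFileKey": src.strip(), "targetId": int(tgt.strip())})
    return mappings

print(parse_ds_mappings("42:87, 43:88"))
# → [{'sourceFileKey': '42', 'targetId': 87}, {'sourceFileKey': '43', 'targetId': 88}]
```

If this raises a ValueError, the pipeline run would fail the same way, so fix the variable first.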
Use Case 1: Export/Import on Merge to Main
Create or update `bitbucket-pipelines.yml` in your repository root:
```yaml
image: python:3.11-slim

pipelines:
  branches:
    main:
      - step:
          name: Promote Validatar Tests
          script:
            - pip install requests
            - python scripts/validatar-promote.py
  custom:
    promote-tests:
      - variables:
          - name: SOURCE_PROJECT_ID
          - name: TARGET_PROJECT_ID
          - name: EXPORT_TEST_FOLDER_IDS
          - name: EXPORT_JOB_FOLDER_IDS
          - name: DATA_SOURCE_MAPPINGS
      - step:
          name: Promote Validatar Tests (Manual)
          script:
            - pip install requests
            - python scripts/validatar-promote.py
```
Create `scripts/validatar-promote.py` in your repository:
```python
import requests
import base64
import os
import sys

# ---------------------------------------------------------------------------
# Configuration — from Bitbucket repository variables
# ---------------------------------------------------------------------------
source_url = os.environ["SOURCE_VALIDATAR_API_URL"]
source_token = os.environ["VALIDATAR_API_TOKEN"]
source_proj = int(os.environ["SOURCE_PROJECT_ID"])

target_url = os.environ["TARGET_VALIDATAR_API_URL"]
target_token = os.environ["VALIDATAR_API_TOKEN"]
target_proj = int(os.environ["TARGET_PROJECT_ID"])

test_folder_ids = [int(x) for x in os.environ.get("EXPORT_TEST_FOLDER_IDS", "").split(",") if x.strip()]
job_folder_ids = [int(x) for x in os.environ.get("EXPORT_JOB_FOLDER_IDS", "").split(",") if x.strip()]

# Data source mapping: "source_id:target_id,source_id:target_id"
ds_mappings = []
raw = os.environ.get("DATA_SOURCE_MAPPINGS", "")
if raw:
    for pair in raw.split(","):
        src, tgt = pair.strip().split(":")
        ds_mappings.append({"sourceFileKey": src.strip(), "targetId": int(tgt.strip())})

# ---------------------------------------------------------------------------
# Step 1 — Export from source project
# ---------------------------------------------------------------------------
print("=== EXPORT ===")
print(f"Source: {source_url} / Project {source_proj}")
print(f"Test folders: {test_folder_ids}")
print(f"Job folders: {job_folder_ids}")

source_headers = {"x-val-api-token": source_token, "Content-Type": "application/json"}
export_resp = requests.post(
    f"{source_url}/core/api/v1/projects/{source_proj}/export",
    headers=source_headers,
    json={
        "name": "bitbucket-pipelines-export",
        "description": f"Automated export from Bitbucket pipeline #{os.environ.get('BITBUCKET_BUILD_NUMBER', 'manual')}",
        "testFolderIds": test_folder_ids,
        "jobFolderIds": job_folder_ids,
        "includeCustomFields": True,
        "includeJobSchedules": False,
    },
)
if export_resp.status_code != 200:
    print(f"ERROR: Export failed ({export_resp.status_code}): {export_resp.text}")
    sys.exit(1)

export_data = export_resp.json()
content_b64 = export_data["contentBase64"]
print(f"Export successful: {export_data['fileName']} ({len(content_b64)} chars)")

# Optional: save export XML as a build artifact for troubleshooting
xml_bytes = base64.b64decode(content_b64)
with open("validatar-export.xml", "wb") as f:
    f.write(xml_bytes)
print("Export XML saved as build artifact: validatar-export.xml")

# ---------------------------------------------------------------------------
# Step 2 — Import into target project
# ---------------------------------------------------------------------------
print("\n=== IMPORT ===")
print(f"Target: {target_url} / Project {target_proj}")
print(f"Data source mappings: {len(ds_mappings)}")
print("Conflict resolution: Overwrite")

target_headers = {"x-val-api-token": target_token, "Content-Type": "application/json"}
import_resp = requests.post(
    f"{target_url}/core/api/v1/projects/{target_proj}/import",
    headers=target_headers,
    json={
        "mimeType": "application/xml",
        "contentBase64": content_b64,
        "dataSourceMappings": ds_mappings,
        "includeJobSchedules": False,
        "conflictResolution": "Overwrite",
    },
)
if import_resp.status_code == 200:
    print("Import successful.")
    sys.exit(0)
else:
    print(f"ERROR: Import failed ({import_resp.status_code})")
    try:
        err = import_resp.json()
        for e in err.get("validationErrors", []):
            print(f"  Validation: {e}")
        for ds in err.get("invalidDataSources", []):
            print(f"  Invalid data source: {ds.get('sourceFileKey')}")
        for e in err.get("commitErrors", []):
            print(f"  Commit: {e}")
    except Exception:
        print(import_resp.text)
    sys.exit(1)
```
To save the export XML as a downloadable build artifact, add an `artifacts` section to the step:
```yaml
- step:
    name: Promote Validatar Tests
    script:
      - pip install requests
      - python scripts/validatar-promote.py
    artifacts:
      - validatar-export.xml
```
Use Case 2: Execute Tests as a Pipeline Gate
This pipeline step executes a Validatar job and fails the build if any tests don't pass.
Add to `bitbucket-pipelines.yml`:
```yaml
pipelines:
  branches:
    main:
      - step:
          name: Run Validatar Quality Gate
          script:
            - pip install requests
            - python scripts/validatar-quality-gate.py
  custom:
    quality-gate:
      - variables:
          - name: VALIDATAR_PROJECT_ID
          - name: VALIDATAR_JOB_ID
      - step:
          name: Run Validatar Quality Gate (Manual)
          script:
            - pip install requests
            - python scripts/validatar-quality-gate.py
```
Create `scripts/validatar-quality-gate.py`:
```python
import requests
import time
import os
import sys

# ---------------------------------------------------------------------------
# Configuration — from Bitbucket repository variables
# ---------------------------------------------------------------------------
api_url = os.environ.get("SOURCE_VALIDATAR_API_URL", os.environ.get("VALIDATAR_API_URL", ""))
token = os.environ["VALIDATAR_API_TOKEN"]
proj_id = int(os.environ["VALIDATAR_PROJECT_ID"])
job_id = int(os.environ["VALIDATAR_JOB_ID"])
timeout = int(os.environ.get("VALIDATAR_TIMEOUT", "1800"))  # 30 minutes default
poll_sec = 5

headers = {"x-val-api-token": token, "Content-Type": "application/json"}
base = f"{api_url}/core/api/v1"
build_num = os.environ.get("BITBUCKET_BUILD_NUMBER", "unknown")

# ---------------------------------------------------------------------------
# Step 1 — Execute the job
# ---------------------------------------------------------------------------
print(f"=== QUALITY GATE (Build #{build_num}) ===")
print(f"Executing job {job_id} in project {proj_id}...")
resp = requests.post(f"{base}/projects/{proj_id}/jobs/{job_id}/execute", headers=headers)
if resp.status_code != 200:
    print(f"ERROR: Execute failed ({resp.status_code}): {resp.text}")
    sys.exit(1)

batch_key = resp.json()["executionKey"]
print(f"Execution started. Batch key: {batch_key}")

# ---------------------------------------------------------------------------
# Step 2 — Poll for results
# ---------------------------------------------------------------------------
results_url = f"{base}/projects/{proj_id}/jobs/{job_id}/results/{batch_key}"
elapsed = 0
result = None
while elapsed < timeout:
    time.sleep(poll_sec)
    elapsed += poll_sec
    try:
        poll = requests.get(results_url, headers=headers)
        poll.raise_for_status()
    except requests.RequestException as e:
        print(f"  [{elapsed}s] Poll error: {e}. Retrying...")
        continue
    result = poll.json()
    status = result.get("status", "Unknown")
    print(f"  [{elapsed}s] Status: {status}")
    if status in ("Completed", "Aborted", "Abandoned"):
        break
else:
    # while/else: runs only if the loop ended without a break (i.e., timed out)
    print(f"ERROR: Timeout after {timeout}s. Job may still be running in Validatar.")
    sys.exit(1)

# ---------------------------------------------------------------------------
# Step 3 — Evaluate results
# ---------------------------------------------------------------------------
status = result.get("status")
print(f"\nJob finished with status: {status}")
print(f"  Started:   {result.get('dateStarted')}")
print(f"  Completed: {result.get('dateCompleted')}")
if status != "Completed":
    print(f"FAILED: Job ended with status '{status}'.")
    sys.exit(1)

tests = result.get("tests", [])
failed = [t for t in tests if t.get("status") not in ("Passed", "Completed")]
print(f"\n{len(tests)} test(s) executed:")
for t in tests:
    marker = "PASS" if t.get("status") in ("Passed", "Completed") else "FAIL"
    print(f"  [{marker}] {t.get('name')} (ID: {t.get('id')}) — {t.get('status')}")

if failed:
    print(f"\n{len(failed)} test(s) did not pass. Quality gate FAILED.")
    sys.exit(1)

print(f"\nAll {len(tests)} test(s) passed. Quality gate OK.")
sys.exit(0)
```
Combining Both Use Cases
You can chain the export/import and quality gate steps in a single pipeline:
```yaml
pipelines:
  custom:
    promote-and-validate:
      - step:
          name: Export/Import Tests
          script:
            - pip install requests
            - python scripts/validatar-promote.py
          artifacts:
            - validatar-export.xml
      - step:
          name: Run Quality Gate on Target
          script:
            - pip install requests
            - python scripts/validatar-quality-gate.py
```
This exports tests from the source project, imports them into the target, and then runs a quality gate job in the target environment to verify the imported tests work correctly.
Running with Custom Pipelines (Manual Trigger)
Bitbucket custom pipelines can be triggered manually from the Pipelines page:
- Go to Pipelines in your Bitbucket repository
- Click Run pipeline
- Select the branch
- Choose the custom pipeline (e.g., `promote-tests`)
- Fill in any variable overrides
- Click Run
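Custom pipelines can also be triggered from a script or another system through the Bitbucket Cloud REST API. The sketch below is illustrative: the `build_trigger_payload` and `trigger` helper names are made up for this example, and the request shape should be checked against Bitbucket's pipelines API documentation before use.

```python
import requests

def build_trigger_payload(branch: str, pipeline_name: str, variables: dict) -> dict:
    """Request body for POST /2.0/repositories/{workspace}/{repo_slug}/pipelines/."""
    return {
        "target": {
            "type": "pipeline_ref_target",
            "ref_type": "branch",
            "ref_name": branch,
            "selector": {"type": "custom", "pattern": pipeline_name},
        },
        # Per-run variable overrides, same as the "Run pipeline" dialog
        "variables": [{"key": k, "value": v} for k, v in variables.items()],
    }

def trigger(workspace: str, repo_slug: str, auth: tuple, payload: dict):
    """Start the pipeline; authenticate with a username/app-password tuple."""
    return requests.post(
        f"https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}/pipelines/",
        auth=auth,
        json=payload,
        timeout=30,
    )
```

For example, `build_trigger_payload("main", "promote-tests", {"TARGET_PROJECT_ID": "7"})` mirrors selecting the `promote-tests` custom pipeline on `main` with one variable override.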
Troubleshooting
Pipeline Fails with "401" or "403"
- Verify the `VALIDATAR_API_TOKEN` repository variable is set and marked as Secured
- Confirm the token has the required scopes (Authoring for export/import, Execution for quality gate)
- Check that the token user has access to the specified projects
Import Fails with Data Source Mapping Errors
- Use the `GET /core/api/v1/data-sources` endpoint to list data sources on both instances
- Verify every data source referenced in the export has a corresponding mapping entry
- The `sourceFileKey` is the data source ID from the source project, passed as a string
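If the mapping is tedious to maintain by hand, one option is to generate it by matching data source names across the two instances. This is a sketch, not part of the promote script; it assumes each item returned by the data-sources endpoint is an object with `id` and `name` fields, which you should verify against the actual response on your instances.

```python
import requests

def match_data_sources(source_ds: list, target_ds: list) -> str:
    """Build a DATA_SOURCE_MAPPINGS string by matching data source names.

    Assumes each entry is a dict with "id" and "name" keys (verify against
    the real /core/api/v1/data-sources response shape).
    """
    target_by_name = {d["name"]: d["id"] for d in target_ds}
    pairs = []
    for d in source_ds:
        if d["name"] in target_by_name:
            pairs.append(f'{d["id"]}:{target_by_name[d["name"]]}')
    return ",".join(pairs)

def fetch_data_sources(api_url: str, token: str) -> list:
    """Fetch data sources from one instance."""
    resp = requests.get(
        f"{api_url}/core/api/v1/data-sources",
        headers={"x-val-api-token": token},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

For example, matching `[{"id": 42, "name": "DW"}]` against `[{"id": 87, "name": "DW"}]` yields `"42:87"`, the format the repository variable expects.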
"pip: command not found"
- Ensure the pipeline image includes Python 3. The `python:3.11-slim` image is recommended
- If using a custom image, add `apt-get install -y python3 python3-pip` to the script
Self-Hosted Runner Cannot Reach Validatar
- Ensure the runner has network access to the Validatar API endpoint
- For Validatar Server instances, check firewall rules and proxy settings
- See Bitbucket self-hosted runner documentation for setup details
Quality Gate Times Out
- Increase the `VALIDATAR_TIMEOUT` repository variable if jobs run longer than 30 minutes
- Check the Validatar UI to see if the job is stuck or has errored
- Verify the job has tests assigned to it
Export XML Artifact Not Available
- Ensure the `artifacts` section is indented under the step that generates the file
- The file must be written to the working directory (the script does this by default)