2025.1 Release Notes
  • 25 Mar 2025

Article summary

Validatar has a new release! From major updates to minor bug fixes, we’re constantly working to improve your Validatar experience so you can continue to build trust in your data. Learn about what’s new below.

Release Date

Validatar 2025.1 will be released on March 27, 2025.

What’s Coming!

Validatar Trust Scores

Validatar 2025.1 introduces an entirely new concept of Trust Scores throughout Validatar. Trust Scores were designed to accomplish the following goals within the product:

  • Establish a unified approach to defining trust

  • Take into account the multi-faceted nature of trust

  • Remain flexible and dynamic across your data sources and object types

  • Stay customizable to your Data Governance requirements

  • Scale automatically as your scope increases

Validatar Trust Scores work by capturing all relevant test execution results for each table over a period of time and converting those results into a single score for the table. The trust score is calculated daily per relevant table on a 0 to 5 scale. The test executions included in a given day’s scores are based on a trailing time frame (14 days by default, but customizable). Each day’s scores are then saved so you can see how each object’s scores have changed over time, as well as how the scores aggregate by data source and schema.

The Trust Score value itself is calculated using two components: Quality and Coverage.
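
Validatar does not publish the exact combination formula, but the relationship between the two components and the 0 to 5 score can be sketched as follows. This is a minimal illustration that assumes the Trust Score scales the product of the two percentage components; the actual calculation may differ.

```python
# Illustrative sketch only: assumes the 0-5 Trust Score scales the
# product of the two 0-100% components (Quality and Coverage).

def trust_score(quality_pct: float, coverage_pct: float) -> float:
    """Combine a Quality Score and a Coverage Score (both 0-100)
    into a single 0-5 Trust Score."""
    if not (0 <= quality_pct <= 100 and 0 <= coverage_pct <= 100):
        raise ValueError("component scores must be between 0 and 100")
    return round(5 * (quality_pct / 100) * (coverage_pct / 100), 2)

print(trust_score(90, 80))  # -> 3.6
```

Under this assumption, an object only reaches the full score of 5 when both its tests are passing and enough distinct types of tests are being run.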

Quality Scores

Your Quality Score captures the concept of “Are the tests I am running for a particular object passing?” To this end, Quality Scores follow these rules:

  • Quality Scores for a table are affected by tests run to the repository that have a metadata link to the table or any of the table’s columns

  • Quality Scores are weighted by the severity level of the Tests

  • Configuration admins can manage the relative weights of each severity level

  • Each Metadata Link can be set to not affect trust score calculation if desired.

The Quality Score is a 0-100% value calculated using a severity-weighted formula.
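
A severity-weighted pass rate of this kind can be sketched as below. The weight values are assumptions for illustration; in Validatar, configuration admins manage the relative weights of each severity level.

```python
# Hypothetical sketch of a severity-weighted pass rate. The weights and
# severity names below are assumptions, not Validatar's defaults.

SEVERITY_WEIGHTS = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def quality_score(results: list[tuple[str, bool]]) -> float:
    """results: (severity, passed) pairs for tests linked to the table.
    Returns a 0-100 severity-weighted pass percentage."""
    total = sum(SEVERITY_WEIGHTS[sev] for sev, _ in results)
    if total == 0:
        return 0.0
    passed = sum(SEVERITY_WEIGHTS[sev] for sev, ok in results if ok)
    return round(100 * passed / total, 1)

runs = [("critical", True), ("high", True), ("medium", False), ("low", True)]
print(quality_score(runs))  # 4+3+1 passing weight of 10 total -> 80.0
```

The effect of the weighting is that a failing critical test pulls the score down far more than a failing low-severity test.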

Coverage Scores

Your Coverage Score captures the concept of “Am I running enough distinct types of tests?” To this end, Coverage Scores follow these rules:

  • Coverage score is a 0-100% value

  • Coverage scores for a table are affected by tests run to the repository that have a metadata link to the table or any of the table’s columns

  • Coverage score is based on the distinct count of Quality Dimensions represented in the tests that have been run during the configured trailing time frame

  • Configuration admins can manage the cumulative coverage score for each successive increase in the distinct count of Quality Dimensions
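
The cumulative mapping described above can be sketched as a simple lookup. The credit ladder below is an assumption for illustration; in Validatar the cumulative score per additional Quality Dimension is managed by configuration admins.

```python
# Illustrative sketch: the cumulative credit per additional Quality
# Dimension is admin-configurable; this ladder is an assumed example.

CUMULATIVE_CREDIT = {1: 40, 2: 60, 3: 75, 4: 85, 5: 95, 6: 100}

def coverage_score(dimensions_tested: set[str]) -> int:
    """Map the distinct count of Quality Dimensions covered by recent
    test runs to a 0-100 Coverage Score."""
    n = min(len(dimensions_tested), max(CUMULATIVE_CREDIT))
    return CUMULATIVE_CREDIT.get(n, 0)

print(coverage_score({"completeness", "accuracy", "timeliness"}))  # -> 75
```

Because each additional dimension adds a diminishing amount of credit in this example, the first few distinct test types matter most, which rewards broadening coverage early.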

Impact Ratings

Validatar 2025.1 also introduces the concept of Impact Ratings. Impact Ratings work in the following ways:

  • Impact Ratings can be set by users per table to capture the business impact and significance of each table/object

  • Impact Ratings drive the target trust score for all calculations regarding whether an object is above or below its target score, and thus whether the object can or cannot be trusted.

  • Configuration admins can manage the list of Impact Ratings and the Target Trust Score for each Impact Rating

  • Default Impact Ratings can be set per Data Source/Schema and updated in bulk as needed
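
The target comparison above can be sketched as follows. The rating names and target values are hypothetical; in Validatar, configuration admins manage both the list of Impact Ratings and the Target Trust Score for each one.

```python
# Sketch of how an Impact Rating's target gates trust. The rating names
# and targets here are assumed examples, not Validatar's defaults.

TARGET_BY_IMPACT = {"critical": 4.5, "high": 4.0, "medium": 3.0, "low": 2.0}

def is_trusted(trust_score: float, impact_rating: str) -> bool:
    """An object is trusted when its daily trust score meets or exceeds
    the target associated with its Impact Rating."""
    return trust_score >= TARGET_BY_IMPACT[impact_rating]

print(is_trusted(3.6, "medium"))  # -> True
print(is_trusted(3.6, "high"))    # -> False
```

The same trust score can therefore be acceptable for a low-impact table but insufficient for a business-critical one.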

Trust Score Management

All aspects of managing and configuring Trust Score elements have been consolidated into a new Trust Score section within Settings. Here you can manage Impact Ratings, Quality Dimensions, Severity Levels, and default Trust Score calculation approaches.

Additionally, for each data source, schema, and table, trust score calculation settings can be overridden so that the objects at that level or below can be managed differently than the global defaults.

Trust Score Management is a licensed feature. Please contact sales@validatar.com for more information.

New Test Grids and Interactive KPI Header

The list of tests and folders grid UI has been upgraded with the following elements:

  • A clickable KPI header section appears at the top of every grid displaying tests. This KPI header can be set to four different variants:

    • Summary: Best for getting test execution and test creation/modification metrics

    • Current Status: Best for drilling down into tests by status

    • Severity: Best for seeing pass/fail/error rates by severity level

    • Quality Dimension: Best for seeing coverage and performance by quality dimension

  • New default columns in the grid including:

    • Overall Status

    • Severity Level

    • Quality Dimension

    • # Tests

    • # Jobs

  • The ability to select the columns that are shown in the grid. This column selection includes test performance metrics, data source information and project custom fields.

  • The ability to expand all sub-folders so only tests and template tests are listed.

  • The ability to search/filter across all tests within a folder and any subfolders.

  • The ability to export the grid values to Excel.

Explorer is now Catalog

The Explorer section of Validatar has been renamed to Catalog, and the pages within the Catalog have had a user experience refresh to better provide users with the most relevant details first. This update includes the following elements:

  • Updated grid UI throughout the Catalog.

    • This includes updates for the following grid levels:

      • Data Source

      • Schema

      • Table

      • Column

    • All of the above grid views now allow for the following capabilities:

      • The ability to select the columns that are shown in the grid. This column selection includes test performance metrics, profile values, and catalog custom fields.

      • The ability to search/filter across all objects at a particular level across a data source.

      • The ability to export the grid values to Excel.

  • New Data Source Info tab

    • Centers the trust scores and displays a grid of tables by impact rating and trusted table category as well as a trend of average trust score vs target over time.

    • Summarizes the Current Test Status with clickable KPIs that drill into the Tests tab immediately

    • Displays Test Execution History trend charts and breakdowns by Severity and Quality Dimension

    • Integrates the Metadata and Discussion elements into this primary tab as expandable components on the right hand side.

  • New Schema Info tab

    • Centers the trust scores and displays a grid of tables by impact rating and trusted table category as well as a trend of average trust score vs target over time.

    • Summarizes the Current Test Status with clickable KPIs that drill into the Tests tab immediately

    • Displays Test Execution History trend charts and breakdowns by Severity and Quality Dimension

    • Integrates the Metadata and Discussion elements into this primary tab as expandable components on the right hand side.

  • New Table Info tab

    • Centers the trust scores and displays the components that lead to the trust score value and target, as well as a trend of calculated trust score vs target over time.

    • Summarizes the Current Test Status with clickable KPIs that drill into the Tests tab immediately

    • Displays Test Execution History trend charts and breakdowns by Severity and Quality Dimension

    • Integrates the Metadata and Discussion elements into this primary tab as expandable components on the right hand side.

  • New Column Info tab

    • Summarizes the Current Test Status with clickable KPIs that drill into the Tests tab immediately

    • Displays Test Execution History trend charts and breakdowns by Severity and Quality Dimension

    • Displays profile values over time and by distribution chart

    • Integrates the Metadata and Discussion elements into this primary tab as expandable components on the right hand side.

Preview Tab

A new Preview tab has been added at the Table level in the Catalog. This enables you to quickly preview the data for a given table directly from the Catalog without having to create a test. Out of the box, previews are designed to bring back the raw data from the table/object by using default macros, but the query/script can be customized as needed, or any other macro can be selected.

Benefit: Don’t leave the data catalog view if you just want to see a little more detail about a table.

Multiple Sources for Custom Metadata in a Single Data Source

Each Data Source can now have multiple separate source queries, scripts or files for ingesting custom field metadata. This allows a single Data Source to have multiple custom fields populated automatically from separate source systems and on separate schedules using completely different connection information.

Existing custom metadata scripts will automatically be migrated to the new model, and all schedules will continue to function as previously set. End users can now simply add additional scripts with different connection information to pull in additional custom fields as needed.

Benefit: Allows for easier integration with external data catalogs as well as enrichment from metadata driven ETL tools, AI providers or other script driven metadata services.

Other Highlighted Enhancements

  • The Test Connection button is now enabled for Python data sources. Previously this button was disabled. Now it executes the pre- and post-execution scripts and lets the user know if any issues occurred during execution.

  • Custom Field values can now be certified by end users.

  • Custom Field value history can now be seen in the catalog.

  • The process of creating an export file of tests/jobs now has an option to include or exclude custom field values.

  • The import process can now skip broken metadata links if they cannot be found during import.

  • Project Custom Fields are now manageable via API.

  • Project Custom Fields are now available in the template test context so that templates can treat project metadata like environment variables.

  • Tests can calculate a partial trust score credit based on percentage of passing rows. This enables data quality tests to not be 100% pass/fail in terms of their impact on a table’s trust score.

  • Added icons for Quality Dimension and Severity Level throughout the platform, along with the ability to edit them in the Trust Score Settings subsection.

Bug Fixes

  • Resolved issue where Event History Report filter modal would break with certain Event Type filters.

  • Resolved issue where custom field value changes were not being applied during import or child test materialization.

  • Resolved issue where multi-column results page would not load detail section on initial page load.

  • Resolved issue with report filters that were incorrectly handling time zone support for DateTimeOffset fields.

  • Resolved issue where materialization of child tests could trigger separate email notifications per child test.

  • Resolved issue with links in emails from Validatar Cloud not using custom subdomain URLs.

  • Resolved issue with archived custom fields incorrectly being re-used when new custom fields were being created.
