DataCleaner is a data quality analysis tool for data profiling, validation, and light ETL-style transformations. These activities help you administer and monitor data quality, ensuring that your data is useful and applicable to your business situation. It can be used in master data management (MDM) initiatives, data warehousing projects, statistical research, preparation for extract-transform-load (ETL) activities, and more.
Tags: Office/Business, Scientific/Engineering, Information Management, Metadata/Semantic Models, Records Management, Database, Data Warehousing, Business Intelligence, Data Profiling
Who will post the best content for use in DataCleaner?
Human Inference is announcing a competition for the DataCleaner community. The goal is to...
Release Notes: A major milestone for the data quality monitoring Web application: connectivity to Salesforce and SugarCRM has been added, along with wizards and other user experience improvements. This release also enables clustered execution of jobs, adds a new data visualization extension and a national identifier validation extension, and adds Pentaho Data Integration job scheduling and execution.
Release Notes: A Web service was added to the monitoring application for retrieving one or more metric values. The 'Table lookup' component has been improved by adding join semantics as a configurable property. The EasyDQ components have been upgraded with further configuration options and a richer deduplication result interface. Performance was a specific focus of this release: the DataCleaner engine now applies its streaming processing approach in certain corner cases that were not covered previously.
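To illustrate what configurable join semantics mean for a table lookup, here is a minimal conceptual sketch in Python. It is not DataCleaner's actual API; the function name and parameters are hypothetical, and it assumes rows are represented as dictionaries and the lookup table is non-empty.

```python
def table_lookup(rows, lookup_table, key, join="left"):
    """Enrich each row with matching columns from lookup_table.

    join="left"  -> unmatched rows are kept, looked-up fields become None
    join="inner" -> unmatched rows are dropped entirely
    (Hypothetical sketch; not DataCleaner's actual component API.)
    """
    index = {r[key]: r for r in lookup_table}
    result = []
    for row in rows:
        match = index.get(row[key])
        if match is not None:
            result.append({**row, **match})
        elif join == "left":
            # LEFT JOIN semantics: keep the row, fill missing fields with None
            extra = {k: None for k in lookup_table[0] if k != key}
            result.append({**row, **extra})
        # with "inner" semantics, unmatched rows are simply skipped
    return result
```

The practical difference is visible on unmatched rows: with left-join semantics a record with no lookup hit survives (with empty enrichment fields), whereas inner-join semantics silently discards it.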
Release Notes: The date and time related analysis options have been expanded, adding distribution analyzers for week numbers, months, and years. An optional "descriptive statistics" option has been added to the Number analyzer and the Date/time analyzer. The lines in the timeline charts of the monitoring Web application now include small dots. Two new transformers have been added for generating UUIDs and for generating timestamps. Ad hoc queries can now contain DISTINCT clauses, *-wildcards, and subqueries, and are tolerant of text-case differences.
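The idea behind tolerance of text-case differences in ad hoc queries can be sketched as case-insensitive resolution of queried column names against the datastore's actual columns. This is an illustrative sketch only; the function name is hypothetical and does not reflect DataCleaner's internals.

```python
def resolve_column(name, available_columns):
    """Resolve a queried column name against the actual columns,
    ignoring case, so e.g. 'CUSTOMER_ID' matches 'customer_id'.
    (Hypothetical sketch of the concept, not DataCleaner code.)"""
    lowered = {c.lower(): c for c in available_columns}
    try:
        return lowered[name.lower()]
    except KeyError:
        raise KeyError("No such column: " + name)
```

With this approach, a query written as SELECT NAME FROM customers still finds a column stored as "Name", instead of failing on an exact-match comparison.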
Release Notes: Data Quality KPIs can now be defined as formulas (mathematical expressions), not just raw metrics. It is now possible to issue ad hoc SQL queries against all datastores (databases, CSV files, Excel spreadsheets, and more). A new analysis option, the Value matcher, was added. With this analysis, it is easy to identify unexpected values in a field. Management of jobs, including copying and deleting jobs, has been made much easier by exposing the functionality directly in the UI. It is now possible to change the dates of historic data quality results in order to reposition them in the timeline.
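The concept of a formula-based KPI, as opposed to a raw metric, can be sketched as evaluating a mathematical expression over a set of named metric values. The sketch below is hypothetical and is not DataCleaner's formula engine; the metric names are invented for illustration, and the restricted eval() stands in for a real expression parser.

```python
def evaluate_kpi(formula, metrics):
    """Evaluate a KPI formula against a dict of raw metric values.
    Only the metric names are exposed to the expression; builtins
    are disabled. (Conceptual sketch, not DataCleaner's engine.)"""
    return eval(formula, {"__builtins__": {}}, dict(metrics))

# Hypothetical metric values from an analysis result:
metrics = {"valid_count": 950, "row_count": 1000}
completeness_pct = evaluate_kpi("valid_count / row_count * 100", metrics)  # 95.0
```

The benefit over raw metrics is that derived quality indicators (percentages, ratios, weighted sums) can be tracked on the timeline directly, instead of recomputing them by hand from several raw counters.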
Release Notes: Adds a service for renaming jobs in the monitoring repository. You can access this as a RESTful Web service or interactively in the UI. A Web service was added for changing the historic date of an analysis result in the monitoring repository. The Web application has been made compatible with legacy JSF containers. Caching of configuration in the Web application was greatly improved, leading to faster page load and job initialization times.
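A RESTful service like the job-rename operation above would typically be invoked with a single HTTP request. The sketch below only constructs such a request without sending it; the URL path, HTTP method, and payload field are assumptions for illustration, not DataCleaner's documented endpoint.

```python
import json
import urllib.request

# Hypothetical endpoint and payload -- illustrative only, not the
# documented DataCleaner monitoring API.
req = urllib.request.Request(
    "http://localhost:8080/DataCleaner-monitoring/repository/DC/jobs/my_job.modify",
    data=json.dumps({"name": "my_job_renamed"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Sending would be: urllib.request.urlopen(req)  (requires a running server)
```

The same rename is also available interactively in the monitoring UI, so the Web service is mainly useful for scripted repository maintenance.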