Enterprise Data Virtualization Leader AtScale Adds Sophisticated Time-Series Analysis with 2019.2 Platform Release
*2019.2 accelerates time-to-insight and self-service analytics through interactive, large-volume BI and AI/ML queries without the need for data movement*
SAN MATEO, Calif. & BOSTON – July 17, 2019: AtScale, the data warehouse virtualization company, today announced its new 2019.2 platform release. The latest release augments AtScale’s autonomous data engineering innovations with the introduction of a sophisticated time-series and time-relative analysis capability for large volumes of data across disparate databases and platforms. This new capability gives data analyst and data science teams unencumbered access to large volumes of dispersed operational time-series data. Data consumers can quickly query and configure data for their specific business definitions using the business intelligence (BI), artificial intelligence (AI) or machine learning (ML) tools of their choice.
AtScale enables scalable and governed self-service analytics, utilizing its native security and performance functionality, with no need to move data or perform memory-limited operations.
Leveraging the high-performance optimization technologies developed by AtScale for distributed shared-computing platforms such as Hadoop, AtScale continues to make significant advancements in the virtualization of data platforms, now including Teradata, Oracle, Snowflake, Redshift, BigQuery, Greenplum, and Postgres.
“We’ve built on over 150 combined person-years mastering the challenges of on-premises and cloud data platforms to reinvent how enterprise teams drive performance for multidimensional analytics,” shares Matthew Baird, co-founder and CTO of AtScale. “For companies to manage big data at the scale, complexity and security enterprises require, AtScale has completely reinvented how analytical queries are answered, taking full advantage of the various platforms’ native optimizations.”
AtScale’s 2019.2 release solves pervasive data management issues plaguing enterprise analytics and data science teams, who spend 90 percent of their work week on data-related activities, with searching for and preparing data the most common tasks for data engineering teams (IDC). Data workers waste over 40 percent of their time each week on unsuccessful data activities, juggling four to seven different tools to source, query, model and analyze data.
To solve for these issues and automate the design of intelligent and secure data structures, AtScale applies expert systems and machine learning algorithms, including:
- Augmented semantic intelligence through time-series and time-relative analysis of large volumes of disparate data sets—a process that previously took hours or days, or was impossible, for most enterprise teams—which includes:
- Sourcing, comparing and modeling live data using time-based calculations, hierarchies, semi-additive metrics, multi-level measures, and many-to-many relationships, through a single, collaborative web interface available as virtual cubes in AtScale’s Design Canvas™.
- Standardized time-relative analytics for MDX and other query languages, providing sophisticated semantics for period-over-period comparisons across arbitrary definitions of time. The resulting models define data relationships and semantic meanings that enable business analysts and data scientists to operate within a shared data context, without having to know or manage the underlying complexity.
- Intelligent federation (or integration) that routes deep data access and querying to the most suitable database regardless of its location, enabling complex time-based analysis to be performed at scale.
- Sophisticated query build-outs against the underlying data stored in the database with no extraction or in-memory data manipulation required.
- Improved high-speed query performance: query patterns and behaviors are automatically captured, detected and tuned to speed up performance and manage the cost and load of the underlying data platforms. The combination of human signals and machine learning generates new metadata and relationships, augmenting the AtScale Virtual Cubes with new dimensions and measures for users to explore.
- Added security capabilities, including Tableau Server Impersonation, which enables user authentication over the AD/LDAP domain as well as the database system of origin. This lets administrators implement and control security policy in one place, simplifying deployment and removing vulnerabilities associated with replicating security and data entitlements.
AtScale’s 2019.2 platform release comes on the heels of a record-breaking 125 percent increase in new business during the first quarter of its fiscal year 2019, driven by unrivaled innovation enabling seamless migration to any cloud data platform without disrupting BI, AI and ML applications. By virtualizing data warehouse infrastructure, AtScale creates a single, secure view of an enterprise’s analytical data with rich context and metadata for consumption by any BI tool.
To learn more about AtScale’s revolutionary approach to data warehouse virtualization, read the white paper Cloud Transformation: The Next Virtualized Data Frontier for BI, ML, and AI.
The Global 2000 relies on AtScale, the data warehouse virtualization company, to provide unified, secured and governed access to data wherever it resides. The company’s Universal Semantic Layer™ virtualizes data across disparate systems bridging the gap for business intelligence and machine learning, enabling faster and more accurate business decisions at scale.
For additional information, visit www.atscale.com.