Three Techniques for Improving Analytics ROI in the Cloud

How to Optimize Your Platforms and Processes

The ability to turn data into actionable insights creates the opportunity to make business decisions that drive revenue and control costs. Collecting and then analyzing data from numerous sources, across disparate platforms, presents a multitude of challenges. Successfully overcoming them can be the difference between a good business and a category leader.

Businesses today are investing heavily in becoming truly data-driven, which increasingly means serving the voracious appetite of the business analyst. The ability to collect and store data from millions of customer transactions and activities is only the first piece; the next crucial step is serving your organization's data consumers, enabling them to easily and consistently access this data so they can quickly derive insights and create the forecasts that drive revenue-generating decisions.

With all of this investment in analytics, particularly cloud analytics, optimizing your platforms and processes for maximum ROI is critical. This means managing cloud costs, of course, but ROI can also be measured in improvements in time-to-insight, time-to-decision, and team productivity.

We’ve identified three critical things that every business should consider when running cloud analytics to serve data consumers efficiently: choosing the right technology, monitoring usage, and continuously improving.

Choosing The Right Technology

Enterprises must overcome the limitations of legacy technology while providing the performance necessary to drill into data at scale. The full analytics stack relies on three components:

  • A data warehouse that can support the capacity demands of the business
  • A modeling platform to provide consistent data definitions analysts can use to drill into data
  • A visualization tool to derive the insights that are ultimately used to make business decisions.

The first step in choosing the right technology is to establish the goals of your organization and answer the question: what business outcomes are you trying to achieve? You may be focused on being data-driven at scale to leverage rapidly growing data volumes, reducing reliance on IT in order to accelerate reporting and time-to-insight, or enabling granular drill-down analysis to better identify business opportunities, or some combination of these.

With your goals established, it’s important to define your technology evaluation criteria. Criteria should drive performance and, ultimately, your business goals. Examples include platform capacity, platform usage, and the compute cost of a dashboard or data model.

You should also be able to measure the results of your technology choices, using KPIs such as the number of data requests managed across teams, query performance, and the number of data sources accessed regularly.
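As a rough illustration, these KPIs can be computed directly from a query-log extract. The log fields and values below are hypothetical; substitute whatever your platform's query history actually exposes.

```python
# Hypothetical query-log extract; field names are illustrative, not a real schema.
from collections import Counter
from math import ceil

query_log = [
    {"team": "finance",   "source": "warehouse.sales",  "duration_ms": 420},
    {"team": "finance",   "source": "warehouse.sales",  "duration_ms": 380},
    {"team": "marketing", "source": "warehouse.events", "duration_ms": 1250},
    {"team": "marketing", "source": "warehouse.crm",    "duration_ms": 610},
]

requests_per_team = Counter(row["team"] for row in query_log)   # requests managed per team
active_sources = {row["source"] for row in query_log}           # sources accessed regularly
durations = sorted(row["duration_ms"] for row in query_log)
p95_ms = durations[max(0, ceil(0.95 * len(durations)) - 1)]     # nearest-rank 95th percentile

print(dict(requests_per_team), len(active_sources), p95_ms)
```

Tracking these three numbers over time is enough to tell whether a platform change actually moved the needle.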

Monitoring Technology and Usage

Adopting the right cloud technology offers a tremendous opportunity for both cost savings and performance scalability. However, if the technology is used without oversight, there is a very good chance that performance expectations will not be met and unpredictable costs will erase any of the cloud’s value. This is why it’s incredibly important to implement a monitoring framework to get (and keep) your BI stack in shape.

Performance bottlenecks occur when resource utilization exceeds thresholds at peak load and user concurrency results in queuing. To identify these bottlenecks, set up detailed, real-time monitoring of your systems. Metrics to track include CPU, memory, disk I/O, network traffic, and query response times.
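A minimal sketch of threshold-based alerting on those metrics; the metric names and limits below are assumptions, and in practice the snapshot would come from your platform's monitoring API rather than a hard-coded dict.

```python
# Illustrative alert thresholds; tune these to your own workload, and feed the
# snapshot from your cloud provider's monitoring service.
THRESHOLDS = {
    "cpu_pct": 85.0,
    "memory_pct": 90.0,
    "disk_io_mb_s": 400.0,
    "query_p95_ms": 2000.0,
}

def breached(snapshot: dict) -> list:
    """Return the metrics that exceed their alert threshold."""
    return [m for m, limit in THRESHOLDS.items() if snapshot.get(m, 0) > limit]

peak_load = {"cpu_pct": 92.3, "memory_pct": 71.0,
             "disk_io_mb_s": 120.0, "query_p95_ms": 3400.0}
print(breached(peak_load))  # ['cpu_pct', 'query_p95_ms']
```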

One of the most common bottlenecks is the user request queue. It can often be relieved with small configuration changes in the data platform. Where tuning the existing environment isn’t enough, the next option is to scale horizontally with more machines or vertically with more powerful machines. Treat scaling as the second option, though, since additional machines are never free!
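To see why queuing appears and what extra capacity buys, here is a toy slot model; the arrival rate, query runtime, and slot counts are made up purely for illustration.

```python
import heapq

def avg_queue_wait(arrivals_ms, service_ms, slots):
    """Average time each query waits for a free execution slot."""
    free_at = [0.0] * slots          # when each slot next becomes free
    heapq.heapify(free_at)
    total_wait = 0.0
    for t in arrivals_ms:
        slot_free = heapq.heappop(free_at)
        start = max(t, slot_free)    # query waits if all slots are busy
        total_wait += start - t
        heapq.heappush(free_at, start + service_ms)
    return total_wait / len(arrivals_ms)

# 20 queries arriving every 100 ms, each running 500 ms
arrivals = [i * 100 for i in range(20)]
print(avg_queue_wait(arrivals, 500, slots=2))   # queue grows steadily
print(avg_queue_wait(arrivals, 500, slots=8))   # queue disappears
```

With two slots the offered load exceeds throughput and waits climb without bound; with eight slots the same workload never queues at all, which is the trade-off the configuration and scaling decisions above are navigating.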

Without this depth of monitoring, costs can quickly get out of control. Cloud platforms have different pricing models and you need to optimize for your situation. For example, on-demand pricing allows a customer to pay as they go based on the amount of data processed, while flat rate pricing has a fixed fee for guaranteed processing capacity. Other pricing models charge based on data volume or size and number of warehouses and clusters.

Closely monitoring usage will help you understand which pricing model will work best for you. Once you understand how users are querying data, you may decide that fixed rate pricing works better for you than on-demand, or vice versa.
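A back-of-the-envelope break-even calculation makes the choice concrete. The price points below are hypothetical, not any vendor's actual rates.

```python
# Hypothetical price points for illustration only; real rates vary by vendor.
ON_DEMAND_PER_TB = 5.00         # pay-as-you-go, $ per TB of data processed
FLAT_RATE_PER_MONTH = 10_000.0  # fixed fee for guaranteed capacity

def cheaper_model(tb_processed_per_month: float) -> str:
    """Pick the cheaper pricing model for a given monthly scan volume."""
    on_demand_cost = tb_processed_per_month * ON_DEMAND_PER_TB
    return "on-demand" if on_demand_cost < FLAT_RATE_PER_MONTH else "flat-rate"

break_even_tb = FLAT_RATE_PER_MONTH / ON_DEMAND_PER_TB  # TB/month where costs match
print(break_even_tb, cheaper_model(500), cheaper_model(5000))
```

Below the break-even volume you are paying for capacity you don't use under flat rate; above it, on-demand bills grow without a ceiling.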

You can get as nuanced as establishing fixed capacity for most of the time and paying for flex-capacity during predictable peak usage times. For example, fixed capacity may serve you well during most of the week, but you can shift to flex-capacity during windows where processing demand increases regularly, such as Mondays when the business is updating reports and dashboards. This helps you avoid overpaying for larger fixed capacity that won’t be used the majority of the time.
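Under assumed slot prices, the savings from this mixed approach are easy to estimate; every number below (slot counts, rates, the flex premium) is illustrative.

```python
# Illustrative capacity planning; slot counts and rates are assumptions.
BASELINE_SLOTS, PEAK_SLOTS = 100, 400
SLOT_HOUR_RATE = 0.06   # $ per slot-hour, hypothetical fixed-capacity rate
FLEX_PREMIUM = 1.5      # assume flex capacity costs 50% more per slot-hour

hours_per_week = 7 * 24
peak_hours = 24         # the Monday window when dashboards refresh

# Option A: size fixed capacity for the Monday peak, all week long
always_peak = PEAK_SLOTS * hours_per_week * SLOT_HOUR_RATE

# Option B: fixed baseline plus flex slots only during the peak window
flex_mix = (BASELINE_SLOTS * hours_per_week * SLOT_HOUR_RATE
            + (PEAK_SLOTS - BASELINE_SLOTS) * peak_hours * SLOT_HOUR_RATE * FLEX_PREMIUM)

print(round(always_peak, 2), round(flex_mix, 2))
```

Even paying a premium for the flex slots, option B costs well under half of option A in this toy example, because the peak-sized capacity sits idle six days out of seven.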

Continuously Improve Your Environment

With the right technology and the proper monitoring in place, it’s time to improve the outcomes of your investment. Improvement should be a never-ending process. A number of initiatives can make a world of difference for performance, such as adjusting dashboard filter settings, updating a data model, improving data preparation, and rewriting code. Fortunately, you will find the answers for where to focus in the logs.

The logs are a record of what users are experiencing and the impact those experiences have on a technical environment. To improve return on investment, it’s important to map the logs to the drivers of performance and cost so that you can optimize KPIs like compute costs, query performance, and cost per query.
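As a sketch, those cost KPIs can be derived by attaching a compute rate to the usage fields in the logs. The per-slot-millisecond rate and the log schema here are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical per-slot-millisecond compute rate and log schema.
COST_PER_SLOT_MS = 0.0000002

log_rows = [
    {"dashboard": "revenue", "slot_ms": 8_000_000},
    {"dashboard": "revenue", "slot_ms": 6_000_000},
    {"dashboard": "churn",   "slot_ms": 30_000_000},
]

cost_by_dashboard = defaultdict(float)
for row in log_rows:
    cost_by_dashboard[row["dashboard"]] += row["slot_ms"] * COST_PER_SLOT_MS

total_cost = sum(cost_by_dashboard.values())   # compute cost KPI
cost_per_query = total_cost / len(log_rows)    # cost-per-query KPI
print(dict(cost_by_dashboard), round(total_cost, 2), round(cost_per_query, 2))
```

Grouping cost by dashboard, as above, is usually the fastest way to find the one report that is quietly consuming most of your compute budget.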

The easiest way to interpret logs is visually. An efficient approach is to export your logs, load them into your data platform, and query them for analysis. You can then visualize that analysis with plots like box plots and scatter plots to help identify areas for improvement. One caveat: be careful with averages, as they don’t provide a good depiction of performance.
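A quick example with made-up durations shows why averages mislead: a handful of very slow queries drags the mean far away from both the typical and the worst-case experience, while the median and a tail percentile tell the real story.

```python
# Made-up latency data: 95 fast queries and 5 pathological ones.
from statistics import mean, median

durations_ms = sorted([200] * 95 + [30_000] * 5)

avg = mean(durations_ms)        # pulled far above typical by the slow tail
med = median(durations_ms)      # the typical user experience
p99 = durations_ms[98]          # nearest-rank 99th percentile: the bad tail

print(avg, med, p99)
```

The mean (1,690 ms) describes almost no one's experience: most users see 200 ms, and the unlucky 5% see 30 seconds. Box plots and percentile charts surface that shape; a single average hides it.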

Putting it all together

Every company has more data than it knows what to do with. The issue is that most companies don’t know how to use it. Establish a strategy to choose the right technology for your business, monitor that technology to make sure your business is realizing the value of the investment, and then improve upon that technology by understanding how it’s being used by your team. When you’re able to institute these three techniques, you’ll take your business from being data-conscious to data-driven.

How AtScale can Help Improve your Cloud Analytics ROI

Once you’ve decided on the right cloud platform for your business, the right pricing model to go with, and put processes in place for monitoring usage and implementing continuous improvement, there is still more you can do to make all of it work even better.

AtScale offers a modern approach to business intelligence and analytics in the cloud. AtScale’s Cloud OLAP enables analysts to build centralized data models and perform sub-second, multidimensional analysis with popular BI tools on any cloud data platform.

Including AtScale in your analytics stack improves any cloud data platform you choose: it accelerates query performance, reduces compute usage, and shortens time-to-insight from weeks to days.

Enterprises rely on AtScale to overcome data and analytics challenges, including accelerating data-driven decisions at scale, creating one compliant view of business metrics and definitions, and controlling the complexity, costs, and risk of analytics.

AtScale Product Overview

AtScale provides the premier platform for data architecture modernization. AtScale connects you to live data using one set of semantics without having to move any data. Leveraging AtScale’s Autonomous Data Engineering™, query performance is improved by an order of magnitude. AtScale inherits native security and provides additional governance and security controls to enable self-service analytics with consistency, safety, and control. AtScale’s Intelligent Data Virtualization™ and intuitive data modeling enable access to new data sources and platforms without ETL or needing to call in data engineering.

AtScale powers the analysis used by the Global 2000 to make million dollar business decisions. The company’s Intelligent Data Virtualization platform provides Cloud OLAP, Autonomous Data Engineering™ and a Universal Semantic Layer™ for fast, accurate data-driven business intelligence and machine learning analysis at scale. For more information, visit