What is Data Extraction?

Data extraction is the process of retrieving data from data sources for further data processing or storage.

Data extraction is perhaps the most important part of the Extract/Transform/Load (ETL) process because it inherently includes deciding which data is most valuable for achieving the business goal driving the overall ETL.

Sometimes data is relatively easy to extract because it lives in a structured data store such as a relational database management system (RDBMS). In that case, the well-defined, standardized Structured Query Language (SQL) is a powerful tool for doing targeted extracts of exactly the data you need. SQL can also perform some of the translation/transformation work, making it more powerful still.
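
To make this concrete, here is a minimal sketch of a targeted SQL extract in Python using the standard-library sqlite3 module. The database file, the orders table, and its columns are hypothetical stand-ins for a real source system; the GROUP BY shows how SQL can fold a first layer of transformation into the extract itself.

```python
import sqlite3

# A minimal sketch of a targeted SQL extract; "source.db" and the
# "orders" table are hypothetical stand-ins for a real source system.
conn = sqlite3.connect("source.db")

# Select only the columns the business goal needs, and let the GROUP BY
# perform a first pass of transformation (daily revenue per region)
# inside the database itself.
query = """
    SELECT region,
           DATE(order_ts) AS order_date,
           SUM(amount)    AS daily_revenue
    FROM   orders
    WHERE  order_ts >= '2024-01-01'
    GROUP  BY region, DATE(order_ts)
"""

for region, order_date, daily_revenue in conn.execute(query):
    print(region, order_date, daily_revenue)

conn.close()
```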

In other cases, the data lives in a non-SQL data store, or is even spread across many different digital, and potentially non-digital, formats. Extracting it then requires more specialized tools or bespoke code. Unstructured data extraction generally makes projects longer, so the rule of first understanding the value of the data you plan to extract is even more important.
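
As an illustration of what bespoke extraction code might look like, the sketch below parses an invented log format with a regular expression and emits structured records. Real unstructured sources each demand their own parsing logic, which is a large part of why these projects run long.

```python
import json
import re

# Bespoke extraction from semi-structured text. The log format below is
# invented for illustration; a real source would need its own parser.
LINE_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) user=(?P<user>\S+) action=(?P<action>\S+)"
)

def extract_records(lines):
    """Yield structured dicts from raw log lines, skipping unparseable ones."""
    for line in lines:
        match = LINE_PATTERN.match(line)
        if match:
            yield match.groupdict()

sample = [
    "2024-05-01 12:00:03 INFO user=alice action=login",
    "corrupted line that does not match",
    "2024-05-01 12:00:09 WARN user=bob action=export",
]

for record in extract_records(sample):
    print(json.dumps(record))
```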

Streaming Data Pipeline vs. Batch Data Extraction

Another consideration in the extraction phase is the velocity of data. In some cases data is continuous, meaning new data elements arrive on a regular basis. This is often referred to as a streaming pipeline of data and applies mostly to structured data. Streaming use cases exist in every industry and are often employed for workloads in IoT, finance (e.g., fraud detection), security monitoring, healthcare, and advertising.
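
The sketch below illustrates the streaming pattern in miniature: each record is handled the moment it arrives rather than waiting for a scheduled run. The event generator and the fraud rule are invented for illustration; in production the feed would come from a broker such as Kafka.

```python
import random
import time
from itertools import islice

def event_stream():
    """Simulate an endless feed of payment events (a stand-in for a broker)."""
    while True:
        yield {"user": random.randint(1, 100), "amount": random.uniform(1, 500)}
        time.sleep(0.1)

def looks_fraudulent(event):
    # Placeholder rule, purely for illustration.
    return event["amount"] > 450

# Handle each event as it arrives; islice bounds the demo to 50 events.
for event in islice(event_stream(), 50):
    if looks_fraudulent(event):
        print("flagging for review:", event)
```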

The more prevalent mode of data extraction is batch. Batch extraction refers to a defined process running on a time interval. This discrete execution of the extraction process can approximate a streaming use case by running quite frequently. Most current data-freshness requirements are measured in hours or minutes, not seconds or real time, so batch accounts for the overwhelming majority of implementations. Because of this popularity, more infrastructure and tooling exists in the batch extraction space from established enterprise vendors such as Informatica, while many streaming tools, such as Kafka, originated in the open source community.
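
A common batch pattern is incremental extraction against a high-water mark: each run pulls only the rows that changed since the previous run, so running the job frequently approximates a stream. The sketch below assumes a hypothetical events table with an updated_at column.

```python
import sqlite3
import time

def run_batch(conn, last_seen_ts):
    """Extract rows newer than the high-water mark and return the new mark."""
    rows = conn.execute(
        "SELECT id, updated_at, payload FROM events "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_seen_ts,),
    ).fetchall()
    for row_id, updated_at, payload in rows:
        print("extracted:", row_id, payload)
        last_seen_ts = updated_at  # advance the high-water mark
    return last_seen_ts

conn = sqlite3.connect("source.db")  # hypothetical source database
watermark = "1970-01-01 00:00:00"
for _ in range(3):          # three demo intervals; real jobs use a scheduler
    watermark = run_batch(conn, watermark)
    time.sleep(60)          # e.g., a one-minute extraction interval
```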

Why is Data Extraction Important for Business?

Decisions at this point in the pipeline can and will heavily influence the viability of downstream use of the data. It is therefore critical to always identify your business goal and use a combination of human analysis and machine learning to identify the correlation between the source data you are extracting and the analysis or decisions you hope to optimize. One way to accomplish this is the Agile method of running a spike solution (a simple program that explores a potential solution) to ensure the data you are investing in extracting is appropriate for the use case, as the sketch below illustrates.
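
A spike solution can be as small as the script below, which checks whether a candidate source field actually correlates with the metric the business wants to optimize. The field names and numbers are invented sample data, and statistics.correlation requires Python 3.10+.

```python
from statistics import correlation

# Invented sample data: a candidate source field and the target metric.
marketing_spend = [10, 12, 18, 25, 30, 34, 40]
weekly_signups = [105, 110, 160, 210, 240, 300, 310]

r = correlation(marketing_spend, weekly_signups)
print(f"Pearson r = {r:.2f}")

# A strong correlation suggests the field is worth extracting; a weak one
# means the extraction investment may not pay off for this use case.
if abs(r) < 0.3:
    print("Weak signal: reconsider extracting this field.")
```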

The Challenges of Data Extraction and How AtScale Can Help

We talk a lot about the challenges of data extraction. Problems begin the moment you start moving data. Engineers must build complex pipelines to move and transform it, and security and control over the data are diminished. Incorporating new data sources requires re-engineering and database modeling, which can take months. Data must also be pre-aggregated to fit into a single data warehouse, meaning users lose data fidelity and the ability to explore atomic data.

AtScale eliminates these challenges by virtualizing the data, allowing it to be queried on its native platform with no data movement. Because the data never moves, all of an enterprise's data can be leveraged without the extraction challenges of traditional approaches to data warehousing.
