Data Services: Making Data More Mobile, Accessible, and Resilient (Part One)
15 Oct
One of the biggest challenges DevOps engineers face is data gravity and the slow delivery of data. According to Digital Realty’s Data Gravity Index, the intensity of data gravity is expected to increase at a CAGR of 139% by 2024. Because of this, there has never been a more critical time to implement innovative data services solutions that counteract the drag data gravity puts on your pipeline.
We need data services that drive more efficient processes, save developers time on administrative tasks, and cut the time it takes to deploy new applications and features. In this article, we’ll examine which services are required to make data more mobile, accessible, and resilient.
Adding Characteristics to Data to Maximize Accessibility
Data services are defined by Red Hat as “self-contained units of software functions that give data characteristics it doesn’t already have.” We would go one step further and say that “data services give data characteristics it didn’t already have and allow operators to manage those characteristics to optimize business outcomes.”
In the past, the list of data characteristics was limited to Location, measured by servo offset; Speed, measured in I/Os per second; Availability, measured by recovery point objective; and Size, measured in (X)bytes.
In a world where data was stored, backed up, and restored, concepts such as agility, accessibility, and resilience lacked relevance and were simply ignored.
With data growth burgeoning, and transformation to the cloud fully underway, we must instill, manage, and optimize a growing list of now highly relevant data characteristics. And thanks to researchers like Dave McCrory, even advanced concepts such as data gravity, as measured by intensity, are rising to the fore.
This suggests that we now need data services that can make volumes more accessible, mobile, and resilient. Let’s consider a reinvention of the Location characteristic and its implications on modern data challenges.
Reinventing the Location Characteristic of Data
Historically, a data address referred to the physical Location denoted by the volume name and servo offset, e.g., Vol Work1, Offset 32. Moving data meant physically reading a number of blocks starting at Offset 32, transferring them to another volume, rewriting them there, and informing the application of the change — a time- and resource-consuming process that contributes heavily to data gravity intensity.
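To make that cost concrete, here is a minimal sketch of location-based data movement. The volume names, offsets, and in-memory "volumes" are hypothetical, purely for illustration; the point is that the work (and the application patching) scales with the amount of data moved.

```python
# Illustrative sketch only: moving data that is addressed by physical location.
# Volumes are modeled as plain lists of blocks; names and offsets are made up.

def move_extent(volumes, src, src_offset, dst, dst_offset, n_blocks, app_refs):
    """Physically copy n_blocks, then patch every application reference."""
    blocks = volumes[src][src_offset:src_offset + n_blocks]       # read
    volumes[dst][dst_offset:dst_offset + n_blocks] = blocks       # rewrite
    for ref in app_refs:                                          # inform apps
        if ref["volume"] == src and ref["offset"] == src_offset:
            ref["volume"], ref["offset"] = dst, dst_offset
    return len(blocks)  # cost grows with the data moved, not with metadata

volumes = {"Work1": list(range(100)), "Work2": [0] * 100}
refs = [{"volume": "Work1", "offset": 32}]
moved = move_extent(volumes, "Work1", 32, "Work2", 0, 8, refs)
```

Every byte travels, and every consumer of the old address must be told about the new one — which is exactly the friction that drives data gravity.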
ionir Data Services Platform for Kubernetes reinvents the Location characteristic of data. Rather than a physical address, ionir assigns a unique “name” to each set of data. Much like the Uniform Resource Locator (URL) concept we use every day to access web information, ionir acts as a Domain Name System (DNS) for data. Just as the physical location of a web page is irrelevant when you request its URL, with ionir the physical location of data’s 1’s and 0’s is also irrelevant.
This enables developers to access data by name rather than location, allowing them to focus on application logic while ignoring the idiosyncrasies of data storage and management.
Perhaps more importantly, this reinvention of the Location characteristic allows for the instant relocation of the data access point without requiring the prior physical movement of the data. We can now move volumes across the world in seconds.
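The name-based indirection described above can be sketched as a simple catalog that maps a stable name to a current physical location. This is only an analogy for the DNS-like behavior, assuming an in-memory lookup table — not ionir’s actual implementation, whose internals are not described here.

```python
# Illustrative sketch: DNS-like indirection for data. All names, sites, and
# addresses below are hypothetical examples.

class DataCatalog:
    """Resolves a stable data name to its current physical location."""

    def __init__(self):
        self._locations = {}  # name -> (site, physical_address)

    def register(self, name, site, address):
        self._locations[name] = (site, address)

    def resolve(self, name):
        # Applications only ever use the name; whatever location sits
        # behind it can change without the application noticing.
        return self._locations[name]

    def relocate(self, name, new_site, new_address):
        # "Moving" the access point is a metadata update, not a bulk copy,
        # so it takes constant time regardless of the volume's size.
        self._locations[name] = (new_site, new_address)

catalog = DataCatalog()
catalog.register("orders-db", site="us-east", address="vol-7/off-32")
catalog.relocate("orders-db", new_site="eu-west", new_address="vol-2/off-0")
```

Because consumers hold only the name, relocating the access point is one pointer update; the bulk data can follow later (or be fetched on demand), which is what makes “moving a volume across the world in seconds” plausible.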
In our next installment, we’ll consider the impact of creating a new data characteristic, the Time of data. Read part two now.
Want to see the efficiencies ionir can bring to your pipeline? Start your free trial of ionir today.