Data Services: Making Data More Mobile, Accessible, and Resilient (Part One)

One of the biggest challenges DevOps engineers face is data gravity and the slow delivery of data. According to Digital Realty’s Data Gravity Index, the intensity of data gravity is expected to grow at a CAGR of 139% through 2024. There has never been a more critical time to adopt innovative data services that keep data gravity from slowing down your pipeline.

We need data services to drive more efficient processes, saving developers time on administrative tasks and cutting the time to deploy new applications and features. In this article, we’ll uncover which services are required to make data more mobile, accessible, and resilient.

Adding Characteristics to Data to Maximize Accessibility

Data services are defined by Red Hat as “self-contained units of software functions that give data characteristics it doesn’t already have.” We would go one step further and say that “data services give data characteristics it didn’t already have and allow operators to manage those characteristics to optimize business outcomes.”

In the past, data had only a limited set of characteristics: Location, measured by servo offset; Speed, measured in I/Os per second; Availability, measured by recovery point objective; and Size, measured in (X)bytes.

In a world where data was stored, backed up, and restored, concepts such as agility, accessibility, and resilience lacked relevance and were simply ignored.

With data growth burgeoning, and the transformation to the cloud fully underway, we must instill, manage, and optimize a growing list of now highly relevant data characteristics. And thanks to researchers like Dave McCrory, even advanced concepts such as data gravity, as measured by intensity, are coming to the fore.

This suggests that we now need data services that can make volumes more accessible, mobile, and resilient. Let’s consider a reinvention of the Location characteristic and its implications on modern data challenges. 

Reinventing the Location Characteristic of Data

Historically, a data address referred to the physical Location denoted by the volume name and servo offset, e.g., Vol Work1, Offset 32. Moving data meant physically reading a number of blocks starting at Offset 32, transferring them to another volume, rewriting them there, and informing the application of the change — a time- and resource-consuming process that contributes heavily to data gravity intensity.
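The cost of this location-based model can be sketched in a few lines of Python. Everything here (the `Volume` class, `move_volume`) is a hypothetical toy model for illustration, not any real storage API: the point is simply that a physical move touches every block, so its cost grows with the size of the data.

```python
# Toy model of location-based addressing, where "moving" a volume
# means physically copying every block. All names are hypothetical.

class Volume:
    def __init__(self, name, blocks):
        self.name = name
        self.blocks = blocks  # block offset -> bytes


def move_volume(src, dst, offset, count):
    """Physically copy `count` blocks starting at `offset` from src to dst.

    Cost is linear in the amount of data moved -- this is the slow,
    resource-consuming step that feeds data gravity.
    """
    for i in range(offset, offset + count):
        dst.blocks[i] = src.blocks[i]  # read from src, rewrite on dst
        del src.blocks[i]
    return dst


work1 = Volume("Work1", {i: b"\x00" * 512 for i in range(32, 64)})
backup = Volume("Backup", {})
move_volume(work1, backup, 32, 32)
# The application must now be told its data lives on "Backup", not "Work1".
```

Note that even after the copy completes, the application still has to be updated with the new physical location — a second source of friction the name-based approach below removes.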

ionir Data Services Platform for Kubernetes reinvents the Location characteristic of data. Rather than a physical address, ionir assigns a unique “name” to each set of data. Much like the Uniform Resource Locator (URL) concept we use every day to access web information, ionir acts as a Domain Name System (DNS) for data. Just as the physical location of a web page is irrelevant when you request a website, with ionir the physical location of data’s 1s and 0s is also irrelevant.

This enables developers to access data by name rather than location, allowing them to focus on application logic while ignoring the idiosyncrasies of data storage and management.

Perhaps more importantly, this reinvention of the Location characteristic allows for the instant relocation of the data access point without requiring the prior physical movement of the data. We can now move volumes across the world in seconds.

What’s Next?

In our next installment, we’ll consider the impact of creating a new data characteristic, the Time of data. Read part two now.

Want to see the efficiencies ionir can bring to your pipeline? Start your free trial of ionir today.
