The technology drivers for private clouds and public clouds overlap to a great extent – agility, standardization, elastic resource provisioning, and better overall utilization of infrastructure resources, to name a few. When it comes to data storage, however, private and public clouds diverge significantly. Storage differs from compute resources in that it involves long-term, non-volatile preservation of state, as opposed to the transient state of provisioned compute resources.
Public clouds serve a wide variety of customers with a broad range of requirements and application profiles. Given that diversity, it makes little sense to charge for storage as a single service. This, together with public cloud vendors' motivation to better monetize resources, means that providers offer multiple storage services with different characteristics and feature sets, each catering to different needs and billed individually. Block storage, file storage, object stores, and other services are made available in separate pools with little or no synergy between them, and a single customer would rarely consume all service types.
Private clouds, on the other hand, pose an entirely different set of challenges. An enterprise that owns, operates, and consumes all of the resources in a private cloud will focus mostly on overall efficiency and global optimization of resources. Manually configuring which individual services an application should use is unnecessary overhead, and breaking the infrastructure into individually managed pools reduces resource efficiency and increases complexity. Infrastructure and application management needs to be based on QoS, classes of service, and API-based management, while everything else needs to be automated and globally optimized.
Public cloud vendors have generally succeeded by creating a variety of services aligned with their objectives, while the needs of growing private clouds have largely gone unaddressed. Lacking a better storage solution, private cloud implementations either deploy legacy storage solutions that are alien to clouds, or try to imitate the public cloud model of separate platforms for separate services; both approaches lead to inefficient resource utilization and higher management overhead.
Ionir for Private Clouds
A storage solution that is optimized for a private cloud must be capable of aggregating the resources that exist within the cloud, such as SSD, SCM, and other locally attached media, as well as other datacenter resources (object store/archive), and transforming them into a single, enterprise-wide storage and data management platform.
Such a solution needs to provide:
- High availability, with no single point of failure
- Multiple resiliency schemes for different media types to balance performance and efficiency
- Smart, fine-grained, perpetual tiering based on real-time heat statistics that is media-aware and, together with the resiliency schemes, enables multiple classes of storage
- Multiple access methods and logical presentations of data
- Advanced data management services such as fine-grained, point-in-time snapshots and clones, and built-in disaster recovery
- Virtually infinite scalability of capacity, bandwidth, and IOPS while maintaining elasticity
- Enterprise grade performance with latency equivalent to or surpassing monolithic hardware arrays
- Mobility of applications and data
- Support for containerized and legacy (virtualized) workloads
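To make the trade-off behind "multiple resiliency schemes" concrete, here is a minimal Python sketch (illustrative only, not Ionir's implementation) contrasting n-way replication with single-parity striping. The stripe geometry and helper names are assumptions for the example:

```python
# Illustrative comparison of two resiliency schemes (not Ionir's implementation).

def replication_overhead(copies: int) -> float:
    """Raw capacity consumed per byte of user data under n-way replication."""
    return float(copies)

def parity_overhead(data_blocks: int, parity_blocks: int) -> float:
    """Raw capacity per byte under a (k data + m parity) erasure-coded stripe."""
    return (data_blocks + parity_blocks) / data_blocks

def xor_parity(blocks: list[bytes]) -> bytes:
    """Single-parity block: byte-wise XOR of equal-sized data blocks.
    Any one lost block can be rebuilt by XOR-ing the survivors with the parity."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# 3-way replication costs 3.0x raw capacity; a 4+1 parity stripe costs 1.25x,
# at the price of read-modify-write and rebuild work on the data path.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
p = xor_parity(data)
# Recover a lost block (say data[1]) from the survivors plus the parity:
recovered = xor_parity([data[0], data[2], data[3], p])
assert recovered == data[1]
```

Replication favors performance (no reconstruction on reads); erasure coding favors capacity efficiency — which is why a tier-aware platform may use different schemes for different media.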
This is the driving principle behind Ionir’s container native storage and data management platform. Kubernetes supplies the infrastructure on which to implement our storage platform, along with a number of features that can be leveraged to implement functions that previously required proprietary implementations in traditional and legacy storage solutions. These include, among others: co-locating helper processes, distributing secrets, software component health checking, replicating software component instances, horizontal auto-scaling, naming and discovery, load balancing, rolling updates, resource monitoring, log access and ingestion, support for introspection and debugging, identity and authorization, and flexibility in data representation.
Relying on Kubernetes for the above functionality, combined with our unique metadata architecture, allows us to implement our storage stack as a set of containerized services. Data resiliency and management for the different tiers is handled by separate microservices that implement resiliency schemes optimized for each tier, including distributed erasure coding. Tiering between local and remote media can be fully automatic, based on real-time statistics and cloud-wide (or, at least, cluster-wide) heuristics, eliminating the need for manual decisions about data placement.
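As an illustration of heat-based tiering of the kind described above, the following sketch decays each extent's access count over time and maps the resulting heat to a tier. The thresholds, half-life, and tier names are hypothetical, not Ionir's actual heuristics:

```python
# Hedged sketch of heat-based tier placement; thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Extent:
    """A fine-grained unit of data whose placement is decided independently."""
    extent_id: str
    accesses: int = 0
    last_access: float = 0.0  # timestamp of the most recent I/O

def heat(extent: Extent, now: float, half_life: float = 300.0) -> float:
    """Exponentially decayed access count: recent I/O counts more than old I/O."""
    age = now - extent.last_access
    return extent.accesses * 0.5 ** (age / half_life)

def place(extent: Extent, now: float,
          hot_threshold: float = 100.0, warm_threshold: float = 5.0) -> str:
    """Map an extent's current heat to a media tier (hypothetical cutoffs)."""
    h = heat(extent, now)
    if h >= hot_threshold:
        return "scm"          # storage class memory: lowest latency
    if h >= warm_threshold:
        return "local-ssd"    # locally attached flash
    return "object-store"     # capacity/archive tier

now = 100_000.0
hot = Extent("e1", accesses=500, last_access=now)
cold = Extent("e2", accesses=500, last_access=now - 3600)
# The busy extent lands on SCM; the same extent, untouched for an
# hour, decays toward the archive tier with no manual intervention.
```

In a real system the statistics would be gathered cluster-wide and the placement re-evaluated continuously, which is what makes the tiering "perpetual" rather than a one-time decision.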
Challenges and Innovation Focus
While much of the control functionality required is based on mechanisms native to Kubernetes, and some of the data path functionality is implemented using well known methods and mechanisms, significant innovation was required to make the system a reality.
First, the system needed to be broken into components that enable horizontal scaling without adding much network traffic, while allowing independent scaling along different axes (IOPS, bandwidth, capacity). The protocols used to communicate between these components were carefully designed to support these goals. Another challenge was the fine granularity of nodes in a cloud deployment. While traditional storage arrays rely on hardware resources such as battery-backed shared RAM caches that support large, monolithic systems, it is impractical to include such hardware in every cloud node. Instead, we developed an architecture that effectively takes advantage of resources more likely to be found in commodity compute nodes, such as RAM, locally attached storage class memory, and SSDs.
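One well-known technique for this kind of horizontal scaling is consistent hashing, which lets nodes join or leave while relocating only a small fraction of the data. The sketch below is a generic illustration of that technique, not a description of Ionir's actual protocol; node names and the virtual-node count are assumptions:

```python
# Generic consistent-hashing placement sketch (not Ionir's protocol).
# Adding a node moves only ~1/N of the data instead of triggering a
# global reshuffle, which keeps rebalancing traffic low as the system scales.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes: list[str], vnodes: int = 64):
        # Each node claims `vnodes` points on the ring to smooth the load.
        self.ring = sorted(
            (self._h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes)
        )
        self.keys = [k for k, _ in self.ring]

    @staticmethod
    def _h(s: str) -> int:
        return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

    def node_for(self, key: str) -> str:
        """Owner is the first ring point at or after the key's hash (wrapping)."""
        i = bisect.bisect(self.keys, self._h(key)) % len(self.keys)
        return self.ring[i][1]

old = HashRing(["node-a", "node-b", "node-c"])
new = HashRing(["node-a", "node-b", "node-c", "node-d"])
moved = sum(old.node_for(f"chunk-{i}") != new.node_for(f"chunk-{i}")
            for i in range(10_000))
# Roughly a quarter of the chunks move (all of them to node-d); the rest stay put.
```

The point of the sketch is the scaling property: growing from three to four nodes relocates only the data the new node takes ownership of, rather than rebalancing everything.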
Finally, our innovation focused on the metadata architecture, its associated data structures, and the algorithms derived from them. The structures are designed to abstract data and present it in an access-method-independent manner. Lock-free and, to the extent possible, synchronization-free data structures enable massively parallel, highly concurrent, scalable operation. And to enable mobility and application access to data regardless of location, data addressing and the implementation of all data management operations refer to all data within the platform by what it is, rather than by where it is.
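Addressing data "by what it is" is the idea behind content addressing, where a block's address is derived from a hash of its contents rather than from its physical location. The minimal sketch below illustrates the principle only; Ionir's metadata architecture is considerably more sophisticated, and the class and method names here are invented for the example:

```python
# Minimal content-addressed store: a block's address is a hash of its bytes,
# so the same data resolves to the same address on any node, and identical
# writes deduplicate naturally. Illustrative sketch only.
import hashlib

class ContentStore:
    def __init__(self):
        self._blocks: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        """The address is derived from the data itself ('what it is')."""
        addr = hashlib.sha256(data).hexdigest()
        self._blocks.setdefault(addr, data)  # duplicate content is stored once
        return addr

    def get(self, addr: str) -> bytes:
        return self._blocks[addr]

store = ContentStore()
a1 = store.put(b"hello world")
a2 = store.put(b"hello world")   # writing the same bytes again
assert a1 == a2                  # same content -> same address
assert len(store._blocks) == 1   # stored exactly once
```

Because the address is location-independent, data can move between nodes, tiers, or clouds without invalidating any reference to it — which is what makes location-transparent mobility possible.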
A Consistent Cloud Across Private and Public Clouds
As adoption of Kubernetes accelerates and hybrid and multi-cloud deployments become the standard for IT clouds, the Ionir platform is an ideal solution for the storage and data management needs of an enterprise. Deployed in both public and private clouds, the Ionir platform enables a consistent IT environment with a common set of management workflows, reducing overall IT costs and complexity while increasing flexibility in provisioning and deployment of resources. The instant mobility of applications and data enabled by Data Teleport™ maximizes agility, simplifies operations, and helps unify all IT resources into a single global pool.