Archive for the ‘ViPR’ Category

Hard or Soft Storage? That is the Question and the Answer

Rodger Burkley

Principal Product Marketing Manager at EMC

There’s lots of press these days on Software-Defined Storage (SDS), Software-Defined Data Centers (SDDC), Server SANs, software-only virtual SANs, hyper-converged storage servers, storage appliances and the like. We’ve all been inundated with these new technology and architecture terms by bloggers, marketing mavens, PR, tradeshow signage, consultants, analysts, technology pundits and CEOs of new start-ups. As a blogger and marketing guy, I plead doubly guilty. But the emergence of SDS systems and SDDCs is real and timely. Definitions and differences, however, can be a tiny bit murky and confusing.

This enabling technology is coming to market just in time, as today’s modern data centers, servers, storage arrays and even network/comm fabrics are getting more and more overtaxed and saturated with mega-scale data I/O transfers and operations of all types, in all kinds of data formats (i.e., file, object, HDFS, block, S3, etc.). When you add in the line-of-business commitments for SLA adherence, data security/integrity, compliance, TCO, upgrades, migrations, control/management, provisioning and the raw growth in data volume (growing by at least 50% a year), IT directors and administrators are getting prolonged headaches.

Against this backdrop, it’s no wonder that lately I’m getting asked a lot to clarify the difference between converged storage appliances, hyper-converged/hyper scale-out storage server clusters, and pure software-defined storage systems. So I want to attempt a high-level distinction between a storage hardware appliance and a pure software-defined (i.e., shrink-wrapped software) storage system, along with some considerations for choosing one over the other. In fact, architectural and functional differences are somewhat blurred. So it’s mostly about packaging…but not entirely.

Basically, we all know that everything runs on software – whether it comes pre-packaged in a hardware box (i.e., an appliance) or decoupled as a pure software install. There are also distinctions being made between convergence (i.e., converged infrastructure) and hyper-convergence. Convergence refers to the extent to which compute, storage and networking resources have been “converged” into one virtual layer or appliance box.

Regardless of whether we’re talking about a converged or hyper-converged storage appliance using proprietary or COTS hardware, advanced/intelligent software is required in any box to run, monitor, control and optimize the resulting storage system. Some storage appliance vendors – including EMC – offer their “secret sauce” software unbundled in a pure, software-only version, such as ScaleIO and ViPR 2.0, Red Hat’s ICE (Inktank Ceph Enterprise) or VMware’s Virtual SAN. The main difference between hardware storage appliances and a pure software-defined storage system is chiefly how each is packaged or bundled (or not) with hardware. Some appliances may have proprietary hardware, but not all do, and likewise not all appliances are commodity-hardware based.

A hyper-converged box is typically a commodity-based hardware appliance that has all three computing resource functions rolled up in one box or single layer. Traditional arrays consist of three separate layers or distinct functional components. Some commodity-server-based pure software-defined storage systems are also hyper-converged in that they are installed on application server hardware. Other converged systems (typically appliances) may consist of storage and networking in a commodity or proprietary box – for two layers. Converged and hyper-converged appliances and SDS systems, however, all typically aggregate pooled storage into one shared clustered system with distributed data protection spread across appliance boxes or host servers/nodes. They tend to be storage device/hardware agnostic as well, supporting PCIe I/O and SSD flash media as well as traditional HDDs.

Appliance-based solutions offer plug-and-play boxes with predictable scalability, which can be quickly added for scale-out (more nodes) or scale-up (more storage capacity). Local area clusters can be created with data protection spread across multiple shared appliance boxes. Flash caching and storage pooling/tiering performance features enhance the overall user experience. Adding additional storage and compute resources is predictable in terms of incremental CAPEX outlays. There may be some constraints on scalability, performance, and elasticity, but these restrictions may not be deal breakers for some use cases. ROBO sites, retail store outlets, SMBs and smaller data centers, for example, come to mind, where smaller-capacity, defined hardware storage server appliances provide adequate converged resources. “Datacenter in a box” is often used by some appliance vendors to position these smaller, geographically distributed deployments. It’s an apt sound bite. Other, larger customers might simply add another box for more symmetrical storage, I/O performance, or compute. Either way, they get it all in a single unit.

In other cases, a pure software-defined storage solution or software-defined data center can be better. Why? Well again, use cases are a big driver. Optimal use cases for these commodity-hardware-based SDS systems include database/OLTP, test/dev, virtualization and Big Data/cloud computing – anywhere existing commodity server resources are readily available for lower-cost scalability. SDS systems like ScaleIO can be installed on commodity application servers and pool server DAS together to form a converged, aggregated shared-storage asymmetric cluster with distributed data protection across participating servers. They do this while delivering huge performance (IOPS and bandwidth) and scale-out synergies realized from parallel I/O processing. In essence, a peer-to-peer grid or fabric is created from this software. Blogademically speaking, an SDS is analogous to an unconstrained system versus a contained, appliance-based solution. It goes without saying that both have their strong points.

Another aspect that comes into play is your tolerance for installation, integration, and deployment activities. Both hardware appliances and SDS systems have their strong and weak points in terms of the degree of expertise needed to get up and running. SDS systems can make that task set easier with their lightweight software installs, intuitive monitoring dashboard GUIs and/or English-language command interfaces. Thin provisioning, available on appliances and some SDS systems, offers more efficiency where the amount of resource used is much less than provisioned. This enables greater on-the-fly elasticity for adding and removing storage resources, creating snapshots, and leveraging storage and resource costs in a virtual environment.

For some users, a commodity-based converged or hyper-converged hardware appliance with internal hybrid storage (i.e., SSD flash and HDDs) is the way to go. For others, it’s a pure SDS solution. Some customers and data center administrators favor a bundled, pre-packaged hardware appliance solution from vendors. These offer predictable resource expansion, performance and scalability as well as quick set-up and integration. A growing segment, however, prefers the ease of install, greater on-the-fly elasticity, lower TCO, hyper scale-out capacity/performance (by simply adding more servers and DAS devices) and reduced management overhead of an SDS solution. Thin provisioning allows space to be easily allocated to servers on a just-enough, just-in-time basis for ease of scalability and elasticity.

In the end, the question of going hard or soft for converged or hyper-converged storage systems depends on your data I/O requirements, use cases, goals/objectives, existing resources and environment(s) and plans for future expansion, performance and flexibility. Both have utility.

Third Party Array Support in ViPR Through OpenStack

Parashurham Hallur

OpenStack, the truly open cloud operating system, has been evolving since 2010. Since its inception, many projects have contributed to the growth of the OpenStack ecosystem. One of them, code-named “Cinder,” provides block storage capability. Any vendor who wants to enable block storage support for a proprietary storage system develops a Cinder plugin (also called a Cinder driver). Here is the list of vendors who have developed Cinder plugins.
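
Cinder itself is written in Python, and a vendor plugin is essentially a class implementing Cinder’s volume driver interface. The sketch below is a minimal, hypothetical example – the “Acme” names are invented, and a real driver implements many more methods – but it shows the shape of the work a vendor takes on:

```python
# Minimal sketch of a vendor Cinder driver (hypothetical "Acme" array).
# Real drivers implement many more methods; the method names here follow
# Cinder's volume driver interface.
from cinder.volume import driver


class AcmeISCSIDriver(driver.VolumeDriver):
    """Illustrative block storage driver for a proprietary array."""

    def do_setup(self, context):
        # Open a session to the array's management interface here.
        pass

    def check_for_setup_error(self):
        # Verify credentials and array reachability.
        pass

    def create_volume(self, volume):
        # Ask the array to carve out a LUN of volume['size'] GB.
        pass

    def delete_volume(self, volume):
        # Tear down the backing LUN on the array.
        pass

    def initialize_connection(self, volume, connector):
        # Hand back attach details (e.g., iSCSI target info) for the host.
        return {'driver_volume_type': 'iscsi', 'data': {}}
```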

EMC ViPR – the first software-defined storage platform of its kind – has been in the news since 2013 and is continuously adding new features to keep ahead of the competition. A distinguishing feature of ViPR is its ability to support multi-vendor storage systems. In order to extend support to new storage vendors, ViPR is always looking to enhance its support matrix by adding new storage systems to the list. Traditionally, adding new storage systems meant that EMC, as well as third-party vendors, had to write new native drivers. This requires a lot of man-hours and effort. But what if they could leverage what already exists and save much of that time and effort? That would make a lot of sense, right?

Because of this, many storage vendors have embraced OpenStack and have already qualified their storage to work in the OpenStack environment by developing a Cinder plugin/driver. While ViPR has had integration with OpenStack since its first release via the northbound integration – provisioning storage from OpenStack via ViPR (you can read all about it in my last blog post) – ViPR now leverages Cinder to extend third-party array support and provide customers with extended capabilities across their storage environment. Starting with ViPR 2.0, the Southbound integration is now available: it provisions storage from ViPR via Cinder, the opposite direction from what the ViPR Cinder driver supports.

The following diagram helps you understand the ViPR Southbound integration with OpenStack.

[Diagram: ViPR Southbound integration with OpenStack]
Essentially, ViPR talks to the OpenStack Cinder service by consuming the exposed Cinder REST API. ViPR discovers every Cinder backend configured in cinder.conf on the Cinder node as a storage system. Any volume type created for a backend is discovered as a storage pool of that particular storage system. In other words, a Cinder backend is modeled as a storage system, and a volume type as a storage pool, in ViPR. In a nutshell, this is how ViPR integrates with OpenStack.
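
To make the mapping concrete, here is a hedged sketch of the kind of southbound discovery call ViPR performs: authenticate to Keystone, then list Cinder volume types over the REST API. The endpoint addresses, tenant and credentials are placeholders for your own OpenStack node, and this is our illustration, not ViPR’s actual code:

```python
# Sketch: list Cinder volume types over the REST API (Keystone v2 auth).
# Each volume type maps to what ViPR models as a storage pool; the
# backend named in its extra_specs maps to a storage system.
import requests

KEYSTONE = "http://openstack-node:5000/v2.0"   # placeholder endpoint
CINDER = "http://openstack-node:8776/v2"       # placeholder endpoint

# Authenticate against Keystone to obtain a token and tenant id.
auth = requests.post(
    KEYSTONE + "/tokens",
    json={"auth": {"tenantName": "admin",
                   "passwordCredentials": {"username": "admin",
                                           "password": "secret"}}},
).json()
token = auth["access"]["token"]["id"]
tenant_id = auth["access"]["token"]["tenant"]["id"]

# List the volume types configured on the Cinder node.
types = requests.get(
    "%s/%s/types" % (CINDER, tenant_id),
    headers={"X-Auth-Token": token},
).json()
for vtype in types["volume_types"]:
    print(vtype["name"], vtype.get("extra_specs", {}))
```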

Now, you might ask me, “What is the advantage of such an integration with OpenStack?”

I would answer that there are multiple advantages:

  1. ViPR’s third-party (multi-vendor) array support gets expanded.
  2. The moment a new vendor is added to the list of Cinder plugins, ViPR gets on-the-fly integration with that new third-party array. No further effort is required on the ViPR side to claim support for the new vendor’s array. Theoretically, any new Cinder plugin that gets added will have seamless support in ViPR too.
  3. If vendors running OpenStack would like to use ViPR to manage their storage for its enhanced feature set, the transition is smoother because ViPR already talks to OpenStack. Just bring up ViPR and plug it into OpenStack.

Having understood the background and some of the advantages of this integration, let’s now look at how to bring manageability of a third-party array into ViPR through Cinder.

First of all, we need an OpenStack node running the bare-minimum services: Keystone and Cinder. If you would like UI access to the OpenStack node, other services like Horizon can also be installed. One could choose to install complete OpenStack (with all services), but that is really not recommended. There is an OVA built with the bare-minimum services required for ViPR and OpenStack integration. You can use this, or build a node using www.devstack.org or by following the installation instructions at www.openstack.org. For complete details on how to manage third-party storage in ViPR through Cinder, please visit the ViPR Community.

With this solution, EMC has opened up wider options for storage management. ViPR has been gaining great traction with customers, and many partners are evaluating the ViPR and OpenStack solution. For a test drive, you can download the EMC ViPR Controller here. If you are looking to learn more or need help deploying ViPR with OpenStack, feel free to drop me a note at Parashuram.hallur at emc.com. My team and I will help you cruise with ViPR.

You Think You Know Cloud Storage Gateways? Sync Again

Jeff Denworth

*The following is a guest blog post written by Jeff Denworth of CTERA

The cloud storage gateway market was particularly hot this summer, experiencing a flood of investment topping $100M in both likely and unlikely places. CTERA was one vendor at the center of it all, as a provider of a cloud storage services platform that enables users to deploy cloud storage gateways, enterprise file sync and share, and endpoint backup services from the private or virtual private cloud of their choice. CTERA alone secured $25 million in June, but we were not alone – our friend EMC, for example, also got into the game by acquiring TwinStrata’s block storage caching gateway for its VMAX division. Fueled by customer demand and added investment, the cloud gateway market is undeniably hot, hot, hot! IT research analyst firm MarketsAndMarkets estimates that the cloud storage gateway market will continue to grow at an average rate of 55% through 2019, representing a $5B market by that time.

Why cloud gateways, you ask? Well, there’s a variety of reasons.

  • By harnessing commodity storage technology and smart scalable software in the data center, new public and private cloud storage is redefining data center economics for primary storage and disaster recovery.
  • WAN bandwidth is now robust enough that offices can move substantial amounts of data to and from remote data centers, using deduplication and compression to optimize efficiency and performance.
  • The combination of these two factors is enabling organizations to modernize how they deploy storage at the edge and eliminate some of the pains that customers had with managing storage across a dispersed enterprise.

As customers rush to find more modern solutions for branch office storage, they quickly learn that there are many approaches to cloud storage gateways. To reduce the confusion, I’ll try to illustrate the differences here in this blog.

Sync vs. Caching: Herein Lies the Fundamental Difference.
There are effectively two schools of capacity and namespace management in the cloud storage gateway market, and the differences have a significant impact on customer TCO.

Caching Gateways
Caching gateways are designed to host the authoritative storage volume in a public or private cloud data center while retaining frequently accessed data locally in the gateway. Intelligent metadata management enables these systems to present the full dataset (cloud-resident and locally cached data) as if it were all local, and buffer caches help accelerate read and write operations despite using the WAN as a backplane.

[Figure: Caching gateway – authoritative volume in the cloud, hot data cached locally]
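
A conceptual sketch of that data path may help (our illustration, not any vendor’s implementation; the cloud_store and local_cache objects are assumed stand-ins for an object store client and a local block cache):

```python
# Conceptual sketch of a caching gateway's data path. The authoritative
# volume lives in the cloud; hot blocks are served from a local cache.
class CachingGateway:
    def __init__(self, cloud_store, local_cache):
        self.cloud = cloud_store   # authoritative copy in the cloud
        self.cache = local_cache   # frequently accessed data kept locally

    def read(self, block_id):
        data = self.cache.get(block_id)
        if data is None:
            # Cache miss: fault the block in over the WAN.
            data = self.cloud.get(block_id)
            self.cache.put(block_id, data)
        return data

    def write(self, block_id, data):
        self.cache.put(block_id, data)   # buffered locally for low latency
        self.cloud.put(block_id, data)   # destaged to the authoritative copy
```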

Sync Gateways
Sync gateways leverage the cloud more as a disaster recovery target than as a canonical global namespace. In the case of a sync gateway, the authoritative volume lives at the edge, and through smart snapshot technology, snapshots are synced to the cloud.

[Figure: Sync gateway – authoritative volume at the edge, snapshots synced to the cloud]
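
Again as an illustration only, a sync gateway’s cycle can be reduced to a few lines; the volume, snapshot and cloud objects here stand in for real interfaces:

```python
# Conceptual sketch of one sync-gateway cycle (illustrative only).
def sync_cycle(volume, cloud, last_snapshot=None):
    snapshot = volume.take_snapshot()
    # Ship only the blocks changed since the previous snapshot; a real
    # gateway deduplicates and compresses them to economize WAN bandwidth.
    delta = snapshot.diff(last_snapshot)
    cloud.upload(delta)
    return snapshot   # baseline for the next cycle
```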

So… with all of this, how do you know when to use what? Here’s a simple rule of thumb…

  • Sync Gateways are often used to replace small-medium branch office storage. Think low-end enterprise NAS systems such as Microsoft Windows Storage Server Appliances.
  • Caching Gateways are often used to replace or accelerate enterprise NAS systems such as large scale monolithic or scale-out file systems.

So Where Does CTERA Come In?
CTERA and EMC are currently powering many of the world’s leading private cloud environments, where organizations value data security, data services versatility and the ability to pursue a software-defined storage agenda. CTERA provides a Cloud Storage Services Platform that leverages next-generation object storage such as EMC ViPR and Atmos, along with the EMC ECS Appliance, to serve a number of enterprise data storage services from a central cloud services delivery system. With respect to CTERA’s gateway products, think of these as sync gateways that leverage EMC object storage, replicating data to an EMC storage cloud and capacity-optimizing the snapshots they ship to it.

[Figure: CTERA Cloud Storage Services Platform with EMC object storage]
Putting this all together – as you think about how to evolve your branch and remote office storage architecture, be sure to understand the data footprint. The chart below maps the cost of a sync gateway approach to branch storage (blue line) vs. a caching gateway (grey line). As you can see, there’s a clear cost advantage when dealing with a low data footprint in a branch office.

[Chart: Branch storage cost – sync gateway (blue) vs. caching gateway (grey)]
In this case, we present Microsoft’s StorSimple caching gateway as an example of how organizations can derive substantial savings for modest branch office storage volumes.

Of course, if you want to move hundreds of terabytes or petabytes to the cloud, there’s certainly a very strong case for caching gateways. That said, regardless of your volume and performance requirements, customers today have many options when looking to leverage public, private and hybrid cloud storage to modernize branch office IT infrastructure.

Cloud terminology and concepts are evolving every day… and knowing your options is key to understanding how best to modernize your IT infrastructure and processes. CTERA and EMC ViPR / ECS Appliance is a proven, compelling and EMC-certified solution for organizations with many branch offices and sub-100TB requirements, where the combined solution can help organizations achieve TCO reductions of up to 80% vs. traditional branch storage offerings.

Here are some additional resources for further education along your cloud explorations:

Accelerating Storage ROI with Storage Resource Management

Kevin Gray

Enterprise IT organizations are increasingly being called upon to help their companies drive revenue growth by processing and managing new data sources that increase business intelligence and enhance customer engagement. As a result, many of these organizations are experiencing data growth rates of 20% to 60% per year. In addition, to improve efficiency, organizations are adopting virtualization and software-defined architectures that increase business agility and flexibility. By abstracting the hardware infrastructure, virtualization technology may obscure the relationships between application services and the underlying physical resources they consume. These new abstraction layers can make it difficult to optimize resources to meet service levels while controlling escalating storage costs.

While storage resource management (SRM) solutions have been around for over a decade, many enterprises are increasingly depending on SRM to gain insight into where and how capacity is being consumed. Storage resource management helps these organizations control costs as the size of their infrastructure grows. Understanding how SRM can help your business is essential to getting the most out of your investments in SRM and your storage infrastructure.

Three ways that Storage Resource Management can help you control costs are:

  1. Reducing capital investments
  2. Improving productivity
  3. Minimizing unplanned downtime and performance issues

Let’s take a closer look at these three areas in a bit more detail.

Reducing Capital Investments
Many of today’s large enterprises are managing multiple petabytes of storage with data growing 20% to 60% per year. According to Gartner estimates, the average cost per raw terabyte of enterprise storage last year was $3,212 (Source: Gartner, Inc. “IT Key Metrics Data 2014: Key Infrastructure Measures: Storage Analysis: Current Year,” December 16, 2013). This means these organizations are investing millions of dollars in new capital each year just to keep pace with data growth. Surprisingly, many of these organizations have low utilization rates and limited visibility into historical workloads that would enable more efficient storage tiering. Therefore, they end up purchasing significantly more capacity at higher prices than is needed to meet business requirements.

Storage Resource Management solutions like EMC’s ViPR SRM can help storage teams improve utilization rates by tracking storage consumption by service level to identify where, when, and what type of capacity will be required. This enables just-in-time purchasing processes to avoid over-purchasing capacity.

The capacity utilization rate is an important metric for assessing how efficiently your organization is using its existing capacity. It is the ratio of used to usable capacity in your environment. In working with EMC customers, I’ve seen utilization rates range from a low of about 30% to a high of 80%. The average tends to be just over 60%, but many of our customers are at 50% or less. A utilization rate of 50% means that for every terabyte used, 2 terabytes were purchased. Increasing the utilization rate to 66% would mean that for every terabyte used, only 1.5 terabytes would be purchased. This equates to purchasing 25% less capacity to meet business requirements. For organizations with rapidly growing, multi-petabyte storage environments, improving utilization rates can save hundreds of thousands to millions of dollars a year in capital acquisitions.
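
The arithmetic is simple enough to verify yourself; here is the calculation above expressed as a short, illustrative snippet (using 1 PB used as the example figure):

```python
# Worked example of the utilization math above, for 1 PB (1,000 TB) used.
def purchased_tb(used_tb, utilization_rate):
    """Capacity you must buy to hold used_tb at a given utilization rate."""
    return used_tb / utilization_rate

at_50 = purchased_tb(1000, 0.50)   # 2000 TB purchased at 50% utilization
at_66 = purchased_tb(1000, 2 / 3)  # 1500 TB purchased at ~66% utilization
print(1 - at_66 / at_50)           # 0.25 -> 25% less capacity to buy
```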

SRM also allows storage teams to make more effective use of storage pools and thin provisioning. These technologies increase utilization rates by enabling storage teams to pool resources and allocate capacity on demand. SRM tracks consumption of these pools and estimates “time-to-full” to help ensure adequate capacity is always in place to meet business expectations. It also identifies over-allocated pools where capacity can be reclaimed and repurposed to meet new requirements. And it helps identify orphaned volumes – volumes and file systems with no activity – that can be reclaimed and put to better use.
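
“Time-to-full” itself is just a projection of recent consumption against usable capacity. A minimal sketch of such an estimate (ours, not ViPR SRM’s actual algorithm) might look like this:

```python
# Hedged sketch of a "time-to-full" estimate: fit a linear trend to
# recent pool consumption samples and project forward to capacity.
def time_to_full_days(samples, capacity_tb):
    """samples: [(day_index, used_tb), ...], ordered oldest to newest."""
    (d0, u0), (d1, u1) = samples[0], samples[-1]
    growth_per_day = (u1 - u0) / (d1 - d0)
    if growth_per_day <= 0:
        return None   # a flat or shrinking pool never fills
    return (capacity_tb - u1) / growth_per_day

# Example: a 500 TB pool that grew from 300 to 330 TB over 30 days.
print(time_to_full_days([(0, 300), (30, 330)], 500))   # 170.0 days
```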

SRM can also improve the management of a tiered storage infrastructure. Many companies have moved to tiered service offerings to help lower costs. Tier 1 storage typically uses more expensive disk drives, with higher levels of replication to support data protection and performance requirements, than Tier 2. SRM allows storage teams to understand historical workloads, identify replication relationships, track capacity consumed by a host, workgroup or application, and create and distribute chargeback or showback reports that help align the cost of storage services with business objectives. This allows storage teams to better understand where Tier 2 resources can be deployed to reduce costs without impacting SLAs.

Improving Productivity
The capacity managed per storage FTE is another common benchmark used to assess how efficient your organization is at managing storage. While this value typically grows over time and varies with the size of the environment, it is based on the notion that one person can only do so much. As capacity grows, headcount must be added to support that growth, assuming business-as-usual processes are followed.

Storage resource management can improve productivity by enabling storage teams to manage more capacity with the same resources as the infrastructure grows. This avoids adding new, often hard-to-find, staff to support business requirements. SRM automates and simplifies common tasks like capacity, performance, and chargeback reporting; change tracking and configuration validation; and performance troubleshooting. This frees the storage team to focus on more value-added tasks like planning and delivering new storage services.

Minimizing Unplanned Downtime and Performance Issues
According to IDC, the cost of downtime for companies with more than 10,000 employees is over $1.5 million per hour (Source: IDC, “Measuring Cost of Downtime and Recovery Objectives Among US Firms,” July 2013). More difficult to assess, but clearly a problem for many organizations, is the impact of slow performance. Experience has shown that over half of SAN issues are due to configuration problems. Most organizations do their best to follow design best practices and comply with vendor support matrix recommendations. However, in environments that are growing in size and complexity, consistent adherence becomes increasingly difficult. Storage resource management solutions help reduce downtime by tracking compliance with design practices and support matrix recommendations to ensure your environment is always configured right to meet service levels. When something does go wrong, SRM can help reduce the mean time to problem identification by providing visibility into configurations, health, and performance across the data path.

So, Where Do You Start?
While there are significant benefits to be gained from adopting storage resource management, success depends on understanding how people and processes will evolve when using SRM to derive these benefits. Starting with an understanding of what you want to achieve, and how SRM can help you achieve those objectives, is essential to getting the most out of your investment. EMC has developed a process for working with its customers to evaluate their needs and understand how SRM and other software-defined solutions can help them reduce costs and develop a more agile environment. This process is called the Storage Transformation Workshop. It helps customers assess their business priorities, identify potential solutions, and quantify the financial impact those solutions could have on their organization.

Storage resource management can help IT reduce costs while meeting SLAs as size and complexity grow. If you would like to find out more about how EMC ViPR SRM and software-defined storage solutions can help your organization, contact your EMC representative or click here to sign up for our upcoming webcast on the Storage Transformation Workshop.

ViPR Interactive Demo: Your Software-Defined Storage Playground

Hoc Phan

Since the announcement and release of EMC ViPR, you have probably seen many demos of the product. All are good and serve the purpose they were intended for, whether product training, live data use cases, or customer demos. However, all of those tools require a fairly significant footprint to run. What if you could experience the power of ViPR in a lightweight delivery? Now you can!

Introducing the ViPR Interactive Demo!
The beauty of the ViPR interactive demo is that it can run on your laptop or tablet with minimal demands on connectivity and storage. You don’t need to make a special request for access, only to be placed in a queue for days. The data in the environment is sanitized directly from our two virtual data centers (both multi-million dollar investments). Leveraging an HTML5 front end and focusing on a select set of high-priority customer use cases, this demo gives you the flexibility to improvise on your own or follow a scripted story line.

Let’s Get Started!
First, make sure that your browser accepts cookies and has JavaScript enabled. The following browsers are supported:

  • Google Chrome (version 34 or later)
  • Mozilla Firefox (version 28 or later)
  • Internet Explorer (version 9 or later with Compatibility Mode turned off)

Once ready, please visit the EMC ViPR Demo Page and you will immediately see the screen below (no login required!):

[Screenshot: ViPR interactive demo home screen]
Now you can access the ViPR user interface directly online without the need to deploy, configure, or request lab access. We built this demo as a playground for you. At a high level, this interactive demo can be used to explore ViPR’s concept of Software-Defined Storage.

The demo also provides guided tour functionality. Just click the Behind the Scenes button on the bottom right of the screen, at any time, and it will give you step-by-step highlights of different areas on the screen. This is especially helpful when you are not sure what to do next.

[Screenshot: Behind the Scenes guided tour]
What Are the Important Features?
You can review all the how-to videos from the top navigation as well. Most importantly, you can even discover new arrays and provision storage. Here is a suggested flow you could go through:

  1. Add a Storage System
  2. Add a Virtual Array
  3. Add a Block Virtual Pool
  4. Select Catalog > View Catalog
  5. Go to Block Storage Services > Create Block Volume for a Host
  6. Fill out the form and click Order. It’s OK to select anything in the form.
  7. See the “magic” happen
  8. Order completed!

[Screenshot: Create Block Volume for a Host order flow]

You don’t need to worry about damaging anything, as nothing will actually get provisioned in our data center. Everything in this demo happens only in your browser because, again, this tool is just a playground for you to experience the product.

You can also check out the ViPR integration with XtremIO, which is available in version 2.1. If you go to the Storage System page and select XtremIO, you can see the list of inputs required for discovery. In a production environment, discovery adds the storage pools and storage ports to ViPR.

[Screenshot: XtremIO storage system discovery inputs]
Last but not least, this demo also includes the EMC ECS Appliance. There are two Virtual Data Centers (vdc1 and vdc2), each with one ECS Appliance. At a high level, one or more commodity nodes sit within a Virtual Array. A Virtual Data Center (VDC) is composed of one or more Virtual Arrays. One or more VDCs form a Virtual Pool, which serves bucket creation for object storage.
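
To keep that hierarchy straight, here is a small illustrative model (our names, not ViPR’s API; vdc2’s contents are invented for the example, while boston/sandy-boston come from the demo itself):

```python
# Illustrative object model of the ECS hierarchy described above:
# commodity nodes -> Virtual Array -> Virtual Data Center -> Virtual Pool.
from dataclasses import dataclass
from typing import List


@dataclass
class VirtualArray:
    name: str
    nodes: List[str]                 # commodity nodes in this array


@dataclass
class VirtualDataCenter:
    name: str
    arrays: List[VirtualArray]


@dataclass
class VirtualPool:
    name: str
    vdcs: List[VirtualDataCenter]    # a pool may span sites (geo-capable)

    def create_bucket(self, bucket_name):
        # A bucket created in this pool can be served from any member VDC.
        sites = ", ".join(v.name for v in self.vdcs)
        return "bucket '%s' spans: %s" % (bucket_name, sites)


vdc1 = VirtualDataCenter("vdc1", [VirtualArray("boston", ["sandy-boston"])])
vdc2 = VirtualDataCenter("vdc2", [VirtualArray("array-2", ["node-1"])])  # hypothetical
print(VirtualPool("geo-pool", [vdc1, vdc2]).create_bucket("demo"))
```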

Please go to Dashboard > Commodity Nodes to see the setup. We have four nodes in this ECS (boston, or vdc1). You can expand a single node to get a quick view of its disks and status. The admin can also view the services running on the node; in this case, the commodity node is running the Object Service.

[Screenshot: Dashboard > Commodity Nodes view]
Click on the sandy-boston node and the following screen opens up:

[Screenshot: Commodity node detail view]
This page provides a quick overview of the commodity node’s health. The node is attached to 15 disks; each disk is listed with its capacity and status. You can also see two network interfaces, which have IPv4 and IPv6 addresses assigned to the node.

ViPR Services is fully geo-capable: it replicates data both within and across multiple sites for resiliency against site failures. A bucket can span multiple sites, so any data can be written to or read from any site. We suggest you go to the Object Virtual Pools page and test creating a virtual pool with virtual arrays on both vdc1 and vdc2. After that, switch back and forth between the two VDCs to confirm.

Note that this demo may not cover all use cases. For example, it does not let you select a specific physical array when you create a new virtual array. And although there are two tenants – Provider Tenant and RainPole Tenant – the data is the same for both.

After you are finished playing with the demo, you can get in touch with an EMC rep by clicking the Request a Quote button on top. Otherwise, if you would like to deploy a trial version, please download the ViPR Controller vApp to continue your evaluation. We plan to start work on the rest of the interactive demo in Q4 to ensure you have something cool to show your colleagues.

A Picture is Worth a Thousand Words

Gayatri Aryan

You know the old saying, “a picture is worth a thousand words.” Well, it’s true, and even more so in highly virtualized environments. Simply put, the ViPR SRM topology maps are just that. The topology maps give you a pictorial depiction of end-to-end relationships, providing storage administrators with the visibility they need to manage a complex, heterogeneous storage environment. Consider a host, for example: its topology map, in a single view, gives you the ability to traverse from the host, to its ports, to the fabric(s) it is connected to, all the way to the storage system.

[Screenshot: Host topology map with report categories outlined in red]

There is more to this picture, though. It really becomes a navigation tool with the association of categories (outlined in red above). Upon selection of the host, the right-hand side presents the list of reports available for that device type (in this case, the host). As you select a different object (a switch or array, for example), a list of reports available for that device is presented. In essence, you get the ability to stay in the context of the object you started with while poking around the connected devices.

Everything mentioned above has been in ViPR SRM since the 3.0 release. Enhancements in ViPR SRM 3.5.1 take topology maps to the next level. For example, we introduced the concept of “map types.” What we had until ViPR SRM 3.5 was the default view shown above – a physical connectivity view filtered by logical connectivity (masking views). Starting with ViPR SRM 3.5.1, we have added two additional map types: Masked Storage Systems and Masked Storage Systems with Replicas.

The Masked Storage Systems view, built upon only masking views, is useful when you are interested only in provisioned storage. An example of this view is shown below.

[Screenshot: Masked Storage Systems map type]

Masked Storage Systems with Replicas is built upon the masking view and adds replicas, as shown below.

[Screenshot: Masked Storage Systems with Replicas map type]

In addition to having multiple perspectives for the topology map, we have also introduced some overlays. For instance, if there are alerts on a given device (severe or high), an indicator is shown on top of that device. To avoid cluttering the topology map, this indicator appears only if there is a severe or high alert for the device. An example of the alert indicators is shown below.

[Screenshot: Alert indicator overlay on the topology map]

Tooltips have gotten richer as well. For example, hovering over the host icon, you will see some attributes of the host and a sparkline of its CPU utilization, as highlighted in red in the diagram below. This gives storage admins the ability to quickly scan the entire storage environment, hover over any alerts, and be advised of any conditions that may require further attention or escalation.

[Screenshot: Host tooltip with attributes and CPU utilization sparkline]

For an array, you will see the aggregated port IOPS in addition to some vendor information, as highlighted below in red, again enabling storage admins to quickly identify, isolate, and correct any issues that may be affecting performance or availability.

[Screenshot: Array tooltip with vendor information and aggregated port IOPS]

The ViPR SRM topology maps have been designed in a very deliberate effort to avoid clutter, instead offering the option to view these add-ons as needed.

In addition to the topology map enhancements, ViPR SRM 3.5.1 delivered new platform support for ScaleIO via a new ScaleIO SolutionPack, plus expanded visibility and reporting for HP 3PAR and IBM SVC environments. The MySQL SolutionPack, previously an add-on option, is now included at no additional charge.

You can learn more about ViPR SRM by visiting our online community and/or registering for one of our Rethink Storage webcasts (live or on-demand).

We hope you agree that these product improvements prove our assertion that a picture really is worth a thousand words!
