Posts Tagged ‘storage management’

Converged Infrastructure + Isilon: Better Together

David Noy

VP Product Management, Emerging Technologies Division at EMC

You can’t beat Isilon for simplicity, scalability, performance and savings. We’re talking world-class scale-out NAS that stores, manages, protects and analyzes your unstructured data with a powerful platform that stays simple, no matter how large your data environment. And Dell EMC already has the #1 converged infrastructure with blocks and racks. So bringing these two superstars together into one converged system is truly a case of one plus one equals three.

This convergence—pairing Vblock/VxBlock/VxRack systems and the Technology Extension for Isilon— creates an unmatched combination that flexibly supports a wide range of workloads with ultra-high performance, multi-protocol NAS storage. And the benefits really add up, too:

As impressive as these numbers are, it all boils down to value and versatility. These converged solutions give you more value for your investment because, quite simply, they store more data for less. And their versatility allows you to optimally run both traditional and nontraditional workloads, including video surveillance, SAP/Oracle/Microsoft applications, mixed workloads that generate structured and unstructured data, Electronic Medical Records, Medical Imaging and more, all on infrastructure built and supported as one product.

With a Dell EMC Converged System, you’ll see better, faster business outcomes through simpler IT across a wide range of application workloads. For more information on modernizing your data center with the industry’s broadest converged portfolio, visit emc.com/ci or call your Dell EMC representative today.

 

Learn more about Converged Infrastructure and Isilon. Also, check out the full infographic.

EMC’s Commitment to Everything Software-Defined

Varun Chhabra

Director of Product Marketing, Advanced Software Division at EMC

At EMC, our commitment to creating new solutions for software-defined storage is part of our much larger commitment to supporting the entire software-defined data center infrastructure, in which software, completely abstracted from hardware, enables more adaptive, agile operations. Within the software-defined data center, EMC’s evolving suite of software-defined storage solutions plays an important role in addressing the explosive data growth – both in the volume and variety of data — that poses such a tremendous challenge today. We’ve designed these solutions with features like elastic scale-out to incrementally add storage capacity, open APIs for programmatic flexibility and support for analytics-in-place workloads. With software abstracted from hardware, customers can deploy these and other storage capabilities on the hardware of their choice rather than being locked into a narrow proprietary hardware platform, which means vendor flexibility, lower acquisition costs and more efficient storage provisioning for lower TCO over the long term.

In recent years, EMC has been leading the way in introducing new software-defined storage platforms as well as working to transition our existing industry-leading storage solutions into the software-defined model. We entered the software-defined storage market in 2013 with ViPR Controller, which automates storage provisioning to reduce manual tasks and improve operational efficiency by up to 63%. It delivers storage-as-a-service to consumers, minimizing dependencies on the IT team. Since then, we’ve doubled down on our commitment to providing customers with a comprehensive software-defined storage portfolio. We’ve launched ScaleIO, a server-based storage area network (SAN) with a wide variety of deployment options – available as software on commodity hardware, as an appliance (VxRack™ Node) and as VxRack converged infrastructure from VCE (VxRack Flex System) – that can linearly scale performance to thousands of nodes in a single federated cluster. On the cloud/object storage front, we’ve launched Elastic Cloud Storage, or ECS, a software-defined cloud storage platform built specifically for web, mobile and cloud applications and designed to run as a software-only solution on existing or commodity hardware. ECS scales effortlessly and provides benefits such as the superior economics and global access associated with the public cloud, while minimizing data residency and compliance risks. Both ScaleIO and ECS are available for consumption as appliances or as software-only solutions.
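
Because ECS exposes S3-compatible object APIs, standard S3 tooling can generally be pointed straight at an ECS endpoint, which is a big part of what “built specifically for web, mobile and cloud applications” means in practice. Here is a minimal Python sketch of that idea; the endpoint URL, credentials and bucket name are placeholders, not real values, and assume an object user and namespace have already been provisioned.

    import boto3

    # Point a standard S3 client at a hypothetical ECS endpoint.
    # URL, credentials and bucket name are placeholders.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://ecs.example.com:9021",
        aws_access_key_id="ECS_OBJECT_USER",
        aws_secret_access_key="ECS_SECRET_KEY",
    )

    s3.create_bucket(Bucket="web-app-assets")
    s3.put_object(Bucket="web-app-assets", Key="logo.png", Body=b"...")
    print(s3.list_objects_v2(Bucket="web-app-assets")["KeyCount"])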

Moreover, our software-defined products have very tight integrations with other EMC products. For example, our customers can use ScaleIO in conjunction with EMC XtremCache for flash cache auto-tiering to further accelerate application performance. And those who seek advanced-level protection and recovery for their confidential data can use ScaleIO with EMC RecoverPoint to provide replication and disaster recovery protection in ScaleIO environments.

We also made our EMC Isilon storage family, which has long provided industry-leading scale-out storage for unstructured data, available as a software-only solution. Available now, the Software-defined EMC Isilon (IsilonSD Edge) provides the same ability to manage large and rapidly growing amounts of data in a highly scalable and easy-to-manage way, but with the added benefit of hardware flexibility. Customers can deploy IsilonSD Edge on commodity hardware and easily manage enterprise edge locations including remote and branch offices, replicate the edge data to the core data center and seamlessly tier to private or public clouds.

As our customers move into the new world of software-defined IT, EMC provides a solid base on which to build the scalable, flexible infrastructures that will transform your data centers to meet the future head-on. Our growing portfolio of software-defined storage solutions is a fundamental component of that base, providing a range of scale-out solutions to meet rapidly growing and changing data demands.

To keep up with more EMC SDS information and trends, visit: www.emc.com/sds

 

Three Key Observations From the Gartner Data Center, Infrastructure and Operations Management Conference

I was fortunate enough to be part of the team that supported the EMC presence at the recent Gartner Data Center, Infrastructure and Operations Management Conference in Las Vegas earlier this month. Lots of hard work (briefings, meetings, staffing the expo booth) but also a great opportunity to speak with users and customers, as well as garner some interesting insights from the Gartner analyst-presented sessions.

So what were some of the key themes I observed? First, the software-defined data center is moving a lot closer to reality for a lot of attendees. Key technologies such as software-defined storage and software-defined networking have moved for most from the “I’ll keep my eyes on it” bucket in 2014 into the “I’ve got to do something about this in 2015” bucket. That’s no surprise to our team; we’ve been observing a lot of the same behavior in our interactions with customers at places like executive briefings and user-group meetings. And it helped drive a lot of the insights we presented in our event-sponsor session on “Making the Software-Defined Data Center a Reality for Your Business,” in which the need for automation, especially at the management and monitoring level, was emphasized as a critical requirement to delivering on the promise of the software-defined data center.

Another key theme that had almost everyone talking was the notion of “bi-modal IT,” in which IT operations simultaneously supports an agile, devops-like model for rapid iteration and deployment of newer applications and services, while also maintaining a “traditional” IT operations model for more traditional, less business-differentiating applications and services. In some ways, analysts had been alluding to this for years – devops was coming; it would be a major influential force; prepare for it. What was lacking was the “how,” and that confused and even scared people. But at this event we learned analysts are now saying to support both models (hence “bi-modal” IT) and, more importantly, to deploy supporting systems and tools for each – and absolutely don’t try to use one system for both models, because nothing out there can do that effectively. Folks I spoke to seemed almost collectively relieved: two modes, each with its own tools and systems, make sense to everyone and eliminate the angst of trying to make a round peg fit a square hole. And since it came from this event, it has the inherent “validation” that many in upper management want.

Building on this, the third theme I noticed (more from my interactions with other conference attendees, especially at the EMC expo booth) was a strong interest in continuous availability of applications and systems, rather than in backing up and being able to recover these same environments. People were asking the right questions: For example, what kinds of storage architectures make sense in a continuous-availability model, and can those be aligned with changing data needs? (Yes, and EMC has a lot to offer on this front.) What are the key elements of a monitoring system that focuses on continuous availability? (One answer: automated root-cause and impact analysis, which radically shrinks the time needed to identify problems, and is a key capability in the EMC Service Assurance Suite.) And can a server-based SAN play a role in a continuous availability architecture? (Absolutely – as long as you’re managing it with EMC ScaleIO.)

And this event also had its share of the unexpected (the Las Vegas strip was fogged in – yes, that’s not a typo – for almost two full days), as well as lighter fun-filled moments (EMC’s arcade-themed hospitality suite for conference attendees, complete with a customized Pac-Man-like game called “ViPR Strike”). And as always, it’s the discussions and interactions that I cherish and remember the most.

Which brings it back to you: Were you at the conference too? If so, what do you think of these higher-level observations of mine? What else do you have to add or share? Even if you didn’t go, what are your thoughts and opinions on what you’ve read here?

Hard or Soft Storage? That is the Question and the Answer

Rodger Burkley

Principal Product Marketing Manager at EMC

There’s lots of press these days on Software-Defined Storage (SDS), Software-Defined Data Centers (SDDC), Server SANs, software-only virtual SANs, hyper-converged storage servers, storage appliances and the like. We’ve all been inundated with these new technology and architecture terms by bloggers, marketing mavens, PR, tradeshow signage, consultants, analysts, technology pundits and CEOs of new start-ups. As a blogger and marketing guy, I plead doubly guilty. But the emergence of SDS systems and SDDCs is real and timely. Definitions and differences, however, can be a tiny bit murky and confusing.

This enabling technology is coming to market just in time, as today’s modern data centers, servers, storage arrays and even network/comm fabrics are getting more and more overtaxed and saturated with mega-scale data I/O transfers and operations of all types with all kinds of data formats (i.e., file, object, HDFS, block, S3, etc.). When you add in the line-of-business commitments for SLA adherence, data security/integrity, compliance, TCO, upgrades, migrations, control/management, provisioning and the raw growth in data volume (growing by at least 50% a year), IT directors and administrators are getting prolonged headaches.

Against this backdrop, it’s no wonder that lately I’m getting asked a lot to clarify the difference between converged storage appliances, hyper-converged/hyper scale-out storage server clusters, and pure software-defined storage systems. So I wanted to make an attempt to provide a high-level distinction between a storage hardware appliance and a pure software-defined (i.e., shrink-wrapped software) storage system, while also providing some considerations for choosing one over the other. In fact, the architectural and functional differences are somewhat blurred. So it’s mostly about packaging…but not entirely.

Basically, we all know that everything runs on software – whether it comes pre-packaged in a hardware box (i.e., appliance) or decoupled as a pure software install. There are also distinctions being made between convergence (i.e., converged infrastructure) and hyper-convergence. Convergence refers to the extent to which compute, storage and networking resources have been “converged” into one virtual layer or appliance box.

Regardless of whether we’re talking about a converged or hyper-converged storage appliance-based system using proprietary or COTS hardware, advanced/intelligent software is required in any box to run, monitor, control and optimize the resulting storage system. Some storage appliance vendors – including EMC – offer their “secret sauce” software unbundled in a pure, software-only version, such as ScaleIO and ViPR 2.0, Red Hat’s ICE (Inktank Ceph Enterprise) or VMware’s Virtual SAN. The main difference between hardware storage appliances and a pure software-defined storage system is chiefly how each is packaged or bundled (or not) with hardware. Some appliances may have proprietary hardware, but not all have to, and likewise not all appliances are commodity-hardware based.

A hyper-converged box is typically a commodity-based hardware appliance that has all three computing resource functions rolled up in one box or single layer. Traditional arrays consist of three separate layers or distinct functional components. Some commodity-server-based pure software-defined storage systems are also hyper-converged in that they are installed on application server hardware. Other converged systems (typically appliances) may consist of storage and networking in a commodity or proprietary box – for two layers. Converged and hyper-converged appliances and SDS systems, however, all typically aggregate pooled storage into one shared clustered system with distributed data/protection spread across appliance boxes or host servers/nodes. They tend to be storage device/hardware agnostic as well, supporting PCIe I/O and SSD flash media as well as traditional HDDs.

Appliance-based solutions offer plug-n-play boxes with predictable scalability, which can be quickly added for scale-out (more nodes) or scale-up (more storage capacity). Local area clusters can be created with data protection spread across multiple shared appliance boxes. Flash caching and storage pooling/tiering performance features enhance the overall user experience. Adding additional storage and compute resources is predictable in terms of incremental CAPEX outlays. There may be some constraints on scalability, performance and elasticity capabilities, but these restrictions may not be deal breakers for some use cases. ROBO, retail store outlets, SMBs and smaller data centers, for example, come to mind, where smaller-capacity, defined hardware storage server appliances provide adequate converged resources. “Datacenter in a box” is often used by some appliance vendors to position these smaller, geographically distributed deployments. It’s an apt sound bite. Other larger customers might simply add another box for more symmetrical storage, I/O performance, or compute. Either way, they get it all in a single unit.

In other cases, a pure software-defined storage solution or software-defined data center can be better. Why? Well again, use cases are a big driver. Optimal use cases for these commodity-hardware-based SDS systems include database/OLTP, test/dev, virtualization and big data/cloud computing, and environments where existing commodity server resources are readily available for lower-cost scalability. SDS systems like ScaleIO can be installed on commodity application servers and pool server DAS together to form a converged, aggregated shared-storage asymmetric cluster pool with distributed data protection across participating servers. They do this while delivering huge performance (IOPS and bandwidth) and scale-out synergies realized from parallel I/O processing. In essence, a peer-to-peer grid or fabric is created from this software. Blogademically speaking, an SDS is analogous to an unconstrained system versus a contained appliance-based solution. It goes without saying that both have their strong points.
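
To make the pooling idea more concrete, here is a toy Python sketch, purely conceptual and not ScaleIO code, of how an SDS layer might aggregate DAS from participating servers into one shared pool and spread two copies of each data chunk across different nodes for protection. All node and chunk names are made up.

    import random

    class SDSPool:
        """Toy model: aggregate each server's local disks into one shared pool
        and keep two copies of every chunk on different nodes."""

        def __init__(self):
            self.nodes = {}          # node name -> free capacity in GB
            self.placement = {}      # chunk id -> [node, node]

        def add_node(self, name, das_capacity_gb):
            self.nodes[name] = das_capacity_gb

        def total_capacity(self):
            return sum(self.nodes.values())

        def write_chunk(self, chunk_id, size_gb=1):
            # Pick two different nodes with enough free space.
            candidates = [n for n, free in self.nodes.items() if free >= size_gb]
            primary, replica = random.sample(candidates, 2)
            for n in (primary, replica):
                self.nodes[n] -= size_gb
            self.placement[chunk_id] = [primary, replica]
            return primary, replica

    pool = SDSPool()
    for i in range(4):                      # four app servers contributing DAS
        pool.add_node(f"server-{i}", 2000)
    print(pool.total_capacity())            # 8000 GB pooled
    print(pool.write_chunk("vol1-chunk-0")) # e.g. ('server-2', 'server-0')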

Another aspect that comes into play is your tolerance for installation, integration and deployment activities. Both hardware appliances and SDS systems have their strong and weak points in terms of the degree of expertise needed to get up and running. SDS systems can make that task set easier with their thin-provisioned software modules/installs, intuitive monitoring dashboard GUIs and/or English-language-based command interfaces. Thin provisioning, available on appliances and some SDS systems, offers more efficiency because the amount of resource actually used is much less than what is provisioned. This enables greater on-the-fly elasticity for adding and removing storage resources, creating snapshots and leveraging storage and resource costs in a virtual environment.
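
Thin provisioning is easy to picture with a short sketch: a volume advertises its full logical size up front, but backing capacity is only consumed as blocks are actually written. The Python below is a conceptual illustration only, not any vendor’s implementation.

    class ThinVolume:
        """Logical size is promised up front; physical blocks are allocated on first write."""

        def __init__(self, logical_gb, block_gb=1):
            self.logical_gb = logical_gb
            self.block_gb = block_gb
            self.blocks = {}                 # block index -> data, allocated lazily

        def write(self, block_index, data):
            if block_index >= self.logical_gb // self.block_gb:
                raise IndexError("write beyond advertised volume size")
            self.blocks[block_index] = data  # capacity is consumed only now

        def physical_used_gb(self):
            return len(self.blocks) * self.block_gb

    vol = ThinVolume(logical_gb=1000)        # presents 1 TB to the host
    vol.write(0, b"db header")
    vol.write(512, b"table data")
    print(vol.logical_gb, vol.physical_used_gb())   # 1000 logical, 2 physically used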

For some users, a commodity-based converged or hyper-converged hardware appliance with internal hybrid storage (i.e., SSD flash and HDDs) is the way to go. For others, it’s a pure SDS solution. Some customers and data center administrators favor a bundled, pre-packaged hardware appliance solution from vendors. These solutions offer predictable resource expansion, performance and scalability as well as quick set-up and integration. A growing segment, however, prefers the ease of install, greater on-the-fly elasticity, lower TCO, hyper scale-out capacity/performance (by simply adding more servers and DAS devices) and reduced management overhead of an SDS solution. Thin provisioning allows space to be easily allocated to servers on a just-enough, just-in-time basis for ease of scalability and elasticity.

In the end, the question of going hard or soft for converged or hyper-converged storage systems depends on your data I/O requirements, use cases, goals/objectives, existing resources and environment(s) and plans for future expansion, performance and flexibility. Both have utility.

What “Field of Dreams” Can Teach You About IT Projects and IT Operations Management

Would you believe that the 1989 movie “Field of Dreams” has just as much to do with IT operations as it does with baseball? Remember the backstory?

The movie’s protagonist, Ray Kinsella, is an average and unsuccessful farmer, with regrets about his past. Like many on an IT team who get struck with a bolt of inspiration – an idea for an IT-related project (a new application or service, probably along the lines of “what if we had a way to…” or “what if we did this:”) – Ray listens to a voice in the night telling him “If you build it, he will come.” He proceeds to plow under his crop and construct a baseball field in the middle of his Iowa cornfield.

Then Ray’s “project” develops its own momentum – the seemingly now-corporeal ghost of Chicago White Sox outfielder Shoeless Joe Jackson walks out of the cornfield bordering the newly built baseball field, admires everything, thanks Ray for what he’s done, and asks to come back – with “friends.”

Now Ray on an IT team would have had a similar experience: Someone gets wind he’s been working on a Skunk Works project that, although radical, could be something amazing. Shoeless Joe is like that first test user that becomes the unintentional evangelist, and quickly starts to build a critical mass among users.

At this stage in the IT project, things are going well: The user base has grown, the old guard (shown in the movie as 1960s anti-establishment author Terence Mann) at first grudgingly agrees to take a look at the project, then likes what it sees, becomes a strong advocate, and things evolve quickly (maybe even moving to formal alpha testing). In the movie, Shoeless Joe has brought a throng of other now-corporeal ghosts to play baseball once again on Ray’s field (more users, all of whom love the work Ray’s done). And Ray’s wife Annie stands by her man, despite a wave of criticism coming from her brother Mark, the financial advisor and antagonist who absolutely cannot see or understand Ray’s vision and what he’s done.

Mark personifies the non-IT finance person who absolutely cannot see any value in an IT project. The premise makes no sense. It’s not about economic optimization. Losses need to be cut and risk averted. (Sound familiar?) Even the testimonies of others can’t budge this person from his position. And without the ability to secure funding, that IT project isn’t ever going to go anywhere and become something big.

Getting past this immovable object requires an “ah-ha” moment, where the clouds part and insight illuminates the closed-minded. For Mark, it was seeing the “project” in a context that mattered to him. (In the movie, Mark could finally see the ballplayers after one of them, Moonlight Graham, willingly chose to step off the safe environment of the field to help Ray’s daughter Karen, who had fallen from the bleachers.)

So now Ray’s IT project has its financing. It’s gone from a germ of an idea, been incubated, grown, overcome hurdles, proven itself, and spread among users. It’s been promoted. (Terence confidently predicts that people will come.) In IT project terms, it’s ready to go live.

In the movie, that go-live moment is represented by Ray finally getting his reward for all his efforts: The newest player to arrive at the field is none other than his father, with whom, as we learn from a soliloquy early in the movie, Ray had a long, unresolved estrangement that left him with significant, gnawing regret. The movie delivers its biggest tear-jerker moment when Ray, who as a rebellious teen steadfastly refused to throw a baseball around with his father, is now able to fix that by asking his father to have a catch.

And they did. And the credits to the movie rolled, showing a fade-to scene of miles and miles of cars driving toward Ray’s field. And you might think that would be the end of the IT analogies here as well. But it’s not.

Although the biggest IT lesson of all came from the end of the movie, it had nothing to do with Ray and his father playing catch. Just before that sequence, the most level-headed character in the movie, Ray’s wife Annie, simply and succinctly proved to be the voice of reason in one sentence: “If all these people are going to come, we’ve got a lot of work to do.”

Annie represents IT operations. She supported Ray’s idea (the IT project) from the beginning. (Many I’ve spoken to in IT operations say “Do we ever really have a choice of not supporting it?”) As IT operations does, she kept things running smoothly despite the chaos unfolding. And, as the voice of reason to any IT project, IT operations provides the view that starts to answer the question “What happens now that we’ve gone live?”

And Annie’s insight provides the biggest unstated IT lesson from the movie: Although the IT project and the way it unfolds are important (and make for a good story, in this case), you can’t neglect the IT operations view of the world: Things need to happen after the go-live date to ensure the IT service delivery environment (i.e., infrastructure) keeps running smoothly and performs as expected. And consider the long-term impact of what’s now changed in the environment. In IT terms, the project’s success has created a performance bottleneck in the environment (the traffic jam of people trying to get to the field). And that’s just for starters: Where will all these people eat, sleep, and bathe? And is there a wi-fi hotspot nearby?

Good IT projects can be a tremendous experience. They can overcome obstacles to create new value, change things for the better, and get people to see things in a whole new light. But a key takeaway to keep in mind is that they, like a movie, tend to have a story arc: a perceived beginning, middle, and end.

And it’s really that “end” that needs to be thought of as the beginning – the start of the usage lifecycle of that application or service. That’s when you have to address everything that needs to be done, from an IT operations monitoring and management perspective, to keep that new application or service available and performing the way it should to meet (or exceed) user expectations. And that’s exactly what EMC Service Assurance Suite and EMC ViPR SRM do – provide IT operations teams with the insights necessary to ensure that the IT service delivery environment functions the way it should, as well as the ability to easily absorb changes in the IT environment.

If I were producing this as a short movie, I’d now call out “fade to black, cut, and roll credits.” But IT operations would still keep a light on, behind the scenes, to help keep an eye on things.

Service Assurance for NFV

Serge Marokhovsky


There is growing interest across the telecom industry today in Network Functions Virtualization (NFV). NFV is being evaluated in labs across the world and piloted for production rollout. In particular, larger service providers are steering their strategies around NFV to get in front of the pack and give themselves the agility they lack against smaller, more aggressive players. A second key driver for NFV is to reduce costs while maintaining carrier-class service assurance. With the fast arrival of NFV, are the current OSS and BSS tools in use today going to work in the future when NFV goes mainstream? And if so, will they deliver the operational benefits management anticipates from NFV, or just keep doing the same thing? Let’s explore the challenges and new opportunities for service assurance in an NFV environment. The main business benefits of NFV are as follows:

  1. CAPEX: Service providers will purchase commodity hardware based on x86 architecture running Linux and hypervisors, bringing the benefits of virtualization, instead of running network functions on expensive proprietary hardware.
  2. OPEX: Operational costs will provide the bulk of the savings through greater automation. NFV becomes effective when service orchestration, automated root cause analytics, service impact analytics and automated service remediation replace the manual processes many service providers rely on today. Orchestration is required for full service management, from provisioning of resources to de-activation, and for service changes to meet customers’ evolving requirements over the life of a service. Next, a real-time automated root cause analysis tool will reduce the burden on operations staff of identifying faults and network congestion; today, staff are overwhelmed by millions of daily events and unable to pinpoint faults in the network, and the complexity of networks makes it nearly impossible to know the service impact of such faults. The third element of NFV operations is automating remediation through new software tools that are invoked once root causes are identified, service impact is understood and remediation steps can be executed based on knowledge of the environment (a conceptual sketch of this follows the list).
  3. Competitiveness: With lower operational costs, more competitive prices can be offered, but that’s not all. The big competitive advantage comes from the fact that service providers will be able to activate services faster than today (seconds instead of days). Self-service portals will let customers choose services and expand or contract them as needed on the fly, automatically, without the intervention of personnel. More efficient use of hardware resources will give users greater performance, and service providers will have the flexibility to move network functions anywhere in the network (edge, core, backhaul) by virtue of virtualization.
  4. Accurate metering and billing: in a world where everything is virtualized, it becomes difficult to know how much of the actual physical resources are used. Accurate metering of service usage results in better pricing and gives service providers the opportunity for greater profits.
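
To ground the root-cause and service-impact idea from item 2, here is a toy Python sketch, purely illustrative and not how any EMC product works: raw events are grouped against a known dependency map so that one underlying fault explains a flood of symptoms, and the affected services fall out of the same map. All element and service names are made up.

    from collections import defaultdict

    # Hypothetical dependency map: which services ride on which network element.
    DEPENDS_ON = {
        "vRouter-7": ["voip-gold", "enterprise-vpn-12"],
        "vFirewall-3": ["enterprise-vpn-12"],
    }

    # Raw event stream: many symptoms, few causes.
    events = [
        {"element": "vRouter-7", "type": "link-down"},
        {"element": "vRouter-7", "type": "bgp-peer-lost"},
        {"element": "vRouter-7", "type": "packet-loss-high"},
        {"element": "vFirewall-3", "type": "session-drop"},
    ]

    by_element = defaultdict(list)
    for e in events:
        by_element[e["element"]].append(e["type"])

    for element, symptoms in by_element.items():
        # Toy rule: a link-down on the element explains the rest of its symptoms.
        root_cause = element if "link-down" in symptoms else "unknown"
        impacted = DEPENDS_ON.get(element, [])
        print(f"{element}: {len(symptoms)} symptoms -> root cause: {root_cause}, "
              f"impacted services: {impacted}")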

Now let’s look at the NFV architecture required to take service assurance to higher levels.

[Figure: NFV service assurance architecture]

Service Assurance Component
This component is the brain of the overall system, performing complex analytics to ensure SLAs are met and efficiencies are at their highest. It is designed to monitor the infrastructure for faults, congestion, anomalies and service impacts. Problems are then automatically root-caused, service impact is assessed in real time, and remediation is automatically initiated to counter the impact to end users. When the root cause is equipment failure or insufficient hardware resources, a ticket is automatically created to engage the support staff to replace or repair the defective hardware component or to initiate a request to add hardware resources.

The operational value is to ensure the highest possible service levels with minimal costs.

Inventory & Topology Component
This next component is at the heart of the system, mapping all the elements of the infrastructure so that orchestration, service assurance and network functions can do their jobs with accuracy. It first maintains a detailed and accurate listing of the infrastructure, applications and service catalog. A thorough topology is then maintained, mapping the relationships among the various network elements, activated services and physical/virtual networks in real time.

Its operational value is providing accurate and detailed information about the service provider’s systems without requiring human intervention to maintain a correct listing.
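
An inventory and topology component like this is essentially a continuously updated graph of hosts, virtual machines, virtual network functions and the services built on them. The toy Python sketch below, with made-up names, shows how a failure on one physical host can be walked up that graph to the services it carries, which is the raw material for the impact analysis described above.

    # Toy inventory/topology graph: child -> the parent it runs on.
    RUNS_ON = {
        "vnf-vEPC-1": "vm-101",
        "vnf-vIMS-2": "vm-102",
        "vm-101": "host-rack3-07",
        "vm-102": "host-rack3-07",
    }
    SERVICE_OF = {"vnf-vEPC-1": "mobile-data", "vnf-vIMS-2": "voice"}

    def services_on_host(host):
        """Walk the graph upward from a failed host to the services it carries."""
        impacted = []
        for vnf, service in SERVICE_OF.items():
            vm = RUNS_ON[vnf]
            if RUNS_ON.get(vm) == host:
                impacted.append(service)
        return impacted

    print(services_on_host("host-rack3-07"))   # ['mobile-data', 'voice']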

Service Orchestration
This is not a fully automated component, because it requires human intervention to request new services, approve service changes and perform other input functions that serve end users and administrators. However, there is a great deal of automation to provision services with fast turnaround (seconds instead of days) and to interact programmatically with the network infrastructure with no human intervention.

EMC’s Service Assurance Answers to NFV
EMC is well positioned with its current Service Assurance Suite of products to offer automated root cause analysis, discovery and topology services, impact analysis and a rich API that allows external systems to integrate with and customize the solution to specific business needs.

Furthermore, EMC is leading the investment in OpenStack to drive service orchestration and to interface uniformly with other parts of the infrastructure for automation. EMC is also investing in high-performance analytics that deliver real-time results to automatically remediate problems before SLAs and the user experience are affected.

Finally, the journey toward NFV will transform service providers into organizations that look much like today’s enterprise IT organizations. With the x86 platform at the core of NFV’s value, the move of network functions from proprietary hardware to commodity x86 hardware running in large data centers will require service providers to rely on vendors such as EMC with deep expertise in data center management.

There is a lot of activity happening around NFV, and EMC is leading the charge. To learn more, the TMForum Digital Disruption event, Dec 8-11 in San Jose, CA, is the place to be, and EMC will have a presence at this year’s event. If you can’t make it to the event, there are other opportunities to learn more: download this analyst brief on Service Assurance, or join our next webcast on Service Assurance on December 11th by following this link: http://bit.ly/1EuMSGs
