Data Migrations: Seven10 and EMC Revolutionize the Industry Paradigm

Bobby Moulton

President & CEO of Seven10 Storage Software

*The following is a guest blog post by Bobby Moulton, President & CEO of Seven10 Storage Software, a leading developer of cloud-based information lifecycle management (ILM) and data migration software.*

An assertive headline is essential for a bold undertaking: forever changing how data is moved from old storage to new. A few years ago, Seven10 set out to transform how users, vendors, and application providers think about file and storage migrations. It started with a customer challenge – move critical data off proprietary hardware onto new storage without interrupting patient care – and resulted in Storfirst, a simple, trusted data migration platform.

Seven10 searched the industry and was surprised at the lack of innovation. Where was the automation? Where was the vision? Where was the hands-off, ‘we make it so easy you can do it yourself’ innovation? It seemed vendors were too busy developing next-generation SaaS, Big Data, IoT, and cloud-based offerings to work on a data migration solution.

Seven10 Storfirst was Born.
So Seven10 stepped up to the plate. We focused on customer-driven migrations that were highly automated, supremely reliable, and ridiculously cost-effective. We tossed the PS-led blueprint and created a new, 100% software-driven model.

From day one, Seven10’s Storfirst software seamlessly transitions data from the widest range of legacy storage environments, including EMC Centera, NetApp StorageGrid, HP MAS, IBM GMAS, Oracle SAMFS – as well as any existing NAS platform, cloud gateway or file system.  In addition to data migration capabilities, Storfirst is the only solution offering a standard SMB/CIFS or NFS presentation layer for immediate access into EMC platforms such as ECS.

Why Migrate Data to EMC ECS?
EMC’s Elastic Cloud Storage (ECS) software-defined cloud storage platform combines the cost advantages of commodity infrastructure with the reliability, availability and serviceability of traditional storage arrays.  ECS delivers protocol support for Object and HDFS – all within a single storage platform. Seven10’s Storfirst Gateway allows EMC customers to quickly decommission legacy storage devices while simultaneously modernizing their infrastructure with the adoption of ECS.

How Seven10 Storfirst Gateway Works:
Seven10 offers migration PLUS go-forward data management – all without breaking the bank or interrupting day-to-day operations.  Seven10 changed the paradigm from a resource intensive, PS-led effort, to a repeatable, software-driven five-step migration process:

1. Inventory – Storfirst “ingests” the existing file system as read-only and configures new storage or storage tiers under a single managed share.

2. Sync – While providing uninterrupted access to legacy data and writing to new storage, Storfirst copies all legacy data onto new storage or storage tiers.

3. Verify – Using an MD5 hashing algorithm for data verification, Storfirst delivers migration with zero risk of data loss (a simplified sketch of this kind of check appears after this list).

4. Audit – Storfirst provides detailed logging to support a file-by-file comparison on the new storage, ensuring all data has been copied without any loss.

5. Decommission – Once the migration is complete, the application communicates to the new storage platform while the legacy storage is decommissioned and removed from the environment.
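To make the verify and audit steps concrete, here is a minimal sketch of the kind of file-by-file MD5 comparison they describe. This is purely illustrative – not Storfirst code – and the mount points are hypothetical placeholders:

```python
# Illustrative only: compare MD5 digests of every file on the legacy share
# against the copy on the new storage. Paths are hypothetical placeholders.
import hashlib
from pathlib import Path

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading in 1 MB chunks."""
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copy(source_root: Path, target_root: Path) -> list[str]:
    """Return relative paths of files that are missing or differ on the new storage."""
    mismatches = []
    for src in source_root.rglob("*"):
        if not src.is_file():
            continue
        dst = target_root / src.relative_to(source_root)
        if not dst.is_file() or md5sum(src) != md5sum(dst):
            mismatches.append(str(src.relative_to(source_root)))
    return mismatches

if __name__ == "__main__":
    bad = verify_copy(Path("/mnt/legacy_share"), Path("/mnt/new_share"))
    print("verified clean" if not bad else f"{len(bad)} files need attention")
```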

Thanks to the longstanding Technology Connect Select Partnership of EMC and Seven10, organizations retire and/or refresh their storage architecture with software-driven, secure data migrations that guarantee zero data loss. Storfirst meets compliance regulations by enforcing policies such as encryption, file-locking, lifespan, auditing, and tiering. Industries from healthcare to financial services and from manufacturing to government now have the answer to the data migration challenge.

Seven10 and EMC Ease Customer Stress with Safe, Trusted, Proven Migration Solutions

For Allegiance Health, Storfirst seamlessly migrates critical files to ECS.  Due to long-term reliability concerns with the existing NetApp StorageGRID, Allegiance selected Storfirst to migrate millions of electronic records off NetApp and over to ECS.  This all-in-one solution includes optimum storage with a built-in migration path and a 100% auditable transition to ECS – all while delivering Allegiance uninterrupted access to their patient files.

“The combined offering from EMC and Seven10 provides Allegiance Health with an easy and safe migration solution for moving and protecting our critical patient data.  Seven10’s Storfirst migration and management software is very robust, allowing us to quickly and easily adopt the EMC cloud storage platform,” said Allegiance Health’s Information Systems Vice President and Chief Information Officer Aaron Wootton.

It’s clear that the question is not whether companies will migrate their data, but how they will complete the migration. Understanding the features, advantages, and benefits of the options is essential. Through a well-defined, proven, best-of-breed technology partnership with real-world applications, Seven10 and EMC redefine the industry paradigm.

Sizing up Software Defined Storage

Rodger Burkley

Principal Product Marketing Manager

By now, you’ve all heard how Software Defined Storage (SDS) is reshaping the storage industry (and use case) landscape. The market gets it, and users are increasingly embracing this new, “disruptive” technology by introducing it to their enterprise data centers or using it to create hyper-scalable virtualized infrastructures for cloud applications. After all, installing software on individual commodity host application servers to create a virtual storage pool (i.e., a “server SAN”) from each participating server’s excess direct-attached storage (DAS), without requiring additional specialized storage or fabric hardware, is alluring…and almost too good to be true.

Throw in the added synergy and side benefits – ‘on-the-fly’ elasticity; linear scaling of I/O performance and capacity; simplicity and ease of use; hardware/vendor agnosticism; and unparalleled storage platform flexibility – and you might think “SDS server SAN” technology can solve world hunger too. Well, if not world hunger, perhaps today’s data centers’ thirst for a simpler, less expensive, higher-performing, and more flexible storage solution for block and/or object….

Yes. There’s a lot of excitement, market activity and hype out there around SDS and Server SANs in general. But don’t take my word for it. Though data compilations for CY 2014 aren’t available yet, Wikibon’s market TAM and SAM sizings for 2013 are revealing.

Figure 1. Hyperscale vs Enterprise

Figure 2. Vendor SOM (share of market)

Figure 1 shows that (at least for 2013) “Hyperscale” Server SANs (i.e., petabyte scale) generated far greater revenue than “Enterprise” Server SANs.  Why?  New VSI (Virtual Server Infrastructure) and cloud use cases are ideally suited to the hyper-converged, hyper-scalable attributes SDS Server SANs bring to the table.  This is, in fact, a primary targeted use case for ScaleIO.  Why are Enterprise Server SANs so much smaller than Hyperscale Server SANs?  The enterprise is the domain of the mission-critical apps, databases, and use cases that keep IT data center directors and administrators busy…and up at night.  It’s also the domain of traditional storage arrays and the storage admins, with lots of proprietary equipment (and bias) from vendors like EMC and our esteemed competitors.  These folks are wary and cautious when it comes to new technologies.  They’re not enthusiastic early adopters.  But as new technologies and products mature and prove themselves, they end up being embraced by IT data center departments.  “Show me your value prop” or…“show me the money.”  Simple, less complex storage solutions will get you in the door.  Growth is expected to be high in this hardware-‘open’ and ‘liberated’ segment over time.

Figure 2 shows the major players in the Server SAN arena.  Note that VMware’s SOM is tiny…but this is because VMware’s Virtual SAN (VSAN) wasn’t yet fully rolled out in the marketplace.  But also note the number of players – big players and small unknowns alike – all vying for the coveted Enterprise Server SAN market, which is poised for growth along with the SMB and ROBO segments.

By now, you’re ready to call me out on my liberal use of SDS and Server SAN terminology.  After all, Figure 2 lists hyper-converged Server SAN hardware appliance vendors (like Nutanix & Simplivity) along with pure software SDS vendors and products (like Scality, ScaleIO, Scale Computing, etc.).  So what gives?

Hadoop is Ready for Primetime: Recap of Strata + Hadoop World San Jose

Ryan Peterson

Chief Solutions Strategist

Hadoop joins the ranks of Microsoft Windows and Apple iPhone as the next platform ready for applications.  The message is clear from Strata + Hadoop World San Jose 2015 that Hadoop is ready for primetime.  As we have all seen in the past from other successful platforms such as Windows and iPhone, it takes a well-constructed operating system and application development framework to prepare for success.  Windows 1.0 was a great glimpse into what would happen when the 2nd platform originally emerged, but it wasn’t successful until applications began to be created.  I remember playing with Windows 1.0 and thinking, I wish it had the ability to do X, and I wish it had Y.  And of course today, it has most any application you might need.  The same holds true with the advent of the mobile era as iPhone 1.0 built a platform with a handful of applications, but it wasn’t truly successful until the ecosystem began to build apps on top of the platform.

Enter the next generation data platform, Hadoop.  We’ve heard our customers say things over the last three years like “we’re experimenting with Hadoop” or “we have it in a lab” or “we have a few killer apps we’ve custom designed”.  But in 2015, we’re discovering trends in data and trends in data use by using the advanced toolsets the Hadoop framework brings to data.  In the financial industry, for example, we see fraud analytics and risk calculations as a common set of applications being built with the technology.  It’s now only a matter of time until an application is established that solves that challenge with fewer customizations than Hadoop has usually been known to require.

You can see Doug Cutting (The Father of Hadoop) and me speaking about this topic on O’Reilly TV:

The industry is full of change, advancement, and growth.  You could see growth in the number of attendees compared to last year (people are starting to get it).  You could see advancement in all of the new intellectual property brought out by the vendors (including some EMC competitors) – good to see them joining the party.  And for change, well, that was the story of the week with the announcement of the Open Data Platform.  There has been plenty said about the new Pivotal-led initiative, both from supporters and adversaries.  Although I have heard a lot about the initiative this week, I’d say I am not qualified to comment on its merits.  I will instead state my opinions, which I’m known to do.  I believe in Big Data as something that will change the world.  I also believe Hadoop as a framework is still in need of an enterprise-quality uplift as we transition to the application-ready state I’ve just described.  I hope the ODP will be an organization that will not only provide that uplift, but will do so in a truly open way and in a way that gets all of the major Hadoop supporters on board.

At EMC, we support the industry, our customers, and we want to see the world truly made better through whichever vendor that customer chooses (we are Data Switzerland).  We hope we are delivering excellent products and solutions to that end, and believe customer choice is at the heart of those solutions.  With that in mind, we’ve augmented our Pivotal and Cloudera relationships to include Hortonworks.  After 6,172 tests required for certification of EMC Isilon against the Hortonworks distribution, I am happy to say Isilon has passed with just a handful of documented differences. This should put customers at ease when they decide to utilize Hortonworks HDP with our Data Lakes.

Shaun Connelly, VP of Strategy for Hortonworks and I discussed the certification on theCube:

We announced the HD400 node, which is fantastic!  I have found that not many companies have moved more than 20 PB into Hadoop.  Even the very large Web 2.0 companies run multiple clusters, none of which I have seen exceed 35 PB.  This is usually a result of maxing out the namenode, and secondarily of not wanting such a large fault domain.  I believe EMC Isilon’s 50 PB will be PLENTY of capacity for 99.9999% of companies for many years to come.

See Sam Grocott discuss the Data Lake and our recent announcements related to the HD400 on theCube:

Finally, a big shout-out to Raeanne Marks, who represented EMC at the Women of Big Data conference this week; to Bill Schmarzo (Dean of Big Data); and to the army of more than 50 EMC’ers who have joined the Hadoop revolution and made it to Strata + Hadoop World San Jose this year.

We are driving many innovations in products, solutions, and choices for our customers. Follow @SGrocott, @NKirsch, @KorbusKarl, @AshvinNa, @EMCBigData and @EMCIsilon for the latest stories from the trenches.

Thank you!

Ryan
@BigDataRyan


All roads lead to … Hyperconvergence

Mark O'Connell

EMC Distinguished Engineer & Data Services Architect

There are a series of trends which have determined the overall direction of the IT industry over the past few decades.  By understanding these trends and projecting their continued effect on the data center, applications, software, and users, it is possible to capitalize on the overall direction of the industry and to make intelligent decisions about where IT dollars should be invested or spent.

This blog looks at the trends of increasing CPU power, growing memory size, and rising demands on storage scale, resiliency, and efficiency, and examines how the logical outcome of these trends is the hyperconverged architecture that is now emerging and will come to dominate the industry.

The storage industry is born
In the 90s, computer environments started to specialize with the emergence of storage arrays such as CLARiiON and Symmetrix. This was driven by the demand for storage resiliency, as applications needed data availability levels beyond that offered by a single disk drive. As CPU power remained a constraining factor, moving the storage off the main application computer freed up computing power for more complex protection mechanisms, such as RAID 5, and meant that more specialized components could be integrated to enable features such as hot-pull and replace of drives, as well as specialized HW components to optimize compute-intensive RAID operations.

Throughout the 90s and into the 2000s, as storage, networking, and computing capabilities continued to increase, there were a series of treadmill improvements in storage, including richer replication capabilities, dual-disk failure tolerant storage schemes, faster recovery times in case of an outage, and the like. Increasingly these features were implemented purely in software, as there was sufficient CPU capacity for these more advanced algorithms, and software features were typically quicker to market, easier to upgrade, and more easily fixed in the field.

A quantum leap forward
The next architectural advance in storage technologies came in the early 2000s with the rise of scale-out storage systems. In a scale-out system, rather than relying on a small number of high-performance, expensive components, the system is composed of many lower-end, cheaper components, all of which cooperate in a distributed fashion to provide storage services to applications. For the vast majority of applications, even these lower-end components are more than sufficient to satisfy the application’s needs, and load from multiple applications can be distributed across the scaled-out elements, allowing a broader, more diverse application load than a traditional array can support. As there may be 100 or more such components clustered together, the overall system can be driven at 80-90% of maximum load and still deliver consistent application throughput despite the failure of multiple internal components, as the failure of any individual component has only a small effect on the overall system capability. The benefits and validity of the scale-out approach were first demonstrated with object systems, with scale-out NAS and scale-out block offerings following shortly thereafter.
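To make the “small effect per component” point concrete, here is a minimal, vendor-neutral sketch – my own illustration, not how any particular EMC product places data – that spreads objects across a hundred nodes with a simple consistent-hash ring and measures how little data has to move when one node fails:

```python
# Illustrative sketch of why scale-out failures have limited impact: objects
# are spread across many nodes on a consistent-hash ring, so losing one node
# remaps only a small slice of the data. Teaching example only.
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=100):
        # Each node gets many virtual points on the ring for even spreading.
        self._points = sorted(
            (_hash(f"{n}#{v}"), n) for n in nodes for v in range(vnodes)
        )
        self._keys = [p for p, _ in self._points]

    def node_for(self, obj_key: str) -> str:
        idx = bisect.bisect(self._keys, _hash(obj_key)) % len(self._points)
        return self._points[idx][1]

nodes = [f"node{i:02d}" for i in range(100)]
before = Ring(nodes)
after = Ring(nodes[:-1])                      # one node fails
objects = [f"object-{i}" for i in range(10_000)]
moved = sum(before.node_for(o) != after.node_for(o) for o in objects)
print(f"{moved / len(objects):.1%} of objects remap after losing 1 of 100 nodes")
```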

Icy Hot: Cold Storage is a Hot Market and Object Storage is Heating Up

George Hamilton

Sr. Product Marketing Manager

How do we measure the mission criticality of storage systems? What comes to mind when you hear or read the words, “mission critical”? Certainly, you’d think of reliability, resiliency, data protection, etc. But I’m willing to bet that you also, almost reflexively, think of performance – measured in millions of IOPS, transactions per second, or sub-millisecond latencies. To many, mission critical means fast. Think all-flash arrays and high-end block storage. This is what the industry refers to as “Hot” storage.

“Cold” storage, on the other hand, gets no love.  When you think cold storage, you think of old data you don’t want but can’t get rid of. You think of tapes in caves or a $0.01 per GB/month cloud storage service. Think low-cost, commodity, and object storage. Cold storage has an image problem, thanks in no small part to Amazon Web Services introducing Glacier in 2012 as a cold archiving service. You don’t often hear the terms “mission critical” and “cold storage” in the same sentence (see what I did there?). You think cold storage isn’t important. And you’d be wrong.

You’d be wrong because the world of storage doesn’t bifurcate so neatly into just two storage categories. Cold storage, which is frequently delivered by an object storage platform, can actually be different temperatures – cool, chilled, cold, colder than cold, deep freeze, etc. Confused? IDC explains:
Source: IDC Worldwide Cold Storage Ecosystem Taxonomy, 2014 #246732

It all depends on the use case and how active the data is. Extreme or deep freeze archive is when the data is seldom, if ever, accessed. Amazon Glacier is an example. Access times can range from hours to more than a week depending on the service – and you pay for the retrieval. Deep archive makes up the bulk of the cold storage market. The data is also infrequently accessed but it remains online and accessible. IDC cites Facebook Open Vault as an example. Active archive is best for applications that may not modify data frequently, if at all, but can read data more frequently as in Write Once, Read Many (WORM). An example use case is email or file archiving; IDC cites EMC Centera as an example. EMC Atmos and EMC Isilon are also good examples.

Object storage, generally speaking, falls under the category of cold storage and is used for any temperature. But it should not be pigeonholed as an inactive, unimportant storage tier. Object storage is a critical storage tier in its own right and directly influences the judicious use of more expensive hot storage. With the explosion in the growth of unstructured content driven by cloud, mobile, and big data applications, cold secondary storage is the new primary storage. To the salesperson or insurance adjuster in a remote location on a mobile device, the object storage system that houses the data they need is certainly critical to their mission.

The importance of cold storage is best explained in the context of use cases. The EMC ECS appliance is a scale-out object storage platform that integrates commodity off-the-shelf (COTS) components with a patent-pending unstructured storage engine. The ECS Appliance is an enterprise-class alternative to open source object software and DIY COTS. ECS offers all the benefits of low cost commodity but saves the operational and support headache of racking and stacking gear and building a system that can scale to petabytes or exabytes and hundreds or thousands of apps. Organizations evaluating ECS appliance are generally pursuing a scale-out cloud storage platform for one or more of the following three use cases:

Global Content Repository

This is often an organization’s first strategic bet on object and cloud storage.  Object storage, due to its efficiency and linear scalability, makes an ideal low cost utility storage tier when paired with COTS components. The ECS appliance delivers the cost profile of commodity storage and features an unstructured storage engine that maintains global access to content at a lower storage overhead than open source or competing object platforms. This lowers cost and makes their hot storage more efficient and cost-effective by moving colder data to their object archive – without diminishing data access. But it’s more than that. A crucial aspect of a global content repository is that it acts as an active archive; the content is stored efficiently but is also always accessible – often globally.  And it’s accessible via standard object storage APIs. Consequently, the global content repository also supports additional uses such as next-generation file services like content publishing and sharing and enterprise file sync and share. And there is an ecosystem of ISV partners that build cloud gateways/connectors for the ECS appliance that extend the use case further.

Geo-scale Big Data Analytics

Geo-scale Big Data Analytics is how EMC refers to the additional use of a Global Content Repository for Big Data Analytics. The ECS Appliance features an HDFS data service that allows an organization to extend its existing analytics capabilities to its global content repository. As an example, one ECS customer uses their existing Hadoop implementation to perform metadata querying of a very large archive. The ECS appliance treats HDFS as an API head on the object storage engine. A drop-in client in the compute nodes of an existing Hadoop implementation lets organizations point their MapReduce tasks at their global archive – without having to move or transform the data. The ECS appliance can also be the data lake storage foundation for the EMC Federation Big Data solution. This can extend analytics scenarios to include Pig, Hive, etc. In addition, since ECS is a complete cloud storage platform with multi-tenancy, metering, and self-service access, organizations can deliver active archive analytics or their data lake foundation as a multi-tenant cloud service.

The ECS appliance overcomes some of the limitations of traditional HDFS. ECS handles the ingestion and efficient storage of a high volume of small files, high availability/disaster recovery is built in, and distributed erasure coding provides lower storage overhead than the 3 copies of data required by traditional HDFS.
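The storage-overhead claim is easy to see with back-of-the-envelope math. The sketch below assumes a hypothetical 12+4 erasure-coding layout purely for illustration (the actual ECS coding parameters aren’t specified here) and compares it with HDFS’s default three-way replication:

```python
# Illustrative overhead comparison; the 12+4 scheme is an assumed example,
# not necessarily the coding layout ECS actually uses.
def raw_capacity_needed(usable_tb: float, data_fragments: int, parity_fragments: int) -> float:
    """Raw TB required to store usable_tb with k data fragments + m parity fragments."""
    return usable_tb * (data_fragments + parity_fragments) / data_fragments

usable = 1000.0  # TB of user data
print(f"3x HDFS replication: {usable * 3:.0f} TB raw ({3.00:.2f}x overhead)")
print(f"12+4 erasure coding: {raw_capacity_needed(usable, 12, 4):.0f} TB raw "
      f"({16 / 12:.2f}x overhead)")
```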

Modern Applications

Mainstream enterprises are discovering what Web-centric organizations have known for years. Object storage is the platform of choice to host modern, REST-based cloud, mobile and Big Data applications. In addition to being a very efficient platform, the semantics of object make it the best fit for Web, mobile and cloud applications.

I recommend viewing the webcast, “How REST & Object Storage Make Next Generation Application Development Simple” to get an in-depth look at object architecture and writing apps to REST based APIs. However, there are two features unique to ECS that facilitate the development and deployment of modern applications:

  • Broad API support. ECS supports the Amazon S3, OpenStack Swift, and EMC Atmos object storage APIs. If developing apps for Hadoop, ECS provides HDFS access (a minimal S3-style sketch follows this list).
  • Active-active, read/write architecture – ECS features a global index that enables applications to write to and read from any site in the infrastructure. ECS offers stronger consistency semantics than typically found in eventually consistent object storage, ensuring it retrieves the most recent copy of a file. This helps developers who previously had to contend with stale reads or build write-conflict resolution code into their applications.
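Because the S3 API is supported, an application written against a standard S3 SDK can generally be pointed at an S3-compatible object store by overriding the endpoint. The snippet below is a hedged sketch – the endpoint URL, port, bucket name, and credentials are placeholders, not actual ECS values:

```python
# Hypothetical example of using a standard S3 SDK (boto3) against an
# S3-compatible object store; endpoint, credentials, and bucket are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ecs.example.com:9021",   # placeholder endpoint
    aws_access_key_id="<object-user>",
    aws_secret_access_key="<secret-key>",
)

s3.create_bucket(Bucket="app-content")
s3.put_object(Bucket="app-content", Key="reports/2015-03.json", Body=b'{"views": 1024}')
obj = s3.get_object(Bucket="app-content", Key="reports/2015-03.json")
print(obj["Body"].read())
```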

Noam Chomsky once said, “I like the cold weather. It means you get work done.” You can say the same for cold storage; it also means you get work done.  It’s become a workhorse storage platform. It doesn’t get the sexy headlines in trade rags. But I hope after reading this and understanding the actual use cases for ECS appliance and object storage, you have a better appreciation and some love for cold storage. There are lots of solutions for storing old data that just can’t be thrown away and most compete purely on price. But, if your applications and data fall into one or more of these use cases, then the ECS appliance should be at the top of your list.

How are you planning to meet the OTT Video Demand?

Jeff Grassinger

Sales Alliance Manager

There is a great deal of buzz in the media industry about digital content delivery. At the recent CES trade show in Las Vegas, Dish Network announced Sling TV, which will deliver some of the most popular live cable channels “over-the-top” (OTT) of the Internet to consumers. This comes on the heels of Time Warner’s wave of industry press following their decision (which must have been a challenging one) to outsource video delivery and move away from an in-house build for their HBO GO OTT offering. Clearly there is a lot of effort being put into new video delivery platforms, and efficient business models and optimized infrastructure are still being assessed. EMC Isilon speaks with many media organizations that are weighing the same decisions about how to leverage their media assets and capitalize on the accelerating consumer demand for online video.

Demand Driving Decisions
It’s no secret that there is voracious and rapidly growing consumer demand for online video. With delivery platforms like smartphones, tablets, set-top boxes, gaming devices, and connected TVs enabling convenient access to online video, consumers can access content wherever they have broadband internet or mobile data access – and they’re willing to pay for the privilege. According to PricewaterhouseCoopers (PwC), OTT video streaming will grow to be a $10.1 billion business by 2018, up from just $3.3 billion in 2013.
Taking into account the shift in consumer demand, the rapidly growing associated market share opportunity, and advertising/subscription revenue, it’s clear that online video is a strategic, fast-moving, and meaningful business opportunity for the media industry. So how do organizations take the next steps toward delivering online video?

Evolution of Media Delivery
As decision makers become increasingly focused on the strategic business value that online video provides, executing on this evolution of media delivery may not be easy for some organizations. Success in the decision to compete in the OTT markets often boils down to three options: build, buy, or a hybrid of both. Regardless of the choice, this decision has major implications for your organization. In essence, the majority of media organizations will end up creating a hybrid infrastructure, as few will own their own CDN or the “last mile.” However, there are a few aspects and specific requirements you would want to consider for your deployment, so I reached out to one of our Isilon CTOs, Charles Sevior, to share his overview of the infrastructure models:

  • Build — For existing media organizations, you’re probably already considering requirements like integration with existing content playout infrastructure, digital rights management, and advanced monetization such as targeted ad serving and VOD subscription models. Leveraging your existing content assets, infrastructure, and technology team to create a new OTT workflow can result in lower deployment costs and an efficient long-term solution. A strategy of layering OTT video delivery on top of your regular playout enables your team to incrementally add the new workflow to your content delivery ecosystem. And you can realize the benefits of integrating advanced analytics technologies like Hadoop – extracting valuable business insights and providing content recommendations for improved viewer engagement – using an integrated Isilon Data Lake Foundation.
  • Buy — Aggregating content rights in your territory for the specific delivery mode is only the start. Setting up an operational infrastructure for reliable and “buffering free” media delivery is a large part of the equation for streaming success. For some businesses outsourcing the OTT video delivery infrastructure may be the best strategy. Development and operation of media infrastructure may not be one of your core business competencies, or time to market presents a need to launch today to get ahead of the competitors.
    Outsourcing has immediate benefits: speed to market is greatly increased; you have significant platform agility to dial in your business model; and the barrier to entry from a technical standpoint is low. Finally, your financial outlay is an operational expense. If the venture proves commercially non-viable, you can more readily shift strategies down the track.
    Choosing the right outsource partner becomes critical and an experienced media content delivery specialist can quickly accelerate your speed to market and help you navigate the challenges for your go-to-market. 
  • Hybrid — In reality, the best infrastructure for online video may be a hybrid model. With a hybrid model, you can leverage your current resources and talents against your “cash cow” business operations, while outsourcing parts of the video delivery infrastructure that have low revenue return or tight launch windows. A hybrid model gives your business the agility of rapid deployment with the flexibility to bring the workload back onto owned and managed infrastructure for reduced cost overheads and leveraging investments in staff, infrastructure and data centers.
    The EMC Isilon scale-out NAS has helped a lot of media organizations deliver content. In fact today, we are providing the origin storage solution to serve audio and video content to just under 2 billion subscribers worldwide in the cable, satellite, IPTV, OTT and streaming music industries.

EMC has a unique relationship with companies that have built an industry-leading infrastructure to deliver video to their customers worldwide. One of those companies leading the way is Kaltura, one of the top Online Video Platforms (OVP). They offer services and infrastructure to help you outsource, build your own (using their open source APIs), or develop hybrid content delivery solutions. Kaltura is not only helping media organizations, but also companies in education, enterprise and government sectors. Here is a short video that we created with Kaltura about their operations and infrastructure decisions:

As you consider your next step in video delivery, let us or Kaltura know how we can assist in your planning process. If you liked this post and video, please feel free to like, share, and tweet.

One-Stop Shop for Everything Emerging Tech!

Suresh Sathyamurthy

Sr. Director, Product Marketing & Communications

By 2020 the Digital Universe will be over 44 trillion gigabytes. How can all of this data help make your organization more competitive?

If you are looking for answers to questions like these, then you have come to the right place. Welcome to the new EMC Emerging Technologies Blog.

The Emerging Technologies Division is a newly minted division within EMC with a focus on helping address customer needs amid a rapidly evolving IT landscape. Enterprise IT is increasingly being defined by software and influenced by trends such as Cloud, Big Data and Mobile. Customers are faced with the challenges and opportunities associated with these new and emerging trends – and we are here to help. Here is a brief overview of how the 15+ products and solutions within the EMC Emerging Technologies Division come together to help our customers.

Managing Data Growth
According to IDC, the total storage capacity shipped by 2017 will be 133 EB, which means the data you have in your infrastructure today will double next year, and the year after, and so on. Over 80% of this is estimated to be unstructured data. The key to managing this rapid proliferation of data is scale-out technology. EMC’s Emerging Technologies Division will provide scale-out architectures across all data types – block, file, and object – to help you scale and manage your data as it grows.

Gain insights from data
Managing your data alone does not deliver the business value or competitive advantage that customers need today. The key to gaining insights from data is technologies like Hadoop. HDFS-enabled shared storage infrastructures and elastic converged technologies are foundational components in building data lakes. The EMC Emerging Technologies Division will provide these foundations – with choices ranging from high-capacity geo-scale analytics to high-performance real-time analytics.

Manage Costs
Users in the virtual, on-demand world expect instant access to data and applications, forcing IT to rethink the way they manage and deliver storage. Software-defined approaches come with agility and impressive cost benefits. The Emerging Technologies Division has an impressive list of software-defined products that help reduce provisioning time by an average of 63%, deliver over 60% in TCO savings, and provide hyperscale storage platforms that are about 9-28% less expensive than public clouds.

Cloud Strategy
2015 will be the year when hybrid cloud becomes the dominant IT strategy. Building a hybrid cloud can be hard. EMC has made it easy with the EMC Enterprise Hybrid Cloud Solution, which delivers agility, choice, and simplicity. The Emerging Technologies Division delivers software-defined scale-out object platforms as well as APIs to seamlessly integrate with public clouds.


Real Time Performance
As mega trends around big data intensify, customers face pressure not only to store unprecedented amounts of data but also to analyze it faster and keep costs in check. The Emerging Technologies Division will deliver a new rack-scale flash storage architecture designed to deliver game-changing performance for next-generation applications, including real-time analytics and in-memory databases.


Consumption Choices
And finally, the Emerging Technologies Division will deliver various consumption models to customers. Customers can choose to buy integrated software-and-hardware appliances, software-defined offerings, converged infrastructure platforms, as well as cloud-based “pay-as-you-go” models.


That’s a bit about what we are up to here at EMC in the Emerging Technologies Division. Bookmark this blog to keep up with the Emerging Technologies Division, new trends in the industry, and insights from product thought leaders.

Everything You Need to Know About EMC’s Elastic Cloud Storage Solution & More!

Jamie Doherty

You may have heard of the ECS Appliance – EMC’s turnkey, software-defined cloud storage platform.  This was the big announcement from EMC’s Advanced Software team at EMC World 2014.  If you were at that event and visited the Advanced Software booth, you had the opportunity to visit what we referred to as “the ECS Petting Zoo.”  This petting zoo was an opportunity to pull out the guts of the ECS Appliance and get a 1×1 walkthrough of the inner workings with the team that built it.  It was such a popular exhibit at the event that even our own Joe Tucci, Chairman and CEO, and David Goulden, CEO of Information Infrastructure, stopped by for a personal show and tell.


You might be saying the memories of the bells and whistles at EMC World are nice, but what does that mean to me now, almost a year later?  It means you have the opportunity to redefine cloud economics with the ECS Appliance, which, by the way, is powered by EMC’s ViPR Software-Defined Storage solution.  The ECS Appliance combines the cost advantages of commodity infrastructure with the reliability, availability, and serviceability of traditional arrays to deliver hyperscale cloud economics in your data center.  The ECS Appliance allows you to:

  • Extend the benefits of public and private clouds to any size business
  • Accelerate development for enterprise and software developers
  • Deliver competitive cloud storage services at scale for cloud providers
  • Accelerate Big Data initiatives for data scientists

I could ramble on and on about the benefits of EMC’s ECS Appliance.  I think it would be better to show you how it works.  This video gives you an overview of the ECS Appliance and introduces you to the benefits.


Now that you have had an opportunity to understand how the ECS Appliance can benefit your data center, I want to give you the same opportunity those at EMC World had.  Erik Riedel, Senior Director of Hardware and Platform Engineering at EMC, takes a deep dive into the inner workings of the ECS Appliance in the video below.  The ECS Appliance architecture he walks you through is a 3-petabyte rack that can be combined with 10 to 100 racks to build an exabyte-scale-ready, cloud-based storage environment.

With the ECS Appliance you can deploy a hyperscale storage infrastructure that will give you everything you need to take on the 3rd Platform with confidence. Have more questions?  Visit the community and chat with one of our ECS experts.

EMC Hybrid Cloud for SAP: redefining simplicity with intelligent KPI monitoring via ViPR SRM

Tim Nguyen

Back in October 2014, I attended the SAP TechEd && d-code conference in Las Vegas, where SAP boldly talked about the “next steps to deliver innovation in the cloud with the SAP HANA platform” so that customers can truly innovate and simplify their business and development processes. SAP also reaffirmed that running SAP HANA in TDI mode continues to gain momentum everywhere – something lots of customers want to hear.

During SAP TechEd && d-code Las Vegas, EMC Global Solutions Marketing launched the EMC Hybrid Cloud for SAP Technical Demo Video to supplement the white paper released at VMworld in San Francisco in August (check out my blog post from that event).  This 8-minute video explains in simple terms how the EMC Hybrid Cloud for SAP (EHC for SAP for short) can be the bridge to the future, enabling IT transformation while helping customers redefine simplicity, choice, and agility in deploying SAP landscapes in on-premises cloud, off-premises cloud, or both.

When people discuss and debate the merits of implementing a virtualized SAP environment – in the cloud, so to speak – the conversation often centers on the ease and simplicity of provisioning: for example, a new SAP sandbox or test environment can be stood up in minutes instead of weeks.  But running SAP in the cloud, regardless of whether it is on-premises, off-premises, or both in a hybrid cloud fashion, provides benefits that go far beyond provisioning!  In fact, the powerful capabilities offered in the areas of monitoring, workload relocation, and multi-tenancy chargeback will soon be the more interesting points to consider and understand!
Many customers and experts agree that performance monitoring, alerting, and compliance reporting are often afterthoughts, put in place only after some sort of crisis has caused an outage or a disruption to the business.  Since SAP is such a mission-critical system, you must have end-to-end monitoring of all KPIs (key performance indicators) across the compute, network, and storage tiers on a single pane of glass in order to react quickly to any issues.  EHC for SAP incorporates key monitoring tools offering unparalleled monitoring capabilities for your SAP cloud environment: EMC ViPR SRM and VMware vCenter Operations with the Blue Medora plug-in.

Let me spend the rest of this blog post discussing one of the key tools integral to EHC for SAP: EMC ViPR SRM, which provides comprehensive monitoring and alerting on not only the storage tier, but also the compute and network tiers.  For EHC for SAP, however, ViPR SRM focuses primarily on the critical storage tier of the cloud infrastructure, including critical components for long-distance BC/DR such as EMC RecoverPoint.

People reason that since “it’s in the cloud”, there should no longer be any worry regarding storage since it’s now someone else’s problem, right?  Well that may be true if you are talking about a public cloud, but if it’s a private cloud running on your premises, then you DO in fact have to worry about the performance, stability, and availability of your storage platform.

And since it’s a cloud environment, your storage is a shared resource servicing hundreds or thousands of SAP virtual machines, which makes it even harder to pinpoint which SAP environment and virtual machines are being impacted by a particular problem.

EMC ViPR SRM offers unparalleled insight not only into the popular EMC storage platforms for SAP such as VMAX, VNX, and XtremIO (and some popular non-EMC SAP storage platforms such as Hitachi, IBM, and others), but into the network and compute tiers as well.  As previously mentioned, for EHC for SAP, ViPR SRM concentrates on monitoring the critical storage tier.  You can drill down to the storage processor and LUN level if needed, and view the complex interaction of the datastores and the replication solutions for data protection and disaster recovery.  You can easily perform root cause analysis to troubleshoot any problem, and you have the necessary reporting to show that key SLAs (Service Level Agreements) have been met or even exceeded.

One could ask why anyone should care about a tool for visualizing, analyzing, and optimizing storage resources when running SAP in a cloud environment. Well, for a lot of people, SAP on cloud typically means virtualized SAP running on VMware (the market share leader in SAP installations), and every VMware virtual machine needs a supporting VMDK datastore, which itself is a group of files!  So yes, a tool for visualizing, analyzing, and optimizing storage resources when running SAP in a cloud environment is not only relevant, it is an absolute necessity to assure availability, performance, and resiliency of the cloud infrastructure, in a private cloud setting as well as a hybrid cloud setting, where long-distance disaster recovery and workload relocation are key drivers for adoption.

I know that the details in the screenshot below are hard to read, but I wanted to provide a “blurry” glimpse of the richness of the EMC ViPR SRM console as integrated into EHC for SAP – you can download the EHC for SAP white paper to get a better view or go to the EMC ViPR SRM page on EMC.com for more details.


In the screenshot, EMC ViPR SRM easily allows Cloud Administrators to drill down into the following components of the storage tier (an illustrative threshold-check sketch follows the list):

  1. Storage Capacity: this one is obvious, as you need to know if the cloud environments being hosted on a particular array will run out of space
  2. Storage Path Details: this data point is crucial for dealing with performance issues due to bottlenecks in the data storage path
  3. Storage Performance: another obvious one, which is useful for redistributing storage workload as needed to provide the scalable performance required in a cloud environment
  4. Performance CPU of the array front end processor or engine: this metric provides more details on how the array is behaving, useful in capacity planning and performance optimization
  5. Performance Memory of the array front end processor or engine: another needed metric to better understand how the array is behaving, useful in capacity planning and performance optimization
  6. Events: now this feature is essential to Cloud Admins so that they can be alerted in case of any issue which may impact the performance of the EHC for SAP.
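To illustrate why having these KPIs on one pane of glass matters, here is a deliberately simple, hypothetical sketch of the kind of threshold check an admin might run over such metrics. The metric names and limits are made up for illustration and are not the ViPR SRM API:

```python
# Purely illustrative threshold check over the KPI categories listed above.
# Metric names, values, and limits are hypothetical; this is not ViPR SRM code.
THRESHOLDS = {
    "capacity_used_pct": 85.0,    # 1. storage capacity
    "path_latency_ms": 20.0,      # 2. storage path details
    "array_iops_util_pct": 90.0,  # 3. storage performance
    "frontend_cpu_pct": 80.0,     # 4. front-end CPU
    "frontend_memory_pct": 80.0,  # 5. front-end memory
}

def check_kpis(metrics: dict) -> list[str]:
    """Return alert messages (6. events) for any metric above its threshold."""
    return [
        f"ALERT: {name} = {metrics[name]} exceeds limit of {limit}"
        for name, limit in THRESHOLDS.items()
        if metrics.get(name, 0.0) > limit
    ]

sample = {
    "capacity_used_pct": 91.2,
    "path_latency_ms": 4.7,
    "array_iops_util_pct": 62.0,
    "frontend_cpu_pct": 83.5,
    "frontend_memory_pct": 58.1,
}
for alert in check_kpis(sample):
    print(alert)
```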

There is no question that EMC ViPR SRM brings unmatched alerting and reporting capabilities, with several hundred counters and metrics for not only EMC storage arrays, but also non-EMC arrays as well as Cisco servers and switches, Brocade equipment, VMware products, and more.

Three Key Observations From the Gartner Data Center, Infrastructure and Operations Management Conference

Brian Lett

Senior Product Marketing Manager

I was fortunate enough to be part of the team that supported the EMC presence at the recent Gartner Data Center, Infrastructure and Operations Management Conference in Las Vegas earlier this month. Lots of hard work (briefings, meetings, staffing the expo booth) but also a great opportunity to speak with users and customers, as well as garner some interesting insights from the Gartner analyst-presented sessions.

So what were some of the key themes I observed? First, the software-defined data center is moving a lot closer to reality for a lot of attendees. Key technologies such as software-defined storage and software-defined networking have moved for most from the “I’ll keep my eyes on it” bucket in 2014 into the “I’ve got to do something about this in 2015” bucket. That’s no surprise to our team; we’ve been observing a lot of the same behavior in our interactions with customers at places like executive briefings and user-group meetings. And it helped drive a lot of the insights we presented in our event-sponsor session on “Making the Software-Defined Data Center a Reality for Your Business,” in which the need for automation, especially at the management and monitoring level, was emphasized as a critical requirement to delivering on the promise of the software-defined data center.

Another key theme that had almost everyone talking was the notion of “bi-modal IT,” in which IT operations simultaneously support both an agile, devops-like model for rapid iterations and deployment of newer applications and services, and a “traditional” IT operations model for more traditional, less business-differentiating applications and services. In some ways, analysts had been alluding to this for years – devops was coming; it would be a major influential force; prepare for it. What was lacking was the “how,” and that confused and even scared people. But now at this event we learned analysts are saying to support both models (hence “bi-modal” IT), and, more importantly, to deploy supporting systems and tools for each – and absolutely not to try to use one system for both models (because nothing is out there that can do that effectively). Folks I spoke to almost shared a sense of relief: two modes, each with their own tools and systems, makes sense to everyone, and eliminates the angst associated with trying to make the round peg fit in the square hole. And since it came from this event, it has the inherent “validation” that many in upper management want.

Building on this, the third theme I noticed (more from my interactions with other conference attendees, especially at the EMC expo booth) was a strong interest in continuous availability of applications and systems, rather than in backing up and being able to recover these same environments. People were asking the right questions: For example, what kinds of storage architectures make sense in a continuous-availability model, and can those be aligned with changing data needs? (Yes, and EMC has a lot to offer on this front.) What are the key elements of a monitoring system that focuses on continuous availability? (One answer: automated root-cause and impact analysis, which radically shrinks the time needed to identify problems, and is a key capability in the EMC Service Assurance Suite.) And can a server-based SAN play a role in a continuous availability architecture? (Absolutely – as long as you’re managing it with EMC ScaleIO.)

And this event also had its share of the unexpected (the Las Vegas strip was fogged in – yes, that’s not a typo – for almost two full days), as well as lighter fun-filled moments (EMC’s arcade-themed hospitality suite for conference attendees, complete with a customized Pac-Man-like game called “ViPR Strike”). And as always, it’s the discussions and interactions that I cherish and remember the most.

Which brings it back to you: Were you at the conference too? If so, what do you think of these higher-level observations of mine? What else do you have to add or share? Even if you didn’t go, what are your thoughts and opinions on what you’ve read here?