Posts Tagged ‘Virtualization’

Survey findings show close alignment with Dell EMC strategy

Charles Sevior

Chief Technology Officer at EMC Emerging Technologies Division

Media Workflow Trends Survey – Industry Transformation is Underway

Earlier in 2016, Dell EMC commissioned Gatepoint Research to conduct an extensive survey of media industry executives. The survey, entitled Media Workflow Trends, yielded some interesting results that point to a good understanding of the pace of change and the need to stay agile for competitive advantage.

The results of that survey are summarised in a new infographic which, apart from being much more interesting than a series of pie charts, brings to the surface the key themes that align with the technology development strategy of Dell EMC.

Content Storage Demands Are Exploding

I have worked in the media industry for decades, so this is hardly a surprising finding. Early in my career, it was commonplace to find production offices full of shelves and compactus storage units crammed with videotapes. Then there were boxes stacked everywhere – also full of tapes with titles scrawled on the back. There were colour-coded stickers – "Master", "Protection Master", "Edit Copy", "HOLD"… There was a warehouse full of tapes of various types, even old films. One thing you learned is that nothing was ever thrown away (but plenty of things went missing).

Fast-forward to 2016, and most media companies involved in production and distribution of content have shifted to file-based Media Asset Management systems – or at least a media content archive repository. This has helped to contain the data sprawl into a central location, but it has done nothing to reduce the total storage capacity requirement. Think about the increasing resolution of content, the increasing number of channels, multiple versions for different delivery platforms and, of course, the increasing "shoot to use" ratio. Sports events have an increasing number of cameras with retained ISO recordings for highlights and post-match inquiries, and Reality TV formats are based on multi-cam techniques to capture every reaction from different angles. Whilst these programs are in production, the storage capacity demands can skyrocket.

Only 3% of our survey respondents replied that storage needs are flat or negative – and 50% responded that the demand for storage capacity is growing rapidly and is a major concern.

Multiplatform Content Delivery

Pretty much every major media company is either doing this already or has a plan to extend its audience reach beyond simple linear broadcast channels in the next few years. But what is interesting is the increasingly careful way in which media companies are deploying their solutions.

Recognising that the simple approach of outsourcing multiplatform content delivery to a third-party OVP (Online Video Platform) is not very revenue accretive, media companies are now starting to embrace DIY in order to pull back some profit margin in a delivery strategy that is otherwise very difficult to monetise. As we learn more from some of the leaders in this industry – such as MLBAM – we can see the benefits of taking control and managing as much of the content delivery process as possible, end to end – just as we always did with linear content delivery over terrestrial RF transmitters, satellite transponders and cable TV networks.

One of the key tips is being ready to scale. As streaming demand spikes and grows with popular content, how can every incremental viewer bring incremental profit – not just rising CDN costs? Taking a tip from Netflix, you can build a distributed origin and control the CDN deeper into the delivery network. Dell EMC has partnered with some of the leading solution vendors in this space, who make it easier to deploy a well-managed and profitable multiplatform content delivery system.

IP-Based Workflows are here

Most industry commentators seem to get pretty excited about "the death of SDI" and how soon IP networking can completely replace the dedicated video and audio circuits of the past. But really, that is just a side show in which we will soon lose interest. There is no "right or wrong" way to build a media facility. The engineers and technical architects will select the appropriate technology on a case-by-case basis as they always have, based on reliability, quality, cost, ease of management and so on. And over time, there will simply be more connections made using IP network technology and fewer using dedicated single-purpose technology.

But what is the end game? I see it as moving our media equipment technology stacks (also known as the "rack room" or "central technical facility") away from dedicated single-purpose vendor solutions, built and managed carefully by broadcast engineers, towards a flexible virtualised technology stack that looks identical to a cloud-scale data centre – built and managed by IT and media technologists. It will be open architecture, built on software-defined principles and capable of easy repurposing as the application technology needs of the business shift more frequently than they did in the past.

It is important to select your partners carefully as you make this transition to IP and software-defined infrastructure. Dell EMC has deliberately remained vendor neutral and standards-based. We have aligned with SMPTE and AIMS, two organisations that we believe have the broad interests of the industry (both end users and vendors) at heart and whose work will result in practical, cost-effective and widely adopted solutions.

As a pioneer and leader in scale-out storage, virtualisation and converged infrastructure, Dell EMC is in a great position to help you avoid costly mistakes during your transition to IP-based workflows.


Click to see the full M&E trends infographic

Ultra-HD Is Coming

Well, it's already here. Most people shopping for a new flat-screen TV today will see that their options include 4K resolution sets that are increasingly affordable compared to the default HD resolution. Some in the industry will say that 4K is unnecessary and is being pushed by the consumer electronics manufacturers – but when has that ever been a different story? There is no doubt that consumers appreciate improved quality of content, and story-tellers love the creative opportunities afforded by the latest technology. When we can finally deliver all of the aspects of Ultra-HD – such as HDR (high dynamic range), HFR (high frame rates) and multi-channel surround sound – we will be one step closer to that reality.

At the SMPTE Future of Cinema Keynote during NAB 2016, pioneering movie director Ang Lee said:

"Technology must work for us to help tell the human story. Whether it is from 2K to 4K, or 24 to 60fps, it improves the sensory experience and as a viewer, you become more relaxed and less judgmental. We will always be chasing god's work – which is the natural vision and sensory experience. We are getting closer and learning more about how we communicate with each other."

In the world of content creation and media distribution, we will increasingly adopt 4K cameras, render graphics and animations at increased resolution and ensure the product we make has an increased shelf life. This is natural, even if it is happening before we have the ability to deliver this content to our viewers. And while it is difficult to "rip and replace" cable, satellite and terrestrial networks – which are still only shifting from SD to HD – with new 4K solutions, OTT content delivery using internet broadband and mobile networks will probably be the way most consumers first access Ultra-HD.

Dell EMC Isilon is a scale-out storage solution that grows in capacity and bandwidth as more nodes combine into a single-volume multi-tier cluster.  We already have numerous customers using Isilon for 4K editing and broadcast today.  As we constantly innovate and bring new technology to market, we continue to deliver to our customers the benefits of Moore’s Law.  The real key to Isilon technology is the way that we deliver platform innovation in an incremental and backward-compatible way – supporting the ability to scale and grow non-disruptively.

Beyond LTO Archiving

I mentioned earlier in this blog how my early career was defined by shelves and boxes of tapes – videotapes everywhere. I spent my days handling tape, winding tape into cartridges, even editing audio and videotape using a razor blade! The most important machine in the building (a commercial TV station) was the cart machine. That was because it held all of the commercial 30-second spots, and if those did not play, the TV station did not make money and we would not get paid.

Eventually we replaced cart machines and replay videotape machines with hard disk servers that were highly reliable, fast to respond to late changes and very flexible. So I wonder: when will we say it is time to replace the data tape archive library with a cloud store? Certainly we are all familiar with – and probably daily users of – one of the biggest media archives in the world (I refer to Google's YouTube). Wouldn't it be great if your company had its own YouTube? A content repository that was always online, instantly searchable, growing with fresh material and just as easy to use?

So then we get down to cost. It turns out that, even though public cloud stores seem cheap, using one for long-term retention is a lot more expensive than existing data tape technology – especially as the LTO industry brings innovation beyond LTO-6 into the latest LTO-7 data tape format with 6TB native capacity.
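To make that comparison concrete, here is a minimal, hypothetical cost sketch. Every figure in it – cartridge price, library hardware, operations, per-terabyte-month storage rate, retention period – is an illustrative placeholder I have invented for the example, not a Dell EMC or LTO Consortium number, so substitute your own vendor quotes before drawing any conclusions.

```python
# Hypothetical archive cost comparison: LTO tape library vs. a cloud/object store.
# Every price below is an illustrative placeholder -- substitute real vendor quotes.

def tape_archive_cost(capacity_tb: float, years: int,
                      cartridge_native_tb: float = 6.0,     # e.g. LTO-7 native capacity
                      cartridge_price: float = 80.0,        # $ per cartridge (assumed)
                      library_and_drives: float = 50_000.0, # fixed library hardware (assumed)
                      ops_per_year: float = 2_000.0,        # power, floor space, handling (assumed)
                      migration_cost_per_tb: float = 5.0,   # generation-to-generation copy (assumed)
                      migrations: int = 1) -> float:
    cartridges = capacity_tb / cartridge_native_tb
    media = cartridges * cartridge_price
    migration = capacity_tb * migration_cost_per_tb * migrations
    return library_and_drives + media + migration + ops_per_year * years

def object_store_cost(capacity_tb: float, years: int,
                      price_per_tb_month: float = 10.0,     # $ per TB-month (assumed)
                      retrieval_tb_per_year: float = 50.0,  # content read back each year (assumed)
                      retrieval_price_per_tb: float = 0.0) -> float:  # often non-zero for public cloud
    storage = capacity_tb * price_per_tb_month * 12 * years
    retrieval = retrieval_tb_per_year * retrieval_price_per_tb * years
    return storage + retrieval

capacity_tb, years = 2_000, 5   # a 2 PB archive retained for 5 years
print(f"Tape (incl. one migration): ${tape_archive_cost(capacity_tb, years):,.0f}")
print(f"Object store:               ${object_store_cost(capacity_tb, years):,.0f}")
```

Where the crossover falls depends heavily on how often you migrate tape generations and how often the archive is actually read back, which is exactly the trade-off the next paragraphs weigh up.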

But the LTO migration process of moving all of your media from one tape generation to the next is painful and time-consuming – introducing cost and wear and tear, and impacting end-user search and retrieval times from the library.

From our survey respondents, the top features for consideration of a storage solution are performance, scalable capacity and efficient use of resources (floor space, power, personnel). On those criteria alone, cloud storage should win hands down – if only the price were right.

Well, finally it is. Dell EMC has been developing an innovative product called ECS (Elastic Cloud Storage) which meets all of the requirements of a modern archive – scalable, multi-site geo-replication, open architecture, software-defined. And it is now available on a range of hardware platforms that offer high packing density using large-capacity, power-efficient hard drives – today 8TB drives are supported, and that native capacity will clearly grow.

Increasingly customers are asking us whether this technology is price competitive with LTO libraries, and whether it is reliable and ready for mission-critical high-value archives.  The answer to both of these questions is yes, and the benefits of moving to your own cloud store are significant (whether you choose to deploy it within your own premises or have it hosted for you).

Cloud Solutions are gathering converts

When you boil it all down, our industry is in transformation from a legacy, bespoke architecture to a cloud architecture. The great thing about a cloud is that it is flexible and can easily change shape, scale and take on new processes and workloads. And it doesn't have to be the public cloud. It can be "your cloud". Or it can be a mix of both – which really gives you the best of both worlds: public cloud for burst, private cloud for base load and deterministic performance.
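As a minimal sketch of that "burst" idea, the toy scheduler below keeps base load on a private cloud and spills over to a public cloud only when private capacity is exhausted. The class names, capacity figures and job sizes are invented purely for illustration and are not part of any Dell EMC product.

```python
# Toy hybrid-cloud placement: keep base load on the private cloud for
# deterministic performance, burst to the public cloud only on overflow.
# Capacities and names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    capacity_units: int
    used_units: int = 0

    def has_room(self, units: int) -> bool:
        return self.used_units + units <= self.capacity_units

    def admit(self, units: int) -> None:
        self.used_units += units

def place_workload(units: int, private: Cloud, public: Cloud) -> str:
    """Prefer the private cloud; burst to public only when it is full."""
    target = private if private.has_room(units) else public
    target.admit(units)
    return target.name

private = Cloud("private", capacity_units=100)
public = Cloud("public", capacity_units=10_000)

# A spike of transcode/streaming jobs of varying size.
for job_units in [40, 30, 25, 20, 15]:
    print(f"{job_units:>3} units -> {place_workload(job_units, private, public)}")
```

In practice the decision would also weigh data gravity, egress costs and performance determinism, but the shape of the policy is the same.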

Building clouds and bringing technology innovation to industry is what Dell EMC is really good at.  Speak with us to learn more about how to embark on this journey and the choices available to you.

SUMMARY

So we find that across the media industry the evolution is underway. This is a multi-faceted transformation. We are not just switching from "SD to HD"; we are evolving at the business, operations, culture and technology levels.

Dell EMC is positioned as an open-architecture, vendor-neutral infrastructure provider offering best-in-class storage, servers, networking, workstations, virtualisation and cloud management solutions. Engage with us to secure your infrastructure foundation, to be future-ready, and to simplify your technology environment so that you can focus on what really matters to your business – what makes your offering attractive to viewers (on any platform).

 


All roads lead to … Hyperconvergence

A series of trends has determined the overall direction of the IT industry over the past few decades. By understanding these trends and projecting their continued effect on the data center, applications, software, and users, it is possible to capitalize on the overall direction of the industry and to make intelligent decisions about where IT dollars should be invested or spent.

This blog looks at the trends of increasing CPU power, memory size, and demands on storage scale, resiliency, and efficiency, and examines how these trends lead logically to the hyperconverged architectures which are now emerging and which will come to dominate the industry.

The storage industry is born
In the 90s, computer environments started to specialize with the emergence of storage arrays such as CLARiiON and Symmetrix. This was driven by the demand for storage resiliency, as applications needed data availability levels beyond that offered by a single disk drive. As CPU power remained a constraining factor, moving the storage off the main application computer freed up computing power for more complex protection mechanisms, such as RAID 5, and meant that more specialized components could be integrated to enable features such as hot-pull and replace of drives, as well as specialized hardware components to optimize compute-intensive RAID operations.

Throughout the 90s and into the 2000s, as storage, networking, and computing capabilities continued to increase, there were a series of treadmill improvements in storage, including richer replication capabilities, dual-disk failure tolerant storage schemes, faster recovery times in case of an outage, and the like. Increasingly these features were implemented purely in software, as there was sufficient CPU capacity for these more advanced algorithms, and software features were typically quicker to market, easier to upgrade, and more easily fixed in the field.

A quantum leap forward
The next architectural advance in storage technologies came in the early 2000s with the rise of scale-out storage systems. In a scale-out system, rather than relying on a small number of high-performance, expensive components, the system is composed of many lower-end, cheaper components, all of which cooperate in a distributed fashion to provide storage services to applications. For the vast majority of applications, even these lower-end components are more than sufficient to satisfy the application's needs, and load from multiple applications can be distributed across the scaled-out elements, allowing a broader, more diverse application load than a traditional array can support. As there may be 100 or more such components clustered together, the overall system can be driven at 80-90% of maximum load and still deliver consistent application throughput despite the failure of multiple internal components, as the failure of any individual component has only a small effect on the overall system capability. The benefits and validity of the scale-out approach were first demonstrated with object systems, with scale-out NAS and scale-out block offerings following shortly thereafter.
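A quick back-of-the-envelope calculation shows why a wide cluster can run at such high utilisation. The node counts below, and the assumption of an evenly spread load, are illustrative only and not measurements of any particular product.

```python
# Back-of-the-envelope: impact of a single component failure on aggregate
# throughput for a small scale-up pair vs. a wide scale-out cluster.
# Node counts are illustrative assumptions; load is assumed evenly spread.

def surviving_fraction(total_nodes: int, failed_nodes: int) -> float:
    """Fraction of aggregate capability left after failures."""
    return (total_nodes - failed_nodes) / total_nodes

scenarios = {
    "2-controller array, 1 failure": (2, 1),
    "100-node cluster, 1 failure":   (100, 1),
    "100-node cluster, 3 failures":  (100, 3),
}

for label, (nodes, failed) in scenarios.items():
    remaining = surviving_fraction(nodes, failed)
    print(f"{label:32s} -> {remaining:.0%} of throughput remains")

# With ~97-99% of throughput still available after failures, a 100-node
# cluster driven at 80-90% load can keep delivering consistent service,
# whereas the 2-controller system loses half its capability at once.
```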
(more…)

Data and the Path Best Traveled

You've spent a lot of time and money to deliver the best application performance and availability to the business. Your efforts have included investments in the latest technology in storage arrays, servers, and switches. This has all been required to deliver a powerful virtual environment that will handle the needs of the business now and with an eye towards the future. You are confident that scaling to thousands of virtual machines per server won't be an issue. But the question remains: "Is my environment's capability being maximized?"

What is I/O Multipath Management, and Why Should I Care?
I/O multipathing is the ability to manage, load balance, and queue I/O across multiple paths in both physical and virtual environments. The problem faced especially in virtual environments is that as the rate of consolidation grows, the efficiency of path management affects both performance and, ultimately, application availability. Think of your I/O multipath management software as the "traffic cop" that is constantly directing I/O and balancing loads to and from hosts, switches, and SANs. This job is especially critical in virtualized environments running intensive OLTP (Online Transaction Processing) applications such as SQL Server, Oracle, Virtual Desktop, and Cloud Services. As you add more virtual machines, native multipathing applications have difficulty scaling, and this often manifests as latency or outages in the field.
Let me illustrate a real-world example of a problem that a customer faced. This customer had been experiencing performance issues with a large Oracle environment that had been virtualized. The problems started as an intermittent issue causing below-average I/O performance. The customer was having a great deal of difficulty diagnosing the root cause of the problem, which was having a negative impact on live production. After examining hardware, switches, and cabling, the problem could still not be found. After all, at times things were running great, but then bottlenecks would develop, causing latency and temporary data unavailability for field offices. This all came to a head one day when the database crashed due to corruption in the tables, which was replicated across live production. It ultimately took the customer almost four days to recover the database and restore production. The culprit turned out to be I/O errors which had developed as the paths were being managed using a "round-robin" multipathing method. Unfortunately the multipathing software did not have the intelligence to stop sending I/O down an intermittent path that ultimately went bad. The result was data corruption and a costly outage.

Intelligent Multipathing/Automation vs. Round Robin methods
So how could this disaster have been avoided? Well, to give the customer credit, they had done all of the right things, configuring their VMware environment as recommended with native multipathing software. The issue is that the software did not have the capability to recognize a failing data path. The software had been configured for a "Round Robin" load balancing method. In this method, I/Os are sent to the next available path regardless of whether that path is the "best path".
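To see why a plain round-robin policy keeps feeding a bad path, here is a stripped-down sketch of the method. The path names are invented for the example, and this is not the actual VMware NMP or PowerPath code – just the general selection logic.

```python
# Minimal round-robin path selector: I/Os rotate across the configured
# paths regardless of each path's health or latency. An illustration of
# the general method, not vendor code; path names are invented.

from itertools import cycle

paths = ["vmhba1:C0:T0:L0", "vmhba1:C0:T1:L0", "vmhba2:C0:T0:L0", "vmhba2:C0:T1:L0"]
next_path = cycle(paths)

def submit_io(io_id: int) -> str:
    """Send the I/O to whichever path is next in the rotation."""
    path = next(next_path)   # no check for errors, latency or flapping
    return f"I/O {io_id} -> {path}"

for io_id in range(8):
    print(submit_io(io_id))

# If one of these paths is intermittently failing, one in every four I/Os
# still lands on it -- exactly the scenario described above.
```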

In this case the I/O was being sent down a faulty path, causing the data corruption. So how could this outage have been prevented? It could have been prevented by deploying an "intelligent" multipathing solution like EMC PowerPath/VE. PowerPath/VE is an advanced software product which "intelligently" manages I/O paths and uses path testing and diagnostics to recognize "flaky" or failed paths. In the event of a problem, PowerPath/VE routes around the bad paths without missing a beat.

The difference between PowerPath/VE and native multipathing is that PowerPath/VE incorporates patented algorithms which "intelligently" direct I/O to the most efficient path, based on diagnostics and testing, instead of just queuing to the next "available" path. The result is higher performance and increased application availability.[1] PowerPath/VE delivers:

• Intelligent multipathing and I/O load balancing
• Seamless path failover and recovery
• Path optimization designed to enhance application performance with EMC VMAX/VNX/VNXe/XtremIO
• Proactive fault detection to prevent interruption to data access
• Reduced complexity and automation: “Set it and forget it” self-management
• End-to-end I/O visibility across the virtual infrastructure

PowerPath/VE Multipathing integrates path testing and diagnostics to “intelligently” send I/O down the best path while routing around flaky or failed paths. It also takes those paths out of the queue and automatically returns those paths to “active” once they are healthy again.
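The sketch below captures that behaviour at a very high level: a periodic path test removes failing paths from the rotation and returns them once they pass again. It is a simplified illustration of the general technique, with invented path names, and not PowerPath/VE's patented algorithms.

```python
# Simplified health-aware path management: failed or flaky paths are
# removed from the I/O rotation and restored automatically once a
# periodic path test says they are healthy again. An illustration of
# the general technique, not PowerPath/VE internals.

class PathManager:
    def __init__(self, paths):
        self.active = list(paths)   # paths currently taking I/O
        self.standby = []           # paths held out after failed tests
        self._rr = 0

    def record_test(self, path: str, healthy: bool) -> None:
        """Apply the result of a periodic path test."""
        if not healthy and path in self.active:
            self.active.remove(path)
            self.standby.append(path)
        elif healthy and path in self.standby:
            self.standby.remove(path)
            self.active.append(path)   # automatic failback

    def pick_path(self) -> str:
        """Round-robin across healthy paths only."""
        if not self.active:
            raise RuntimeError("no healthy paths available")
        path = self.active[self._rr % len(self.active)]
        self._rr += 1
        return path

mgr = PathManager(["hba1_port0", "hba1_port1", "hba2_port0", "hba2_port1"])
mgr.record_test("hba2_port0", healthy=False)   # flaky path taken out of the queue
print([mgr.pick_path() for _ in range(3)])     # I/O flows around it
mgr.record_test("hba2_port0", healthy=True)    # path returns to "active"
print([mgr.pick_path() for _ in range(4)])
```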


What does “Intelligent I/O Multipathing” do for Performance?
So it's pretty obvious what a product like PowerPath/VE can do for application availability, but what about performance? After all, one of the main challenges with a highly consolidated virtual environment is the performance degradation that can occur as you scale it up. Another key benefit of PowerPath/VE is the "EMC array performance optimization" that has been designed into its intelligence. PowerPath engineers have optimized the I/O formula to deliver high efficiency on EMC VNX, VNXe, VMAX and XtremIO arrays. This is accomplished by optimizing PowerPath/VE software against the proprietary microcode controlling these arrays. The result has been 2-3X the I/O performance when compared to native multipathing. In fact, as seen below, up to a 5X increase in I/O was delivered by PowerPath/VE on EMC's XtremIO array compared to native multipathing.[2]

[Chart: PowerPath/VE vs. native multipathing I/O performance on EMC XtremIO]

Summing “IT” up!
So when it comes to availability and performance, an intelligent I/O multipathing tool like EMC PowerPath/VE maximizes your existing virtual environment by "intelligently" managing and optimizing I/O. The result is maximum I/O performance and increased application availability, allowing you to deliver better service through intelligence and automation. The bottom line is that I/O multipath management is critical to your virtual environment, and not all management methods and products are created equal. Spend some time to familiarize yourself with the basics; the data you save could be your own.

How The Data Center Is Becoming Software-Defined

How pervasive is the concept of software-defined?

In mid-2012, VMware CTO Steve Herrod and others began to articulate the concept of the software-defined data center. This concept was just as often received as a marketing position from vendors as an observation about the evolution of the data center. At the time, I blogged about the basic concepts of the software-defined data center and followed up the initial post with an additional blog post about storage challenges in the software-defined data center. Other posts addressed related topics such as how cloud adoption contributes to the evolution of APIs.

Now, since many months have passed, which can be measured in dog years in high-tech, I would like to revisit the concept of software-defined as it pertains to storage as well as compute and networking, and its status in 2013. I believe that the software-defined data center has moved beyond concept, putting us on the cusp of a time when new architectures and product offerings will make it a reality. (more…)

EMC Data Protection Advisor For As-A-Service Cloud Environments

What can you do to ensure data protection as you move to cloud?

Services-based storage, infrastructure, and data protection trends and technologies are recurring topics in this blog. A while back I wrote a post about enabling data protection as a service, discussing the need for centralized management at cloud scale, multiple service rates based on customer data protection needs or usage, and historical data for analysis and trending. The reality is that you can only get so far with legacy products built for physical environments. At some point, management tools, like the data center environments they support, need to be remade to the requirements of the day. Effective data protection solutions are no exception.

Data protection needs are more acute for as-a-service cloud models and require new approaches. Now, with the release of EMC Data Protection Advisor 6.0, I would like to share what it means to augment a successful data protection solution and extend it with a new distributed architecture and analysis engine to cloud deployments, without losing any usability benefits (i.e. without making it complex). (more…)

Keeping The Lights On In An SAP Environment

How do you keep your SAP costs under control in the modern data center?

As much as vendors like to think you spend all of your time planning architectural changes and your next purchase, the reality is more mundane. Mostly, data center personnel work around the clock to keep the lights on and business processes operational and fine-tuned to meet required service-level agreements (SLAs).

Keeping business processes operating at peak performance includes keeping mission-critical applications and their dependencies in good working order. Service assurance for key applications such as SAP involves trained, skilled staff, up-to-date database technologies, well-equipped test/dev environments, and effective data protection schemes. Maintaining service levels thus costs money, which is not always plentiful.

Taking the right approach to data protection can not only ensure application availability but also keep personnel and licensing costs reasonable. Fortunately, for SAP environments, there is an approach that works well for physical and virtual environments today. So, in a brief departure from discussing the Software-Defined Data Center and clouds, this post focuses on keeping the data center lights on in SAP environments. (more…)
