Posts Tagged ‘cloud’

Survey findings show close alignment with Dell EMC strategy

Charles Sevior

Chief Technology Officer at EMC Emerging Technologies Division

Media Workflow Trends Survey: Industry Transformation is Underway

Earlier in 2016, Dell EMC commissioned Gatepoint Research to conduct an extensive survey of media industry executives.  The survey, entitled Media Workflow Trends, yielded some interesting results that point to a good understanding of the pace of change and the need to stay agile for competitive advantage.

The results of that survey are summarised in a new Infographic which, apart from being much more interesting than a series of pie charts, brings to the surface the key themes that align with the technology development strategy of Dell EMC.

Content Storage Demands Are Exploding

I have worked in the media industry for decades, so this is hardly a surprising finding.  Early in my career, it was commonplace to find production offices full of shelves and compactus storage units crammed with videotapes. Then there were boxes stacked everywhere, also full of tapes with titles scrawled on the back.  There were colour-coded stickers – "Master", "Protection Master", "Edit Copy", "HOLD"… There was a warehouse full of tapes of various types, even old films.  One thing you learned is that nothing was ever thrown away (but plenty of things went missing).

Fast-forward to 2016, and most media companies involved in production and distribution of content have shifted to file-based Media Asset Management systems – or at least a media content archive repository.  This has helped to contain the data sprawl into a central location, but it has done nothing to reduce the total storage capacity requirement.  Think about the increasing resolution of content, the increasing number of channels, multiple versions for different delivery platforms and, of course, the increasing "shoot to use" ratio.  Sports events deploy increasing numbers of cameras with retained ISO recordings for highlights and post-match inquiries, and Reality TV formats are based on multi-cam techniques to capture every reaction from different angles.  Whilst these programs are in production, the storage capacity demands can skyrocket.

Only 3% of our survey respondents replied that their storage needs are flat or negative, while 50% responded that demand for storage capacity is growing rapidly and is a major concern.

Multiplatform Content Delivery

Pretty much every major media company is either doing this already or has a plan to extend its audience reach beyond simple linear broadcast channels in the next few years.  What is interesting is the increasingly careful way in which media companies are deploying their solutions.

Recognising that the simple approach of outsourcing multiplatform content delivery to a third-party OVP (Online Video Platform) is not very revenue accretive, media companies are now starting to embrace DIY in order to pull back some profit margin in what is otherwise a very difficult delivery strategy to monetise.  As we learn more from some of the leaders in this industry, such as MLBAM, we can see the benefits of taking control and managing as much of the content delivery process as possible, end to end – just as we always did with linear content delivery over terrestrial RF transmitters, satellite transponders and cable TV networks.

One of the key tips is being ready to scale.  As streaming demand spikes and grows with popular content, how can every incremental viewer bring incremental profit rather than just rising CDN costs?  Taking a tip from Netflix, you can build a distributed origin and control the CDN deeper into the delivery network.  Dell EMC has partnered with some of the leading solution vendors in this space, who make it easier to deploy a well-managed and profitable multiplatform content delivery system.

IP-Based Workflows Are Here

Most industry commentators seem to get pretty excited about "the death of SDI" and how soon IP networking can completely replace the dedicated video and audio circuits of the past.  But really, that is just a side show in which we will soon lose interest.  There is no "right or wrong" way to build a media facility.  Engineers and technical architects will select the appropriate technology on a case-by-case basis as they always have, based on reliability, quality, cost, ease of management and so on.  And over time, there will simply be more connections made using IP network technology and fewer using dedicated single-purpose technology.

But what is the end-game?  I see it as moving our media equipment technology stacks (also known as the “rack room” or “central technical facility”) away from dedicated single-purpose vendor solutions built and managed carefully by Broadcast Engineers into a flexible virtualised technology stack that looks identical to a cloud-scale data centre – built and managed by IT and Media Technologists.  It will be open architecture, built on software-defined principles and capable of easy repurposing as the application technology needs of the business shift more frequently than they did in the past.

It is important to select your partners carefully as you make this transition to IP and software-defined infrastructure.  Dell EMC has deliberately remained vendor-neutral and standards-based.  We have aligned with SMPTE and AIMS, two organisations that we believe have the broad interests of the industry (both end-users and vendors) at heart and whose work will result in practical, cost-effective and widely adopted solutions.

As a pioneer and leader in scale-out storage, virtualisation and converged infrastructure, Dell EMC is in a great position to help you avoid costly mistakes during your transition to IP-based workflows.


Click to see the full M&E trends infographic

Ultra-HD Is Coming

Well, it's already here.  Most people shopping for a new flat screen TV today will find that their options include 4K resolution models, which are increasingly affordable compared to default HD resolution sets.  Some in the industry will say that 4K is unnecessary and is being pushed by the consumer electronics manufacturers – but when has that ever been a different story?  There is no doubt that consumers appreciate improved quality of content, and story-tellers love the creative opportunities afforded by the latest technology.  When we can finally deliver ALL of the aspects of Ultra-HD – HDR (high dynamic range), HFR (high frame rates) and multi-channel surround sound – the full experience will be one step closer to reality.

At the SMPTE Future of Cinema Keynote during NAB 2016, pioneering movie director Ang Lee said:

"Technology must work for us to help tell the human story.  Whether it is from 2K to 4K, or 24 to 60fps, it improves the sensory experience and as a viewer, you become more relaxed and less judgmental.  We will always be chasing God's work – which is the natural vision and sensory experience. We are getting closer and learning more about how we communicate with each other."

In the world of content creation and media distribution, we will increasingly adopt 4K cameras, render graphics and animations at increased resolution and ensure the product we make has an increased shelf life.  This is natural, even if it is happening before we have the ability to deliver this content to our viewers.  And while it is difficult to "rip and replace" with new 4K solutions the cable, satellite and terrestrial networks that are still only shifting from SD to HD, OTT content delivery using internet broadband and mobile networks will probably be the way most consumers first access Ultra-HD.

Dell EMC Isilon is a scale-out storage solution that grows in capacity and bandwidth as more nodes combine into a single-volume multi-tier cluster.  We already have numerous customers using Isilon for 4K editing and broadcast today.  As we constantly innovate and bring new technology to market, we continue to deliver to our customers the benefits of Moore’s Law.  The real key to Isilon technology is the way that we deliver platform innovation in an incremental and backward-compatible way – supporting the ability to scale and grow non-disruptively.

Beyond LTO Archiving

I mentioned earlier in this blog how my early career was defined by shelves and boxes of tapes – videotapes everywhere.  I spent my days handling tape, winding tape into cartridges, even editing audio and videotape using a razor blade!  The most important machine in the building (a commercial TV station) was the cart machine, because it held all of the 30-second commercial spots; if those did not play, the TV station did not make money and we would not get paid.

Finally we replaced cart machines and replay videotape machines with hard disk servers that were highly reliable, fast to respond to late changes and very flexible.  So I wonder: when will we say it is time to replace the data tape archive library with a cloud store?  Certainly we are all familiar with, and probably daily users of, one of the biggest media archives in the world: Google's YouTube.  Wouldn't it be great if your company had its own YouTube?  A content repository that was always online, instantly searchable, growing with fresh material and just as easy to use?

So then we get down to cost.  It turns out that, even though public cloud stores seem cheap, using one for long-term retention is a lot more expensive than existing data tape technology – especially as the LTO industry brings innovation beyond LTO-6 into the latest LTO-7 data tape format with 6TB native capacity.

But the migration process to move all of your media from one standard to the next is painful and time-consuming – introducing cost and wear and tear, and impacting end-user search and retrieval times from the library.

From our survey respondents, the top considerations for a storage solution are performance, scalable capacity and efficient use of resources (floor space, power, personnel).  By those criteria, cloud storage should win hands-down – if only the price were right.

Well, now it finally is.  Dell EMC has developed an innovative product called ECS (Elastic Cloud Storage) which meets all of the requirements of a modern archive: scalable, multi-site geo-replicated, open-architecture and software-defined.  And it is now available in a range of hardware platforms that offer the high packing density of large-capacity, very efficient hard drives – today 8TB drives are supported, and that native capacity will clearly grow.

Increasingly, customers are asking us whether this technology is price-competitive with LTO libraries, and whether it is reliable and ready for mission-critical, high-value archives.  The answer to both questions is yes, and the benefits of moving to your own cloud store are significant (whether you choose to deploy it within your own premises or have it hosted for you).

Cloud Solutions Are Gathering Converts

When you boil it all down, our industry is transforming from a legacy, bespoke architecture to that of a cloud. The great thing about a cloud is that it is flexible and can easily change shape, scale and take on new processes and workloads.  And it doesn't have to be the public cloud.  It can be "your cloud".  Or it can be a mix of both, which really gives you the best of both worlds: public cloud for burst, private cloud for base load and deterministic performance.

Building clouds and bringing technology innovation to industry is what Dell EMC is really good at.  Speak with us to learn more about how to embark on this journey and the choices available to you.

SUMMARY

So we find that across the media industry, the evolution is underway.  This is a multi-faceted transformation.  We are not just switching from "SD to HD"; we are evolving at the business, operations, culture and technology levels.

Dell EMC is positioned as an open-architecture, vendor-neutral infrastructure provider offering best-in-class storage, servers, networking, workstations, virtualisation and cloud management solutions.  Engage with us to secure your infrastructure foundation, to be future-ready, and to simplify your technology environment so that you can focus on what really matters to your business: what makes your offering attractive to viewers (on any platform).

 


Breakfast with ECS: Files Can’t Live in the Cloud? This Myth is BUSTED!

Welcome to another edition of Breakfast with ECS, a series where we take a look at issues related to cloud storage and ECS (Elastic Cloud Storage), EMC’s cloud-scale storage platform.

The trends towards increasing digitization of content and towards cloud-based storage have been driving a rapid increase in the use of object storage throughout the IT industry.  However, while it may seem that every application now uses Web-accessible REST interfaces on top of cloud-based object storage, the reality is that although new applications are largely designed around this model, file-based access remains critical for a large proportion of existing IT workflows.

Given the shift in the IT industry towards object-based storage, why is file access still important?  There are several reasons, but they boil down to two fundamental ones:

  1. There exists a wealth of applications, both commercial and home-grown, that rely on file access, as it has been the dominant access paradigm for the past decade.
  2. It is not cost-effective to update all of these applications and their workflows to use an object protocol. The data set managed by the application may not benefit from an object storage platform, or the file access semantics may be so deeply embedded that the application would need a near rewrite to disentangle it from the file protocols; the sketch below illustrates the gap between the two access models.
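To make that second point concrete, here is a minimal Python sketch of the gap between the two models. The mount path, endpoint URL and bucket name are illustrative assumptions, and credentials are assumed to come from the environment; this is not any specific application's code.

```python
import boto3  # AWS SDK for Python; ECS exposes an S3-compatible endpoint

# File semantics: seek and rewrite 4 KiB in place inside a large file.
with open("/mnt/ecs-export/catalog.db", "r+b") as f:
    f.seek(1_048_576)            # jump to the 1 MiB offset
    f.write(b"\x00" * 4096)      # patch 4 KiB without touching the rest

# Object semantics: no in-place update; the whole object must be read,
# modified and re-uploaded. An application built around the first model
# needs real rework to live with the second.
s3 = boto3.client("s3", endpoint_url="https://ecs.example.com:9021")
data = bytearray(s3.get_object(Bucket="media", Key="catalog.db")["Body"].read())
data[1_048_576:1_048_576 + 4096] = b"\x00" * 4096
s3.put_object(Bucket="media", Key="catalog.db", Body=bytes(data))
```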

What are the options?

The easiest option is to use a file-system protocol with an application that was designed with file access as its access paradigm.

ECS has supported file access natively since its inception, originally via its HDFS access method, and most recently via the NFS access method.  While HDFS lacks certain features of true file system interfaces, the NFS access method fully supports applications, and NFS clients are a standard part of any OS platform, making NFS the logical choice for file-based application access.

Via NFS, applications gain access to the many benefits of ECS: scale-out performance, the ability to massively multi-thread reads and writes, industry-leading storage efficiency, and multi-protocol access.  The last of these means, for example, ingesting data from a legacy application via NFS while also serving that data over S3 to newer mobile application clients – supporting next-generation workloads at a fraction of the cost of rearchitecting the complete application.
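As a sketch of what that multi-protocol access looks like in practice (the mount path, endpoint and bucket name are again illustrative assumptions): the legacy application keeps writing through its NFS mount, while a newer client reads the very same data as an S3 object, with no gateway or copy step in between.

```python
import boto3

# Legacy application: writes through an existing NFS mount of an ECS bucket.
payload = b"<media essence bytes>"
with open("/mnt/ecs/ingest/frame-0001.dpx", "wb") as f:
    f.write(payload)

# Next-generation client: reads the same data over the S3 protocol.
s3 = boto3.client("s3", endpoint_url="https://ecs.example.com:9021")
obj = s3.get_object(Bucket="ingest", Key="frame-0001.dpx")
assert obj["Body"].read() == payload
```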

Read the NFS on ECS Overview and Performance White Paper for a high-level summary of NFS version 3 support on ECS.

An alternative is to use a gateway or tiering solution to provide file access, such as CIFS-ECS, Isilon CloudPools, or third-party products like Panzura or Seven10.  However, if ECS supports direct file-system access, why would an external gateway ever be useful?  There are several reasons why this might make sense:

  • An external solution will typically support a broader range of protocols, including things like CIFS, NFSv4, FTP, or other protocols that may be needed in the application environment.
  • The application may be running in an environment where the access to the ECS is over a slow WAN link. A gateway will typically cache files locally, thereby shielding the applications from WAN limitations or outages while preserving the storage benefits of ECS.
  • A gateway may implement features like compression, which reduces WAN traffic to the ECS and provides direct cost savings on WAN transfer fees, or encryption, which provides an additional level of security for data transfers.
  • While HTTP ports are typically open across corporate or data center firewalls, network ports for NAS (NFS, CIFS) protocols are normally blocked for external traffic. Some environments, therefore, may not allow direct file access to an ECS which is not in the local data center, though a gateway which provides file services locally and accesses ECS over HTTP would satisfy the corporate network policies.

So what’s the right answer?

There is no one right answer; instead, the correct answer will depend on the specifics of the environment and the characteristics of the application.

  • How close is the application to the ECS? File system protocols work well over LANs and less well over WANs.  For applications that are near the ECS, a gateway is an unnecessary additional hop on the data path, though gateways can give an application the experience of LAN-local traffic even for a remote ECS.
  • What are the application characteristics? For an application that makes many small changes to an individual file or a small set of files, a gateway can consolidate multiple such changes into a single write to ECS (see the sketch after this list).  For applications that generally write new files or update existing files with relatively large updates (e.g. rewriting a PowerPoint presentation), a gateway may not provide much benefit.
  • What is the future of the application? If the desire is to change the application architecture to a more modern paradigm, then files on ECS written via the file interface will continue to be accessible later as the application code is changed to use S3 or Swift.  Gateways, on the other hand, often write data to ECS in a proprietary format, thereby making the transition to direct ECS access via REST protocols more difficult.
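A rough sketch of the write-consolidation idea from the second bullet above. The class and parameter names are hypothetical, and real gateways implement this far more robustly; the point is simply that many small application writes land in a local cache and are flushed to ECS as one PUT per object once the file goes quiet.

```python
import time
import boto3

class CoalescingGateway:
    """Illustrative write-back cache; not any vendor's actual product."""

    def __init__(self, bucket, endpoint, quiet_seconds=5.0):
        self.s3 = boto3.client("s3", endpoint_url=endpoint)
        self.bucket = bucket
        self.quiet_seconds = quiet_seconds
        self.dirty = {}        # key -> latest local contents
        self.last_write = {}   # key -> time of most recent write

    def write(self, key, data):
        # Many small application writes only touch the local cache.
        self.dirty[key] = data
        self.last_write[key] = time.monotonic()

    def flush_quiet_files(self):
        # Files untouched for quiet_seconds go to ECS as a single PUT,
        # no matter how many small writes they absorbed locally.
        now = time.monotonic()
        quiet = [k for k, t in self.last_write.items()
                 if now - t > self.quiet_seconds and k in self.dirty]
        for key in quiet:
            self.s3.put_object(Bucket=self.bucket, Key=key,
                               Body=self.dirty.pop(key))
```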

As should be clear, there is no one right answer for all applications.  The flexibility of ECS, however, allows for some applications to use direct NFS access to ECS while other applications use a gateway, based on the characteristics of the individual applications.

If existing file-based workflows were the reason for not investigating the benefits of an ECS object-based solution, then rest assured that an ECS solution can address your file storage needs while still providing the many benefits of the industry's premier object storage platform.

Want more ECS? Visit us at www.emc.com/ecs or try the latest version of ECS for FREE for non-production use by visiting www.emc.com/getecs.

Cloud Computing and EDA – Are we there yet?

Lawrence Vivolo

Sr. Business Development Manager at EMC²

Today anything associated with "Cloud" is all the rage.  In fact, depending on your cellular service provider, you're probably already using cloud storage to back up the e-mail, pictures, texts, etc. on your cell phone. (I realized this when I got spammed with "you're out of cloud space – time to buy more" messages.) Major companies that offer cloud-based solutions (servers, storage, infrastructure, applications, management, etc.) include Microsoft, Google, Amazon, Rackspace, Dropbox, EMC and others. For those who don't know the subtleties of Cloud – the terms, like Public vs Private vs Hybrid vs Funnel, and why some are better suited to EDA – I thought I'd give you some highlights.

Let's start with the obvious: what is "Cloud"? Cloud computing is a collection of resources that can include servers (for computing), storage, applications, infrastructure (e.g. networking) and even services (management, backups, etc.). Public clouds are simply clouds that are made available by third parties as shared resources. Being shared is often advertised as a key advantage of public cloud: because the resources are shared, so is the cost. These shared resources can also expand and contract as needs change, allowing companies to precisely balance need with availability.  Back in 2011, Synopsys, a leading EDA company, was promoting this as a means to address peak EDA resource demand [1].

Unfortunately, public cloud has some drawbacks.  The predictability of storage cost is one. Though public cloud appears very affordable at first glance, most providers charge for the movement of data to and from their cloud, which can exceed the actual costs to store the data.  This can be further compounded when data is needed worldwide as it may need to be copied to multiple regions for performance and redundancy purposes. With semiconductor design, these charges can be significant, since many EDA programs generate lots of data.
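A quick back-of-the-envelope sketch of why those transfer charges dominate. The capacities and rates below are invented round numbers for illustration only, not any provider's actual pricing.

```python
# Hypothetical monthly bill for an EDA team keeping results in public cloud.
capacity_tb = 200          # verification/simulation output retained in the cloud
egress_tb = 150            # results pulled back to on-premise farms each month

storage_rate = 25.0        # $/TB-month, assumed object storage price
egress_rate = 90.0         # $/TB transferred out, assumed

storage_cost = capacity_tb * storage_rate   # $5,000 per month
egress_cost = egress_tb * egress_rate       # $13,500 per month

# Moving the data costs ~2.7x what storing it does, which is exactly the
# cost-predictability problem described above.
print(f"storage ${storage_cost:,.0f}/mo vs egress ${egress_cost:,.0f}/mo")
```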

Perhaps the greatest drawback to EDA adoption of public cloud is the realization that your data might be sitting on physical compute and/or storage resources that are shared with someone else's data.  That doesn't mean you can see others' data; access is restricted via OS policy and other security measures. Yet it does create a potential path for unauthorized access. As a result, most semiconductor companies have not been willing to risk having their most important "golden jewels" (their IP) hacked and stolen from a public cloud environment. Security has improved since 2011, however, and some companies are considering cloud for long-term archiving of non-critical data as well as some less business-critical IP.

Private cloud avoids these drawbacks, as it isolates the physical infrastructure – including hardware, storage and networking – from all other users. Your own company's on-premise hardware is typically a private cloud, even though, increasingly, some of that "walled-off" infrastructure is itself located off-premise and/or owned and managed by a third party. While physical and network isolation reduce the security concerns, they also eliminate some of the flexibility. The number of servers available can't be increased or decreased with a single key-click to accommodate peak demand, at least not without upfront planning and additional costs.

Hybrid cloud is another common term – which simply means a combination of public and private clouds.

In the world of semiconductor design, private cloud as a service has been available for some time and is offered in various forms by several EDA companies today. Cadence® Design Systems, for example, offers both Hosted Design Solutions [2], which includes HW, SW and IT infrastructure, and QuickCycles® Service, which offers on-site or remote access to Palladium emulation and simulation acceleration resources [3]. Hybrid cloud is also starting to gain interest, as non-critical, infrequently accessed data can be stored in the public cloud with minimal transport costs.

The public cloud market is changing constantly, and as time progresses new improvements may arise that make it more appealing to EDA. A challenge for IT administrators is meeting today's growing infrastructure needs while avoiding investments that are incompatible with future cloud migrations. This is where you need to hedge your bets and choose a platform that delivers the performance and flexibility EDA companies require, yet enables easy migration from private to hybrid – or even public – cloud. EMC's Isilon, for example, is an EDA-proven high performance network-attached storage platform that provides native connectivity to the most popular public cloud providers, including Amazon Web Services, Microsoft Azure and EMC's Virtustream.

Not only does native cloud support future-proof today’s storage investment, it makes the migration seamless – thanks to its single point of management that encompasses private, hybrid and public cloud deployments. EMC Isilon supports a feature called CloudPools, which transparently extends an Isilon storage pool into cloud infrastructures. With CloudPools your company’s critical data can remain on-premise yet less critical, rarely accessed data can be encrypted securely and archived automatically and transparently onto the cloud. Isilon can also be configured to archive your business-critical data (IP) to lower-cost on-premise media.  This combination saves budget and keeps more high-performance storage space available locally for your critical EDA jobs.
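The policy logic that CloudPools automates looks conceptually like the sketch below. This is an illustration of the idea only, not the OneFS API; the threshold is an assumed policy value.

```python
import os
import time

ARCHIVE_AFTER_DAYS = 180      # assumed policy: archive files idle for 6 months
SECONDS_PER_DAY = 86_400

def is_cold(path: str) -> bool:
    """True if the file has not been accessed within the policy window."""
    idle = time.time() - os.stat(path).st_atime
    return idle > ARCHIVE_AFTER_DAYS * SECONDS_PER_DAY

# CloudPools evaluates rules like this across the cluster: matching files
# move to the cloud tier and are replaced by small stubs, so reads recall
# the data transparently and applications never see the difference.
```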

Semiconductor companies and EDA vendors have had their eyes on public cloud for many years. While significant concerns over security continue to slow adoption, technology continues to evolve. Whether your company ultimately sticks with private cloud, or migrates seamlessly to hybrid or public cloud in the future depends on decisions you make today. The key is to focus on flexibility, and not let fear cloud your judgment.

[1] EDA in the Clouds: Myth Busting: https://www.synopsys.com/Company/Publications/SynopsysInsight/Pages/Art6-Clouds-IssQ2-11.aspx?cmp=Insight-I2-2011-Art6

[2] Cadence Design Systems Hosted Design Services: http://www.cadence.com/services/hds/Pages/Default.aspx

[3] Cadence Design System QuickCycles Service: http://www.cadence.com/products/sd/quickcycles/pages/default.aspx

Breakfast with ECS: The Swiss Army Knife of Cloud Solutions

Corey O'Connor

Senior Product Marketing Manager at Dell EMC² ETD

Welcome to another edition of Breakfast with ECS, a series where we take a look at issues related to cloud storage and ECS (Elastic Cloud Storage), EMC’s cloud-scale storage platform.

A Swiss army knife is a multi-layered tool equipped with a variety of attachments that can serve many different functions. When first introduced in the late 1880s, it revolutionized the way soldiers performed their daily tasks – anything from disassembling service rifles to opening canned rations in the field.  Fast forward to 2016: the use of the Swiss army knife may have changed quite a bit, but the original concept of consolidating various components into a single multi-purpose tool has certainly influenced organizations and industries across the world.

EMC's Elastic Cloud Storage (ECS) is without question the Swiss army knife of cloud solutions.  ECS revolutionizes storage management by consolidating varied workloads for object, file, and HDFS into a single, unified system.  You can manage both traditional and "next-gen" or "cloud-native" applications on a platform that spans geographies and acts as a single logical resource.  Just like a Swiss army knife, ECS maximizes capacity by packing a lot into a tiny space: the ECS Appliance can squeeze sixty 8TB drives into a standard 4U DAE, with up to 4PB of storage in a single rack, for a highly dense platform with a very economical data center footprint.
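That density claim is easy to sanity-check with simple arithmetic; the number of DAEs per rack below is my assumption for illustration.

```python
drives_per_dae = 60
tb_per_drive = 8
dae_tb = drives_per_dae * tb_per_drive        # 480 TB per 4U DAE

daes_per_rack = 8                             # assumed: 8 x 4U of DAEs per rack
rack_pb = dae_tb * daes_per_rack / 1000       # ~3.8 PB raw, i.e. "up to 4PB"
print(f"{dae_tb} TB per DAE, ~{rack_pb:.1f} PB per rack")
```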


Breakfast with ECS: Doubling Down on Docker

Corey O'Connor

Senior Product Marketing Manager at Dell EMC² ETD

Welcome to another edition of Breakfast with ECS, a series where we take a look at issues related to cloud storage and ECS (Elastic Cloud Storage), EMC’s cloud-scale storage platform.

Unless you've been living under a rock, I'm sure you've heard of Docker at this point. If you haven't, it's time to dust yourself off and understand that Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. Genius, right?

Containers have been around for quite some time now, but the extra juice worth squeezing came from Docker's ability to provide total isolation of resources and to package and automate applications more effectively than ever before.  Docker gives system administrators and developers the ability to package any kind of software with all its dependencies into a container. Simply put, this resource efficiency standardizes each container and promotes massive scalability, which plugs in very nicely for cloud-scale, geo-distributed systems such as EMC's Elastic Cloud Storage (ECS). During product development, EMC took an early bet on Docker containers, and it has certainly paid off.
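For a flavour of how little it takes to get that run-anywhere guarantee, here is a minimal sketch using the Docker SDK for Python. It assumes a local Docker Engine and `pip install docker`; the image and command are just examples.

```python
import docker  # Docker SDK for Python (docker-py)

client = docker.from_env()

# The image carries its own filesystem: code, runtime, tools, libraries.
# The same container therefore runs identically on a laptop, a CI box,
# or a node in a geo-distributed cluster.
logs = client.containers.run(
    "alpine",
    ["echo", "hello from an isolated container"],
    remove=True,   # clean up the container after it exits
)
print(logs.decode())
```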

