Archive for the ‘Isilon’ Category

Buckets, Apps & Digital Exhaust…All in a Day’s Work For a Dell EMC Splunk Ninja

Cory Minton

Principal Systems Engineer at EMC

Grab your hoodies, your witty black t-shirts, and maybe your capes…it’s time for another exciting Splunk .conf, the annual Splunk User Conference taking place this week at the Walt Disney World Swan and Dolphin Resort.  All of us at EMC are excited to be sponsoring .conf for the third year in a row, and this year our presence will be bigger and better than ever before. Dell EMC is hosting two technical sessions, more than 20 Dell EMC Splunk Ninjas will be running around learning, and we’ll have a large booth in the partner pavilion demonstrating our technology solutions. For all the details, check out our .conf16 site.

This year marks the beginning of a great relationship between two awesome tech businesses: Dell EMC and Splunk.  We joined forces through a formal strategic alliance that started in February.  This alliance enables Dell EMC and its partners to sell Splunk’s industry-leading platform, and it gives Dell EMC unique access to Splunk technical resources for solution design, testing, and validation.  Most importantly, it creates a framework for these two technology powerhouses to collaborate more effectively for customer success.

Why Dell EMC for Splunk?


When we talk about customer success, we mean it in two distinct ways: deploying Splunk on Dell EMC platforms, and using Splunk to derive value from Dell EMC infrastructure.

First, we believe success is deploying Splunk on a flexible infrastructure that not only helps Splunk run fast and efficiently, but also one that can scale easily as the usage of Splunk evolves in a customer organization.  We believe that converged and hyper-converged platforms powered by Dell EMC’s robust portfolio of storage technologies deliver on this vision and provide additional enterprise capabilities:

  • Cost-effective & optimized storage – Dell EMC delivers optimized and efficient storage by aligning the right storage with the long-retention and varying performance requirements of Splunk’s hot, warm, and cold data.
  • Flexible, scale-out capacity consumption model – Scale out infrastructure to meet capacity and compute requirements independently or as a single, converged platform as your data grows.
  • Data reduction & other powerful enterprise capabilities – including secure encryption, compression and deduplication of indexes, and fast, efficient zero-overhead copies for protection.
  • Bottomless cold bucket – Scale-out storage platforms, whether on premises or in the cloud, obviate the need for a frozen bucket by providing a PB-scale cold bucket solution, simplifying data management and keeping data always searchable (see the configuration sketch below).
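
To make the tiering ideas above concrete, here is a minimal, hypothetical indexes.conf sketch that maps hot/warm buckets to a fast tier and cold buckets to a dense scale-out tier. The volume names, mount points and index name are illustrative placeholders, and the sizes and retention values are examples rather than recommendations:

```ini
# Hypothetical sketch: hot/warm buckets on fast storage, cold buckets
# on a dense scale-out tier (e.g., an NFS mount from a NAS cluster).
[volume:fast_tier]
path = /splunk/hot                  # e.g., SSD or all-flash storage
maxVolumeDataSizeMB = 500000

[volume:cold_tier]
path = /splunk/cold                 # e.g., scale-out NAS mount

[example_index]
homePath = volume:fast_tier/example_index/db
coldPath = volume:cold_tier/example_index/colddb
thawedPath = $SPLUNK_DB/example_index/thaweddb
# With a PB-scale cold tier, data can simply age out at the end of its
# retention period instead of being frozen early to reclaim space.
frozenTimePeriodInSecs = 188697600  # ~6 years (Splunk's default)
```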

Splunk and Dell EMC engineering teams have engaged in a strategic collaboration to ensure that all Dell EMC platforms have been validated by Splunk to “meet or exceed Splunk’s published reference server hardware” guidelines.  The Splunk team takes this validation process very seriously, and customers considering infrastructure for their Splunk deployments can rest assured that we have done extensive testing. Whether you are looking at hyper-converged solutions like VxRail or VxRack, converged solutions like Vblock systems, or storage platforms like ScaleIO, XtremIO, VNX, Unity, Isilon, or ECS, you can be confident that the work has been done by both Splunk and Dell EMC to make sure it runs well.

Second, we believe Splunk is an incredibly powerful platform for capturing and deriving value from machine data.  As it turns out, Dell EMC products spin off a massive amount of “digital exhaust” that can be captured easily and used to drive operational intelligence in IT.  Dell EMC has made massive investments over the last few years to build apps for our platforms and make them available in Splunkbase for free.  We’ve built apps for XtremIO, Isilon and VNX, with many more in the works.  These apps make it simple to ingest data from Dell EMC platforms, and we offer useful, pre-built reports and dashboards to make monitoring these assets simple.  And it doesn’t stop there…once the data is extracted from your Dell EMC platforms, the underlying searches powering our reports, or just the indexes themselves, can be used in investigations across the entire IT service stack.  One of my favorite things to hear from our customers is the exciting ways they use the apps beyond simple reporting, and I hope to hear many more stories this year at .conf2016.
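
As a small illustration of that reuse: once an app’s inputs are indexing platform data, the same events are available to ad-hoc SPL searches well beyond the packaged dashboards. The index, sourcetype and field names below are hypothetical placeholders, not the actual names used by the Dell EMC apps:

```spl
index=emc_platforms sourcetype="isilon:stats"
| timechart span=5m avg(cpu_busy_pct) AS avg_cpu BY node
```

The same search fragment could just as easily feed an alert, a correlation search, or an investigation that joins storage behavior with application logs elsewhere in your indexes.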

Dell EMC Splunk Ninjas And Our Top Ten List

The Dell EMC Splunk Ninjas are at the show in their Dell EMC blue Ninja Hunt shirts. The team is a group of more than 40 systems engineers from across Dell EMC who have been trained the same way Splunk trains its own systems engineers.  The Ninjas hold certifications ranging from SE1 all the way to SE3, with skills across not only using Splunk, but administering and architecting it at scale.  This is a global team, available not only to talk to you at .conf, but also in the field for direct conversations when you head back to the office.

Our wise, passionate and zany Ninja team recently pulled together a list of Top Ten Best Practices for Splunk on Dell EMC, amassed from years of lab testing and real-world customer experience. You may say ‘Duh’ to some, but others may surprise you.

Happy Splunking!

Analyst firm IDC evaluates EMC Isilon: Lab-validation of scale-out NAS file storage for your enterprise Data Lake

Suresh Sathyamurthy

Sr. Director, Product Marketing & Communications at EMC

A Data Lake should now be a part of every big data workflow in your enterprise organization. By consolidating file storage for multiple workloads onto a single shared platform based on scale-out NAS, you can reduce costs and complexity in your IT environment, and make your big data efficient, agile and scalable.

That’s the expert opinion in analyst firm IDC’s recent Lab Validation Brief: “EMC Isilon Scale-Out Data Lake Foundation: Essential Capabilities for Building Big Data Infrastructure”, March 2016. As the lab validation report concludes: “IDC believes that EMC Isilon is indeed an easy-to-operate, highly scalable and efficient Enterprise Data Lake Platform.”

The Data Lake Maximizes Information Value

The Data Lake model of storage represents a paradigm shift from the traditional linear enterprise data flow model. As data and the insights gleaned from it increase in value, enterprise-wide consolidated storage is transformed into a hub around which the ingestion and consumption systems work. This enables enterprises to bring analytics to data in place – avoiding the expense of multiple storage systems and the time required for repeated ingestion and analysis.

But pouring all your data into a single shared Data Lake would put serious strain on traditional storage systems – even without the added challenges of data growth. That’s where the virtually limitless scalability of EMC Isilon scale-out NAS file storage makes all the difference…

The EMC Data Lake Difference

The EMC Isilon Scale-out Data Lake is an Enterprise Data Lake Platform (EDLP) based on Isilon scale-out NAS file storage and the OneFS distributed file system.

As well as meeting the growing storage needs of your modern datacenter with massive capacity, it enables big data accessibility using traditional and next-generation access methods – helping you manage data growth and gain business value through analytics. You can also enjoy seamless replication of data from the enterprise edge to your core datacenter, and tier inactive data to a public or private cloud.

We recently reached out to analyst firm IDC to lab-test our Isilon Data Lake solutions – here’s what they found in 4 key areas…

  1. Multi-Protocol Data Ingest Capabilities and Performance

Isilon is an ideal platform for enterprise-wide data storage, and provides a powerful centralized storage repository for analytics. With the multi-protocol capabilities of OneFS, you can ingest data via NFS, SMB and HDFS. This makes the Isilon Data Lake an ideal and user-friendly platform for big data workflows, where you need to ingest data quickly and reliably via protocols most suited to the workloads generating the information. Using native protocols enables in-place analytics, without the need for data migration, helping your business gain more rapid data insights.
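
As a sketch of what multi-protocol access means in practice, the snippet below reads the same file once through an NFS mount and once through HDFS. The hostname, mount point and file path are hypothetical, and it assumes the pyarrow library with a working Hadoop client installation:

```python
import pyarrow.fs as pafs

# The same OneFS file, reached two ways (paths are hypothetical).
# 1) Over NFS, mounted locally at /mnt/datalake.
with open("/mnt/datalake/events/2016-03-01.json", "rb") as f:
    nfs_bytes = f.read()

# 2) Over HDFS, with the Isilon cluster answering as the namenode.
hdfs = pafs.HadoopFileSystem(host="isilon.example.com", port=8020)
with hdfs.open_input_file("/events/2016-03-01.json") as f:
    hdfs_bytes = f.read()

# No copy or migration took place: both reads hit the same stored file,
# which is what enables in-place analytics.
assert nfs_bytes == hdfs_bytes
```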

IDC validated that the Isilon Data Lake offers excellent read and write performance for Hadoop clusters accessing HDFS via OneFS, compared with direct-attached storage (DAS). In the lab tests, Isilon performed:

  • nearly 3x faster for data writes
  • over 1.5x faster for reads and read/writes.

As IDC says in its validation: “An Enterprise Data Lake platform should provide vastly improved Hadoop workload performance over a standard DAS configuration.”

  2. High Availability and Resilience

Policy-based high availability capabilities are needed for enterprise adoption of Data Lakes. The Isilon Data Lake is able to cope with multiple simultaneous component failures without interruption of service. If a drive or other component fails, it only has to recover the specific affected data (rather than recovering the entire volume).

IDC validated that a disk failure on a single Isilon node has no noticeable performance impact on the cluster. Replacing a failed drive is a seamless process and requires little administrative effort. (This is in contrast to traditional DAS, where the process of replacing a drive can be rather involved and time consuming.)

Isilon can even cope easily with node-level failures. IDC validated that a single-node failure has no noticeable performance impact on the Isilon cluster. Furthermore, the operation of removing a node from the cluster, or adding a node to the cluster, is a seamless process.

  3. Multi-tenant Data Security and Compliance

Strong multi-tenant data security and compliance features are essential for an enterprise-grade Data Lake. Access zones are a crucial part of the multi-tenancy capabilities of the Isilon OneFS. In tests, IDC found that Isilon provides no-crossover isolation between Hadoop instances for multi-tenancy.

Another core component of secure multi-tenancy is the ability to provide a secure authentication and authorization mechanism for local and directory-based users and groups. IDC validated that the Isilon Data Lake provides multiple federated authentication and authorization schemes. User-level permissions are preserved across protocols, including NFS, SMB and HDFS.

Federated security is an essential attribute of an Enterprise Data Lake Platform, with the ability to maintain confidentiality and integrity of data irrespective of the protocols used. For this reason, another key security feature of the OneFS platform is SmartLock – specifically designed for deploying secure and compliant (SEC Rule 17a-4) Enterprise Data Lake Platforms.

In tests, IDC found that Isilon enables a federated security fabric for the Data Lake, with enterprise-grade governance, regulatory and compliance (GRC) features.

  4. Simplified Operations and Automated Storage Tiering

The Storage Pools feature of Isilon OneFS allows administrators to apply common file policies across the cluster locally – and extend them to the cloud.

Storage Pools consists of three components:

  • SmartPools: Data tiering within the cluster – essential for moving data between performance-optimized and capacity-optimized cluster nodes.
  • CloudPools: Data tiering between the cluster and the cloud – essential for implementing a hybrid cloud, and placing archive data on a low-cost cloud tier.
  • File Pool Policies: Policy engine for data management locally and externally – essential for automating data movement within the cluster and the cloud.

As IDC confirmed in testing, Isilon’s federated data tiering enables IT administrators to optimize their infrastructure by automating data placement onto the right storage tiers.
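
To illustrate the kind of rule a file pool policy encodes, here is a conceptual Python sketch; it is not OneFS syntax, and the tier names, paths and thresholds are invented:

```python
import time

DAY = 86400  # seconds

def choose_tier(path: str, last_access_epoch: float) -> str:
    """Toy model of a file pool policy: place a file on a tier
    based on its path and age. All names/thresholds are invented."""
    age_days = (time.time() - last_access_epoch) / DAY
    if path.startswith("/ifs/analytics/active"):
        return "performance-nodes"   # SmartPools-style fast tier
    if age_days > 365:
        return "cloud-archive"       # CloudPools-style cloud tier
    if age_days > 90:
        return "capacity-nodes"      # SmartPools-style dense tier
    return "performance-nodes"
```

In OneFS itself, policies like this are declared once and the cluster evaluates them and moves data automatically; the sketch only shows the shape of the decision.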

The expert verdict on the Isilon Data Lake

IDC concludes that: “EMC Isilon possesses the necessary attributes such as multi-protocol access, availability and security to provide the foundations to build an enterprise-grade Big Data Lake for most big data Hadoop workloads.”

Read the full IDC Lab Validation Brief for yourself: “EMC Isilon Scale-Out Data Lake Foundation: Essential Capabilities for Building Big Data Infrastructure”, March 2016.

Learn more about building your Data Lake with EMC Isilon.

Taking AIMS at the IP Transition

Ryan Sayre

Ryan Sayre is the CTO-at-Large for EMC Isilon covering Europe, the Middle East, and Africa. Ryan has hands-on experience in production technology across several types of production workflows. Before working in the storage industry, he was an IT infrastructure architect at a large animation studio in the United States. He has consulted across entertainment sectors, from content generation and production management to digital distribution and presentation. Ryan’s current role allows him to assist in enhancing the Isilon product for both current and future uses in media production, share his findings across similar industries and improve the overall landscape of how storage can be better leveraged for productivity. He holds an MBA from London Business School (UK) and a Bachelor of Science in Computer Science from the University of Portland (USA). In his free time, he is an infrastructure volunteer for the London Hackspace and an amateur radio enthusiast using the callsigns M0RYS and N0RYS.

Much has been made of the impending move to a largely IP and hybrid cloud infrastructure in the media and entertainment industry, and with good reason. Over the last decade, the shift from SDI to IP has been met with both cheers and jeers. Supporters of transitioning to IP speak of vast operating and financial benefits, while traditional broadcast facilities and operators are still struggling to reconcile these potential gains with their unease over emerging standards and interoperability concerns.

In an effort to assuage these concerns, EMC, alongside several of the industry’s leading vendors such as Cisco, Evertz, Imagine Communications and Sony, has joined the Alliance for IP Media Solutions (AIMS). AIMS, a non-profit trade alliance, is focused on helping broadcast and media companies move from bespoke legacy systems to a virtualized, IP-based future – quickly and economically. Believing open, standards-based protocols to be critically important to ensuring long-term interoperability, AIMS promotes the adoption of several standards: VSF TR-03 and TR-04, SMPTE 2022-6 and AES67.

It is important that organizations continue to advocate for AIMS’ roadmap for open standards in IP technology and do their part to educate each other, which is why we recently partnered with TV Technology and conceptEngineering, a broadcast television and internet production technology firm, to develop an e-book titled “The IP Transformation: What It Means for M&E Storage Strategies”. It examines how the combination of standard Ethernet/IP networking, virtualized workflows on commodity servers and clustered high-performance storage is influencing new video facility design and expanding new business opportunities for media companies. The e-book takes a closer look at topics such as media exchange characteristics, the eventual fate of Fibre Channel, Quality of Service (QoS) and storage needs for evolving media workflows.

To learn more about the shift to IP, visit EMC at IBC 2016 at stand 7.H10, September 9-13. Our media and entertainment experts will be onsite exhibiting an array of new products and media workflow solutions, including 4K content creation, IP-based hybrid-cloud broadcast operations, and cloud DVR on-demand content delivery. EMC will also be demonstrating a number of partner solutions at IBC, including:

Pixspan, Aspera and NVIDIA

Advances in full-resolution 4K workflows – EMC, Pixspan, Aspera, and NVIDIA are bringing full-resolution 4K workflows to IT infrastructures, advancing digital media workflows with bit-exact content over standard 10 GbE networks. Solution Overview

Imagine Communications

Integrated channel playout solution with Imagine Communications – EMC and Imagine Communications bring live channel playout with the Versio solution in an integrated offering with EMC’s converged VCE Vblock system and EMC’s Isilon scale-out NAS storage system. Solution Overview

MXFserver

Remote and collaborative editing solution with MXFserver – EMC and MXFserver are announcing an integrated disk-based archiving solution that allows immediate online retrieval of media files. The combined solution utilizes MXFserver software and EMC’s Isilon scale-out NAS to deliver storage as well as a platform for industry-leading editing applications. Solution Overview

Anevia

Cloud-based multi-platform content delivery with Anevia – The joint release from Anevia and EMC allows media organizations to deliver OTT services (Live, Timeshift, Replay, Catchup, Start over, Pause, CDVR, and VOD) to all devices, enabling consumers to access and view content they have recorded on any device at any time. Solution Overview

Rohde & Schwarz

EMC and Rohde & Schwarz announce interoperability between Isilon storage and the Venice ingest and production platform. Venice is a real-time and file-based ingest and playout server from Rohde & Schwarz. Solution Overview

NLTek

EMC and NLTek bring a combined solution enabling integration with Avid Interplay. Working within the familiar Avid MC|UX toolset, users are able to store and restore Avid Assets to an EMC Isilon or ECS media repository—creating a unified Nearchive. Solution Overview

For more information and to schedule a meeting at IBC, please visit our website.

This summer, NBC captured history while setting standards for the future

Tom "TV" Burns

CTO, Media & Entertainment at EMC

Building on its history covering the Olympic Games, NBC provided viewers in the United States a front row seat to the Games of the XXXI Olympiad.

Projects such as covering the Games, a 17-day live concurrent event, require the ultimate in scalable, reliable storage. NBC uses the EMC Isilon product line to store and stage video captured during these irreplaceable moments of sporting glory, as well as audio, stills and motion graphics.

Isilon’s 3-petabyte storage repository bridged the gap from Stamford to Rio, functioning as a single large Data Lake and enabling real-time, global collaborative production in support of the entire broadcast. Adding Isilon nodes without downtime allowed NBC to grow storage capacity and network throughput while maintaining seamless access to a rock-solid platform.

NBC selected the EMC Isilon product line as a reliable, proven infrastructure to manage its storage.

Is that a tier in your eye – or is your EDA storage obsolete?

Lawrence Vivolo

Sr. Business Development Manager at EMC²

We’ve all come to expect the data on our corporate laptops and workstations – e-mails, schedules, papers, music and videos – to be backed up automatically. Some less frequently accessed data, like archived e-mail, isn’t kept locally, to save disk space. When you access one of these files, you find it’s slower to open. If the archive is very old, say a year or more, you might even have to ask IT to “restore” it from tape before you can open it. In the storage world, this process of moving data between different types of storage is called data tiering, and it is done to optimize performance and cost. Since ASIC/SoC design is all about turnaround time, time-to-market and shrinking budgets, it’s important to know how tiering impacts your EDA tool flow and what you can do to influence it.

In most enterprises there are multiple levels of tiering, where each tier offers a different capacity/performance/cost ratio. The highest performance tier is typically reserved for the most critical applications because it is the most expensive and has the least storage density. This tier, typically referred to as Tier “0”, is complemented by progressively lower-performance, higher-density (and lower-cost) tiers (1, 2, 3, etc.). Tiers are generally built from different types of drives. For example, a storage cluster might include Tier 0 storage made of very high-performance, low-capacity solid-state drives (SSDs); Tier 1 storage made of high-capacity, high-performance serial-attached SCSI (SAS) drives; and Tier 2 storage consisting of high-capacity Serial ATA (SATA) drives.

While ideally all EDA projects would run on Tier 0 storage (if space were available), it is highly desirable to move data to lower-cost tiers whenever possible to conserve budget.  Often this is done after a project has gone into production and design teams have moved on to the next project. It doesn’t always happen, however, especially if tiering is managed manually. (Surprisingly, many semiconductor design companies today have deployed enterprise storage solutions that don’t support automated tiering.)

Given the complexities and tight schedules involved in today’s semiconductor designs, it is not uncommon to find and fix a bug only a few weeks away from tape-out. When this happens, sometimes you need to urgently allocate Tier 0 storage space in order to run last-minute regressions. If Tier 0 space is being managed manually and space is limited, you may have to wait for IT to move a different project’s data around before they can get to you.  From a management perspective, this is even more painful when it’s your old data, because you’ve been paying a premium to store it there unnecessarily!

The opposite scenario is also common: a project that’s already in production has had its data moved to lower-cost storage to save budget. Later, a critical problem is discovered that needs to be debugged.  In this scenario, do you try to run your EDA tools on the slower storage, or wait for IT to move your data back to Tier 0 storage and benefit from reduced simulation turnaround times?  It depends on how long it takes to make the transition. If someone else’s project data needs to be moved first, the whole process becomes longer and less predictable.

While it may seem difficult to believe that tiering is managed manually, the truth is that most EDA tool flows today use storage platforms that don’t support automated tiering. That could be due, at least in part, to their “scale-up” architecture, which tends to create “storage silos” where each volume (or tier) of data is managed individually (and manually). Solutions such as EMC Isilon use a more modern “scale-out” architecture that lends itself better to auto-tiering. Isilon, for example, features SmartPools, which can seamlessly auto-tier EDA data – minimizing EDA turnaround time when you need it and reducing cost when you don’t.

For EDA teams facing uncertain budgets and shrinking schedules, the benefits of automated tiering can be significant. With Isilon, for example, you can configure your project, in advance, to be allocated the fastest storage tier during simulation regressions (when you need performance); then, at some point after tape-out (e.g., six months), your project data will move to a lower-cost, less performance-critical tier. Eventually, while you’re sitting on a beach enjoying your production bonus, Isilon will move your data to an even lower tier for long-term storage – saving your team even more money. And if later, after the rum has worn off, you decide to review your RTL – maybe for reuse on a future project – Isilon will move that data back to a faster tier, leaving the rest available at any time, but on lower-cost storage. So next time you get your quarterly storage bill from IT, ask yourself: “What’s lurking behind that compute farm – and does it support auto-tiering?”
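
A minimal sketch of that lifecycle, with invented tier names and thresholds; real SmartPools policies are declarative rather than imperative, but the placement decision looks roughly like this:

```python
from datetime import date, timedelta

# Invented tiers, fastest first (e.g., SSD, SAS, SATA).
TIERS = ["tier0-ssd", "tier1-sas", "tier2-sata"]

def tier_for_project(tape_out: date, recently_accessed: bool,
                     today: date) -> str:
    """Toy age-based placement: demote project data after tape-out,
    promote it again when regressions or debug work touch it."""
    if recently_accessed:
        return TIERS[0]                   # last-minute regressions want Tier 0
    age = today - tape_out
    if age > timedelta(days=365):
        return TIERS[2]                   # long-term storage
    if age > timedelta(days=180):         # e.g., six months after tape-out
        return TIERS[1]
    return TIERS[0]

# A design taped out seven months ago and untouched since:
print(tier_for_project(date(2016, 2, 1), False, date(2016, 9, 1)))
# -> 'tier1-sas'
```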

MLBAM Goes Over the Top: The Case for a DIY Approach to OTT

James Corrigan

Advisory Solutions Architect at EMC

When looking at the current media landscape, the definition of what constitutes a “broadcaster” is undergoing a serious overhaul. Traditional linear TV might not be dead just yet, but it’s clearly having to reinvent itself in order to stay competitive amid rapidly evolving media business models and increasingly diverse content distribution platforms.

The concept of “binge watching” a TV show, for example, was non-existent only a few years ago. Media consumption is shifting toward digital and online viewing on a myriad of devices such as smartphones, tablets and PCs. Subscription on-demand services are becoming the consumption method of choice, while broadcast-yourself platforms like Twitch and YouTube are fast becoming a popular cornerstone of millennials’ viewing habits. Horowitz Research found that over 70 percent of millennials have access to an OTT SVOD service, and they are three times as likely to have an OTT SVOD service without a pay TV subscription. PricewaterhouseCoopers (PwC) estimates that OTT video streaming will grow to be a $10.1 billion business by 2018, up from $3.3 billion in 2013.

As a result, broadcast operators are evolving into media aggregators, and content providers are transforming into “entertainment service providers,” expanding into platforms ranging from mobile to digital to even virtual theme parks.

Building Versus Buying

This change in media consumption requires media organizations to consider a more efficient storage, compute and network infrastructure. Media organizations need flexible and agile platforms, not only to expand their content libraries but also to meet the dynamic growth in the number of subscribers and in how they consume and experience media and entertainment.

Competing successfully in the OTT market depends on the “uniqueness” of your service to the consumer. This uniqueness comes from either having unique or exclusive content, or from having a platform that can adapt and offer the customer more than just watching content. For the latter, how you deploy your solution – whether you (1) build your own (“DIY”), (2) buy a turn-key solution or (3) take a hybrid approach – is key to success.

MLBAM Hits a Home Run with a DIY Approach

A key advantage of the “DIY” approach is that it increases business agility, allowing media organizations to adapt and change as consumers demand more from their services. For some media organizations, this allows them to leverage existing content assets, infrastructure and technology teams and keep deployment costs low. Further, layering OTT video delivery on top of regular playout enables organizations to incrementally add the new workflow to the existing content delivery ecosystem. For new entrants, the DIY approach enables new development methodologies, allowing these “new kids on the block” to develop micro-services unencumbered by legacy services.

One example of an organization taking the DIY approach is Isilon customer Major League Baseball Advanced Media (MLBAM), which has created a streaming media empire. MLBAM’s success underscores the voracious and rapid growth in consumer demand for streaming video; it streams sporting events, and also supports the streaming service HBO GO, as well as mobile, web and TV offerings for the NHL.

“The reality is that now we’re in a situation where digital distribution isn’t just a ‘nice to have’ strategy, it’s an essential strategy for any content company,” said Joe Inzerillo, CTO for MLBAM. “When I think about…how we’re going to be able to innovate, I often tell people ‘I don’t manage technology, I actually manage velocity.’ The ability to adapt and innovate and move forward is absolutely essential.”

Alternatively, the turn-key approach, which either outsources your media platform or gives you a pre-built video delivery infrastructure, can offer benefits such as increased speed-to-market. However, selecting the right outsource partner for this approach is critical; choose incorrectly and you risk vendor lock-in, loss of control and flexibility, and higher operational costs.

Making it Personal: Analytics’ Role

Being able to access content when and where consumers want – on the device they want – is one part of the challenge with the rise of digital and online content. Another key component is personalization of that content for viewers. Making content more relevant and tailored for subscribers is critical to the success of alternate broadcast business models, and EMC and Pivotal are helping media companies extract customer insights through the development and use of analytics, which should be key to any OTT strategy. Analyzing data on what consumers are watching should be used to drive content acquisition and personalized recommendation engines. Targeted ad insertion adds the further benefit of increasing revenue through tailored advertisements.
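
As a toy illustration of the analytics loop described above (the event data and names are invented), even a simple count of what each subscriber watches can seed a recommendation list or an ad-targeting decision:

```python
from collections import Counter, defaultdict

# Invented sample of (subscriber, genre) viewing events.
events = [("alice", "baseball"), ("alice", "drama"),
          ("alice", "baseball"), ("bob", "hockey")]

watch_counts = defaultdict(Counter)
for user, genre in events:
    watch_counts[user][genre] += 1

def top_genres(user: str, n: int = 2) -> list:
    """Most-watched genres per subscriber: a naive seed for
    personalized recommendations and targeted ad insertion."""
    return [genre for genre, _ in watch_counts[user].most_common(n)]

print(top_genres("alice"))  # ['baseball', 'drama']
```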

Scaling for the future

Infrastructure that scales is the final consideration for new-age media platforms. Being able to scale “apps” based on containers or virtual instances is key. To do that, you need a platform that scales compute, network and storage independently or together, just like EMC’s scale-out NAS with Isilon or scale-out compute with VCE Vblock and VxRail/VxRack. MLBAM’s Inzerillo explains: “The ability to have a technology like Isilon that’s flexible, so that the size of the data lake can grow as we onboard clients, is increasingly important to us. That kind of flexibility allows you to really focus on total cost of ownership of the custodianship of the data.”

Inzerillo continues, “If you’re always worried about the sand that you’re standing on, because it’s shifting, you’re never going to be able to jump, and what we need to be able to do is sprint.”

It’s an exciting time to be in the ever-evolving media and entertainment space – the breadth of offerings that broadcasters and media companies are developing today, and the range of devices and distribution models to reach subscribers will only continue to grow.

Check out how MLBAM improves customer experience through OTT.
