Breakfast with ECS: Files Can’t Live in the Cloud? This Myth is BUSTED!

Welcome to another edition of Breakfast with ECS, a series where we take a look at issues related to cloud storage and ECS (Elastic Cloud Storage), EMC’s cloud-scale storage platform.

The trends toward increasing digitization of content and toward cloud-based storage have been driving a rapid increase in the use of object storage throughout the IT industry.  However, while it may seem that every application now uses Web-accessible REST interfaces on top of cloud-based object storage, in reality new applications are largely being designed with this model while file-based access remains critical for a large proportion of existing IT workflows.

Given the shift in the IT industry towards object-based storage, why is file access still important?  There are several reasons, but they boil down to two fundamental ones:

  1. There exists a wealth of applications, both commercial and home-grown, that rely on file access, as it has been the dominant access paradigm for the past decade.
  2. It is not cost effective to update all of these applications and their workflows to use an object protocol. The data set managed by the application may not benefit from an object storage platform, or the file access semantics may be so deeply embedded in the application that the application would need a near rewrite to disentangle it from the file protocols.

What are the options?

The easiest option is to use a file-system protocol with an application that was designed with file access as its access paradigm.

ECS has supported file access natively since its inception, originally via its HDFS access method and most recently via the NFS access method.  While HDFS lacks certain features of a true file system interface, the NFS access method has full support for applications, and NFS clients are a standard part of any OS platform, making NFS the logical choice for file-based application access.

Via NFS, applications gain access to the many benefits of ECS, including its scale-out performance, the ability to massively multi-thread reads and writes, industry-leading storage efficiency, and multi-protocol access.  For example, a legacy application can ingest data via NFS while newer mobile clients access the same data over S3, supporting next-generation workloads at a fraction of the cost of rearchitecting the complete application.
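To make the multi-protocol idea concrete, here is a minimal sketch of reading back, over the S3 API, a file that a legacy application dropped onto an NFS-mounted ECS bucket. The endpoint URL, bucket name, object key, and credentials are hypothetical placeholders, not values taken from this article.

```python
# Minimal sketch of multi-protocol access on ECS: a file written into an
# NFS-mounted bucket is read back as an object over the S3 API.
# The endpoint, bucket name, key, and credentials below are hypothetical
# placeholders for illustration only.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ecs.example.com:9021",   # hypothetical ECS S3 endpoint
    aws_access_key_id="OBJECT_USER",
    aws_secret_access_key="SECRET_KEY",
)

# A legacy application might have written this file via an NFS mount of the
# same bucket, e.g. /mnt/legacy-exports/daily/report.csv.
obj = s3.get_object(Bucket="legacy-exports", Key="daily/report.csv")
print(obj["Body"].read().decode("utf-8"))
```

The same data remains reachable through either interface, which is the point: the NFS writer and the S3 reader never need to know about each other.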

Read the NFS on ECS Overview and Performance White Paper for a high-level summary of NFS version 3 support on ECS.

An alternative is to use a gateway or tiering solution to provide file access, such as CIFS-ECS, Isilon CloudPools, or third-party products like Panzura or Seven10.  However, if ECS supports direct file-system access, why would an external gateway ever be useful?  There are several reasons why this might make sense:

  • An external solution will typically support a broader range of protocols, including CIFS, NFSv4, FTP, and other protocols that may be needed in the application environment.
  • The application may be running in an environment where the access to the ECS is over a slow WAN link. A gateway will typically cache files locally, thereby shielding the applications from WAN limitations or outages while preserving the storage benefits of ECS.
  • A gateway may implement features such as compression, which reduces WAN traffic to ECS and provides direct cost savings on WAN transfer fees, or encryption, which adds a further level of security to data transfers.
  • While HTTP ports are typically open across corporate or data center firewalls, network ports for NAS (NFS, CIFS) protocols are normally blocked for external traffic. Some environments, therefore, may not allow direct file access to an ECS that is not in the local data center, whereas a gateway that provides file services locally and accesses ECS over HTTP would satisfy corporate network policies.

So what’s the right answer?

There is no one right answer; the correct choice depends on the specifics of the environment and the characteristics of the application.

  • How close is the application to the ECS? File system protocols work well over LANs and less well over WANs.  For applications that are near the ECS, a gateway is an unnecessary additional hop on the data path, though gateways can give an application the experience of LAN-local traffic even for a remote ECS.
  • What are the application characteristics? For an application that makes many small changes to an individual file or a small set of files, a gateway can consolidate multiple such changes into a single write to ECS.  For applications that more generally write new files or update existing files with relatively large updates (e.g. rewriting a PowerPoint presentation), a gateway may not provide much benefit.
  • What is the future of the application? If the desire is to change the application architecture to a more modern paradigm, then files on ECS written via the file interface will continue to be accessible later as the application code is changed to use S3 or Swift.  Gateways, on the other hand, often write data to ECS in a proprietary format, thereby making the transition to direct ECS access via REST protocols more difficult.

As should be clear, there is no one right answer for all applications.  The flexibility of ECS, however, allows for some applications to use direct NFS access to ECS while other applications use a gateway, based on the characteristics of the individual applications.

If existing file based workflows were the reason for not investigating the benefits of an ECS object based solution, then rest assured that an ECS solution can address your file storage needs while still providing the many benefits of the industry’s premier object storage platform.

Want more ECS? Visit us at www.emc.com/ecs or try the latest version of ECS for FREE for non-production use by visiting www.emc.com/getecs.

Is that a tier in your eye – or is your EDA storage obsolete?

Lawrence Vivolo

Sr. Business Development Manager at EMC²

We’ve all come to expect the data on our corporate laptops and workstations – e-mails, schedules, documents, music, videos, and so on – to be backed up automatically. Less frequently accessed data, such as archived e-mail, isn’t kept locally in order to save disk space. When you access those files, they are slower to open. If the archive is very old, say a year or more, you might even have to ask IT to “restore” it from tape before you can open it. In the storage world, this process of moving data between different types of storage is called data tiering, and it is done to optimize performance and cost. Since ASIC/SoC design is all about turnaround time, time-to-market, and shrinking budgets, it’s important to know how tiering impacts your EDA tool flow and what you can do to influence it.

In most enterprises there are multiple levels of tiering, each offering a different capacity/performance/cost ratio. The highest-performance tier is typically reserved for the most critical applications because it is the most expensive and has the lowest storage density. This tier, typically referred to as Tier “0”, is complemented by progressively lower-performance, higher-density (and lower-cost) tiers (1, 2, 3, etc.). Tiers are generally built from different types of drives. For example, a storage cluster might include Tier 0 storage made of very high-performance, low-capacity solid-state drives (SSDs); Tier 1 storage made of high-capacity, high-performance Serial Attached SCSI (SAS) drives; and Tier 2 storage consisting of high-capacity Serial ATA (SATA) drives.

While ideally all EDA projects would be run on Tier 0 storage (if space is available), it is highly desirable to move to lower cost tiers whenever possible to conserve budget.  Often this is done after a project has gone into production and design teams have moved on to the next project. This isn’t always the case, however, especially if tiering is managed manually. (Surprisingly, many semiconductor design companies today have deployed enterprise storage solutions that don’t support automated tiering).

Given the complexities and tight schedules involved in today’s semiconductor designs, it is not uncommon to find and fix a bug only a few weeks away from tape out. When this happens, sometimes you need to urgently allocate Tier-0 storage space in order to run last-minute regressions. If Tier-0 space is being managed manually and space is limited, you may have to wait for IT to move a different project’s data around before they can get to you.  From a management perspective, this is even more painful when it’s your old data, because you’ve been paying a premium to store it there unnecessarily!

The opposite scenario is also common: a project that’s already in production has had its data moved to lower cost storage to save budget. Later a critical problem is discovered that needs to be debugged.  In this scenario, do you try to run your EDA tools using the slower storage or wait for IT to move your data to Tier-0 storage and benefit from reduced simulation turn-around times?  It depends on how long it takes to make the transition. If someone else’s project data needs to be moved first, the whole process becomes longer and less predictable.

While it may seem difficult to believe that tiering is still managed manually, the truth is that most EDA tool flows today use storage platforms that don’t support automated tiering. That could be due, at least in part, to their “scale-up” architecture, which tends to create “storage silos” where each volume (or tier) of data is managed individually (and manually). Solutions such as EMC Isilon use a more modern “scale-out” architecture that lends itself better to auto-tiering. Isilon, for example, features SmartPools, which can seamlessly auto-tier EDA data – minimizing EDA turnaround time when you need it and reducing cost when you don’t.

For EDA teams facing uncertain budgets and shrinking schedules, the benefits of automated tiering can be significant. With Isilon, for example, you can configure your project, in advance, to be allocated the fastest storage tier during simulation regressions (when you need performance), and then at some point after tape out (e.g., 6 months), your project data will be moved to a lower-cost, less performance-critical tier. Eventually, while you’re sitting on a beach enjoying your production bonus, Isilon will move your data to an even lower tier for long-term storage – saving your team even more money. And if later, after the rum has worn off, you decide to review your RTL – maybe for reuse on a future project – Isilon will move that data to a faster tier, leaving the rest available at any time, but on lower-cost storage. So next time you get your quarterly storage bill from IT, ask yourself, “What’s lurking behind that compute farm – and does it support auto-tiering?”
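As a rough illustration of the kind of age-based policy described above, here is a minimal sketch in Python. The tier names, age thresholds, and the move_to_tier() helper are hypothetical stand-ins, not actual SmartPools settings or APIs.

```python
# Minimal sketch of an age-based auto-tiering policy like the one described
# above. Tier names, age thresholds, and move_to_tier() are hypothetical
# illustrations only.
import os
import time

DAY = 86400
POLICY = [                    # (minimum age in days, target tier)
    (365, "tier2-archive"),   # long-term storage after about a year
    (180, "tier1-sas"),       # roughly 6 months after tape out
    (0,   "tier0-ssd"),       # active regression data stays on the fast tier
]

def target_tier(path, now=None):
    """Pick a tier based on how long ago the file was last modified."""
    now = now or time.time()
    age_days = (now - os.path.getmtime(path)) / DAY
    for min_age, tier in POLICY:
        if age_days >= min_age:
            return tier
    return POLICY[-1][1]

def move_to_tier(path, tier):
    # Placeholder: a real system would restripe the file onto the chosen pool.
    print(f"{path} -> {tier}")

# Example: sweep a (hypothetical) project directory and apply the policy.
for root, _, files in os.walk("/projects/chip_a"):
    for name in files:
        full = os.path.join(root, name)
        move_to_tier(full, target_tier(full))
```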

Digital Strategies:  Are Analytics Disrupting the World?

Keith Manthey

CTO of Analytics at EMC Emerging Technologies Division

“Software is eating the world.”  It is a phrase we often see written but sometimes do not fully understand.  More recently I have read derivations of that phrase positing that “analytics are disrupting the world.”  Both phrases hold a lot of truth.  But why? Some of the major disruptions of the last 5 years can be attributed to analytics.  Most companies that serve as intermediaries, such as Uber or AirBNB, with a business model of making consumer-supplier “connections,” are driven by analytics.  Surge pricing, routing optimization, available rentals, available drivers, and so on are all algorithms to these “connection” businesses that are disrupting the world.  It could be argued that analytics is their secret weapon.

It is normal for startups to make new and sometimes risky investments in new technologies like Hadoop and analytics.  The trend is carrying over into traditional industries and established businesses as well.  What are the analytics use cases in industries like Financial Services (aka FSI)?

Established Analytics Plays in FSI

Two use cases naturally come to mind when I think of “Analytics” and “Financial Services”: high-frequency trading and fraud are two traditional use cases that have long utilized analytics.  Both are fairly well respected and well documented with regard to their heavy use of analytics.  I myself blogged recently (From Kinetic to Synthetic) on behalf of Equifax regarding market trends in synthetic fraud.  Beyond these obvious trends, though, where are analytics impacting the Financial Services industry?  What use cases are relevant and impacting the industry in 2016, and why?

Telematics

The insurance industry has been experimenting with opt-in programs that monitor driving behavior for several years.  Insurance companies have varying opinions of their usefulness, but it is clear that driving behavior is (1) a heavy user of unstructured data and (2) a dramatic leap from the statistically based approach that relies on financial data and actuarial tables.  Telematics is the name given to this set of opt-in, usage-based insurance and driver-monitoring programs. Telematics has fostered in insurance an approach long used in other verticals such as fraud: pinning behavior down to an individual pattern instead of trying to predict broad swaths of patterns.  To be more precise, telematics looks to derive a “behavior of one” rather than a “generalized driving pattern for 1K individuals.”  To see why this differs from past insurance practice, compare the two methods directly. Method One uses historical actuarial tables of life expectancy along with demographic and financial data to denote risk; Method Two asks how ONE individual drives, based on real driving data received from their car.  Which is more predictive of the expected rate of accidents is the question for analytics.  While this is a gross over-simplification of the entire process, it is a radical shift in the types of data and the analytical methods of deriving value from the data available to the industry.  Truly transformational.
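To make the contrast tangible, here is a toy sketch of the two methods. All of the tables, field names, weights, and numbers are invented for illustration; they are not an insurer’s actual model.

```python
# Toy comparison of the two methods described above. All tables, field
# names, and weights are invented for illustration only.

# Method One: cohort-level risk from demographic/actuarial lookups.
ACTUARIAL_RISK = {            # (age band, region) -> expected claims per year
    ("25-34", "urban"): 0.09,
    ("25-34", "rural"): 0.06,
    ("35-49", "urban"): 0.07,
}

def cohort_risk(age_band, region):
    return ACTUARIAL_RISK.get((age_band, region), 0.08)

# Method Two: an individual score from that one driver's telematics events.
def individual_risk(trips):
    """Score one driver from observed per-trip driving behavior."""
    if not trips:
        return cohort_risk("35-49", "urban")   # fall back to the cohort prior
    hard_brakes = sum(t["hard_brakes"] for t in trips)
    night_miles = sum(t["night_miles"] for t in trips)
    miles = sum(t["miles"] for t in trips)
    return 0.05 + 0.002 * (hard_brakes / len(trips)) + 0.01 * (night_miles / miles)

trips = [
    {"miles": 30, "night_miles": 5, "hard_brakes": 2},
    {"miles": 12, "night_miles": 0, "hard_brakes": 0},
]
print("cohort risk:    ", cohort_risk("25-34", "urban"))
print("individual risk:", round(individual_risk(trips), 4))
```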

Labor Arbitrage

The insurance industry has also been experimenting with analytics based on past performance data.  The industry has years of predictive information (i.e., claim reviews along with actual outcomes) from past claims.  By exploring this past performance data, insurance companies can apply logistic regression algorithms to derive weighted scores.  The derived scores are then analyzed to determine a path forward.  For example, if claims scoring greater than 50 were, upon evaluation, almost always paid by the insurer, then all claims scoring above 50 should be immediately approved and paid.  The inverse is also true: low-scoring claims can be quickly rejected, as they are often not appealed, or are regularly turned down under review when they are appealed. The analytics of the present case are compared against the outcomes in the corpus of past performance data to derive the most likely outcome of the case.  The resulting business effect is that the workforce reviewing medical claims is only given those files that actually need to be worked, and the result is better workforce productivity.  Labor arbitrage, with data and analytics as the disruptor of workforce trends.
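A minimal sketch of that scoring-and-routing idea, using logistic regression, might look like the following. The features, training data, and the 50-point threshold (here, the model’s probability rescaled to 0–100) are purely illustrative, not an insurer’s actual model.

```python
# Minimal sketch of the claim-scoring and routing logic described above,
# using logistic regression. Features, training data, and thresholds are
# illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical claims: [claim amount ($K), prior claims, treatment code risk]
# with actual outcomes (1 = paid after review, 0 = denied after review).
X_hist = np.array([[2, 0, 1], [15, 3, 4], [1, 0, 1], [9, 2, 3],
                   [3, 1, 1], [20, 4, 5], [2, 0, 2], [12, 3, 4]])
y_hist = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_hist, y_hist)

def route_claim(features):
    """Turn the model's probability into a 0-100 score and a routing decision."""
    score = 100 * model.predict_proba([features])[0, 1]
    if score > 50:
        return f"score {score:.0f}: auto-approve and pay"
    if score < 20:
        return f"score {score:.0f}: auto-deny"
    return f"score {score:.0f}: send to a human reviewer"

print(route_claim([2, 0, 1]))    # small, low-risk claim: likely auto-approved
print(route_claim([18, 4, 5]))   # large, high-risk claim: likely denied or reviewed
```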

Know Your Customer

Retail banking has turned to analytics in its focus on attracting and retaining customers.   After a large wave of acquisitions in the last decade, retail banks are working to integrate their various portfolios.  In some cases, resolving the identity of all their clients across all their accounts isn’t as straightforward as it sounds.  This is especially hard with dormant accounts that might carry maiden names, mangled data attributes, or old addresses.  The ultimate goal of co-locating all customer data in an analytics environment is a customer 360.  Customer 360 is focused on gaining full insight into a customer.  This can lead to upsell opportunities by understanding a customer’s peer set and which products a similar demographic has a strong interest in. For example, if individuals of a given demographic typically subscribe to 3 of a company’s 5 products, an individual matching that demographic who subscribes to only 1 product should be targeted for upsell on the additional products.  This uses large swathes of data and a company’s own product adoption to build upsell and marketing strategies for its own customers.  If someone was both a small business owner and a personal consumer of the retail bank, the company may not have previously tied those accounts together.  It gives the bank a whole new perspective on who its customer base really is.
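As a toy sketch of that peer-set upsell idea (with invented products, segments, and subscription data), the logic reduces to recommending whatever most of a customer’s demographic peers hold that the customer does not:

```python
# Toy sketch of the peer-set upsell idea described above. Products,
# demographic segments, and subscription data are invented for illustration.
from collections import Counter

# Which products customers in each demographic segment currently hold.
holdings = {
    "small-business-owner": [
        {"checking", "merchant-services", "business-loan"},
        {"checking", "merchant-services", "credit-card"},
        {"checking", "business-loan", "credit-card"},
    ],
}

def upsell_targets(segment, customer_products, min_share=0.5):
    """Recommend products most of the customer's peer set holds but they do not."""
    peers = holdings[segment]
    counts = Counter(p for peer in peers for p in peer)
    popular = {p for p, c in counts.items() if c / len(peers) >= min_share}
    return sorted(popular - customer_products)

# A customer in this segment who only holds a checking account.
print(upsell_targets("small-business-owner", {"checking"}))
# -> ['business-loan', 'credit-card', 'merchant-services']
```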

Wrap Up

Why are these trends interesting?  In most of the cases above, people are familiar with certain portions of the story, but the underlying why and what often get missed.  It is important to understand not only the technology and capabilities involved in a transformation, but also the underlying shift it is causing. EMC has a long history of helping customers through these journeys, and we look forward to helping even more clients face them.

MLBAM Goes Over the Top: The Case for a DIY Approach to OTT

James Corrigan

Advisory Solutions Architect at EMC

When looking at the current media landscape, the definition of what constitutes a “broadcaster” is undergoing a serious overhaul. Traditional linear TV might not be dead just yet, but it’s clearly having to reinvent itself in order to stay competitive amid rapidly evolving media business models and increasingly diverse content distribution platforms.

The concept of “binge watching” a TV show, for example, was non-existent only a few years ago. Media consumption is shifting towards digital and online viewership on a myriad of devices such as smartphones, tablets, and PCs. Subscription on-demand services are becoming the consumption method of choice, while broadcast-yourself platforms like Twitch and YouTube are fast becoming a popular cornerstone of millennials’ viewing habits. Horowitz Research found that over 70 percent of millennials have access to an OTT SVOD service, and they are three times as likely to have an OTT SVOD service without a pay TV subscription. PricewaterhouseCoopers (PwC) estimates that OTT video streaming will grow to be a $10.1 billion business by 2018, up from $3.3 billion in 2013.

As a result, broadcast operators are evolving into media aggregators, and content providers are transforming into “entertainment service providers,” expanding into platforms ranging from mobile to digital to even virtual theme parks.

Building Versus Buying

This change in media consumption requires media organizations to consider a more efficient storage, compute, and network infrastructure. Media organizations need flexible and agile platforms, not only to expand their content libraries but also to meet the dynamic growth in the number of subscribers and in how they consume and experience media and entertainment.

Successfully competing in the OTT market depends on the “uniqueness” of your service to the consumer. This uniqueness comes either from having unique or exclusive content, or from having a platform that can adapt and offer the customer more than just watching content. For the latter, how you deploy your solution – whether you (1) build your own (“DIY”), (2) buy a turn-key solution, or (3) take a hybrid approach – is key to success.

MLBAM Hits a Home Run with a DIY Approach

A key advantage of the “DIY” approach is that it increases business agility, allowing media organizations to adapt and change as consumers demand more from their services. For some media organizations this allows them to leverage existing content assets, infrastructure and technology teams and keep deployment costs low. Further, layering OTT video delivery on top of regular playout enables organizations to incrementally add the new workflow to the existing content delivery ecosystem. For new entrants, the DIY approach enables new development methodologies, allowing these “new kids on the block” to develop micro-services unencumbered by legacy services.

One example of an organization taking the DIY approach is Isilon customer Major League Baseball Advanced Media (MLBAM), which has created a streaming media empire. MLBAM’s success underscores the voracious and rapid growth in consumer demand for streaming video; it streams sporting events, and also supports the streaming service HBO GO, as well as mobile, web and TV offerings for the NHL.

“The reality is that now we’re in a situation where digital distribution isn’t just a ‘nice to have’ strategy, it’s an essential strategy for any content company,” said Joe Inzerillo, CTO for MLBAM. “When I think about…how we’re going to be able to innovate, I often tell people ‘I don’t manage technology, I actually manage velocity.’ The ability to adapt and innovate and move forward is absolutely essential.”

Alternatively, the turn-key approach, which either outsources your media platform or gives you a pre-built video delivery infrastructure, can offer benefits such as increased speed-to-market. However, selecting the right outsourcing partner is critical; choose incorrectly and you risk vendor lock-in, loss of control and flexibility, and higher operational costs.

Making it Personal: Analytics’ Role

Being able to access content when and where consumers want it – on the device they want – is one part of the challenge posed by the rise of digital and online content. Another key component is personalization of that content for viewers. Making content more relevant and tailored for subscribers is critical to the success of alternate broadcast business models, and EMC and Pivotal are helping media companies extract customer insights through the development and use of analytics, which should be key to any OTT strategy. Analyzing data on what consumers are watching can help drive content acquisition and personalized recommendation engines. Targeted ad insertion adds the further benefit of increasing revenue through tailored advertisements.

Scaling for the future

An infrastructure platform that scales is the final consideration for new-age media platforms. Being able to scale “apps” based on containers or virtual instances is key. To do that you need a platform that scales compute, network, and storage independently or together, such as EMC’s scale-out NAS with Isilon or scale-out compute with VCE or VxRail/VxRack. MLBAM’s Inzerillo explains: “The ability to have a technology like Isilon that’s flexible, so that the size of the data lake can grow as we on-board clients, is increasingly important to us. That kind of flexibility allows you to really focus on total cost of ownership of the custodianship of the data.”

Inzerillo continues, “If you’re always worried about the sand that you’re standing on, because it’s shifting, you’re never going to be able to jump, and what we need to be able to do is sprint.”

It’s an exciting time to be in the ever-evolving media and entertainment space – the breadth of offerings that broadcasters and media companies are developing today, and the range of devices and distribution models to reach subscribers will only continue to grow.

Infrastructure Convergence Takes Off at Melbourne Airport

Yasir Yousuff

Sr. Director, Global Geo Marketing at EMC Emerging Technologies Division

By air, by land, or by sea? Which do you reckon is the most demanding means of travel these days? In asking, I’d like to steer your thoughts to the institutions and businesses that provide transportation in these segments.

Hands down, my pick would be aviation, where the heaviest burden falls on any international airport operating 24/7. Take Melbourne Airport in Australia, for example. In a typical year, some 32 million passengers transit through its doors – almost a third more than Australia’s entire population. If you think that’s a lot, consider that the figure looks set to double to 64 million by 2033.

As the threat of terrorism grows, so does the need for stringent checks. And as travelers become more affluent, so do their expectations. Put the two together and you get something of a paradoxical dilemma that needs to be addressed.

So how does Australia’s only major 24/7 airport cope with these present and future demands?

First Class Security Challenges

Beginning with security, airports have come to terms with the fact that passport checks alone are not sufficient in the immigration process. Thanks to Hollywood movies and their depictions of how easy it is to get hold of “fake” passports – think Jason Bourne, but as a “bad” guy out to harm innocents – a large majority of the public would agree that more detailed levels of screening are a necessity.

“Some of the things we need to look at are new technologies associated with biometrics, new methods of running through our security and our protocols. Biometrics will require significant compute power and significant storage ability,” says Paul Bunker, Melbourne Airport’s Business Systems & ICT Executive.

With biometrics, Bunker is referring to breakthroughs such as fingerprint and facial recognition. While these data-dense technologies are typically developed in silos, airports like Melbourne Airport need them to function coherently as part of an integrated security ecosystem, with data processed in near real-time to ensure authorities have ample time to respond to threats.

First Class Service Challenges

Then there are the all-important passengers who travel in and out for a plethora of reasons: some for business, some for leisure, and some on transit to other destinations.

Whichever the case, most, if not all, of them expect a seamless experience: no long waits to clear immigration, luggage arriving at the belt almost immediately after landing, and so on.

With the airport’s IT systems increasingly strained in managing these operational outcomes, a more sustainable way forward is inevitable.

First Class Transformative Strategy

Melbourne Airport has historically been reactive and focused heavily on maintenance, but that has changed in recent times. Terminal 4, which opened in August 2015, became the airport’s first terminal to embrace digital innovation, boasting Asia-Pacific’s first end-to-end self-service model, from check-in kiosks to automated bag drop facilities.

This comes against the backdrop of a new charter that aims to enable IT to take on a more strategic role and drive greater business value through technology platforms.

“We wanted to create a new terminal that was effectively as much as possible a fully automated terminal where each passenger had more control over the environment,” Bunker explained. “Technical challenges associated with storing massive amounts of data generated not only by our core systems but particularly by our CCTV and access control solutions is a major problem we had.”

First Class Solution

In response, Melbourne Airport implemented two VCE Vblock System 340 converged infrastructure systems with VNX5600 storage, featuring 250 virtual servers and 2.5 petabytes of storage capacity. Two EMC Isilon NL-Series clusters were also deployed across two sites for production and disaster recovery.

The new converged infrastructure has allowed Melbourne Airport to simplify its IT operations by great leaps, creating a comfortable buffer that is able to support future growth as the business matures. It has also guaranteed high availability on key applications like baggage handling and check-in, crucial in the development of Terminal 4 as a fully automated self-service terminal.

While key decision-makers may have a rational gauge on where technological trends are headed, it is far from 100%. These sweeping reforms have effectively laid the foundations to enable flexibility in adopting new technologies across the board – biometrics for security and analytics for customer experience enhancement – whenever the need calls for it. Furthermore, the airport can now do away with separate IT vendors to reduce management complexity.

Yet all this pales in comparison to the long-term collaborative working relationship Melbourne Airport has forged with EMC to support its bid to become an industry-leading innovation driver of the future.

Read the Melbourne Airport Case Study to learn more.

Digital Health Strategies – An introduction to Elastic Cloud Storage (ECS)

Nathan Bott

Healthcare Solutions Architect at EMC

This past April, my father reached two important milestones: he turned 70 and retired from a 40-plus-year career in food science.  He is now planning to head back to Spain to complete the Camino de Santiago – the Way of St. James – a journey he started in 2014.  Unfortunately, he had to stop 150 miles into the 500-mile trek because of severe back and hip pain due to the emergence of degenerative disc disease.  After working with his physician to manage this new condition, he started to prepare for the upcoming trip by walking between 5 and 10 miles three times a week.  Along with this training came other ailments that would be expected for anybody his age:  pulled muscles, strained knees, and “light-headedness.”  This last ailment can be attributed to another condition he happens to have – Type 2 diabetes.  And so it goes: as he gets older and tries to maintain a high level of activity, he will suffer more ailments and spend more time and money (via Medicare benefits) managing these chronic conditions.

And he will not be alone.  My father was born in 1946 and is thus a first-year baby boomer, part of the first wave of new Medicare beneficiaries, of whom about 10,000 enroll every day.  The Congressional Budget Office expects over 80 million Americans to be Medicare eligible by 2035, an almost 50% increase in enrollment from 2015.  The cost per beneficiary is expected to increase even more, as each patient will have multiple chronic conditions to manage; per the National Council on Aging:

  • About 68% of Medicare beneficiaries have two or more chronic diseases and 36% have four or more.
  • More than two-thirds of all health care costs are for treating chronic diseases.

The US government and the healthcare industry are well aware of the current “silver tsunami” and planning has been underway.

For the past 7 years – since the passage of the HITECH provisions of the American Recovery and Reinvestment Act (ARRA) in 2009 and the Medicare Shared Savings Program (MSSP) in 2011 – the groundwork has been laid to implement various programs and incentives that distribute the effort of managing the cost of delivering healthcare to an ever-expanding beneficiary population.  The prolific adoption of electronic health records by healthcare providers and the reorganization of provider reimbursements – from a fee-for-service to an outcomes-based model – have combined to become a catalyst for a digital revolution in healthcare.

Government-led healthcare reform programs like Accountable Care Organizations (ACOs), the Patient-Centered Medical Home, and the Precision Medicine Initiative are predicated on having a digital technology platform that can use the demographic, financial, clinical, and genetic data acquired from a vast population of patients to develop evidence-based plans of care tailored specifically to the genetic disposition and disease(s) of a given patient.

Regardless of the industry, product, or service, a disruptive technology that drives innovation through digitization requires a re-assessment of the infrastructure that supports it; the healthcare industry is no different.  As healthcare providers have implemented electronic medical records systems, deployed enterprise imaging solutions, piloted next-generation sequencing programs, and developed clinical informatics capabilities, new infrastructure requirements and operating modes have emerged.  Furthermore, in response to the evolving markets and reimbursement models explained above, many healthcare entities – providers, payers, and pharmaceutical companies alike – have consolidated through mergers and acquisitions, which also necessitates re-evaluating infrastructure architectures in order to rationalize operational capabilities, drive utilization efficiency, and decrease both operational and capital costs.

Working directly with healthcare customers, collaborating with healthcare software vendors, and partnering with IT service providers, EMC has been on the front line to provide architectural guidance and infrastructure solutions to support this digital revolution and its emerging infrastructure requirements. A key infrastructure solution to support the digitization revolution in healthcare is a highly durable, geo-distributed, performant storage platform that will work with legacy monolithic systems using file system interfaces as well as cloud-native distributed applications using standard storage APIs like AWS S3 or OpenStack Swift.

EMC’s Elastic Cloud Storage (ECS) is a modern object storage platform that does just that…and more.  Just as important, the ECS object platform can be used for a myriad of use cases specific to the healthcare industry, supporting:

  • Innovative technology platforms which enable coordinated and accessible medical services such as outlined by the Patient-Centered Medical Home program
  • Collaboration and data sharing as needed for programs such as the Accountable Care Organization initiative
  • An increase in IT operational agility using a storage platform that can be provisioned with cloud-based APIs (see the brief sketch after this list)
  • A decrease in costs through storage utilization efficiency at scale using modern data protection and replication methods
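As a minimal sketch of what cloud-based, S3-compatible access to ECS could look like in a healthcare workflow, the example below stores an imaging file as an object with searchable metadata and then lists a patient’s studies by key prefix. The endpoint URL, bucket, credentials, file name, and patient/study identifiers are all hypothetical placeholders, not values from this article.

```python
# Minimal sketch of S3-compatible access to ECS for a healthcare workflow:
# store an imaging file as an object with metadata, then list a patient's
# studies by key prefix. Endpoint, bucket, credentials, and identifiers are
# hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ecs.hospital.example.org:9021",  # hypothetical ECS endpoint
    aws_access_key_id="IMAGING_APP",
    aws_secret_access_key="SECRET_KEY",
)

# Ingest one study file with application-level metadata.
with open("chest_ct_001.dcm", "rb") as f:
    s3.put_object(
        Bucket="imaging-archive",
        Key="patient-12345/2016-05/chest_ct_001.dcm",
        Body=f,
        Metadata={"modality": "CT", "study-date": "2016-05-12"},
    )

# A collaborating application (or a remote site, via geo-replication) can
# later enumerate that patient's studies with a simple prefix listing.
resp = s3.list_objects_v2(Bucket="imaging-archive", Prefix="patient-12345/")
for item in resp.get("Contents", []):
    print(item["Key"], item["Size"])
```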

In my follow-up blog entries here, I will provide more details on the functional capabilities of ECS and map those capabilities to specific use cases driving the digital revolution: taking on the challenge of delivering collaborative, personalized healthcare services to an aging population with multiple complex chronic conditions, while driving down IT operational costs as well as the overall cost of the healthcare system.

Examples of the use cases I mentioned above include various new technology trends like the emerging Internet of Things (IoT) solutions that support remote patient monitoring, telehealth, and behavior modification tools to help manage chronic diseases; data lake functionality with the Hadoop ecosystem for population and precision health based analytics programs; and cloud-native development efforts to launch distributed mobile applications that can capture and access data from any location.

I look forward to exploring these use cases and examining how ECS’s unique capabilities will help our healthcare customers move towards meeting their technical, operational, and “digitized-mission” goals.
