Posts Tagged ‘data management’

Why Healthcare IT Should Abandon Data Storage Islands and Take the Plunge into Data Lakes

One of the most significant technology-related challenges in the modern era is managing data growth. As healthcare organizations leverage new data-generating technology, and as medical record retention requirements evolve, the exponential rise in data (already growing at 48 percent a year, according to the Dell EMC Digital Universe Study) is likely to continue for decades.

Let’s start by first examining the factors contributing to the healthcare data deluge:

  • Longer legal retention times for medical records – in some cases up to the lifetime of the patient.
  • Digitization of healthcare and new digitized diagnostics workflows such as digital pathology, clinical next-generation sequencing, digital breast tomosynthesis, surgical documentation and sleep study videos.
  • With more digital images to store and manage, there is also an increased need for bigger picture archive and communication system (PACS) or vendor-neutral archive (VNA) deployments.
  • Finally, more people are undergoing these digitized medical tests (especially given the large aging population), resulting in a higher number of yearly studies with larger data sizes.

Healthcare organizations also face frequent and complex storage migrations, rising operational costs, storage inefficiencies, limited scalability, increasing management complexity and storage tiering issues caused by storage silo sprawl.

Another challenge is the growing demand to understand and utilize unstructured clinical data. Mining this data requires a storage infrastructure that supports in-place analytics, enabling better patient insights and the evolution of healthcare toward precision medicine.

Isolated Islands Aren’t Always Idyllic When It Comes to Data

The way that healthcare IT has approached data storage infrastructure historically hasn’t been ideal to begin with, and it certainly doesn’t set up healthcare organizations for success in the future.

Traditionally, when adding new digital diagnostic tools, healthcare organizations provided a dedicated storage infrastructure for each application or diagnostic discipline. For example, to deal with the growing storage requirements of digitized X-rays, an organization might create a new storage system solely for the radiology department. As a result, isolated storage silos, or data islands, must be individually managed, making processes and infrastructure complicated and expensive to operate and scale.

Isolated silos further undermine IT goals by increasing the cost of data management and compounding the complexity of analytics, which may require copying large amounts of data into yet another dedicated storage infrastructure that can't be shared with other workflows. Even maintaining these silos is involved and expensive, because tech refreshes require migrating medical data to new storage. Each migration, typically performed every three to five years, is labor-intensive and complicated. Frequent migrations not only strain resources but also take IT staff away from projects aimed at modernizing the organization, improving patient care and increasing revenue.

Further, silos make it difficult for healthcare providers to search data and analyze information, preventing them from gaining the insights they need for better patient care. Healthcare providers are also looking to tap potentially important medical data from Internet-connected medical devices or personal technologies such as wireless activity trackers. If healthcare organizations are to remain successful in a highly regulated and increasingly competitive, consolidated and patient-centered market, they need a simplified, scalable data management strategy.

Simplify and Consolidate Healthcare Data Management with Data Lakes

The key to modern healthcare data management is to employ a strategy that simplifies storage infrastructure and storage management and supports multiple current and future workflows simultaneously. A Dell EMC healthcare data lake, for example, leverages scale-out storage to house data for clinical and non-clinical workloads across departmental boundaries. Such healthcare data lakes reduce the number of storage silos a hospital uses and eliminate the need for data migrations. This type of storage scales on the fly without downtime, addressing IT scalability and performance issues and providing native file and next-generation access methods.

Healthcare data lake storage can also:

  • Eliminate storage inefficiencies and reduce costs by automatically moving data that can be archived to denser, more cost-effective storage tiers.
  • Allow healthcare IT to expand into private, hybrid or public clouds, enabling IT to leverage cloud economies by creating storage pools for object storage.
  • Offer long-term data retention without the public cloud's security risks or loss of data sovereignty; the same cloud expansion can be utilized for next-generation use cases such as healthcare IoT.
  • Enable precision medicine and better patient insights by fostering advanced analytics across all unstructured data, such as digitized pathology, radiology, cardiology and genomics data.
  • Reduce data management costs and complexities through automation, and scale capacity and performance on demand without downtime.
  • Eliminate storage migration projects.
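To make the first capability above concrete, here is a minimal Python sketch of the kind of age-based tiering decision a data lake can automate. The tier names and thresholds are hypothetical, invented for illustration; in a real deployment this policy lives in the storage platform's tiering engine, not in application code.

```python
from datetime import datetime, timedelta

# Hypothetical tier names and age thresholds, for illustration only.
TIER_RULES = [
    (timedelta(days=90), "performance"),   # hot: accessed in the last 90 days
    (timedelta(days=730), "archive"),      # warm: up to ~2 years since access
]
COLD_TIER = "deep-archive"                 # everything older

def choose_tier(last_accessed: datetime, now: datetime) -> str:
    """Pick a storage tier based on how long ago a file was accessed."""
    age = now - last_accessed
    for threshold, tier in TIER_RULES:
        if age <= threshold:
            return tier
    return COLD_TIER

now = datetime(2017, 1, 1)
print(choose_tier(datetime(2016, 12, 1), now))  # a recently read study
print(choose_tier(datetime(2010, 6, 1), now))   # a decade-old study
```

The point of the sketch is simply that the decision is mechanical: once age thresholds are defined, data can flow to denser, cheaper tiers without manual migration projects.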

 

The greatest technical challenge facing today's healthcare organizations is effectively managing and leveraging data. By employing a healthcare data management strategy that replaces siloed storage with a Dell EMC healthcare data lake, healthcare organizations will be better prepared to meet the infrastructure requirements of today and tomorrow and to usher in advanced analytics and new storage access methods.

 

Get your fill of news, resources and videos on the Dell EMC Emerging Technologies Healthcare Resource Page

 

 

Metalnx: Making iRODS Easy

Stephen Worth

Stephen Worth is a director of Global Innovation Operations at Dell EMC. He manages development and university research projects in Brazil, serves as a technical liaison helping to improve innovation across our global engineering labs, and works in digital asset management leveraging user-defined metadata. Steve is based out of Dell EMC's RTP Software Development Center, which focuses on data protection, core storage products, and cloud storage virtualization. Steve started with Data General in 1985; the company was acquired by EMC in 1999, which in turn became part of Dell Technologies in 2016. He has led many product development efforts involving operating systems, diagnostics, UI, database, and applications porting. His background includes vendor and program management, performance engineering, engineering services, manufacturing, and test engineering. Steve, an alumnus of North Carolina State University, received a B.S. degree in Chemistry in 1981 and an M.S. degree in Computer Studies in 1985. He served as an adjunct faculty member of the Computer Science department from 1987 to 1999. Steve is an emeritus member of the Computer Science Department's Strategic Advisory Board and is currently chairperson of the Technical Advisory Board for the James B. Hunt Jr. Library on Centennial Campus.



Advances in sequencing, spectroscopy, and microscopy are driving life sciences organizations to produce vast amounts of data. Most organizations are dedicating significant resources to the storage and management of that data. Until recently, however, their primary efforts have focused on hosting the data for high-performance, rapid analysis and on moving it to more economical disks for longer-term storage.

The nature of life sciences work demands better data organization. The data produced by today’s next-generation lab equipment is rich in information, making it of interest to different research groups and individuals at varying points in time. Examples include:

  • Raw experimental and analyzed data may be needed as new drug candidates move through research and development, clinical trials, FDA approval, and production
  • A team interested in new indications for an existing chemical compound would want to leverage work already done by others in the organization on the compound in the past
  • In the realm of personalized medicine, clinicians may need to evaluate not only a person’s health history, but correlate that information with genome sequences and phenotype data throughout the individual’s life.

The great challenge is how to make data more generally available and useful throughout an organization. Researchers need to know what data exists and have a way to access it. For this to happen, data must be properly categorized, searchable, and easy to find.

To get help in this area, many research organizations and government agencies worldwide are using the Integrated Rule-Oriented Data System (iRODS), which is open source data management software developed by the iRODS Consortium. iRODS enables data discovery using a data/metadata catalog that can retain machine and user-defined metadata describing every file, collection, and object in a data grid.

Additionally, iRODS automates data workflows with a rule engine that permits any action to be initiated by any trigger on any server or client in the grid. iRODS also enables secure collaboration: users need only log in to their home grid to access data hosted on a remote, federated grid.
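As a rough illustration of the catalog idea, the toy Python sketch below models iRODS-style attribute-value-unit (AVU) metadata and a simple query over it. This is not the iRODS API; the paths and attributes are invented, and a real grid would be driven through iRODS icommands or a client library.

```python
from collections import defaultdict

# A toy model of iRODS-style AVU (attribute, value, unit) metadata.
class Catalog:
    def __init__(self):
        # logical path -> set of (attribute, value, unit) triples
        self.avus = defaultdict(set)

    def add(self, path, attr, value, unit=""):
        self.avus[path].add((attr, value, unit))

    def query(self, attr, value):
        """Return every object carrying a matching attribute/value pair."""
        return sorted(p for p, triples in self.avus.items()
                      if any(a == attr and v == value for a, v, _ in triples))

cat = Catalog()
cat.add("/grid/run42/sample.fastq", "organism", "apple")
cat.add("/grid/run42/sample.fastq", "read_length", "150", "bp")
cat.add("/grid/run07/old.fastq", "organism", "kiwifruit")
print(cat.query("organism", "apple"))  # ['/grid/run42/sample.fastq']
```

Because every file carries searchable AVU triples, a researcher can locate relevant data without knowing where in the grid it physically lives.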

Leveraging iRODS can be simplified and its benefits enhanced when used with Metalnx, an administrative and metadata management user interface (UI) for iRODS. Metalnx was developed by Dell EMC through its efforts as a corporate member of the iRODS Consortium. The intuitive Metalnx UI helps both the IT administrators charged with managing metadata and the end-users / researchers who need to find and access relevant data based upon metadata descriptions.

Making use of metadata via the easy-to-use UI that Metalnx provides on top of iRODS can help:

  • Maximize storage assets
  • Find what’s valuable, no matter where the data is located
  • Automate movement and processing of data
  • Securely share data with collaborators

Real world example: Putting the issues into perspective

A simple example illustrates why iRODS and Metalnx are needed. Plant & Food Research, a New Zealand-based science company providing research and development that adds value to fruit, vegetable, crop and food products, makes great use of next-generation sequencing and genotyping. The work generates a lot of mixed data types.

“In the past, we were good at storing data, but not good at categorizing the data or using metadata,” said Ben Warren, a bioinformatician at Plant & Food Research. “We tried to get ahead of this by looking at what other institutions were doing.”

iRODS seemed a good fit. It was the only decent open source solution available. However, there were some limitations. “We were okay with the rule engine, but not the interface,” said Warren.

A system administrator working with EMC on hardware for the organization’s compute cluster had heard of Metalnx and mentioned this to Warren. “We were impressed off the bat with its ease of use,” said Warren. “Not only would it be useful for bioinformaticians, coders, and statisticians, but also for the scientists.”

The reason: Metalnx makes it easier to categorize the organization’s data, to control the metadata used to categorize the data, and to use the metadata to find and access any data.

Benefits abound

At Plant & Food Research, metadata is an essential element of a scientist’s workflow. The metadata makes it easier to find data at any stage of a research project. When a project is conceived, scientists will start by determining all metadata required for the project using Metalnx and cataloging data using iRODS. With this approach, everything associated with a project including the samples used, sample descriptions, experimental design, NGS data, and other information are searchable.

One immediate benefit is that someone undertaking a new project can quickly determine if similar work has already been done. This is increasingly important in life science organizations as research becomes more multidisciplinary in nature.

Furthermore, the more an organization knows about its data, the more valuable the data becomes. Researchers can connect with other work done across the organization. Being able to find the right raw data of a past effort means an experiment does not have to be redone. This saves time and resources.

Warren notes that there are other organizational benefits to using iRODS and Metalnx. When it comes to collaborating with others, the data is simply easier to share. Scientists can put the data in any format, and it is easier to publish the data.

Learn more

Metalnx is available as an open source tool. It can be found at Dell EMC Code www.codedellemc.com or on GitHub at www.github.com/Metalnx. EMC has also made binary versions available on Bintray at www.bintray.com/metalnx and a Docker image posted on Docker Hub at https://hub.docker.com/r/metalnx/metalnx-web/

A broader discussion of the use of Metalnx and iRODS in the life sciences can be found in an on-demand video of a recent web seminar “Expanding the Face of Meta Data in Next Generation Sequencing.” The video can be viewed on the EMC Emerging Tech Solutions site.

 

Get first access to our LifeScience Solutions

Telemedicine Part 1: TeleRadiology as the growth medium of Precision Medicine

Sanjay Joshi

CTO, Healthcare & Life-Sciences at EMC
Sanjay Joshi is the Isilon CTO of Healthcare and Life Sciences at the EMC Emerging Technologies Division. Based in Seattle, Sanjay's 28+ year career has spanned the entire gamut of life-sciences and healthcare from clinical and biotechnology research to healthcare informatics to medical devices. His current focus is a systems view of Healthcare, Genomics and Proteomics for infrastructures and informatics. Recent experience has included information and instrument systems in Electronic Medical Records; Proteomics and Flow Cytometry; FDA and HIPAA validations; Lab Information Management Systems (LIMS); Translational Genomics research and Imaging. Sanjay holds a patent in multi-dimensional flow cytometry analytics. He began his career developing and building X-Ray machines. Sanjay was the recipient of a National Institutes of Health (NIH) Small Business Innovation Research (SBIR) grant and has been a consultant or co-Principal-Investigator on several NIH grants. He is actively involved in non-profit biotech networking and educational organizations in the Seattle area and beyond. Sanjay holds a Master of Biomedical Engineering from the University of New South Wales, Sydney and a Bachelor of Instrumentation Technology from Bangalore University. He has completed several medical school and PhD level courses.

Real “health care” happens when telemedicine is closely joined to a connected-care delivery model that has prevention and continuity-of-care at its core. This model has been defined well, but only sparsely adopted. As John Hockenberry, host of the morning show “The Takeaway” on National Public Radio, eloquently puts it: “health is not episodic.” We need a continuous care system.

Telemedicine makes it possible for you to see a specialist like me without driving hundreds of miles

Image source: Chest. 2013,143 (2):295-295. doi:10.1378/chest.143.2.295

How do we get the “right care to the right patient at the right time”? Schleidgen et al define Precision Medicine, also known as Personalized Medicine (1), as seeking “to improve stratification and timing of health care by utilizing biological information and biomarkers on the level of molecular disease pathways, genetics, proteomics as well as metabolomics.” Precision Medicine (2) is an orthogonal, multimodal view of the patient from her/his cells to pathways to organs to health and disease. Several classes of devices and transducers could catalyze telemedicine: Radiology, Pathology, and Wearables. I will focus on Radiology in this part of my three-part series, since all of these modalities use multi-spectral imaging.

Where first?
The world is still mostly rural. According to World Bank statistics, 19% of the USA is rural, but the worldwide average is about 30%, on a spectrum from 0% rural (Hong Kong) to 74% rural (Afghanistan). With the recent consolidations of hospitals into larger organizations (since 2010 in the US) (3), it is this rural 30% to 70% of the world, with sparse network connectivity, that needs telemedicine sooner than the well-off “worried well” who live in dense urban areas with close access to healthcare. China has the world’s largest number of hospitals at around 60,000, followed by India at around 15,000. The US tally is approximately 5,700 hospitals. The counter-arguments to the rural needs in the US are the risk of shrinking physician numbers (4) and the growing numbers of the urban poor and the elderly. Then there is the plight of poor health amongst the world’s millions of refugees, usually stuck in no-man’s-lands, fleeing conflicts that never seem to wane. All these use-cases are valid, but they need prioritization.

Connected Health and the “Saver App”
Many a fortune has been made by devising and selling “killer apps” on mobile platforms. In healthcare, what we need is a “saver app.” Using the psycho-social keys to the success of these “sticky” technologies, Dr. Joseph C. Kvedar succinctly builds the case for connected health in his recent book “The Internet of Healthy Things” with three strategies and three tactics:

Strategies: (1) Make It about Life; (2) Make It Personal; and (3) Reinforce Social Connections.

Tactics: (1) Employ Messaging; (2) Use Unpredictable Rewards; and (3) Use the Sentinel Effect.

Dr. Kvedar calls this “digital therapies.”

The Vendor Neutral Archive (VNA) and Virtual Radiology
The Western Roentgen Society, a predecessor of the Radiological Society of North America (RSNA), was founded in 1915 in St. Louis, Missouri (soon after the invention of the X-Ray tube in Bavaria in 1895). An interactive timeline of Radiology events can be seen here. Innovations in Radiology have always accelerated the innovations in healthcare.

The Radiology value chain is in its images and clinical reporting, as summarized in the diagram below (5):

Radiology value chain

To scale this value-chain for telemedicine, we need much larger adoption of VNA, which is an “Enterprise Class” data management system. A VNA consolidates multiple Imaging Departments into:

  • a master directory,
  • associated storage and
  • lifecycle management of data

PACS (Picture Archiving and Communications System) (6) and VNA differ in their Image Display and Image Manager layers, respectively.

The Image Display layer is a PACS vendor's or a cloud-based “image program.” All Admit, Discharge and Transfer (ADT) information must reside with the image, which makes DICOM standards and HL7 / X12N interoperability (using service protocols like FHIR) critical. The Image Manager layer of a VNA is the “storage layer of images,” either local or cloud-based. For telemedicine to be successful, the VNA must “scale out” exponentially and in a distributed manner, within a privacy and security context.
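As an illustration of the rule that ADT information must travel with the image, here is a small Python sketch that refuses to archive an image record whose ADT context is incomplete. The field names are simplified, hypothetical stand-ins for real DICOM tags and HL7 segments, not an actual VNA interface.

```python
# Illustrative gate keeping ADT (Admit/Discharge/Transfer) context
# attached to an image record before it enters the archive.
REQUIRED_ADT_FIELDS = {"patient_id", "patient_name", "admit_date"}

def ready_for_archive(image_record: dict) -> bool:
    """An image may be archived only if its ADT context is complete."""
    return REQUIRED_ADT_FIELDS <= image_record.keys()

study = {
    "patient_id": "MRN-001234",
    "patient_name": "DOE^JANE",   # DICOM-style caret-delimited name
    "admit_date": "2016-03-14",
    "modality": "CR",             # computed radiography
}
print(ready_for_archive(study))                    # True
print(ready_for_archive({"patient_id": "MRN-9"}))  # False
```

A real Image Manager enforces this kind of completeness through the DICOM and HL7 standards themselves; the sketch only shows why incomplete context must stop an image at the door.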

VNA’s largest players (alphabetically) are: Agfa, CareStream, FujiFilm (TeraMedica), IBM (Merge), Perceptive Software (Acuo), Philips and Siemens. The merger of NightHawk Radiology with vRad (which was then acquired by MedNax), and IBM’s acquisition of Merge Healthcare in August 2015, are important landmarks in this trend.

One of the most interesting journal articles in 2015 was on “Imaging Genomics” (or Radiomics) of glioblastoma, a brain cancer. By bidirectionally linking imaging features to the underlying molecular features, the authors (7) have created a new field of non-invasive genomic biomarkers.

Imagine this “virtual connected hive” of patients on one side and physicians, radiologists and pathologists on the other, constantly monitoring and improving the care of a population in health and disease at the individual and personal level. Telemedicine needs to be the anchor architecture for Precision Medicine. Without Telemedicine (and VNA), there is no Precision Medicine.

Postscript: Telepresence in mythology
Let me end this tale of distance and care with a little echo from my namesake, Sanjaya, who is mentioned in the first verse of the first chapter of the Bhagavad Gita (literally translated as the “Song of the Lord”) – an existential dialog between the warrior Arjuna and his charioteer, Krishna. The Gita, as it is commonly known, is set within the longest Big Data poem, the Mahabharata, with over 100,000 verses (and 1.8 million words), estimated to have first been written down around 400 BCE.

Dhritarashtra, the blind king, starts this great book-within-book by enquiring: “O Sanjaya, what did my sons and the sons of Pandu decide about battle after assembling at the holy land of righteousness Kurukshetra?”

Sanjaya starts the Gita by peering into the great yonder. He is bestowed with the divine gift of seeing events afar (divya-drishti); he is the king’s tele-vision – and Dhritarashtra’s advisor and charioteer (just like Krishna in the Gita). The other great religions and mythologies also mention telepresence in their seminal books.

My tagline for the “trickle down” in technology innovation flow is “from Defense to Life Sciences to Pornography to Finance to Commerce to Healthcare.” One interpretation of the Mahabharata is that it did not have any gods – all miracles were added later. Perhaps we have now reached the pivot point for telepresence which has happened in war to “trickle down” into population scale healthcare without divine intervention or miracles!

References:

  1. Schleidgen et al, “What is personalized medicine: sharpening a vague term based on a systematic literature review”, BMC Medical Ethics, Dec 2013, 14:55
  2. “Toward Precision Medicine”, Natl. Acad. Press, June 2012
  3. McCue MJ, et al, “Hospital Acquisitions Before Healthcare Reform”, Journal of Healthcare Management, 2015 May-Jun; 60(3):186-203.
  4. Petterson SM, et al, “Estimating the residency expansion required to avoid projected primary care physician shortages by 2035”, Annals of Family Medicine 2015 Mar; 13(2):107-14. doi: 10.1370/afm.1760
  5. Enzmann DR, “Radiology’s Value Chain”, Radiology: Volume 263: Number 1, April 2012, pp 243-252
  6. Huang HK, “PACS and Imaging Informatics: Basic Principles and Applications”, Wiley-Blackwell; 2 edition (January 12, 2010)
  7. Moton S, et al, “Imaging genomics of glioblastoma: biology, biomarkers, and breakthroughs”, Topics in Magnetic Resonance Imaging. 2015

 


Improving Healthcare Data Management with EMC Isilon – Think holistic, not in separated storage islands

The Data Growth Challenge in Healthcare

According to the EMC Digital Universe study, with research and analysis by IDC[1], healthcare data growth is among the fastest across industries. A 48% annual growth rate will lead to 2,314 exabytes of data in 2020.


Source: EMC Digital Universe with Research & Analysis by IDC

The reasons for this growth rate are many, and include new healthcare applications, regulatory and compliance requirements, and the continued introduction of new technology and equipment incorporating data-intensive next-generation diagnostics.

The growing data sets will enable healthcare providers to make quicker information-driven decisions, increase efficiency, support remote diagnostics, and provide better collaboration.

For Electronic Health Records (EHRs), additional unstructured data such as voice, video, and text is now being stored. New diagnostic and other healthcare applications are also growing, with increasing use of medical images and studies with larger image sizes. The deployment of clinical next-generation sequencing will likewise contribute to the 48% annual growth rate in healthcare data.

All of this data must comply with country and state regulations, including long retention periods. Those regulatory compliance requirements are an additional key driver of data growth.

Another challenge of data growth is finding the right data at the right time. Big Data analytics enables healthcare providers to focus on the data most useful for diagnostics, treatment, and discovery.

The Data Management Challenge in Healthcare

Factors that are forcing healthcare organizations to rethink their storage strategies include:

More data must be stored: Storage capacity requirements continue to grow significantly with the shift to data-intensive healthcare. As the number of storage devices increases, so too does the need for IT resources to maintain the infrastructure.

Inefficiency in storage capacity: In most healthcare organizations, storage is typically deployed and managed by diagnostic function or department, and capacity is not shared between modalities. This can leave spare capacity in one modality while others must be upgraded continuously. With a siloed approach to storage, extra capacity cannot be shared, increasing CAPEX and OPEX costs.

Changing data retention requirements: Patient records, digital diagnostic images, and clinical study results are now stored for longer periods of time. Some data can move into a ‘cold archive’ while other healthcare data needs to be immediately accessible online.

Mergers and acquisitions are increasing: According to the latest analysis by Kaufman Hall & Associates, LLC, the number of hospital transactions announced in 2015 grew 18 percent compared to 2014. Healthcare organizations now own multiple hospitals, clinics, long-term care facilities, and physician practices across a large geography. It isn’t always practical to maintain a central data center at every location, so it makes sense to have smaller regional data stores that can tier to a centralized “data hub”.

Is there a better solution?

Over the last several years, we’ve made a tremendous investment in scale-out Network Attached Storage. EMC Isilon scales with the push of a button, keeping pace with data growth: healthcare organizations can seamlessly add petabytes of storage or expand performance “on the fly”. Every Isilon cluster is a single pool of shared storage, eliminating the need to deploy a storage silo for each modality or department. Last year we introduced the concept of a “Data Lake”, enabling healthcare organizations to consolidate their data into a central repository. Through multiple access methods supporting different healthcare applications, organizations can store, access, share and even analyze data in one location without copying it from one storage silo or infrastructure into another. If needed, separate access zones and data encryption ensure data security and data separation without compromising the “Data Lake” concept.

Very recently EMC announced the “Data Lake 2.0” with the introduction of OneFS 8.0 providing capabilities to expand the “Data Lake” from the “edge” to the “core” (the centralized data repository) to the “cloud”.

IsilonSD Edge is a software-defined storage solution running on commodity hardware in a VMware environment. IsilonSD Edge expands the “Data Lake” into remote locations or departments with smaller data storage requirements. This capability provides great efficiency and cost advantages, in particular for larger healthcare organizations with multiple geographically distributed healthcare facilities.

EMC Isilon CloudPools enables healthcare organizations to tier data off their central (“core”) Isilon cluster to a private in-house cloud based on EMC Elastic Cloud Storage (ECS), to another Isilon cluster, or to a public cloud for archiving, since older patient-related data may need to be stored for the life of the patient. This cost-effective extension of the “Data Lake” provides encryption for security and compression to minimize storage capacity requirements and bandwidth usage.

The extended “Data Lake” is managed through one management interface, and regardless of where the data is stored, the records are immediately accessible.

Better Healthcare Data Management

The combination of EMC Isilon hardware, OneFS 8.0, IsilonSD Edge, and Isilon CloudPools delivers the capabilities needed to meet the growing data challenges facing healthcare organizations today and tomorrow. Our aim is to provide healthcare organizations with storage investment protection and reduced OPEX and CAPEX through a solution that scales on demand, in line with the data growth rate across the organization.

[1] http://www.emc.com/analyst-report/digital-universe-healthcare-vertical-report-ar.pdf

InsightIQ® 2.5: Driving Storage Performance Metrics to the Next Level

Flexible deployment + powerful data export tools = the perfect time to upgrade your customers

It is no secret that viewing and analyzing storage performance data can bring enormous value to your customers, which is why we took performance metrics to the next level with our latest release, EMC Isilon InsightIQ® 2.5. It now includes integration with third-party tools and flexible deployment options.

Today’s update is a direct result of feedback we received from our partners and customers who requested a simpler, more integrated product. They realize that EMC Isilon InsightIQ software is the perfect complement to EMC Isilon scale-out storage systems, so we added tools and options that will maximize the performance of their storage systems, simplify the process, and forecast their future needs. Your customers will reap great benefits, such as improved performance and deeper insight into their data analytics.

New data export tool

Previously, customers had to use other products in conjunction with InsightIQ to extract end-to-end performance data. With InsightIQ 2.5, there’s no need for another product, thanks to an included data export tool. This new tool allows users to access InsightIQ data in two ways:

  1. Through ad-hoc extraction: simply click “download” in the user interface and export the data into a spreadsheet.
  2. By using the command line interface, which enables customers to define the data to extract, and set up a regular schedule of data exports into their own analytics tool, such as Splunk.
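To sketch what the downstream side of option 2 might look like, here is a small Python example that consumes a hypothetical exported CSV of performance data. The column names and values are invented for illustration; a real InsightIQ export defines its own schema and many more metrics.

```python
import csv
import io

# A hypothetical sample of exported performance data.
exported = io.StringIO(
    "timestamp,cluster,avg_latency_ms\n"
    "2016-05-01T00:00,clusterA,1.9\n"
    "2016-05-01T00:05,clusterA,2.4\n"
    "2016-05-01T00:10,clusterA,2.1\n"
)

# Once the data is in plain CSV, any analytics tool (or a few lines
# of scripting) can aggregate it on a schedule.
rows = list(csv.DictReader(exported))
latencies = [float(r["avg_latency_ms"]) for r in rows]
print(f"mean latency: {sum(latencies) / len(latencies):.2f} ms")
```

This is the appeal of a scheduled export: the same file that opens in a spreadsheet can feed an external analytics pipeline such as Splunk without any intermediate product.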

Multiple deployment options

In addition, customers now have a choice in how to deploy InsightIQ. Previously it shipped only as a plug-and-play virtual appliance: customers simply launched it, entered a password, and connected to clusters. However, the underlying operating system, Ubuntu Linux, did not fit well into some customers’ data centers. While retaining the virtual appliance option, InsightIQ 2.5 adds a native installation option that allows customers to install InsightIQ on Red Hat Enterprise Linux or CentOS, including on physical hardware. This not only improves performance but also lets customers run InsightIQ without configuring a virtualized environment.

Even easier upgrading

Previous upgrades to InsightIQ meant setting up a new virtual appliance and then manually importing the configuration. Now, with InsightIQ 2.5, customers can simply apply the upgrade and retain all previous configurations, making upgrades non-disruptive.

As you can see, we listen to feedback that is given to us. If you’re an Isilon customer and have any questions or comments on InsightIQ, I encourage you to let me know.
