Author Archive

Is that a tier in your eye – or is your EDA storage obsolete?

Lawrence Vivolo

Sr. Business Development Manager at EMC²

We’ve all come to expect the data on our corporate laptops and workstations – e-mails, schedules, documents, music and videos – to be backed up automatically. Some less frequently accessed data, such as archived e-mail, isn’t kept locally in order to save disk space. When you access one of these files, you find that it’s slower to open. If the archive is very old, say a year or more, you might even have to ask IT to “restore” it from tape before you can open it. In the storage world, this process of moving data between different types of storage is called data tiering, and it is done to optimize performance and cost. Since ASIC/SoC design is all about turnaround time, time-to-market and shrinking budgets, it’s important to know how tiering impacts your EDA tool flow and what you can do to influence it.

In most enterprises there are multiple levels of tiering, where each tier offers a different capacity/performance/cost ratio. The highest-performance tier is typically reserved for the most critical applications because it is the most expensive and has the least storage density. This tier, typically referred to as Tier 0, is complemented by progressively lower-performance, higher-density (and lower-cost) tiers (1, 2, 3, etc.). Tiers are generally built using different types of drives. For example, a storage cluster might include Tier 0 storage made of very high-performance, low-capacity solid-state drives (SSDs); Tier 1 storage made of high-capacity, high-performance Serial-Attached SCSI (SAS) drives; and Tier 2 storage consisting of high-capacity Serial ATA (SATA) drives.

While ideally every EDA project would run on Tier 0 storage (if space were available), it is highly desirable to move data to lower-cost tiers whenever possible to conserve budget. Often this is done after a project has gone into production and the design team has moved on to the next project. This isn’t always the case, however, especially if tiering is managed manually. (Surprisingly, many semiconductor design companies today have deployed enterprise storage solutions that don’t support automated tiering.)

Given the complexities and tight schedules involved in today’s semiconductor designs, it is not uncommon to find and fix a bug only a few weeks before tape-out. When this happens, you sometimes need to allocate Tier 0 storage space urgently in order to run last-minute regressions. If Tier 0 space is being managed manually and space is limited, you may have to wait for IT to move a different project’s data around before they can get to you. From a management perspective, this is even more painful when it’s your old data, because you’ve been paying a premium to store it there unnecessarily!

The opposite scenario is also common: a project that’s already in production has had its data moved to lower-cost storage to save budget. Later, a critical problem is discovered that needs to be debugged. In this scenario, do you try to run your EDA tools on the slower storage, or wait for IT to move your data back to Tier 0 storage and benefit from reduced simulation turnaround times? It depends on how long the transition takes. If someone else’s project data needs to be moved first, the whole process becomes longer and less predictable.

While it may seem difficult to believe that tiering is still managed manually, the truth is that most EDA tool flows today run on storage platforms that don’t support automated tiering. That may be due, at least in part, to their “scale-up” architecture, which tends to create “storage silos” where each volume (or tier) of data is managed individually (and manually). Solutions such as EMC Isilon use a more modern “scale-out” architecture that lends itself better to auto-tiering. Isilon, for example, features SmartPools, which can seamlessly auto-tier EDA data – minimizing EDA turnaround time when you need it and reducing cost when you don’t.

For EDA teams facing uncertain budgets and shrinking schedules, the benefits of automated tiering can be significant. With Isilon, for example, you can configure your project in advance so that it is allocated the fastest storage tier during simulation regressions (when you need performance), and then, at some point after tape-out (e.g., six months), your project data is moved to a lower-cost, less performance-critical tier. Eventually, while you’re sitting on a beach enjoying your production bonus, Isilon will move your data to an even lower tier for long-term storage – saving your team even more money. And if later, after the rum has worn off, you decide to review your RTL – maybe for reuse on a future project – Isilon will move that data back to a faster tier, leaving the rest available at any time, but on lower-cost storage. So the next time you get your quarterly storage bill from IT, ask yourself: “What’s lurking behind that compute farm – and does it support auto-tiering?”
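
To make that policy concrete, here is a minimal Python sketch of the kind of age- and activity-based rule described above: keep active regression data on the fastest tier, demote it roughly six months after tape-out, and promote it again when someone comes back to debug or reuse it. This is illustrative only – the tier names, thresholds and the move_to_tier() hook are hypothetical placeholders, not the actual SmartPools interface.

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative only -- tier names, thresholds and move_to_tier() are
# hypothetical placeholders, not a real SmartPools API.

def pick_tier(last_accessed: datetime, tape_out: Optional[datetime], now: datetime) -> str:
    """Decide where a project's data should live, based on age and activity."""
    if tape_out is None:
        return "tier0_ssd"                      # active project: keep regressions on the fastest tier
    if now - last_accessed < timedelta(days=30):
        return "tier0_ssd"                      # recently touched: promote for debug or reuse
    if now - tape_out < timedelta(days=180):    # roughly six months after tape-out
        return "tier1_sas"                      # demote to cheaper, still-fast storage
    return "tier2_sata"                         # long-term archive on the cheapest tier

def retier(project, now: Optional[datetime] = None) -> None:
    """Move a project's data if its target tier has changed."""
    now = now or datetime.utcnow()
    target = pick_tier(project.last_accessed, project.tape_out, now)
    if target != project.current_tier:
        project.move_to_tier(target)            # hypothetical data-movement hook
```

In an actual deployment the storage system applies rules like these automatically and transparently; the sketch only shows the shape of the policy a team would configure once, up front.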

Cloud Computing and EDA – Are we there yet?

Lawrence Vivolo

Sr. Business Development Manager at EMC²

Today anything associated with “Cloud” is all the rage. In fact, depending on your cellular service provider, you’re probably already using cloud storage to back up the e-mail, pictures, texts, etc. on your cell phone. (I realized this when I got spammed with “you’re out of cloud space – time to buy more” messages.) Major companies that offer cloud-based solutions (servers, storage, infrastructure, applications, management, etc.) include Microsoft, Google, Amazon, Rackspace, Dropbox, EMC and others. For those who don’t know the subtleties of Cloud and its terms – Public vs. Private vs. Hybrid (vs. Funnel) – and why some are better suited to EDA, I thought I’d give you some highlights.

Let’s start with the obvious – what is “Cloud”? Cloud computing is a collection of resources that can include servers (for computing), storage, applications, infrastructure (e.g., networking) and even services (management, backups, etc.). Public clouds are simply clouds that are made available by third parties as shared resources. Being shared is often advertised as a key advantage of public cloud – because the resources are shared, so is the cost. These shared resources can also expand and contract as needs change, allowing companies to precisely balance need with availability. Back in 2011, Synopsys, a leading EDA company, was promoting this as a means to address peak EDA resource demand [1].

Unfortunately, public cloud has some drawbacks. The predictability of storage cost is one. Though public cloud appears very affordable at first glance, most providers charge for moving data to and from their cloud, and those transfer charges can exceed the actual cost of storing the data. This can be further compounded when data is needed worldwide, since it may have to be copied to multiple regions for performance and redundancy. For semiconductor design these charges can be significant, because EDA programs generate enormous amounts of data.
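
A quick back-of-the-envelope calculation shows why the transfer charges matter. The prices and data volumes below are assumptions chosen purely for illustration – they are not any provider’s actual rates:

```python
# Back-of-the-envelope comparison of storage vs. data-transfer charges.
# All prices and volumes are assumptions for illustration only.
STORAGE_PER_GB_MONTH = 0.02   # assumed $/GB-month to keep data in the cloud
EGRESS_PER_GB        = 0.09   # assumed $/GB to pull data back out

regression_results_gb = 5_000   # one project's simulation output, in GB
months_stored         = 3
downloads_per_month   = 4       # e.g. pulling results back on-premise for debug

storage_cost = regression_results_gb * STORAGE_PER_GB_MONTH * months_stored
egress_cost  = regression_results_gb * EGRESS_PER_GB * downloads_per_month * months_stored

print(f"storage: ${storage_cost:,.0f}")   # $300
print(f"egress:  ${egress_cost:,.0f}")    # $5,400 -- transfer dwarfs storage
```

Even with modest assumed numbers, moving the data around costs an order of magnitude more than simply keeping it stored.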

Perhaps the greatest drawback to EDA adoption of public cloud is the realization that your data might be sitting on physical compute and/or storage resources that are shared with someone else’s data. That doesn’t mean you can see others’ data – access is restricted via OS policy and other security measures – but it does create a potential path for unauthorized access. As a result, most semiconductor companies have not been willing to risk having their “crown jewels” (their IP) hacked and stolen from a public cloud environment. Security has improved since 2011, however, and some companies are considering cloud for long-term archiving of non-critical data as well as some less business-critical IP.

Private cloud avoids these drawbacks by isolating the physical infrastructure – including hardware, storage and networking – from all other users. Your own company’s on-premise hardware is typically a private cloud, even though, increasingly, some of that “walled-off” infrastructure is itself located off-premise and/or owned and managed by a third party. While physical and network isolation reduce the security concerns, they also eliminate some of the flexibility: the number of available servers can’t be increased or decreased with a single click to accommodate peaks in demand, at least not without upfront planning and additional cost.

Hybrid cloud is another common term – which simply means a combination of public and private clouds.

In the world of semiconductor design, private cloud as a service has been available for some time and is offered in various forms by several EDA companies today. Cadence® Design Systems, for example, offers both Hosted Design Solutions [2], which include HW, SW and IT infrastructure, and the QuickCycles® Service, which offers on-site or remote access to Palladium emulation and simulation-acceleration resources [3]. Hybrid cloud is also starting to gain interest as a place where infrequently accessed, non-critical data can be stored with minimal transport costs.

The public cloud market is changing constantly, and as time progresses new improvements may arise that make it more appealing to EDA. A challenge for IT administrators is meeting today’s growing infrastructure needs while avoiding investments that are incompatible with future cloud migrations. This is where you need to hedge your bets and choose a platform that delivers the performance and flexibility EDA teams require, yet enables easy migration from private to hybrid – or even public – cloud. EMC’s Isilon, for example, is an EDA-proven, high-performance network-attached storage platform that provides native connectivity to the most popular public cloud providers, including Amazon Web Services, Microsoft Azure and EMC’s Virtustream.

Not only does native cloud support future-proof today’s storage investment, it also makes migration seamless – thanks to a single point of management that encompasses private, hybrid and public cloud deployments. EMC Isilon supports a feature called CloudPools, which transparently extends an Isilon storage pool into cloud infrastructures. With CloudPools, your company’s critical data can remain on-premise, while less critical, rarely accessed data is encrypted and archived automatically and transparently to the cloud. Isilon can also be configured to archive your business-critical data (IP) to lower-cost on-premise media. This combination saves budget and keeps more high-performance storage space available locally for your critical EDA jobs.

Semiconductor companies and EDA vendors have had their eyes on public cloud for many years. While significant concerns over security continue to slow adoption, technology continues to evolve. Whether your company ultimately sticks with private cloud, or migrates seamlessly to hybrid or public cloud in the future depends on decisions you make today. The key is to focus on flexibility, and not let fear cloud your judgment.

[1] EDA in the Clouds: Myth Busting: https://www.synopsys.com/Company/Publications/SynopsysInsight/Pages/Art6-Clouds-IssQ2-11.aspx?cmp=Insight-I2-2011-Art6

[2] Cadence Design Systems Hosted Design Services: http://www.cadence.com/services/hds/Pages/Default.aspx

[3] Cadence Design Systems QuickCycles Service: http://www.cadence.com/products/sd/quickcycles/pages/default.aspx

EDA, Storage, and Gilligan’s Island

Lawrence Vivolo

Sr. Business Development Manager at EMC²

When I was a kid I used to watch Gilligan’s Island® a lot. For those of you who never had the pleasure, it was a show about a “3-hour tour” by boat that left its passengers and crew stranded on a small desert isle – including the first mate, Gilligan.

Turns out, this would be good training for managing an EDA project. Why? Well, managing an EDA project typically starts out straightforward: you allocate compute and storage to the project based on estimates, leveraging experience from previous projects to predict how many cores you’ll need to run simulation regressions, how many directory trees you’ll want for the ASIC, and how much space each will require. And then, months later, the storm hits. You realize you need to add a new test area and allocate space for it. “No worries,” you think, “I’ve been paying for that space and I know it’s available.”

And that’s when you realize…you are stranded on an island and can’t get what you need. If only you had Gilligan to save the day.


Engineering Design Automation: Uncovering the Secrets Behind Performance – or Lack Thereof

Lawrence Vivolo

Sr. Business Development Manager at EMC²

Having spent years as both a design engineer and, more recently, an EDA hardware/software product marketing manager, I’ve seen EDA from both sides of the performance battle. A particularly fond memory is of one Saturday when, as a young design engineer, I was on the phone chewing out a timing-analysis vendor over poor run time – and later learned I’d been talking to their VP of R&D. Oops.

Years later, I was on the other side of the fence – working for one of the big-3 EDA vendors. Once again, I was in a performance battle – this time arguing that a requirement to double EDA tool performance after nine months of work could not be met by simply installing the tool on a server that now ran 2x faster. Yes – that really was R&D’s defense. I was quite confident I’d win this argument… I didn’t.

Some things never change. Looking back over the past 25 years, design complexity has continued to grow out of control while schedules keep shrinking. And while the EDA vendors have done a great job boosting performance while adding new features, the truth is that some tasks simply demand more performance. Unfortunately, waiting for a next-generation CPU that doubles performance is not realistic, and even throwing more cores at the problem has started to run out of steam. Sometimes taking a step back and looking at the big picture is a good thing, and that’s what I learned shortly after arriving at EMC.

To my surprise, I learned that even at many of the top semiconductor companies there were untapped opportunities to boost throughput performance. And I’m not talking about the 20% that we all dream of today – I’m talking about 100% or more. For example, the next time you launch a batch of a couple hundred simulation jobs onto your compute farm, take a look at CPU utilization. I’ll bet it’s nowhere near 100%. In fact, it’s highly likely to be a lot less than you ever imagined. If that’s the case, you could be in luck: the performance problem probably isn’t the CPU, nor is it the EDA tool. It’s the infrastructure behind the server farm – with storage being the most likely bottleneck. And while this was a surprise at first, it really makes sense. As designs grow larger and EDA tool flows grow more sophisticated, the burden on storage – in terms of the number of files, directories and transient data created – has also grown out of control. Not surprisingly, a storage platform that used to work great may no longer be sufficient for today’s EDA flow.
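
If you want to check for yourself, here is a minimal sketch of how you might sample CPU utilization on a farm node while a regression batch is running. It assumes the psutil Python package is available; fanning the measurement out across nodes (via ssh, your job scheduler, etc.) is left to your environment, and the 60% threshold is only an illustrative assumption.

```python
# Minimal sketch: sample CPU utilization on one compute-farm node while a
# batch of simulation jobs is running. Assumes psutil is installed.
import psutil

def sample_cpu(duration_s: int = 300, interval_s: int = 5) -> float:
    """Return the average system-wide CPU utilization (%) over the window."""
    samples = []
    for _ in range(duration_s // interval_s):
        # cpu_percent(interval=...) blocks for the interval and returns a percentage
        samples.append(psutil.cpu_percent(interval=interval_s))
    return sum(samples) / len(samples)

if __name__ == "__main__":
    avg = sample_cpu()
    print(f"average CPU utilization: {avg:.1f}%")
    if avg < 60:   # arbitrary threshold, for illustration only
        print("CPUs are waiting on something -- I/O and storage are prime suspects.")
```

Low sustained utilization during a regression batch suggests the jobs are stalled on I/O rather than starved for cores.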

 

Today we can no longer rely solely on processor-speed advancements to keep up with the growing demands on EDA tools, and neither can the EDA tool vendors. We need to look at the big picture and hunt for the hidden infrastructure choke points. Networking, storage, policies, geography – even security – all play a role. Want to see your EDA tool performance double overnight? That’s impossible – or maybe not. You’d be surprised what magic can be done by someone who understands the subtleties of EDA tool flows and their interaction with storage infrastructure. No surprise – that’s EMC.
