Current Problems and Future Trends

Imagine you're a little kid again, playing with Tinkertoys (or Lego bricks, Lincoln Logs, or Meccano pieces; pick your favorite). You have an eccentric uncle who believes your birthday comes once every 18 months. On each birthday, he gives you a gift of another complete set equal to what you already have. Your store of building blocks therefore doubles every 18 months. This is great fun; you can build bigger, better, and taller things all the time. Before long, you're making toy skyscrapers, and then entire cities.

Surprisingly soon, you run out of ideas for things to build. Or, if you're very creative, you run out of time to build them before the next installment of parts is dumped in your lap. You simply can't snap pieces together fast enough to finish a really big project before it's time to start the next one. You have a few choices. You can ignore the new gift and steadfastly finish your current project, thus falling behind your more ambitious friends. Or you can start combining small pieces into bigger ones and use those as your building blocks.

So it is with modern chip design. The frequently heard (and often misquoted) Moore's Law provides us with a rich 58 percent compound annual interest rate on our transistor budgets. That's the same as doubling every 18 months. Given that it frequently takes more than 18 months to design a modern chip, chip designers are presented with a daunting challenge: Take the biggest chip the world has ever seen and design something that's twice that big. Do it using today's tools, skills, and personnel, knowing that nobody has ever done it before. When you're finished, rest assured your boss will ask you to do it again. Sisyphus had it easy.
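That arithmetic is easy to check. A quick back-of-the-envelope sketch in Python, using nothing but the 18-month doubling period quoted above, shows the two figures really are the same claim:

    # Doubling every 18 months means the transistor budget multiplies
    # by 2 every 1.5 years, so the annual growth factor is 2**(1/1.5).
    annual_factor = 2 ** (1 / 1.5)
    print(f"annual growth factor: {annual_factor:.3f}")  # ~1.587, i.e., ~58.7%

    # Conversely, compounding 58 percent a year for 1.5 years
    # should land right back at a factor of two.
    print(f"growth over 18 months at 58%/yr: {1.58 ** 1.5:.3f}")  # ~1.99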

The modern EDA process is running out of steam. Humans are not becoming noticeably more productive, but the tasks they are given (at least, those in the semiconductor engineering professions) get more difficult at a geometric rate. Either the tools have to change or the nature of the task has to be redefined. Otherwise, the Moore's Law treadmill we've come to enjoy for the past 30 years will start slowing down dramatically.

First, let's cover the easy stuff. Faster computers will help run the current EDA tools faster. Faster synthesis, simulation, modeling, and verification will all speed up chip design by a little bit. However, even if computers were to get 58 percent faster every year (they don't), that would only keep pace with the increasing complexity of the chips they're asked to design. At best, we'd be marking time. In reality, chip designers are falling behind.
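To see how quickly that shortfall compounds, here's an illustrative sketch. The 58 percent figure is Moore's Law as quoted above; the 25 percent annual computer speedup is purely an assumed number for illustration:

    # Chip complexity compounds at 58% per year (Moore's Law), while the
    # computers running the EDA tools improve at an assumed 25% per year.
    # The ratio of the two is how far designers fall behind each year.
    complexity_growth = 1.58  # from the 18-month doubling rate
    tool_speedup = 1.25       # assumed for illustration only

    for year in range(1, 6):
        gap = (complexity_growth / tool_speedup) ** year
        print(f"year {year}: the task has outgrown the tools by {gap:.2f}x")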

A change to testing and verification would provide a small boost. The amount of time devoted to this unloved portion of chip design has grown faster than the other tasks; testing now consumes more than half of some chips' project schedules. To cut this down to size, some companies enforce strict rules about how circuitry must be designed so that testing is simplified. Better still, these firms encourage their engineers to reuse as much as possible from previous designs, on the assumption that reused circuitry has already been validated. Unfortunately, reused circuitry has been validated only in a different chip; in new surroundings the same circuit might behave differently.

IP is another angle on the problem. Instead of helping designers design faster, it helps them design less. Reuse, renew, and recycle is the battle cry of many engineering managers. Using outside IP doesn't so much speed design as enable more ambitious designs. Engineers tend to design larger and more complex chips than they otherwise might have if they know they don't have to design every single piece themselves. Although that makes the resulting chips more powerful and feature-packed, it doesn't shorten the design cycle at all. It just packs more features into the same amount of time.

The most attention is devoted to new design tools. How can we make each individual engineer more productive? The great hope seems to be to make the chip-design tools more abstract or high level, submerging detail and focusing on the overall architecture of a large and complex chip. Many ideas are being tried, from free-form drawing tools that allow an engineer to sketch a rough outline of the design (“from napkin to chip” is one firm's advertising slogan) to procedural languages that an engineer can use to “write” a new chip.

There are two problems with these high-level approaches. For one, giving up detail-level control over a chip's design will necessarily produce sloppier designs that are slower and bulkier, wasting silicon area and power. Those are not trends anyone looks forward to. Then again, the same Moore's Law that bedevils us also hides our sins: silicon gets faster all the time, and that increased speed can mask the performance lost to the new tools. It might be a good bargain.
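How good a bargain? A rough sketch suggests process gains absorb such losses quickly; the 20 percent performance overhead below is an assumption for illustration, not a measured figure:

    import math

    # Suppose the high-level tools cost an assumed 20% in performance.
    # With silicon improving ~58% per year, solve 1.58**t = 1/0.8 for t
    # to find how long until raw process gains absorb that overhead.
    overhead = 0.20
    years = math.log(1 / (1 - overhead)) / math.log(1.58)
    print(f"overhead absorbed in about {years * 12:.0f} months")  # ~6 months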

The second problem, and a thornier one to solve, is simple human inertia. People are loath to give up their familiar tools and abandon the professional skills they've developed. Ironically, for members of an industry that thrives on a breakneck pace of innovation, semiconductor engineers drag their feet when it comes to changing and adapting their own profession. Hardware engineers criticize and malign the new software-into-hardware languages partly because using those languages would require skills the hardware engineers don't have and would marginalize the skills they do have. Nobody likes to anticipate the death of his or her own profession. Besides, solving a hardware problem by discarding all the hardware engineers seems a bit counterintuitive.

It's not that today's chip designers aren't trying to help solve the problem. After all, they are the ones who stand to benefit from whatever breakthrough comes. It's just that nobody knows quite which way to turn. On one hand, gradual evolutions of the current and familiar tools, such as Superlog, can leverage an engineer's skills and experience, but they're only a stopgap measure, buying a few years' grace at most. On the other hand, radically new tools like chip-level abstraction diagrams aren't familiar to anyone and take months or years to learn to use effectively. Adopting them would mean discarding years of hard-won experience doing it "the old way."

Each engineer must make his or her own decision, or have it made by a supervisor or new employer. Left to themselves, most people will choose the path of gradual evolution. It's less threatening than a complete overhaul and still provides the excitement of learning something (a little bit) new.

With no clear winner and no clear direction to follow, the future of chip design will be determined largely by herd instinct. Engineers will choose what they feel most comfortable with, or what their friends and colleagues seem to be using. Most of all, they want to avoid being left out, choosing a path that will leave them stranded and isolated from the rest of the industry. Macintosh users seem to like this kind of iconoclasm; chip-design engineers dread it.
