
Research: Computer Systems



Meghan Quirk and the loom

Researchers in the e-Textiles laboratory are weaving textiles with embedded wiring, sensors, actuators, and processing elements. Their research includes textiles for health monitoring, physical-therapy and gait monitoring, and sound detection and localization. Above, master's student Meghan Quirk adjusts the group's 40-inch industrial loom.

Several pundits, including New York Times correspondent Thomas Friedman, have posited that data communications on portable and handheld devices will mark the beginning of the next IT revolution. These devices will replace desktops, laptops, and cell phones, ultimately morphing into something entirely new.

At the heart of these devices, single-chip computers will integrate a diverse and increasingly complex set of applications. These single-chip devices are becoming heterogeneous multiprocessors (HMs), with the potential to integrate roughly 100 individual processing elements on a single chip.

This trend creates a crisis in design, since conventional techniques fail to capture the level of abstraction required, according to JoAnn Paul, who heads an ECE effort to develop the science and design tools for HM systems. She was a member of the team that developed Carnegie Mellon’s simulator, called MESH (Modeling Environment for Software and Hardware), which permits designers to study how the numbers and types of processors, communications, scheduling, and software tasks affect the overall performance of an HM.
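
MESH's own interface is not described here, but the kind of design-space question it answers can be sketched in a few lines of Python. The toy model below is purely illustrative (the greedy scheduler, speed factors, and task mix are all invented for this example, not taken from MESH): it estimates how long a fixed set of software tasks takes to complete on different mixes of processing elements.

    # Toy design-space sketch (hypothetical, not the MESH API): estimate how the
    # mix of processing elements on a single-chip heterogeneous multiprocessor
    # affects the completion time of a set of software tasks.

    def simulate(processors, tasks):
        """processors: list of relative speed factors; tasks: list of work units.
        Greedy list scheduling: each task goes to the processor that can finish it soonest."""
        free_at = [0.0] * len(processors)          # time at which each processor becomes free
        for work in sorted(tasks, reverse=True):   # place the largest tasks first
            i = min(range(len(processors)),
                    key=lambda j: free_at[j] + work / processors[j])
            free_at[i] += work / processors[i]
        return max(free_at)                        # completion time of the whole task set

    # Compare two hypothetical single-chip configurations against the same task mix.
    tasks = [8, 5, 5, 3, 2, 2, 1, 1]
    print(simulate([1.0, 1.0, 1.0, 1.0], tasks))    # four identical cores    -> 8.0
    print(simulate([2.0, 1.0, 0.5, 0.5], tasks))    # heterogeneous mix       -> 7.0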

With a $500,000 grant from the NSF, Paul is working to further develop a design environment that breaks away from “the specify-and-synthesize mindset of current CAD tools” to become a “characterize-and-invent” paradigm that still has ties to lower-level IC design methodology flows.

“These HM systems are not fully custom, nor general-purpose designs, nor pure embedded designs,” she said. “These systems are more correctly referred to as ‘scenario-oriented computers,’ requiring that computer system designers start from trend analyses and look for future cross-over points between anticipated feature changes and the physical capabilities of the systems.”

Current efforts include the development of new forms of system-level benchmarking, in which applications compete for resources over time and system performance is evaluated against that workload, rather than by conventional evaluations that average a collection of different programs or attempt to meet a real-time specification. “We believe our evaluations are more in accord with the way future systems are likely to be used,” Paul said.
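
To make the contrast concrete, the sketch below (with invented application names, runtimes, and scheduling policy) compares a conventional score that averages programs run in isolation against a scenario score in which the same applications contend for one processor at the same time.

    # Illustrative contrast (hypothetical numbers): conventional benchmarking
    # averages each program run in isolation; a scenario-oriented evaluation
    # scores the concurrent mix the device is actually expected to run.

    standalone = {"audio_decode": 1.0, "web_render": 3.0, "sync": 2.0}  # seconds, run alone

    # Conventional score: mean of the isolated runtimes.
    conventional = sum(standalone.values()) / len(standalone)

    # Scenario score: all three share one processor via round-robin time slicing,
    # and the metric is when the latency-critical task (audio) actually finishes.
    def round_robin_finish(work, quantum=0.1):
        remaining = dict(work)
        t, finished = 0.0, {}
        while remaining:
            for name in list(remaining):
                step = min(quantum, remaining[name])
                t += step
                remaining[name] -= step
                if remaining[name] <= 1e-9:
                    finished[name] = t
                    del remaining[name]
        return finished

    scenario = round_robin_finish(standalone)
    print("conventional average:", conventional)
    print("audio finishes in the scenario at:", scenario["audio_decode"])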

Verification, testing, and debugging claim the largest share of product design effort: most system houses and chip design firms report that 70 percent of total cost comes from checking that their designs are correct. Researchers around the world are seeking both hardware and software methods of cutting the time needed for testing and verification.

Bradley Fellow Aric Blumer, an experienced diagnostic engineer and ASIC design engineer, is working on a hardware acceleration approach to circuit simulations. “If we can simulate faster, we can test the design more,” he says.

Blumer is investigating the use of virtual machines (VMs) in software and real machines (RMs) in a field-programmable gate array (FPGA), where each executes the same instruction set. The VMs execute serially, and the RMs in parallel. “Simulation characteristics often change as simulations progress,” he says, “and we need the ability to migrate idle parts of the design out of the FPGA and busy ones into it.” He plans to use the run-time reconfigurability of the FPGA to handle the process migration.
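
The run-time reconfiguration mechanism itself is not detailed here; the Python sketch below only illustrates the migration policy Blumer describes, using hypothetical partition names, activity measures, and thresholds. Partitions of the simulated design that have gone idle are demoted from the FPGA back to the software virtual machines, and busy ones are promoted into the freed slots.

    # Hypothetical sketch of the migration policy described above: busy partitions
    # of the simulated design are promoted into the FPGA (parallel "real machines"),
    # idle ones are demoted back to the software virtual machines.

    def plan_migration(activity, in_fpga, fpga_slots, promote=0.6, demote=0.1):
        """activity: partition name -> fraction of recent cycles it was active.
        in_fpga: set of partitions currently mapped to the FPGA.
        Returns (to_fpga, to_software) migration lists."""
        to_software = [p for p in in_fpga if activity.get(p, 0.0) < demote]
        candidates = sorted(
            (p for p, a in activity.items() if p not in in_fpga and a > promote),
            key=lambda p: activity[p], reverse=True)
        free = fpga_slots - (len(in_fpga) - len(to_software))
        return candidates[:max(free, 0)], to_software

    # Example: the bus model has gone idle while the cache model is now busy.
    activity = {"cpu_core": 0.9, "cache": 0.7, "bus": 0.05, "uart": 0.02}
    print(plan_migration(activity, in_fpga={"cpu_core", "bus"}, fpga_slots=2))
    # -> (['cache'], ['bus'])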