By integrating smart functions into currently available medical imaging technology, Chris Wyatt, who joined ECE last summer, hopes to make his contribution toward lower medical costs, new treatments, and earlier disease detection.
With a bachelor's degree in electrical engineering and a Ph.D. in biomedical engineering from the Wake Forest School of Medicine, Wyatt works at the interface of engineering and medicine. One of his interests is developing algorithms and computer interfaces to provide better, quicker, and more relevant images of the body.
Today, doctors and researchers can view the body's hard and soft tissues through x-ray, ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI) technology. With MRI and positron emission tomography (PET) scans, viewing cellular activity is also possible. (See descriptions of imaging technologies.)
Wyatt is engaged in several efforts: improving imaging for virtual colonoscopies; developing algorithms to replace extensive manual work in imaging for multiple sclerosis (MS) drug trials; and developing image-guided polypectomy technology.
Anatomy, Physiology, & Experience
"My efforts are in connecting prior information to analyze the data we get from different imaging," he said. The prior information encompasses anatomy, physiology, and imaging experience. Physicians and radiologists use prior knowledge of the organs and prior experience in reading images, he explained. "When they look at an image, even if it's not a good image, they impose their knowledge to extract usable information. We've been working for some time to develop algorithms to incorporate this kind of knowledge into the operating systems of imaging equipment."
He called imaging "in some sense, a dumb process. The technologist tells the machine where to image at what resolution. It scans that area and provides the data. If we can make the process a little more intelligent, so that it can make some decisions on its own, we can speed up and improve the end result."
As an example, he explained that MS patients develop lesions on the brain that appear as brighter or darker areas on images. "To detect and monitor the lesions, we typically image the whole brain at a certain resolution to find the lesions. Unfortunately, to see each lesion well, you need a higher resolution and a particular field-of-view, but to find the lesions, you use a lower resolution and a wider field-of-view."
"If the patient has gone to the doctor, it might be a day or more before the image is analyzed by a radiologist," he said. "If it's a drug study, it might be months; there are simply not enough radiologists for the number of images to be analyzed." If the radiologist finds a suspected lesion, the patient could return for another scan. However, such re-imaging is cost prohibitive and, if the initial scan was done a month ago, the lesion may have already changed.
Wyatt wants to replace this scenario with an imaging modality in which the machine can be programmed to detect possible lesions and reset its focus to reacquire data on the spot. This could improve diagnosis, and reduce the time and cost involved in trials of new treatments. "If drug trials could be conducted with less cost, the drugs would cost less when they hit the market," he said.
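The two-pass idea Wyatt describes can be sketched in a few lines of code. Everything here is illustrative, not his actual method: the "scanner," the z-score detection rule, and the thresholds are stand-ins chosen for a toy demonstration of scanning cheaply at low resolution, flagging suspicious spots, and reacquiring only those spots at high resolution.

```python
import numpy as np

def find_candidates(low_res, threshold=2.0):
    """Flag low-resolution pixels whose intensity deviates strongly from
    the image mean -- a deliberately crude stand-in for lesion detection."""
    z = (low_res - low_res.mean()) / low_res.std()
    return np.argwhere(np.abs(z) > threshold)

def adaptive_scan(scan_fn, shape_low, upsample=4):
    """Two-pass acquisition: one cheap whole-volume pass, then
    high-resolution reacquisition only around flagged candidates."""
    low = scan_fn(shape_low, resolution=1)
    patches = {}
    for r, c in find_candidates(low):
        # Reacquire a small high-resolution field of view on the spot.
        patches[(int(r), int(c))] = scan_fn((upsample, upsample),
                                            resolution=upsample)
    return low, patches

# Toy "scanner": uniform tissue background plus one bright lesion at (5, 5).
def fake_scanner(shape, resolution):
    rng = np.random.default_rng(0)
    img = rng.normal(100.0, 1.0, shape)
    if shape == (16, 16):
        img[5, 5] += 50.0          # simulated lesion
    return img

low, patches = adaptive_scan(fake_scanner, (16, 16))
print(sorted(patches))             # only the lesion location is reacquired
```

In a real system the second pass would drive the scanner hardware rather than call a function, but the control flow, detect in a cheap pass, then reacquire selectively, is the point of the sketch.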
Cutting the Costs of Drug Trials
Drug trials present a significant opportunity for smarter imaging processes. "Drug studies involve many hundreds of patients: that's a lot of data. If you're evaluating a new drug for MS and you scan 1,000 patients three times, you have 3,000 sets of data. Can you hire a radiologist to look at all that? You can acquire the data, but pulling out the information you want, such as how the lesion is changing, is a difficult, time-consuming process that right now is done manually. Technicians look at the images and outline the lesions by hand," he explained.
"People have been working on lesion detection and segmentation for some time and have developed some good methods, but it's still not good enough to replace all the manual work. We're looking at how to take the raw data and convert it to clinically relevant information. In the case of MS, has the area of the brain changed its characteristics, have any lesions changed?"
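The manual step being automated, outlining lesions and tracking how they change, can be illustrated with a minimal segmentation sketch. This is not one of the "good methods" the field has developed; it is a bare-bones threshold-plus-connected-components example, with made-up intensities, showing how raw pixels become a clinically relevant number such as lesion area per visit.

```python
import numpy as np
from collections import deque

def segment_lesions(img, threshold):
    """Label connected bright regions (4-connectivity) above a fixed
    intensity threshold -- a toy stand-in for MS lesion segmentation."""
    mask = img > threshold
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:                      # breadth-first region growing
            r, c = queue.popleft()
            for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels, current

def lesion_areas(labels, n):
    """Pixel count per labeled lesion -- the kind of longitudinal
    measurement a drug trial tracks across visits."""
    return {i: int((labels == i).sum()) for i in range(1, n + 1)}

# Two toy scans of the same slice: the lesion grows between visits.
scan1 = np.zeros((8, 8)); scan1[2:4, 2:4] = 200.0   # 4-pixel lesion
scan2 = np.zeros((8, 8)); scan2[2:5, 2:5] = 200.0   # 9-pixel lesion
l1, n1 = segment_lesions(scan1, 100.0)
l2, n2 = segment_lesions(scan2, 100.0)
print(lesion_areas(l1, n1), lesion_areas(l2, n2))   # {1: 4} {1: 9}
```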
Computer-aided diagnosis would affect not just the bottom line, but could also improve the quality of evaluations. "The problem with evaluating all these images manually is that you use different people at different skill levels at different times of the day. People inherently introduce inconsistencies, whereas computers are consistent and reliable."
New Ways to Avoid the Scope
Wyatt is also pursuing imaging advances in the detection and treatment of colon cancer. About 50 percent of today's colon cancer cases could have been prevented with early detection of polyps. Doctors have the screening methods, but compliance is a problem. "Colonoscopies are not fun. If we can do the initial screening with more comfortable imaging instead of scoping, we can get higher compliance and detect more cases early," he said.
While earning his Ph.D., Wyatt worked in Wake Forest's Virtual Colonoscopy Laboratory, which uses CT data to image the colon. "My work is in developing the algorithms that extract the colon data by detecting what part of the slice is abdominal wall, what is colon, and finding the polyps. This is mostly geometric analysis," he explained. He is now interested in migrating to MRI technology, which can give more detailed images.
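One small piece of that slice-by-slice analysis can be sketched: deciding which pixels belong to the colon at all. The sketch below is my own simplification, not Wyatt's algorithm. It leans on a real property of CT, air measures about -1000 Hounsfield units versus roughly +40 for soft tissue, so the gas-filled colon lumen can be separated from the air surrounding the patient by flood-filling air inward from the slice border and keeping the air that was not reached. Real pipelines then do the geometric analysis he describes on the resulting surface.

```python
import numpy as np
from collections import deque

AIR_HU = -800.0   # CT voxels below this Hounsfield value are treated as air

def colon_lumen(slice_hu):
    """Keep only the air *inside* the body: flood-fill air from the
    slice border (the patient's surroundings), then return the air
    that the fill never reached -- the enclosed gas pockets."""
    air = slice_hu < AIR_HU
    outside = np.zeros_like(air)
    h, w = air.shape
    queue = deque((r, c) for r in range(h) for c in range(w)
                  if air[r, c] and (r in (0, h - 1) or c in (0, w - 1)))
    for r, c in queue:
        outside[r, c] = True
    while queue:                          # breadth-first flood fill
        r, c = queue.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if (0 <= nr < h and 0 <= nc < w
                    and air[nr, nc] and not outside[nr, nc]):
                outside[nr, nc] = True
                queue.append((nr, nc))
    return air & ~outside

# Toy slice: soft tissue "body" surrounded by air, with a gas pocket inside.
slice_hu = np.full((10, 10), -1000.0)    # air everywhere
slice_hu[2:8, 2:8] = 40.0                # soft tissue
slice_hu[4:6, 4:6] = -1000.0             # enclosed gas: the lumen
lumen = colon_lumen(slice_hu)
print(int(lumen.sum()))                  # 4 lumen pixels
```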
He is working to extend the virtual colonoscopy technology to image-guided polypectomy. The polyps still need to be surgically removed, but patients who already know they have polyps are much more amenable to enduring a scope. "A well-trained endoscopist, if there is no problem with insertion, is very fast and very good. Sometimes, though, the polyps can hide in a fold and finding them can be difficult. If we can use virtual colonoscopy to help guide the endoscope to the polyps, we can help the endoscopists become even better and faster."
The challenge to building smarter imaging systems is the volume of data to be considered. Numerous diagnosis algorithms have been developed since the 1980s, reflecting the many sets of disease/image situations. "One of our key issues today is integrating all these algorithms so that an imaging system can employ basic decision-making and analysis functions," Wyatt said.
Another challenge is integrating the analysis algorithms with the actual imaging process. "Although we're trying to incorporate human experience, we need to avoid the danger of trying to mimic a human too closely," he said. "That can take you in a very wrong direction. Human observers still need to be used for the tasks at which they excel."
The quality of the algorithms is critical. "We have to make sure that the technology doesn't impede the radiologists, that it doesn't get in the way with false results. We need to avoid creating situations where radiologists resist using the systems," he said.
Wyatt hopes that smarter imaging processes can be useful in devising new imaging devices. "If we can use a cheap process, like ultrasound, which has a limited quality of image, and incorporate smarter algorithms to extend its use beyond simple outlines, we can cut the cost of imaging and improve health in even the poorest communities worldwide," he said.