- Web-based hydrological information and analysis
- Small plane landing challenges computer vision systems
- Bioimaging technology for medical advances
What was the distribution of rainfall around the world last year? What are the weather trends on the Asian subcontinent? Although these questions seem reasonable given today’s computing, communications, and monitoring tools, getting answers requires scientists to collect and analyze huge amounts of data from geographically widespread, dissimilar datasets that have different data structures, organizations and formats.
The problem is complicated by the relative autonomy of the organizations collecting the data. Although the organizations, such as national weather bureaus, provide user-friendly access to their data, the interfaces are all different.
Yao Liang and graduate student Nimmy Ravindran have been working with a multi-university team funded by the National Science Foundation (NSF) and the National Oceanic and Atmospheric Administration (NOAA) to develop a hydrological integrated data environment (HIDE) system that synthesizes data from diverse sources, provides analysis and visualization tools, and links to external modeling and software systems.
The Virginia Tech team has developed a web-based integration, data analysis and management system using a novel DataNode tree model, which provides a virtual information space supported by flexible query evaluation. The model combines the semantic aspects of the hydrology domain and the logical organization of datasets.
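As a rough illustration only (not the team's actual implementation), a DataNode tree might pair hydrology-domain labels at internal nodes with references to concrete datasets at the leaves, so that a query can be evaluated by walking the tree:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a DataNode tree: internal nodes carry
// domain semantics (region, variable), leaves reference the logical
// location of a concrete dataset. All names here are illustrative.
class DataNode {
    final String label;          // e.g. "precipitation"
    final String datasetRef;     // non-null only at leaf nodes
    final List<DataNode> children = new ArrayList<>();

    DataNode(String label, String datasetRef) {
        this.label = label;
        this.datasetRef = datasetRef;
    }

    DataNode addChild(DataNode c) {
        children.add(c);
        return c;
    }

    // Collect dataset references under any node whose label matches.
    void find(String term, List<String> out) {
        if (label.equals(term)) {
            collectLeaves(out);
        } else {
            for (DataNode c : children) c.find(term, out);
        }
    }

    private void collectLeaves(List<String> out) {
        if (datasetRef != null) out.add(datasetRef);
        for (DataNode c : children) c.collectLeaves(out);
    }
}
```

A query for "precipitation" would then return every dataset reference in that subtree, regardless of which source organization supplies it.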
Although the architecture can be applied to many scientific endeavors, their prototype incorporates data from the U.S. Geological Survey (USGS), Germany’s Global Precipitation Climatology Centre (GPCC), Canada’s HyDRO, and the Australian Antarctic Automatic Weather Station.
Each module is developed as a Java package with an open interface for flexibility and extensibility. Any additional features required in a module can be easily plugged in without affecting others.
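A minimal sketch of that open-interface idea (with hypothetical names, not the HIDE code itself): each data source sits behind a common Java interface, so a new source can be registered without touching the others.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical open interface for a pluggable data-source module.
interface DataSourceModule {
    String name();
    List<String> query(String request);
}

// Registry that lets additional modules be plugged in without
// affecting existing ones.
class ModuleRegistry {
    private final Map<String, DataSourceModule> modules = new HashMap<>();

    void register(DataSourceModule m) {
        modules.put(m.name(), m);
    }

    // Fan the query out to every registered source and merge results.
    List<String> queryAll(String request) {
        List<String> merged = new ArrayList<>();
        for (DataSourceModule m : modules.values()) {
            merged.addAll(m.query(request));
        }
        return merged;
    }
}
```

Adding support for a new weather bureau then amounts to writing one class that implements the interface and registering it.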
With today’s low-cost cameras and fast CPUs, computer vision shows promise for autonomous and machine-assisted landing of air vehicles, according to computer vision expert Lynn Abbott. Abbott’s team has explored two promising methods for semi-autonomous computer-vision-based landing systems. The left images depict edge-based line detection, which assumes that runway edges correspond to sudden changes in image intensity. The image at the far right depicts area-based matching, which uses the concept of a 2-D matched filter. After locating the runway in the image, the next step is to track the runway during an approach and guide the aircraft to a safe landing.
“Although the vision-based approach seems simple at first, it is quite challenging for several reasons,” Abbott explains. “Lighting variations caused by clouds and the position of the sun must be addressed. Shadows, seasonal changes, wear and tear of runway surfaces, and lack of uniformity in runway appearance create special challenges.” Although Abbott’s work was sponsored by NASA’s now-cancelled Personal Air Vehicle program, computer vision systems have the potential to serve the many small airports that lack ground-based radar, and to act as a redundant backup system where radar exists.
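As a minimal, hypothetical illustration of the edge-based idea (not Abbott’s system), sudden intensity changes can be approximated with a Sobel gradient over a grayscale image; runway edges would appear as ridges of high gradient magnitude.

```java
// Illustrative sketch only: detect sudden intensity changes with a
// Sobel gradient. A real landing system would follow this with line
// fitting (e.g., a Hough transform) and tracking across frames.
public class EdgeSketch {
    // Returns gradient magnitude for interior pixels of a grayscale
    // image stored as img[row][col]; border pixels are left at zero.
    static double[][] sobel(int[][] img) {
        int h = img.length, w = img[0].length;
        double[][] mag = new double[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                // Horizontal and vertical Sobel responses.
                int gx = -img[y-1][x-1] + img[y-1][x+1]
                         - 2 * img[y][x-1] + 2 * img[y][x+1]
                         - img[y+1][x-1] + img[y+1][x+1];
                int gy = -img[y-1][x-1] - 2 * img[y-1][x] - img[y-1][x+1]
                         + img[y+1][x-1] + 2 * img[y+1][x] + img[y+1][x+1];
                mag[y][x] = Math.sqrt((double) gx * gx + (double) gy * gy);
            }
        }
        return mag;
    }
}
```

A vertical step in brightness, such as the boundary between pavement and grass, produces a strong response exactly along the step and near-zero response in flat regions.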
A DTI estimate of the direction and density of white matter connections in a brain
Chris Wyatt is developing algorithms for studying the functioning of the brain via diffusion tensor imaging (DTI), which tracks water diffusion in the brain. DTI helps image the brain’s white matter, the nerve fibers that transmit signals within the brain. Nerves in the white matter are surrounded by a layer of fat, which restricts the diffusion of water differently than in other brain regions, giving rise to DTI contrast. DTI provides 3-D analysis of the white matter structure, but its output has significant distortions and noise. Wyatt’s team is developing distortion correction and multivariate analysis algorithms to help speed the technology into medical practice and basic research.
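One standard scalar derived from a diffusion tensor (a textbook quantity, not specific to Wyatt’s algorithms) is fractional anisotropy (FA), computed from the tensor’s three eigenvalues. FA is near 0 where diffusion is equal in all directions and near 1 where water diffuses strongly along one axis, as it does along white-matter fiber bundles:

```java
// Fractional anisotropy from the eigenvalues of a diffusion tensor:
// FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||.
public class FaSketch {
    static double fractionalAnisotropy(double l1, double l2, double l3) {
        double mean = (l1 + l2 + l3) / 3.0;
        double num = (l1 - mean) * (l1 - mean)
                   + (l2 - mean) * (l2 - mean)
                   + (l3 - mean) * (l3 - mean);
        double den = l1 * l1 + l2 * l2 + l3 * l3;
        return Math.sqrt(1.5 * num / den);
    }
}
```

Equal eigenvalues (isotropic diffusion) give FA = 0; a single dominant eigenvalue drives FA toward 1, which is why FA maps highlight white matter.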
Algorithms to dissect the blood vessels in a tumor
Microscopic magnification of a tumor blood vessel
Yue Wang’s team developed a novel image processing algorithm for dynamic contrast-enhanced MRI (DCE-MRI) to dissect the microscopic blood vessels that a tumor develops. The process is used to characterize the functions (e.g., perfusion, permeability) of tumor-induced microvasculature. Scientists hope to stop cancerous growths by cutting off their blood supply.
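As a hypothetical illustration of how DCE-MRI time courses are summarized (a generic technique, not Wang’s algorithm): one common perfusion-related measure is the initial area under the contrast-enhancement curve, approximated here with the trapezoidal rule. Well-perfused, leaky tumor vasculature tends to enhance faster, yielding a larger early area.

```java
// Generic sketch: area under a signal-versus-time enhancement curve
// by the trapezoidal rule. t[] holds sample times, signal[] the
// contrast-enhanced signal at each time.
public class IaucSketch {
    static double trapezoidArea(double[] t, double[] signal) {
        double area = 0.0;
        for (int i = 1; i < t.length; i++) {
            area += 0.5 * (signal[i] + signal[i - 1]) * (t[i] - t[i - 1]);
        }
        return area;
    }
}
```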