Forecasting of flooding and droughts will be improved (right) by integrating remote sensing information (left) with land surface models through the Virginia Tech HIDE data environment (center).
Technology developed in ECE laboratories is serving as the foundation of a new effort to improve flood and drought forecasting in the Ohio River Basin.
"Floods and droughts are two major natural hazards in the Ohio River Basin, and they have major impacts on the region's agriculture, industries, commercial navigation, and residential communities," said Yao Liang, an assistant ECE professor at the Advanced Research Institute (ARI) and principal investigator (PI) on the effort. "There are data and models available from different sources and different systems, that, if integrated, can significantly improve forecasting accuracy and help disaster management."
The integration effort stretches across universities and government organizations, including the National Weather Service (co-PI Thomas Adams is a Virginia Tech alumnus), George Mason University, the University of Pittsburgh, and NASA Goddard Earth Sciences Data Information and Services Center.
The team is developing a system that integrates soil moisture data from NASA satellites, NASA-NOAA land surface models, and a spatial data assimilation framework recently developed at the University of Pittsburgh into the National Weather Service River Forecast System. Surface soil moisture data from multiple satellites, used in conjunction with the NASA-NOAA models and the data assimilation framework, will significantly improve the calculation of the evapotranspiration rate, which plays a critical role in the river forecast system. The integration will be achieved by extending the hydrological integrated data environment (HIDE) developed by Liang and Nimmy Ravindran.
"Our ultimate goal is to enable the National Weather Service to seamlessly avail itself of the soil moisture data, that was previously unavailable," Liang said. The project results are expected to be extendable to the national level, via the adoption of the system by river forecast centers at both national and regional levels. The $860,499 project is funded by NASA.class="content"> class="widecontent">
Many applications today would benefit from an ability to pick out human faces automatically among other elements in a color image. These include systems that perform face recognition, systems that track faces for videoconferencing or human-computer interaction, and surveillance systems. Unfortunately, both face detection and face recognition are very difficult computational problems, although the human visual system makes it seem easy. The challenges include variations in illumination, size and orientation changes, occlusion, and skin color.
An ECE team has developed a new technique that detects faces based on variations of skin tone, as well as shapes of skin-colored portions of an image. The system does not explicitly attempt to locate face-related features such as noses and eyes. The system relies on an unusual combination of two techniques, known as the Discrete Cosine Transform and a type of artificial neural network known as a Self-Organizing Map.
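The Discrete Cosine Transform half of that combination can be illustrated with a small sketch. The block size and the number of coefficients kept below are illustrative assumptions, not details from the team's implementation; the point is that a handful of low-frequency DCT coefficients summarize a region with a far smaller feature vector than the raw pixels:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block (list of lists of floats)."""
    n = len(block)

    def c(k):
        # Orthonormal scaling factors for the DCT-II.
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

def low_freq_features(block, k=3):
    """Keep only the top-left k-by-k DCT coefficients (the low
    frequencies) as a flat feature vector; k=3 is an arbitrary choice."""
    coeffs = dct2(block)
    return [coeffs[u][v] for u in range(k) for v in range(k)]
```

For an 8×8 block, this reduces 64 pixel values to a 9-element feature vector, which is the kind of dimensionality reduction that keeps the downstream classifier cheap.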
The technique has dramatically lower computational requirements than more conventional techniques and is well suited for low-cost, real-time hardware implementation, according to Abdallah S. Abdallah, a doctoral student in the VT-MENA program. (VT-MENA is Virginia Tech's graduate program in the Middle East and North Africa.) Abdallah is working with ECE's Lynn Abbott and Mohamad Abou El-Nasr of the Arab Academy for Science and Technology in Alexandria, Egypt.
Abdallah notes that "even though different people have different skin colors, studies have demonstrated that intensity, rather than chrominance (color), is the main distinguishing characteristic." Using segmentation based on skin color eliminates the need for the multiresolution image pyramids used in more traditional face-detection methods. "Most of the computational load in previous techniques comes from the large sizes of the feature vectors being used," Abdallah says.
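Because chrominance varies relatively little from person to person, a pixel-level skin test can simply threshold the chrominance channels. A minimal sketch of that idea follows; the conversion is the standard ITU-R BT.601 RGB-to-YCbCr transform, and the Cb/Cr ranges are commonly cited illustrative values, not thresholds taken from this work:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 conversion from 8-bit RGB to YCbCr."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin-colored using chrominance only.
    The threshold ranges are illustrative defaults from the
    skin-detection literature, not this project's values."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

Running this test over every pixel yields a binary skin map whose connected regions become the face candidates passed to the later stages.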
Given an intensity representation (a grayscale, or "black-and-white" image), the technique relies on steps of region analysis, feature extraction, and pattern recognition. If faces are present in the image, a self-organizing map (which has been trained in advance) attempts to locate each one.
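The pattern-recognition stage uses a self-organizing map, which clusters feature vectors onto a small grid of units. The sketch below is a generic, minimal one-dimensional SOM for intuition only; the map topology, neighborhood function, and training schedule here are assumptions, not the team's design:

```python
import math
import random

class SOM:
    """Minimal 1-D self-organizing map over fixed-length feature vectors."""

    def __init__(self, n_units, dim, seed=0):
        rng = random.Random(seed)
        self.weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]

    def best_matching_unit(self, x):
        # Index of the unit whose weight vector is closest to x.
        return min(range(len(self.weights)),
                   key=lambda i: sum((w - v) ** 2 for w, v in zip(self.weights[i], x)))

    def train(self, data, epochs=20, lr=0.5, radius=1.0):
        for epoch in range(epochs):
            alpha = lr * (1.0 - epoch / epochs)  # decaying learning rate
            for x in data:
                bmu = self.best_matching_unit(x)
                for i, w in enumerate(self.weights):
                    # Neighborhood influence decays with grid distance from the BMU.
                    h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                    for j in range(len(w)):
                        w[j] += alpha * h * (x[j] - w[j])
```

After training, each unit specializes in a region of feature space, so mapping a candidate region's feature vector to its best-matching unit amounts to a cheap classification step.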
For benchmarking and testing, the team compiled a new image database, named the VT-AAST image database, consisting of 286 images, containing 1027 faces. The database is currently available on-line for non-commercial use at http://filebox.vt.edu/users/yarab/VT-AAST Database.htm
"Several large image databases are widely available for human face recognition," Abdallah says, "but relatively few are available for face detection." The main difference is that most face-recognition databases contain images that closely resemble "mug-shot" photographs, with the subject facing the camera and centered in the image. More general face-detection systems must deal with individuals who do not face the camera, who may be at various distances from the camera, and may be partially occluded. Each image in the VT-AAST database is available in four formats, including the original color and images segmented by skin color.class="content"> class="widecontent">
Most people think of fingerprints, faces, irises, and gait patterns when they hear of automatic human recognition systems. ECE researchers, however, are investigating the ear as a biometric.
"Ear-based recognition is of particular interest because it is non-invasive and because it is not affected by environmental factors such as mood, health, and clothing," according to Mohamed Saleh, a student in Virginia Tech's graduate program in the Middle East and North Africa (VT-MENA).
"The appearance of the outer ear is relatively unaffected by aging, making it better suited for long-term identification when compared to other non-invasive techniques, such as face recognition," he explains.
Surprisingly, ears have a different appearance for every individual, even for "identical" twins. Ear recognition has not received much attention, he says, but other research teams have indicated that it may be at least comparable to face recognition, at about 71 percent accuracy. Saleh and his advisor, Lynn Abbott, recently applied image-based classifiers and feature-extraction methods to a small dataset of ear images with strongly positive results.
The team used six small (50×40-pixel) grayscale images of each of 17 individuals, with each image showing different luminance and orientation. Comparing seven different image recognition techniques, they achieved accuracies ranging from 76 percent to 94 percent.
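The article does not detail the seven techniques compared, but the accuracy figures are computed the same way regardless of classifier: hold out test images, predict each one's identity, and count correct predictions. A simple nearest-neighbor classifier, shown here purely as an illustration (it is not necessarily one of the team's seven methods), makes the evaluation loop concrete:

```python
def nearest_neighbor_accuracy(train, test):
    """train and test are lists of (feature_vector, label) pairs.
    Returns the fraction of test items whose nearest training
    neighbor (squared Euclidean distance) has the correct label."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    correct = 0
    for vec, label in test:
        predicted = min(train, key=lambda t: dist(t[0], vec))[1]
        correct += (predicted == label)
    return correct / len(test)
```

With six images per person, one natural protocol is to train on some images of each individual and test on the rest, averaging the resulting accuracy.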
"We believe ear recognition has significant potential," Saleh says. Further research needs to be done in the area, using larger datasets, and considering problems of interfering hair, clothing, jewelry, eyeglasses and other artifacts, he says. For future research, the team is considering "multimodal" approaches for recognition that involve ears and faces simultaneously.