Sample records for "visualization method based"

  1. Comparison of Text-Based and Visual-Based Programming Input Methods for First-Time Learners

    ERIC Educational Resources Information Center

    Saito, Daisuke; Washizaki, Hironori; Fukazawa, Yoshiaki

    2017-01-01

    Aim/Purpose: When learning to program, both text-based and visual-based input methods are common. However, it is unclear which method is more appropriate for first-time learners (first learners). Background: The differences in the learning effect between text-based and visual-based input methods for first learners are compared using a…

  2. Visualization of medical data based on EHR standards.

    PubMed

    Kopanitsa, G; Hildebrand, C; Stausberg, J; Englmeier, K H

    2013-01-01

    To organize an efficient interaction between a doctor and an EHR, the data have to be presented in the most convenient way. Medical data presentation methods and models must be flexible in order to cover the needs of users with different backgrounds and requirements. Most visualization methods are doctor oriented; however, there are indications that the involvement of patients can optimize healthcare. The research aims at specifying the state of the art of medical data visualization. The paper analyzes a number of projects and defines requirements for a generic ISO 13606 based data visualization method. To do so, it starts with a systematic search for studies on EHR user interfaces. In order to identify best practices, visualization methods were evaluated according to the following criteria: limits of application, customizability, and re-usability. The visualization methods were then compared using the specified criteria. The review showed that the analyzed projects can contribute knowledge to the development of a generic visualization method. However, none of them proposed a model that meets all the necessary criteria for a re-usable, standard-based visualization method. The shortcomings were mostly related to the structure of current medical concept specifications. The analysis showed that medical data visualization methods use hardcoded GUIs, which gives little flexibility, so medical data visualization has to move from hardcoded user interfaces to generic methods. This requires a great effort because current standards are not suitable for organizing the management of visualization data. The contradiction between a generic method and a flexible, user-friendly data layout has to be overcome.

  3. Visual attention based bag-of-words model for image classification

    NASA Astrophysics Data System (ADS)

    Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che

    2014-04-01

    Bag-of-words is a classical method for image classification. The core problems are how to count the frequencies of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual word frequencies. In addition, the VABOW model combines shape, color and texture cues and uses an L1-regularized logistic regression method to select the most relevant and most efficient features. We compare our approach with a traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms state-of-the-art methods for image classification.
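
    The VABOW description above amounts to weighting each visual-word count by the saliency value at the keypoint's location. A minimal sketch of that weighting step is shown below, assuming precomputed local descriptors with their image coordinates, a saliency map in [0, 1], and a codebook already learned (e.g., by k-means); the function and variable names are illustrative, not the authors' code.

    ```python
    import numpy as np
    from scipy.cluster.vq import vq

    def saliency_weighted_bow(descriptors, keypoints_rc, saliency_map, codebook):
        """Build a saliency-weighted bag-of-words histogram.

        descriptors  : (N, D) local feature descriptors
        keypoints_rc : (N, 2) integer (row, col) locations of the descriptors
        saliency_map : (H, W) saliency values in [0, 1]
        codebook     : (K, D) visual-word centers
        """
        words, _ = vq(descriptors, codebook)      # assign each descriptor to its nearest visual word
        hist = np.zeros(len(codebook))
        for w, (r, c) in zip(words, keypoints_rc):
            hist[w] += saliency_map[r, c]         # weight the count by the local saliency value
        total = hist.sum()
        return hist / total if total > 0 else hist
    ```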

  4. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    NASA Astrophysics Data System (ADS)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, in which color information is not sufficiently considered. However, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. More specifically, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows a parts-based manifold representation of an image but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated, and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than state-of-the-art SIQA methods.

  5. A Visual Analytics Approach for Station-Based Air Quality Data

    PubMed Central

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-01-01

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among these visual methods, map-based visual methods are used to display the locations of interest, and the calendar and trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt to changes in data size and granularity in the trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision-making support. PMID:28029117

  6. A Visual Analytics Approach for Station-Based Air Quality Data.

    PubMed

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-12-24

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among these visual methods, map-based visual methods are used to display the locations of interest, and the calendar and trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt to changes in data size and granularity in the trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision-making support.
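
    The "self-adaptive calendar-based controller" adapts the trends view to the data size and granularity, but its concrete rules are not given in the abstract. The pandas sketch below only illustrates the general idea, under the assumptions that the series has a DatetimeIndex, that hourly, daily, and monthly aggregation are the candidate granularities, and that the `max_points` threshold is an arbitrary illustrative value.

    ```python
    import pandas as pd

    def adaptive_trend(series: pd.Series, max_points: int = 500) -> pd.Series:
        """Pick the finest granularity that keeps the trends view readable.

        `series` must have a DatetimeIndex; the candidate resampling rules and
        the `max_points` threshold are illustrative assumptions.
        """
        for rule in ["H", "D", "M"]:          # hourly -> daily -> monthly
            resampled = series.resample(rule).mean()
            if len(resampled) <= max_points:
                return resampled
        return series.resample("A").mean()    # fall back to yearly aggregation
    ```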

  7. Research on metallic material defect detection based on bionic sensing of human visual properties

    NASA Astrophysics Data System (ADS)

    Zhang, Pei Jiang; Cheng, Tao

    2018-05-01

    Because the human visual system can quickly lock onto areas of interest in a complex natural environment and focus on them, this paper proposes a bionic sensing visual inspection model method, which simulates human visual imaging features based on the human visual attention mechanism, to detect defects of metallic materials in the mechanical field. First, according to the biologically visually significant low-level features, defect experience markings are used as the intermediate features of simulated visual perception. Afterwards, an SVM method is used to train on the high-level features of visual defects of metallic materials. According to the weight of each part, a biometric detection model of metallic material defects, which simulates human visual characteristics, is obtained.

  8. Estimation of bio-signal based on human motion for integrated visualization of daily-life.

    PubMed

    Umetani, Tomohiro; Matsukawa, Tsuyoshi; Yokoyama, Kiyoko

    2007-01-01

    This paper describes a method for the estimation of bio-signals based on human motion in daily life for an integrated visualization system. Recent advances in computers and measurement technology have facilitated the integrated visualization of bio-signals and human motion data. It is desirable to have a method to understand the activities of muscles based on human motion data and to evaluate the change in physiological parameters according to human motion for visualization applications. We suppose that human motion is generated by the activities of muscles, driven by the brain, which are reflected in bio-signals such as electromyograms. This paper introduces a method for the estimation of bio-signals based on neural networks. The method can estimate other physiological parameters using the same procedure. The experimental results show the feasibility of the proposed method.

  9. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.

  10. Students’ thinking preferences in solving mathematics problems based on learning styles: a comparison of paper-pencil and geogebra

    NASA Astrophysics Data System (ADS)

    Farihah, Umi

    2018-04-01

    The purpose of this study was to analyze students' thinking preferences in solving mathematics problems using paper and pencil compared to GeoGebra, based on their learning styles. This research employed a qualitative descriptive design. The subjects were six eighth-grade students of Madrasah Tsanawiyah Negeri 2 Trenggalek, East Java, Indonesia, in the 2015-2016 academic year, with different learning styles: two visual students, two auditory students, and two kinesthetic students. During the interviews, the students completed the Paper-and-Pencil-based Tasks (PBTs) and the GeoGebra-based Tasks (GBTs). By investigating the students' solution methods and representations in solving the problems, the researcher compared their visual and non-visual thinking preferences in solving mathematics problems with and without GeoGebra. The analysis showed that the comparison between students' PBT and GBT solutions, whether visual, auditory, or kinesthetic, revealed how GeoGebra can influence their solution methods. When using GeoGebra, they preferred visual methods in the GBTs over non-visual methods.

  11. An Approach of Web-based Point Cloud Visualization without Plug-in

    NASA Astrophysics Data System (ADS)

    Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei

    2016-11-01

    With the advances in three-dimensional laser scanning technology, the demand for visualization of massive point clouds is increasingly urgent. Until the introduction of WebGL, point cloud visualization was limited to desktop-based solutions; since then, several web renderers have become available. This paper addresses the current issues in web-based point cloud visualization and proposes a method of web-based point cloud visualization without plug-ins. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store data and the open web technologies HTML5 and CSS3 to implement the user interface; an online visualization system for 3D point clouds is developed in JavaScript with web interactions. Finally, the method is applied to a real case. Experiments show that the new model is of great practical value and avoids the shortcomings of existing WebGIS solutions.

  12. Visual attention capacity: a review of TVA-based patient studies.

    PubMed

    Habekost, Thomas; Starrfelt, Randi

    2009-02-01

    Psychophysical studies have identified two distinct limitations of visual attention capacity: processing speed and apprehension span. Using a simple test, these cognitive factors can be analyzed with Bundesen's Theory of Visual Attention (TVA). The method has strong specificity and sensitivity, and measurements are highly reliable. As the method is theoretically founded, it also has high validity. TVA-based assessment has recently been used to investigate a broad range of neuropsychological and neurological conditions. We present the method, including the experimental paradigm and practical guidelines for patient testing, and review existing TVA-based patient studies organized by lesion anatomy. Lesions in three anatomical regions affect visual capacity: the parietal lobes, frontal cortex and basal ganglia, and extrastriate cortex. Visual capacity thus depends on large, bilaterally distributed anatomical networks that include several regions outside the visual system. The two visual capacity parameters are functionally separable, but seem to rely on largely overlapping brain areas.

  13. Systems and Methods for Data Visualization Using Three-Dimensional Displays

    NASA Technical Reports Server (NTRS)

    Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)

    2017-01-01

    Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment, a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the plurality of 3D objects to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
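
    For illustration only (this is not the patented system), the core idea of mapping data dimensions to visualization attributes can be sketched with matplotlib; the keys of `mapping` and the example column names are assumptions.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    def normalize(v):
        """Scale a 1-D array to [0, 1] (used for the size attribute)."""
        v = np.asarray(v, dtype=float)
        return (v - v.min()) / (v.max() - v.min() + 1e-12)

    def render_3d_view(table, mapping):
        """Minimal 3D scatter in which `mapping` assigns data columns to
        visualization attributes, e.g.
            {"x": "mass", "y": "radius", "z": "period", "color": "temp", "size": "flux"}
        (column names are illustrative). `table` is a dict of equal-length
        arrays or a pandas DataFrame."""
        fig = plt.figure()
        ax = fig.add_subplot(projection="3d")
        sizes = 20 + 80 * normalize(table[mapping["size"]]) if "size" in mapping else 30
        colors = table[mapping["color"]] if "color" in mapping else None
        ax.scatter(table[mapping["x"]], table[mapping["y"]], table[mapping["z"]],
                   c=colors, s=sizes)
        plt.show()
    ```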

  14. Visualizing the movement of the contact between vocal folds during vibration by using array-based transmission ultrasonic glottography

    PubMed Central

    Jing, Bowen; Chigan, Pengju; Ge, Zhengtong; Wu, Liang; Wang, Supin; Wan, Mingxi

    2017-01-01

    For the purpose of noninvasively visualizing the dynamics of the contact between vibrating vocal fold medial surfaces, an ultrasonic imaging method which is referred to as array-based transmission ultrasonic glottography is proposed. An array of ultrasound transducers is used to detect the ultrasound wave transmitted from one side of the vocal folds to the other side through the small-sized contact between the vocal folds. A passive acoustic mapping method is employed to visualize and locate the contact. The results of the investigation using tissue-mimicking phantoms indicate that it is feasible to use the proposed method to visualize and locate the contact between soft tissues. Furthermore, the proposed method was used for investigating the movement of the contact between the vibrating vocal folds of excised canine larynges. The results indicate that the vertical movement of the contact can be visualized as a vertical movement of a high-intensity stripe in a series of images obtained by using the proposed method. Moreover, a visualization and analysis method, which is referred to as array-based ultrasonic kymography, is presented. The velocity of the vertical movement of the contact, which is estimated from the array-based ultrasonic kymogram, could reach 0.8 m/s during the vocal fold vibration. PMID:28599522

  15. Eventogram: A Visual Representation of Main Events in Biomedical Signals.

    PubMed

    Elgendi, Mohamed

    2016-09-22

    Biomedical signals carry valuable physiological information, yet many researchers have difficulty interpreting and analyzing long-term, one-dimensional, quasi-periodic biomedical signals. Traditionally, biomedical signals are analyzed and visualized using periodogram, spectrogram, and wavelet methods. However, these methods do not offer an informative visualization of the main events within the processed signal. This paper attempts to provide an event-related framework to overcome the drawbacks of the traditional visualization methods and describe the main events within the biomedical signal in terms of duration and morphology. Electrocardiogram and photoplethysmogram signals are used in the analysis to demonstrate the differences between the traditional visualization methods, and their performance is compared against the proposed method, referred to as the "eventogram" in this paper. The proposed method is based on two event-related moving averages that visualize the main time-domain events in the processed biomedical signals. The traditional visualization methods were unable to find dominant events in the processed signals, while the eventogram was able to visualize dominant events in terms of duration and morphology. Moreover, eventogram-based detection algorithms succeeded in detecting the main events in different biomedical signals with a sensitivity and positive predictivity >95%. The output of the eventogram captured unique patterns and signatures of physiological events, which could be used to visualize and identify abnormal waveforms in any quasi-periodic signal.
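
    The abstract states that the eventogram rests on two event-related moving averages. A minimal sketch of that idea follows: a short window tracks the event itself, a longer window tracks the surrounding cycle, and samples where the short average exceeds the long one form candidate event blocks. The window lengths and the offset `beta` are signal-dependent assumptions rather than values from the paper; for ECG, the short window would be on the order of a QRS complex and the long window on the order of a heartbeat.

    ```python
    import numpy as np

    def moving_average(x, w):
        """Centered moving average with window length w (in samples)."""
        kernel = np.ones(w) / w
        return np.convolve(x, kernel, mode="same")

    def event_blocks(signal, w_event, w_cycle, beta=0.0):
        """Return a boolean mask of candidate event blocks from two
        event-related moving averages: a short window (w_event) tracking
        the event and a longer window (w_cycle) tracking the full cycle.
        `beta` is an optional offset added to the cycle average."""
        signal = np.asarray(signal, dtype=float)
        ma_event = moving_average(signal, w_event)
        ma_cycle = moving_average(signal, w_cycle)
        threshold = ma_cycle + beta * signal.mean()
        return ma_event > threshold      # True inside candidate event blocks
    ```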

  16. A Novel Locally Linear KNN Method With Applications to Visual Recognition.

    PubMed

    Liu, Qingfeng; Liu, Chengjun

    2017-09-01

    A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficients' truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.

  17. Estimating Starch Content in Roots of Deciduous Trees--A Visual Technique

    Treesearch

    Philip M. Wargo

    1975-01-01

    A visual technique for determining starch content in roots of forest trees, based on iodine-staining of starch granules, was compared with a chemical method. Although the chemical method was more precise, roots could be sorted with the visual method into groups that are probably biologically important. The visual technique is simple and can be adapted for use in the...

  18. Examination of a Method to Determine the Reference Region for Calculating the Specific Binding Ratio in Dopamine Transporter Imaging.

    PubMed

    Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu

    2017-01-01

    The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated based on a region of interest (ROI) that is set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with a 50% threshold we sometimes could not define the ROI of the non-specific binding concentration (the reference region) and calculate the SBR appropriately. Therefore, we sought a new method for determining the reference region when calculating the SBR. Using data from 20 patients who had undergone DAT imaging in our hospital, we calculated the non-specific binding concentration by the following methods: the threshold defining the reference region was fixed at specific values (the fixing method), or the reference region was visually optimized by an examiner for every examination (the visual optimization method). First, we assessed the reference region of each method visually; afterward, we quantitatively compared the SBR calculated with each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The SBR values showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
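
    As a rough illustration of the workflow described above (not the clinical software), the reference region can be sketched as the voxels above a fixed fraction of the maximum count, eroded 20 mm inward, with the SBR then computed from the simplified concentration-ratio definition given in the abstract. The threshold handling, the erosion approximation, and the separately supplied striatal mask are simplifying assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def reference_region(counts, voxel_mm, threshold_frac=0.30):
        """Sketch the reference (non-specific) region: voxels above
        `threshold_frac` of the maximum count (30% corresponds to the
        'fixing method at 30%'), shrunk roughly 20 mm inward by repeated
        binary erosion. The striatal VOI must be excluded separately."""
        brain = counts >= threshold_frac * counts.max()
        erode_vox = max(1, int(round(20.0 / voxel_mm)))        # 20 mm expressed in voxels
        return ndimage.binary_erosion(brain, iterations=erode_vox)

    def specific_binding_ratio(counts, striatum_mask, reference_mask):
        """Simplified SBR: (striatal concentration - reference concentration)
        divided by the reference concentration."""
        c_str = counts[striatum_mask].mean()
        c_ref = counts[reference_mask].mean()
        return (c_str - c_ref) / c_ref
    ```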

  19. Retinotopic Maps, Spatial Tuning, and Locations of Human Visual Areas in Surface Coordinates Characterized with Multifocal and Blocked fMRI Designs

    PubMed Central

    Henriksson, Linda; Karvonen, Juha; Salminen-Vaparanta, Niina; Railo, Henry; Vanni, Simo

    2012-01-01

    The localization of visual areas in the human cortex is typically based on mapping the retinotopic organization with functional magnetic resonance imaging (fMRI). The most common approach is to encode the response phase for a slowly moving visual stimulus and to present the result on an individual's reconstructed cortical surface. The main aims of this study were to develop complementary general linear model (GLM)-based retinotopic mapping methods and to characterize the inter-individual variability of the visual area positions on the cortical surface. We studied 15 subjects with two methods: a 24-region multifocal checkerboard stimulus and a blocked presentation of object stimuli at different visual field locations. The retinotopic maps were based on weighted averaging of the GLM parameter estimates for the stimulus regions. In addition to localizing visual areas, both methods could be used to localize multiple retinotopic regions-of-interest. The two methods yielded consistent retinotopic maps in the visual areas V1, V2, V3, hV4, and V3AB. In the higher-level areas IPS0, VO1, LO1, LO2, TO1, and TO2, retinotopy could only be mapped with the blocked stimulus presentation. The gradual widening of spatial tuning and an increase in the responses to stimuli in the ipsilateral visual field along the hierarchy of visual areas likely reflected the increase in the average receptive field size. Finally, after registration to Freesurfer's surface-based atlas of the human cerebral cortex, we calculated the mean and variability of the visual area positions in the spherical surface-based coordinate system and generated probability maps of the visual areas on the average cortical surface. The inter-individual variability in the area locations decreased when the midpoints were calculated along the spherical cortical surface compared with volumetric coordinates. These results can facilitate both analysis of individual functional anatomy and comparisons of visual cortex topology across studies. PMID:22590626

  20. A Lightweight Remote Parallel Visualization Platform for Interactive Massive Time-varying Climate Data Analysis

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhang, T.; Huang, Q.; Liu, Q.

    2014-12-01

    Today's climate datasets feature large volumes, a high degree of spatiotemporal complexity, and fast evolution over time. Because visualizing large-volume distributed climate datasets is computationally intensive, traditional desktop-based visualization applications fail to handle the computational load. Recently, scientists have developed remote visualization techniques to address this computational issue. Remote visualization techniques usually leverage server-side parallel computing capabilities to perform visualization tasks and deliver visualization results to clients over the network. In this research, we aim to build a remote parallel visualization platform for visualizing and analyzing massive climate data. Our visualization platform is built on ParaView, one of the most popular open-source remote visualization and analysis applications. To further enhance the scalability and stability of the platform, we have employed cloud computing techniques to support its deployment. In this platform, all climate datasets are regular grid data stored in NetCDF format. Three types of data access methods are supported: accessing remote datasets provided by OpenDAP servers, accessing datasets hosted on the web visualization server, and accessing local datasets. Regardless of the data access method, all visualization tasks are completed on the server side to reduce the workload of clients. As a proof of concept, we have implemented a set of scientific visualization methods to show the feasibility of the platform. Preliminary results indicate that the framework can address the computational limitations of desktop-based visualization applications.
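
    Of the three data-access paths, the OpenDAP route can be illustrated on the client side with xarray. This only shows remote data access and a quick trends plot, not the ParaView-based server-side pipeline; the URL, the variable name `air`, and the selected coordinates are hypothetical.

    ```python
    import xarray as xr
    import matplotlib.pyplot as plt

    # Hypothetical OpenDAP endpoint serving gridded NetCDF climate data.
    URL = "http://example.org/thredds/dodsC/climate/air_temperature.nc"

    ds = xr.open_dataset(URL)                    # lazy access; data are fetched on demand
    series = ds["air"].sel(lat=40.0, lon=255.0, method="nearest")
    monthly = series.resample(time="1M").mean()  # aggregate to a monthly trends view
    monthly.plot()
    plt.show()
    ```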

  1. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    PubMed

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.

  2. Distance-based microfluidic quantitative detection methods for point-of-care testing.

    PubMed

    Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James

    2016-04-07

    Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.

  3. Terminology model discovery using natural language processing and visualization techniques.

    PubMed

    Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol

    2006-12-01

    Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.

  4. Advances in visual representation of molecular potentials.

    PubMed

    Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen

    2010-06-01

    The recent advances in visual representations of molecular properties in 3D space are summarized, and their applications in molecular modeling studies and rational drug design are introduced. The visual representation methods provide detailed insights into protein-ligand interactions, and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, covering their electrostatic potential, lipophilicity potential and excess chemical potential. The newest application examples of visual representations in structure-based rational drug design are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial charges are discarded, are highly consistent with the results of higher-level quantum chemical methods. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those obtained using traditional empirical methods. The 3D excess chemical potentials, derived from the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, the visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design, as well as other relevant fields in life science.

  5. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    PubMed Central

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of detail in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. In addition, it alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
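
    For reference, classical HE, the baseline that VCEA and CegaHE improve upon, is a short lookup-table construction from the image's cumulative histogram. The gap-adjustment equation itself is not reproduced here; the sketch below covers only the standard HE step, assuming an 8-bit grayscale image.

    ```python
    import numpy as np

    def equalize(img):
        """Classical histogram equalization for an 8-bit grayscale image.
        CegaHE additionally adjusts the gaps between adjacent output gray
        levels (not reproduced here) to avoid over-enhancement and feature loss."""
        hist = np.bincount(img.ravel(), minlength=256)          # gray-level histogram
        cdf = np.cumsum(hist).astype(float)
        cdf_min = cdf[cdf > 0].min()
        cdf = (cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12)     # normalize to [0, 1]
        lut = np.round(255 * np.clip(cdf, 0.0, 1.0)).astype(np.uint8)
        return lut[img]                                          # apply the lookup table
    ```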

  6. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background: Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of the complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure, due in part to the low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm that utilizes IVUS signal data and a shape-based nonlinear interpolation. Methods: We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing natural cubic spline interpolation to account for the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results: In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated the robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and a more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. Conclusions: This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
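
    The shape-based step generates intermediary slices with natural cubic splines along the pullback direction. A minimal sketch of that interpolation follows, assuming the vessel contours have already been extracted and resampled so that corresponding points share the same index across slices; the array shapes and the number of intermediate slices are illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def interpolate_slices(contours, n_intermediate=3):
        """Generate intermediary contours between consecutive IVUS slices.

        contours : (S, P, 2) array of S original slices, each with P contour
                   points already resampled to corresponding angular positions.
        Returns an array of shape (S + (S - 1) * n_intermediate, P, 2).
        """
        s = np.arange(contours.shape[0])                       # slice index along the pullback
        spline = CubicSpline(s, contours, axis=0, bc_type="natural")
        s_new = np.linspace(s[0], s[-1], len(s) + (len(s) - 1) * n_intermediate)
        return spline(s_new)
    ```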

  7. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2017-04-25

    A method of generating a data visualization is performed at a computer having a display, one or more processors, and memory. The memory stores one or more programs for execution by the one or more processors. The process receives user specification of a plurality of characteristics of a data visualization. The data visualization is based on data from a multidimensional database. The characteristics specify at least x-position and y-position of data marks corresponding to tuples of data retrieved from the database. The process generates a data visualization according to the specified plurality of characteristics. The data visualization has an x-axis defined based on data for one or more first fields from the database that specify x-position of the data marks and the data visualization has a y-axis defined based on data for one or more second fields from the database that specify y-position of the data marks.

  8. A BHR Composite Network-Based Visualization Method for Deformation Risk Level of Underground Space

    PubMed Central

    Zheng, Wei; Zhang, Xiaoya; Lu, Qi

    2015-01-01

    This study proposes a visualization processing method for the deformation risk level of underground space. The proposed method is based on a BP-Hopfield-RGB (BHR) composite network. Complex environmental factors are integrated in the BP neural network. Dynamic monitoring data are then automatically classified in the Hopfield network. The deformation risk level is combined with the RGB color space model and is displayed visually in real time, after which experiments are conducted with the use of an ultrasonic omnidirectional sensor device for structural deformation monitoring. The proposed method is also compared with some typical methods using a benchmark dataset. Results show that the BHR composite network visualizes the deformation monitoring process in real time and can dynamically indicate dangerous zones. PMID:26011618

  9. Modeling and visualizing borehole information on virtual globes using KML

    NASA Astrophysics Data System (ADS)

    Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing

    2014-01-01

    Advances in virtual globes and Keyhole Markup Language (KML) are providing the Earth scientists with the universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts and tube models representing strata. Subsequently, the level-of-detail based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, is developed to automatically convert borehole data into KML documents. A case study of using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable to visualize, integrate and disseminate borehole information on the Internet. The method we have developed has potential use in societal service of geological information.
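
    The simplest of the borehole models, point placemarks at drilling locations, can be sketched with the third-party simplekml package. The paper's Borehole2KML tool and database schema are not reproduced here; the record fields used below are assumptions, and the strata tubes and contact dots would require additional KML geometry.

    ```python
    import simplekml

    def boreholes_to_kml(boreholes, out_path="boreholes.kml"):
        """Write drilling locations as KML point placemarks.

        `boreholes` is an iterable of dicts with keys 'id', 'lon', 'lat',
        and 'depth' (field names are illustrative, not the paper's schema).
        """
        kml = simplekml.Kml()
        for bh in boreholes:
            point = kml.newpoint(name=str(bh["id"]),
                                 coords=[(bh["lon"], bh["lat"])])   # (lon, lat) order per KML
            point.description = f"Total depth: {bh['depth']} m"
        kml.save(out_path)

    # Example usage with made-up records:
    boreholes_to_kml([{"id": "BH-01", "lon": 121.47, "lat": 31.23, "depth": 55.0}])
    ```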

  10. An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.

    PubMed

    Brito da Silva, Leonardo Enzo; Wunsch, Donald C

    2018-06-01

    Improved data visualization will be a significant tool to enhance cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the IT-vis cluster revealing capabilities, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
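
    The similarity measure named above, Renyi's quadratic cross entropy via the cross information potential (CIP), has a simple Parzen-window estimate. The sketch below applies to two sample sets, such as the data subsets associated with two adjacent SOM neurons; the Gaussian kernel width `sigma` is a free parameter, and this is a generic information-theoretic-learning estimate rather than the paper's exact implementation.

    ```python
    import numpy as np

    def cross_information_potential(x, y, sigma=1.0):
        """Gaussian-kernel estimate of the cross information potential between
        two sample sets x (N, D) and y (M, D):
            CIP = (1 / NM) * sum_i sum_j G(x_i - y_j; 2*sigma^2 I)
        """
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
        dim = x.shape[1]
        norm = (4.0 * np.pi * sigma ** 2) ** (dim / 2.0)       # normalizer of the 2*sigma^2 Gaussian
        return np.exp(-d2 / (4.0 * sigma ** 2)).sum() / (len(x) * len(y) * norm)

    def renyi_quadratic_cross_entropy(x, y, sigma=1.0):
        """Renyi's quadratic cross entropy is the negative log of the CIP."""
        return -np.log(cross_information_potential(x, y, sigma))
    ```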

  11. Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition

    PubMed Central

    Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao

    2017-01-01

    Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Then, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speeded-up robust features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of the HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, the support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental results indicate that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
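
    The first step, turning a 1-D vibration signal into a recurrence plot, is straightforward to sketch: time-delay embed the signal and mark pairs of embedded states that fall within a distance threshold. The embedding dimension, delay, and threshold below are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def recurrence_plot(signal, dim=3, tau=1, eps=None):
        """Build a recurrence plot (binary 2-D image) from a 1-D signal.

        The signal is time-delay embedded with dimension `dim` and delay `tau`;
        R[i, j] = 1 where embedded states i and j are closer than `eps`.
        """
        signal = np.asarray(signal, dtype=float)
        n = len(signal) - (dim - 1) * tau
        emb = np.stack([signal[i * tau: i * tau + n] for i in range(dim)], axis=1)
        dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        if eps is None:
            eps = 0.1 * dist.max()                 # simple default threshold
        return (dist <= eps).astype(np.uint8)
    ```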

  12. Explanatory and illustrative visualization of special and general relativity.

    PubMed

    Weiskopf, Daniel; Borchers, Marc; Ertl, Thomas; Falk, Martin; Fechtig, Oliver; Frank, Regine; Grave, Frank; King, Andreas; Kraus, Ute; Müller, Thomas; Nollert, Hans-Peter; Rica Mendez, Isabel; Ruder, Hanns; Schafhitzel, Tobias; Schär, Sonja; Zahn, Corvin; Zatloukal, Michael

    2006-01-01

    This paper describes methods for explanatory and illustrative visualizations used to communicate aspects of Einstein's theories of special and general relativity, their geometric structure, and of the related fields of cosmology and astrophysics. Our illustrations target a general audience of laypersons interested in relativity. We discuss visualization strategies, motivated by physics education and the didactics of mathematics, and describe what kind of visualization methods have proven to be useful for different types of media, such as still images in popular science magazines, film contributions to TV shows, oral presentations, or interactive museum installations. Our primary approach is to adopt an egocentric point of view: The recipients of a visualization participate in a visually enriched thought experiment that allows them to experience or explore a relativistic scenario. In addition, we often combine egocentric visualizations with more abstract illustrations based on an outside view in order to provide several presentations of the same phenomenon. Although our visualization tools often build upon existing methods and implementations, the underlying techniques have been improved by several novel technical contributions like image-based special relativistic rendering on GPUs, special relativistic 4D ray tracing for accelerating scene objects, an extension of general relativistic ray tracing to manifolds described by multiple charts, GPU-based interactive visualization of gravitational light deflection, as well as planetary terrain rendering. The usefulness and effectiveness of our visualizations are demonstrated by reporting on experiences with, and feedback from, recipients of visualizations and collaborators.

  13. Dictionary Pruning with Visual Word Significance for Medical Image Retrieval

    PubMed Central

    Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G.; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei

    2016-01-01

    Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency. PMID:27688597

  14. Dictionary Pruning with Visual Word Significance for Medical Image Retrieval.

    PubMed

    Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei

    2016-02-12

    Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency.

  15. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  16. Tile-based parallel coordinates and its application in financial visualization

    NASA Astrophysics Data System (ADS)

    Alsakran, Jamal; Zhao, Ye; Zhao, Xinlei

    2010-01-01

    The parallel coordinates technique has been widely used in information visualization applications and has achieved great success in visualizing multivariate data and perceiving their trends. Nevertheless, visual clutter usually weakens or even diminishes its ability when the data size increases. In this paper, we first propose tile-based parallel coordinates, where the plotting area is divided into rectangular tiles. Each tile stores an intersection density that counts the total number of polylines intersecting that tile. The intersection density is then mapped to optical attributes, such as color and opacity, by interactive transfer functions. The method visualizes the polylines efficiently and informatively in accordance with the density distribution, and thus reduces visual clutter and promotes knowledge discovery. The interactivity of our method allows the user to instantaneously manipulate the tile distribution and the transfer functions. Notably, the classic parallel coordinates rendering is a special case of our method in which each tile represents only one pixel. A case study on a real-world data set, U.S. stock mutual fund data of year 2006, is presented to show the capability of our method in visually analyzing financial data. The presented visual analysis was conducted by an expert in the domain of finance. Our method has gained support from professionals in the finance field, who embrace it as a potential investment analysis tool for mutual fund managers, financial planners, and investors.
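
    The core data structure is the per-tile intersection density. The sketch below shows how such a density grid could be accumulated, approximating the line/tile intersection test by densely sampling each polyline segment and counting each polyline at most once per tile; the grid resolution and sampling rate are illustrative, and the transfer-function mapping to color and opacity is not included.

    ```python
    import numpy as np

    def tile_density(data, n_tiles_x=200, n_tiles_y=100):
        """Accumulate polyline-intersection counts on a tile grid.

        `data` is (N, M): N records over M >= 2 axes, each column already
        normalized to [0, 1]. Each record becomes a polyline through M
        equally spaced vertical axes; its segments are sampled densely and
        every tile it passes through is incremented once per polyline.
        """
        n, m = data.shape
        density = np.zeros((n_tiles_y, n_tiles_x), dtype=np.int64)
        xs = np.linspace(0.0, 1.0, m)                              # horizontal axis positions
        samples = max(2, (4 * n_tiles_x) // (m - 1))               # samples per segment
        t = np.linspace(0.0, 1.0, samples)
        for row in data:
            hits = set()
            for a in range(m - 1):
                x = xs[a] + t * (xs[a + 1] - xs[a])
                y = row[a] + t * (row[a + 1] - row[a])
                ix = np.minimum((x * n_tiles_x).astype(int), n_tiles_x - 1)
                iy = np.minimum((y * n_tiles_y).astype(int), n_tiles_y - 1)
                hits.update(zip(iy.tolist(), ix.tolist()))
            for iy_, ix_ in hits:                                  # one count per polyline per tile
                density[iy_, ix_] += 1
        return density
    ```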

  17. Learning semantic and visual similarity for endomicroscopy video retrieval.

    PubMed

    Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2012-06-01

    Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method allows us to statistically improve the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of visual signatures and significantly better than those of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with the low-level visual signatures and much shorter than them. In our resulting retrieval system, we use visual signatures for perceived similarity learning and retrieval, and semantic signatures for the output of additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.

  18. 2011 IEEE Visualization Contest winner: Visualizing unsteady vortical behavior of a centrifugal pump.

    PubMed

    Otto, Mathias; Kuhn, Alexander; Engelke, Wito; Theisel, Holger

    2012-01-01

    In the 2011 IEEE Visualization Contest, the dataset represented a high-resolution simulation of a centrifugal pump operating below optimal speed. The goal was to find suitable visualization techniques to identify regions of rotating stall that impede the pump's effectiveness. The winning entry split analysis of the pump into three parts based on the pump's functional behavior. It then applied local and integration-based methods to communicate the unsteady flow behavior in different regions of the dataset. This research formed the basis for a comparison of common vortex extractors and more recent methods. In particular, integration-based methods (separation measures, accumulated scalar fields, particle path lines, and advection textures) are well suited to capture the complex time-dependent flow behavior. This video (http://youtu.be/oD7QuabY0oU) shows simulations of unsteady flow in a centrifugal pump.

  19. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

    Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.

  20. Visualization of the Mode Shapes of Pressure Oscillation in a Cylindrical Cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Xin; Qi, Yunliang; Wang, Zhi

    Our work describes a novel experimental method to visualize the mode shapes of pressure oscillation in a cylindrical cavity. Acoustic resonance in a cavity is a grand old problem that has been under investigation (using both analytical and numerical methods) for more than a century. In this article, a novel method based on high-speed imaging of combustion chemiluminescence is presented to visualize the mode shapes of pressure oscillation in a cylindrical cavity. By generating high-temperature combustion gases and strong pressure waves simultaneously in a cylindrical cavity, the pressure oscillation can be inferred from the chemiluminescence emissions of the combustion products. The mode shapes are then visualized by reconstructing images based on the amplitudes of the luminosity spectrum at the corresponding resonant frequencies. Up to 11 resonant mode shapes were clearly visualized, each matching the analytical solutions very well.

  1. Multi-scale image segmentation method with visual saliency constraints and its application

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis methods have many advantages over pixel-based methods, so they are one of the current research hotspots. Obtaining image objects through multi-scale image segmentation is essential for object-based image analysis. The currently popular image segmentation methods mainly share a bottom-up segmentation principle, which is simple to implement and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, in information extraction, target recognition and other applications, image targets are not equally important; some specific targets or target groups with particular features deserve more attention than others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and typical feature extraction methods are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weights, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but differ locally are more likely to be assigned to the same object. In addition, owing to the constraint of the visual saliency model, the balance between local and macroscopic characteristics can be well controlled during the segmentation of different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods and gives priority control to the salient objects of interest. The method has been applied to image quality evaluation, scattered residential area extraction, sparse forest extraction and other tasks to verify its validity. All applications showed good results.
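
    As an illustration of how a saliency map can act as a homogeneity weight during merging, the sketch below shows one possible merge-cost formulation in Python; the functional form, the alpha parameter, and the region statistics are illustrative assumptions rather than the authors' exact criterion.

```python
import numpy as np

def merge_cost(mean_a, mean_b, sal_a, sal_b, alpha=0.7):
    """Illustrative merge cost: spectral heterogeneity damped by saliency.

    mean_a, mean_b : mean feature (e.g. RGB) vectors of the two regions.
    sal_a, sal_b   : mean saliency of the two regions, in [0, 1].
    In salient areas the heterogeneity term is down-weighted, so locally
    different pixels of one salient object are more likely to merge, while
    low-saliency background keeps (almost) the plain spectral cost.
    """
    hetero = np.linalg.norm(np.asarray(mean_a) - np.asarray(mean_b))
    saliency = 0.5 * (sal_a + sal_b)
    return hetero * (1.0 - alpha * saliency)

# Two locally different regions merge cheaply inside a salient object,
# but keep a higher cost in low-saliency background.
print(merge_cost([0.8, 0.2, 0.2], [0.6, 0.3, 0.3], sal_a=0.90, sal_b=0.95))
print(merge_cost([0.8, 0.2, 0.2], [0.6, 0.3, 0.3], sal_a=0.05, sal_b=0.10))
```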

  2. A neural-visualization IDS for honeynet data.

    PubMed

    Herrero, Álvaro; Zurutuza, Urko; Corchado, Emilio

    2012-04-01

    Neural intelligent systems can provide a visualization of the network traffic for security staff, in order to reduce the widely known high false-positive rate associated with misuse-based Intrusion Detection Systems (IDSs). Unlike previous work, this study proposes unsupervised neural models that generate an intuitive visualization of the captured traffic, rather than network statistics. These snapshots of network events are immensely useful for security personnel who monitor network behavior. The system is based on the use of different neural projection and unsupervised methods for the visual inspection of honeypot data, and may be seen as a complementary network security tool that sheds light on internal data structures through visual inspection of the traffic itself. Furthermore, it is intended to facilitate verification and assessment of Snort performance (a well-known and widely used misuse-based IDS) through the visualization of attack patterns. Empirical verification and comparison of the proposed projection methods are performed in a real domain, where two different case studies are defined and analyzed.

  3. An Evaluation of a Computer-Based Training on the Visual Analysis of Single-Subject Data

    ERIC Educational Resources Information Center

    Snyder, Katie

    2013-01-01

    Visual analysis is the primary method of analyzing data in single-subject methodology, which is the predominant research method used in the fields of applied behavior analysis and special education. Previous research on the reliability of visual analysis suggests that judges often disagree about what constitutes an intervention effect. Considering…

  4. A Recommended Engineering Application of the Method for Evaluating the Visual Significance of Reflected Glare.

    ERIC Educational Resources Information Center

    Blackwell, H. Richard

    1963-01-01

    An application method for evaluating the visual significance of reflected glare is described, based upon a number of decisions with respect to the relative importance of various aspects of visual performance. A standardized procedure for evaluating the overall effectiveness of lighting from photometric data on materials or installations is needed…

  5. Introduction to Vector Field Visualization

    NASA Technical Reports Server (NTRS)

    Kao, David; Shen, Han-Wei

    2010-01-01

    Vector field visualization techniques are essential to help us understand the complex dynamics of flow fields. These can be found in a wide range of applications, such as the study of flows around an aircraft, blood flow in the heart chambers, ocean circulation models, and severe weather prediction. The vector fields from these various applications can be visually depicted using a number of techniques, such as particle traces and advecting textures. In this tutorial, we present several fundamental algorithms in flow visualization, including particle integration, particle tracking in time-dependent flows, and seeding strategies. For flows near surfaces, a wide variety of synthetic texture-based algorithms have been developed to depict near-body flow features. The most common approach is based on the Line Integral Convolution (LIC) algorithm. There also exist extensions of LIC that support more flexible texture generation for 3D flow data. This tutorial reviews these algorithms. Tensor fields are found in several real-world applications and also require the aid of visualization to help users understand their data sets. Examples where one can find tensor fields include mechanics, to see how materials respond to external forces; civil engineering and geomechanics of roads and bridges; and the study of neural pathways via diffusion tensor imaging. This tutorial will provide an overview of the different tensor field visualization techniques, discuss basic tensor decompositions, and go into detail on glyph-based, deformation-based, and streamline-based methods. Practical examples will be used when presenting the methods, and applications from some case studies will be used as part of the motivation.
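
    As a companion to the particle-integration part of the tutorial, the following is a minimal sketch of tracing a particle through a steady 2D vector field with classical fourth-order Runge-Kutta; the circular velocity field is a placeholder.

```python
import numpy as np

def velocity(p):
    """Placeholder steady 2D field: circular flow around the origin."""
    x, y = p
    return np.array([-y, x])

def trace_particle(p0, dt=0.01, steps=1000):
    """Integrate a particle path with classical fourth-order Runge-Kutta."""
    path = [np.asarray(p0, dtype=float)]
    for _ in range(steps):
        p = path[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        path.append(p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(path)

path = trace_particle([1.0, 0.0])
print(path[-1])   # stays close to the unit circle for this rotational field
```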

  6. Visual improvement for bad handwriting based on Monte-Carlo method

    NASA Astrophysics Data System (ADS)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2014-03-01

    A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper to enhance the visual quality of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad-handwriting image. In this process, a series of linear image-transformation operators is defined to transform the typeface image so that it approaches the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual quality of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has great potential to be applied in tablet computers and the mobile Internet in order to improve the user experience of handwriting.
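
    The following sketch illustrates the general idea under simplifying assumptions: rotation and shift stand in for the paper's linear operators, the parameters are drawn by simple Monte Carlo random search, and pixel-wise mean squared error is used as the matching objective.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def fit_transform(typeface, handwriting, trials=500, seed=0):
    """Monte Carlo search for a rotation/shift mapping typeface toward handwriting."""
    rng = np.random.default_rng(seed)
    best_err, best_params = np.inf, None
    for _ in range(trials):
        angle = rng.uniform(-15, 15)             # degrees
        dy, dx = rng.uniform(-5, 5, size=2)      # pixels
        candidate = shift(rotate(typeface, angle, reshape=False, order=1),
                          (dy, dx), order=1)
        err = np.mean((candidate - handwriting) ** 2)   # pixel-wise MSE objective
        if err < best_err:
            best_err, best_params = err, (angle, dy, dx)
    return best_params, best_err

# Toy usage with random images of the same size (stand-ins for glyph images).
rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = shift(rotate(a, 5, reshape=False, order=1), (2, -1), order=1)
print(fit_transform(a, b, trials=200))
```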

  7. [Use of liquid crystal eyeglasses for examination and recovery of binocular vision].

    PubMed

    Grigorian, A Iu; Avetisov, E S; Kashchenko, T P; Iachmeneva, E I

    1999-01-01

    A new method for the diploptic treatment of strabismus is proposed, based on phase division of the visual fields using a liquid crystal eyeglasses-computer complex. The method is based on stereovision training (allowing stereo-threshold measurements up to 150 angular seconds). The method was tried in examinations of two groups of children: 10 controls and 74 patients with strabismus. Examinations of the normal controls provided new criteria for measuring fusion reserves and stereovisual acuity with the proposed method. The therapeutic method was tried in two groups of patients. The time course of visual function improvement was followed up using several criteria: changes in binocular status on the color test and improvement of in-depth and stereoscopic visual acuity. The method is recommended for practice. The authors discuss the problem of small-angle strabismus.

  8. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system, based on the so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, utilizes the logarithmical image visualization technique coupled with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database and the AT&T database are used to test the accuracy and efficiency of the method in computer simulations. The extensive computer simulations demonstrate the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
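
    A minimal sketch of the general pipeline is shown below, assuming a generic logarithmic mapping as a stand-in for the authors' logarithmical image visualization technique, followed by uniform local binary pattern (LBP) histograms as the discriminative features.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def log_lbp_features(gray, P=8, R=1.0, bins=10):
    """Log-domain preprocessing + uniform LBP histogram as a face descriptor."""
    gray = np.asarray(gray, dtype=float)
    # Generic log mapping to compress the illumination range (a stand-in for
    # the paper's logarithmical image visualization technique).
    log_img = np.log1p(gray)
    log_img = (log_img - log_img.min()) / (np.ptp(log_img) + 1e-12)
    codes = local_binary_pattern(log_img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
    return hist

rng = np.random.default_rng(0)
print(log_lbp_features(rng.integers(0, 256, (64, 64))))
```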

  9. Infrared and visible image fusion based on visual saliency map and weighted least square optimization

    NASA Astrophysics Data System (ADS)

    Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua

    2017-05-01

    The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
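
    A minimal sketch of the overall pipeline is given below under simplifying assumptions: a Gaussian filter stands in for the rolling guidance filter, a center-surround difference stands in for the paper's visual saliency map, and a max-absolute rule replaces the WLS detail optimization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_saliency(img, sigma_small=2, sigma_large=16):
    """Center-surround difference as a crude visual saliency map."""
    s = np.abs(gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large))
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(ir, vis, sigma=5):
    """Base/detail decomposition with saliency-weighted base fusion."""
    base_ir, base_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    w_ir, w_vis = simple_saliency(ir), simple_saliency(vis)
    w = w_ir / (w_ir + w_vis + 1e-12)                 # normalized saliency weights
    fused_base = w * base_ir + (1.0 - w) * base_vis
    fused_detail = np.where(np.abs(det_ir) > np.abs(det_vis), det_ir, det_vis)
    return fused_base + fused_detail

rng = np.random.default_rng(0)
print(fuse(rng.random((128, 128)), rng.random((128, 128))).shape)
```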

  10. Visual Fast Mapping in School-Aged Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Alt, Mary

    2013-01-01

    Purpose: To determine whether children with specific language impairment (SLI) demonstrate impaired visual fast mapping skills compared with unimpaired peers and to test components of visual working memory that may contribute to a visual working memory deficit. Methods: Fifty children (25 SLI) played 2 computer-based visual fast mapping games…

  11. A 3D particle visualization system for temperature management

    NASA Astrophysics Data System (ADS)

    Lange, B.; Rodriguez, N.; Puech, W.; Rey, H.; Vasques, X.

    2011-01-01

    This paper deals with a 3D visualization technique proposed to analyze and manage energy efficiency in a data center. Data are extracted from sensors located in the IBM Green Data Center in Montpellier, France. These sensors measure different quantities such as hygrometry, pressure and temperature. We want to visualize in real time the large amount of data produced by these sensors. A visualization engine has been designed, based on a particle system and a client-server paradigm. In order to solve performance problems, a Level of Detail (LOD) solution has been developed, based on the earlier work introduced by J. Clark in 1976. In this paper we introduce the particle method used for this work and subsequently explain the different simplification methods applied to improve our solution.

  12. A novel wide-field-of-view display method with higher central resolution for hyper-realistic head dome projector

    NASA Astrophysics Data System (ADS)

    Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko

    2007-02-01

    In this paper, we propose a novel display method to realize a high-resolution image in a central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely, high central visual acuity and low peripheral visual acuity, and pixel shift technology, which is one of the resolution-enhancing technologies for projectors. The projected image with our method is a fine wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images with our method in terms of sensation of reality. According to the result, we obtained 1.5 times higher resolution in the central visual field and a greater sensation of reality by using our method.

  13. Effect of higher frequency on the classification of steady-state visual evoked potentials

    NASA Astrophysics Data System (ADS)

    Won, Dong-Ok; Hwang, Han-Jeong; Dähne, Sven; Müller, Klaus-Robert; Lee, Seong-Whan

    2016-02-01

    Objective. Most existing brain-computer interface (BCI) designs based on steady-state visual evoked potentials (SSVEPs) primarily use low frequency visual stimuli (e.g., <20 Hz) to elicit relatively high SSVEP amplitudes. While low frequency stimuli could evoke photosensitivity-based epileptic seizures, high frequency stimuli generally show less visual fatigue and no stimulus-related seizures. The fundamental objective of this study was to investigate the effect of stimulation frequency and duty-cycle on the usability of an SSVEP-based BCI system. Approach. We developed an SSVEP-based BCI speller using multiple LEDs flickering with low frequencies (6-14.9 Hz) with a duty-cycle of 50%, or higher frequencies (26-34.7 Hz) with duty-cycles of 50%, 60%, and 70%. The four different experimental conditions were tested with 26 subjects in order to investigate the impact of stimulation frequency and duty-cycle on performance and visual fatigue, and evaluated with a questionnaire survey. Resting state alpha powers were utilized to interpret our results from the neurophysiological point of view. Main results. The stimulation method employing higher frequencies not only showed less visual fatigue, but it also showed higher and more stable classification performance compared to that employing relatively lower frequencies. Different duty-cycles in the higher frequency stimulation conditions did not significantly affect visual fatigue, but a duty-cycle of 50% was a better choice with respect to performance. The performance of the higher frequency stimulation method was also less susceptible to resting state alpha powers, while that of the lower frequency stimulation method was negatively correlated with alpha powers. Significance. These results suggest that the use of higher frequency visual stimuli is more beneficial for performance improvement and stability as time passes when developing practical SSVEP-based BCI applications.

  14. Effect of higher frequency on the classification of steady-state visual evoked potentials.

    PubMed

    Won, Dong-Ok; Hwang, Han-Jeong; Dähne, Sven; Müller, Klaus-Robert; Lee, Seong-Whan

    2016-02-01

    Most existing brain-computer interface (BCI) designs based on steady-state visual evoked potentials (SSVEPs) primarily use low frequency visual stimuli (e.g., <20 Hz) to elicit relatively high SSVEP amplitudes. While low frequency stimuli could evoke photosensitivity-based epileptic seizures, high frequency stimuli generally show less visual fatigue and no stimulus-related seizures. The fundamental objective of this study was to investigate the effect of stimulation frequency and duty-cycle on the usability of an SSVEP-based BCI system. We developed an SSVEP-based BCI speller using multiple LEDs flickering with low frequencies (6-14.9 Hz) with a duty-cycle of 50%, or higher frequencies (26-34.7 Hz) with duty-cycles of 50%, 60%, and 70%. The four different experimental conditions were tested with 26 subjects in order to investigate the impact of stimulation frequency and duty-cycle on performance and visual fatigue, and evaluated with a questionnaire survey. Resting state alpha powers were utilized to interpret our results from the neurophysiological point of view. The stimulation method employing higher frequencies not only showed less visual fatigue, but it also showed higher and more stable classification performance compared to that employing relatively lower frequencies. Different duty-cycles in the higher frequency stimulation conditions did not significantly affect visual fatigue, but a duty-cycle of 50% was a better choice with respect to performance. The performance of the higher frequency stimulation method was also less susceptible to resting state alpha powers, while that of the lower frequency stimulation method was negatively correlated with alpha powers. These results suggest that the use of higher frequency visual stimuli is more beneficial for performance improvement and stability as time passes when developing practical SSVEP-based BCI applications.

  15. Appraising the reliability of visual impact assessment methods

    Treesearch

    Nickolaus R. Feimer; Kenneth H. Craik; Richard C. Smardon; Stephen R.J. Sheppard

    1979-01-01

    This paper presents the research approach and selected results of an empirical investigation aimed at the evaluation of selected observer-based visual impact assessment (VIA) methods. The VIA methods under examination were chosen to cover a range of VIA methods currently in use in both applied and research settings. Variation in three facets of VIA methods were...

  16. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting.

    PubMed

    Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin

    2011-11-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights.
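
    The reconstruction-weight step can be sketched as a small constrained least-squares problem. The example below solves for non-negative weights that sum to one over a descriptor's nearest visual words, using SciPy's SLSQP solver as a stand-in for a dedicated QP solver; the vocabulary and descriptor are random placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def reconstruction_weights(descriptor, words):
    """Weights w >= 0 with sum(w) = 1 minimizing ||descriptor - words.T @ w||^2.

    words: (k, d) array of the k nearest visual words (each row is a word).
    """
    k = words.shape[0]

    def objective(w):
        return np.sum((descriptor - words.T @ w) ** 2)

    res = minimize(objective,
                   x0=np.full(k, 1.0 / k),
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
    return res.x

rng = np.random.default_rng(0)
vocab = rng.random((5, 128))               # 5 nearest visual words, 128-D (e.g. SIFT)
desc = 0.6 * vocab[0] + 0.4 * vocab[2]     # descriptor lying between two words
print(np.round(reconstruction_weights(desc, vocab), 3))
```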

  17. News video story segmentation method using fusion of audio-visual features

    NASA Astrophysics Data System (ADS)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Unlike prior works, which are based on visual features, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio candidate points, and shot boundaries and anchor shots as two kinds of visual candidate points. Audio candidates are then used as cues, and fusion methods that effectively use the diverse visual candidates to refine the audio candidates are developed to obtain story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.

  18. Visualization techniques for tongue analysis in traditional Chinese medicine

    NASA Astrophysics Data System (ADS)

    Pham, Binh L.; Cai, Yang

    2004-05-01

    Visual inspection of the tongue has been an important diagnostic method of Traditional Chinese Medicine (TCM). Clinical data have shown significant connections between various visceral cancers and abnormalities in the tongue and the tongue coating. Visual inspection of the tongue is simple and inexpensive, but the current practice in TCM is mainly experience-based and the quality of the visual inspection varies between individuals. The computerized inspection method provides quantitative models to evaluate color, texture and surface features on the tongue. In this paper, we investigate visualization techniques and processes that allow interactive data analysis, with the aim of merging computerized measurements with human experts' diagnostic variables based on five-scale diagnostic conditions: Healthy (H), History of Cancers (HC), History of Polyps (HP), Polyps (P) and Colon Cancer (C).

  19. Remote sensing image ship target detection method based on visual attention model

    NASA Astrophysics Data System (ADS)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    Traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. Considering this, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, thereby improving the detection efficiency of ship targets in remote sensing images.
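
    For illustration, the sketch below computes a generic bottom-up saliency map (in the spirit of the spectral residual approach) and shows how a bright anomalous region stands out from a homogeneous sea background; it is not the authors' specific visual attention model.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray, smooth_sigma=3):
    """Spectral-residual style saliency map for a grayscale image."""
    spectrum = np.fft.fft2(gray)
    log_amp = np.log(np.abs(spectrum) + 1e-12)
    phase = np.angle(spectrum)
    residual = log_amp - uniform_filter(log_amp, size=3)   # remove the smooth trend
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, smooth_sigma)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)

rng = np.random.default_rng(0)
sea = rng.normal(0.5, 0.02, (256, 256))
sea[100:110, 120:160] = 1.0                    # bright "ship" on a calm background
sal = spectral_residual_saliency(sea)
# The candidate region typically scores well above the background average.
print(float(sal[100:110, 120:160].mean()), float(sal.mean()))
```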

  20. Visual navigation using edge curve matching for pinpoint planetary landing

    NASA Astrophysics Data System (ADS)

    Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei

    2018-05-01

    Pinpoint landing is challenging for future Mars and asteroid exploration missions. A vision-based navigation scheme based on feature detection and matching is practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and rely on poor-performance measurements, which poses great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of the craters tracked across two sequential images are utilized to determine the relative attitude and position of the lander through a normalized method. Then, considering the error accumulation of relative navigation, a method is developed that integrates the crater-based relative navigation with a crater-based absolute navigation method, which identifies craters using a georeferenced database, for continuous estimation of the absolute states. In addition, expressions for the relative state estimate bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance, and these hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.

  1. Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator

    NASA Astrophysics Data System (ADS)

    Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi

    Human motor control is achieved by appropriate motor commands generated by the central nervous system. A visual target tracking test is an effective method for analyzing human motor function. We previously examined, in a simulation study, the possibility of improving hand movement on visual target tracking by an additional assistant force. In this study, a method for compensating human hand movement on visual target tracking by adding an assistant force is proposed. The effectiveness of the compensation method was investigated through experiments with four healthy adults. The proposed compensator precisely improved the reaction time, the position error and the variability of the velocity of the human hand. The model-based compensator proposed in this study is constructed using measurement data on visual target tracking for each subject, so the properties of the hand movement of different subjects can be reflected in the structure of the compensator. Therefore, the proposed method has the potential to accommodate the individual properties of patients with various movement disorders caused by brain dysfunction.

  2. Analysis of visual quality improvements provided by known tools for HDR content

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Alshina, Elena; Lee, JongSeok; Park, Youngo; Choi, Kwang Pyo

    2016-09-01

    In this paper, the visual quality of different solutions for high dynamic range (HDR) compression is analyzed using MPEG test contents. We also simulate a method for efficient HDR compression that is based on the statistical properties of the signal. The method is compliant with the HEVC specification and is also easily compatible with alternative methods that might require HEVC specification changes. It was subjectively tested on commercial TVs and compared with alternative solutions for HDR coding. Subjective visual quality tests were performed on a SUHD TV (Samsung JS9500) with maximum luminance of up to 1000 nits. The statistics-based solution shows improvements in both objective performance and visual quality compared to other HDR solutions, while remaining compatible with the HEVC specification.

  3. StreamExplorer: A Multi-Stage System for Visually Exploring Events in Social Streams.

    PubMed

    Wu, Yingcai; Chen, Zhutian; Sun, Guodao; Xie, Xiao; Cao, Nan; Liu, Shixia; Cui, Weiwei

    2017-10-18

    Analyzing social streams is important for many applications, such as crisis management. However, the considerable diversity, increasing volume, and high dynamics of social streams of large events continue to be significant challenges that must be overcome to ensure effective exploration. We propose a novel framework by which to handle complex social streams on a budget PC. This framework features two components: 1) an online method to detect important time periods (i.e., subevents), and 2) a tailored GPU-assisted Self-Organizing Map (SOM) method, which clusters the tweets of subevents stably and efficiently. Based on the framework, we present StreamExplorer to facilitate the visual analysis, tracking, and comparison of a social stream at three levels. At a macroscopic level, StreamExplorer uses a new glyph-based timeline visualization, which presents a quick multi-faceted overview of the ebb and flow of a social stream. At a mesoscopic level, a map visualization is employed to visually summarize the social stream from either a topical or geographical aspect. At a microscopic level, users can employ interactive lenses to visually examine and explore the social stream from different perspectives. Two case studies and a task-based evaluation are used to demonstrate the effectiveness and usefulness of StreamExplorer.
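
    A minimal CPU-only sketch of the Self-Organizing Map component is given below for clustering subevent feature vectors (e.g., tweet tf-idf vectors); the GPU acceleration and the stabilized training used by StreamExplorer are not reproduced, and the grid size and learning schedule are placeholder assumptions.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal online SOM: returns a (rows, cols, dim) grid of prototype vectors."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    yy, xx = np.mgrid[0:rows, 0:cols]
    coords = np.stack([yy, xx], axis=-1).astype(float)    # grid positions of the units
    for t in range(iters):
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac)                           # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3              # shrinking neighbourhood
        grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_d2 / (2.0 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)                 # pull neighbourhood toward x
    return weights

def assign(weights, x):
    """Grid cell (cluster) of the best-matching unit for a sample x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

rng = np.random.default_rng(1)
tweets = rng.random((300, 32))            # stand-in for tweet feature vectors
som = train_som(tweets)
print(assign(som, tweets[0]))             # e.g. (3, 5): the tweet's map cell
```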

  4. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

    The construction of 3D geological visualization systems has attracted growing attention in the GIS, computer modeling, simulation and visualization fields. Such systems not only effectively support geological interpretation and analysis, but also help improve professional geosciences education. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim of this paper is to explore a rapid and low-cost development method for constructing such a system. First, the borehole data stored in Excel spreadsheets were extracted and then stored in a SQL Server database on a web server. Second, the JDBC data access component was utilized to provide access to the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Last, borehole data acquired from a geological survey were used to test the system, and the results show that the methods presented here have practical application value.

  5. Visual detection of glial cell line-derived neurotrophic factor based on a molecular translator and isothermal strand-displacement polymerization reaction.

    PubMed

    Zhang, Li-Yong; Xing, Tao; Du, Li-Xin; Li, Qing-Min; Liu, Wei-Dong; Wang, Ji-Yue; Cai, Jing

    2015-01-01

    Glial cell line-derived neurotrophic factor (GDNF) is a small protein that potently promotes the survival of many types of neurons. Detection of GDNF is vital for monitoring the survival of sympathetic and sensory neurons. However, a specific method for GDNF detection has not yet been established. The purpose of this study is to explore a method for detecting the GDNF protein. A novel visual detection method based on a molecular translator and an isothermal strand-displacement polymerization reaction (ISDPR) has been proposed for the detection of GDNF. In this study, a molecular translator was employed to convert the input protein to an output deoxyribonucleic acid signal, which was further amplified by ISDPR. The product of ISDPR was detected by a lateral flow biosensor within 30 minutes. This novel visual detection method based on a molecular translator and ISDPR has very high sensitivity and selectivity, with a dynamic response ranging from 1 pg/mL to 10 ng/mL and a detection limit of 1 pg/mL of GDNF. The method is very simple and universal for GDNF detection and could help disease therapy in clinical practice.

  6. Visual homing with a pan-tilt based stereo camera

    NASA Astrophysics Data System (ADS)

    Nirmal, Paramesh; Lyons, Damian M.

    2013-01-01

    Visual homing is a navigation method based on comparing a stored image of the goal location and the current image (current view) to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale change information (Homing in Scale Space, HiSS) from SIFT. HiSS uses SIFT feature scale change information to determine distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the keypoint vector described in to include a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.

  7. Method for evaluation of human induced pluripotent stem cell quality using image analysis based on the biological morphology of cells.

    PubMed

    Wakui, Takashi; Matsumoto, Tsuyoshi; Matsubara, Kenta; Kawasaki, Tomoyuki; Yamaguchi, Hiroshi; Akutsu, Hidenori

    2017-10-01

    We propose an image analysis method for quality evaluation of human pluripotent stem cells based on biologically interpretable features. It is important to maintain the undifferentiated state of induced pluripotent stem cells (iPSCs) while culturing the cells during propagation. Cell culture experts visually select good quality cells exhibiting the morphological features characteristic of undifferentiated cells. Experts have empirically determined that these features comprise prominent and abundant nucleoli, less intercellular spacing, and fewer differentiating cellular nuclei. We quantified these features based on experts' visual inspection of phase contrast images of iPSCs and found that these features are effective for evaluating iPSC quality. We then developed an iPSC quality evaluation method using an image analysis technique. The method allowed accurate classification, equivalent to visual inspection by experts, of three iPSC cell lines.

  8. Extraction of composite visual objects from audiovisual materials

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Thienot, Cedric; Faudemay, Pascal

    1999-08-01

    An effective analysis of Visual Objects appearing in still images and video frames is required in order to offer fine grain access to multimedia and audiovisual contents. In previous papers, we showed how our method for segmenting still images into visual objects could improve content-based image retrieval and video analysis methods. Visual Objects are used in particular for extracting semantic knowledge about the contents. However, low-level segmentation methods for still images are not likely to extract a complex object as a whole but instead as a set of several sub-objects. For example, a person would be segmented into three visual objects: a face, hair, and a body. In this paper, we introduce the concept of Composite Visual Object. Such an object is hierarchically composed of sub-objects called Component Objects.

  9. Determining the Appropriateness of Visual Based Activities in the Primary School Books for Low Vision Students

    ERIC Educational Resources Information Center

    Cakmak, Salih; Yilmaz, Hatice Cansu; Isitan, Hacer Damlanur

    2017-01-01

    The general aim of this research is to determine the appropriateness of the visuals in primary school Turkish workbooks for students with low vision in terms of visual design elements. The document review method was used in this work. In this study, purposive sampling method was used in the selection of…

  10. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in a 2-dimensional space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
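
    A minimal sketch of the two-stage idea follows: score the features (an ANOVA F-score is used here as a simple stand-in for the paper's feature subset score criterion), keep the top-scoring subset, and embed it with t-SNE; the data are synthetic placeholders.

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.manifold import TSNE

def scored_tsne(X, y, keep=10, random_state=0):
    """Select the 'keep' highest-scoring features, then embed them in 2-D."""
    scores, _ = f_classif(X, y)                  # stand-in for the feature subset score
    top = np.argsort(scores)[::-1][:keep]
    emb = TSNE(n_components=2, random_state=random_state,
               perplexity=20).fit_transform(X[:, top])
    return emb, top

rng = np.random.default_rng(0)
y = np.repeat([0, 1, 2], 60)                     # three malfunction classes
X = rng.normal(size=(180, 40))
X[:, :5] += y[:, None]                           # only the first 5 features are informative
emb, top = scored_tsne(X, y)
print(emb.shape, sorted(top))                    # informative features dominate the subset
```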

  11. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in a 2-dimensional space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  12. A Principled Way of Assessing Visualization Literacy.

    PubMed

    Boy, Jeremy; Rensink, Ronald A; Bertini, Enrico; Fekete, Jean-Daniel

    2014-12-01

    We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e.g., to avoid confounds), for design (e.g., to best determine the capabilities of an audience), for teaching (e.g., to assess the level of new students), and for recruiting (e.g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use.
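
    The Item Response Theory machinery behind such a test can be sketched compactly: a two-parameter logistic (2PL) item model and a maximum-likelihood estimate of a respondent's ability from their answers. The item parameters below are made-up placeholders, not values from the published VL tests.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, a, b):
    """2PL item response function: P(correct | ability theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_ability(responses, a, b):
    """Maximum-likelihood ability estimate given 0/1 responses and item parameters."""
    responses, a, b = map(np.asarray, (responses, a, b))

    def neg_log_lik(theta):
        p = np.clip(p_correct(theta, a, b), 1e-9, 1 - 1e-9)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

# Hypothetical 5-item VL test: discriminations a, difficulties b, one respondent.
a = [1.2, 0.8, 1.5, 1.0, 0.9]
b = [-1.0, -0.3, 0.2, 0.8, 1.5]
print(round(estimate_ability([1, 1, 1, 0, 0], a, b), 2))
```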

  13. An adaptive block-based fusion method with LUE-SSIM for multi-focus images

    NASA Astrophysics Data System (ADS)

    Zheng, Jianing; Guo, Yongcai; Huang, Yukun

    2016-09-01

    Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem. Block-based multi-focus image fusion methods, however, often suffer from blocking artifacts. To address this, an adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed; it utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm that uses LUE-SSIM as the objective function is applied to optimize the block size used to construct the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus-blurred images. In addition, multi-focus image fusion experiments are carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than other block-based methods, especially in reducing blocking artifacts in the fused image, and it effectively preserves the undistorted edge details in the in-focus regions of the source images.

  14. Investigating Surface and Near-Surface Bushfire Fuel Attributes: A Comparison between Visual Assessments and Image-Based Point Clouds.

    PubMed

    Spits, Christine; Wallace, Luke; Reinke, Karin

    2017-04-20

    Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was observed within point cloud estimates but was, however, on average two to eight times less than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data is essential.

  15. A novel genome signature based on inter-nucleotide distances profiles for visualization of metagenomic data

    NASA Astrophysics Data System (ADS)

    Xie, Xian-Hua; Yu, Zu-Guo; Ma, Yuan-Lin; Han, Guo-Sheng; Anh, Vo

    2017-09-01

    There has been growing interest in the visualization of metagenomic data. The present study focuses on the visualization of metagenomic data using inter-nucleotide distance profiles. We first convert the fragment sequences into inter-nucleotide distance profiles. Then we analyze these profiles by principal component analysis. Finally, the principal components are used to obtain a 2-D scatter plot in which fragments are grouped according to their source species. We name our method the inter-nucleotide distance profile (INP) method. Our method is evaluated on three benchmark data sets used in previously published papers. Our results demonstrate that the INP method is a good, efficient alternative for the visualization of metagenomic data.
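
    A minimal sketch of the pipeline is shown below: for each nucleotide, compute the distances between its successive occurrences, summarize them into a fixed-length vector, and project the vectors with PCA; the summary statistics used here are a simplification of the published profile definition, and the fragments are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA

def inp_profile(seq):
    """Per-nucleotide summary of inter-occurrence distances (simplified profile)."""
    feats = []
    for base in "ACGT":
        pos = np.array([i for i, ch in enumerate(seq) if ch == base])
        d = np.diff(pos) if len(pos) > 1 else np.array([len(seq)])
        feats.extend([d.mean(), d.std()])
    return np.array(feats)

def embed_fragments(fragments):
    """Stack the profiles and reduce them to 2-D for a scatter plot."""
    profiles = np.array([inp_profile(f) for f in fragments])
    return PCA(n_components=2).fit_transform(profiles)

rng = np.random.default_rng(0)
frags = ["".join(rng.choice(list("ACGT"), size=300, p=p))
         for p in ([0.4, 0.1, 0.1, 0.4], [0.25] * 4) for _ in range(20)]
print(embed_fragments(frags).shape)   # (40, 2): AT-rich vs uniform fragments separate
```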

  16. Sampling methods, dispersion patterns, and fixed precision sequential sampling plans for western flower thrips (Thysanoptera: Thripidae) and cotton fleahoppers (Hemiptera: Miridae) in cotton.

    PubMed

    Parajulee, M N; Shrestha, R B; Leser, J F

    2006-04-01

    A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but the washing technique detected significantly higher number of thrips larvae compared with the visual sampling. Visual sampling detected the highest number of fleahoppers followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between vacuum and sweep net methods. However, based on fixed precision cost reliability, the sweep net sampling was the most cost-effective method followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's Power Law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decision based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.

  17. Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity.

    PubMed

    Napoletano, Paolo; Piccoli, Flavio; Schettini, Raimondo

    2018-01-12

    Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art.
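
    The scoring step can be sketched independently of the network: assuming CNN features for the test subregions and for a dictionary of anomaly-free subregions have already been extracted, the degree of abnormality is taken here as one minus the best cosine similarity to the dictionary; the feature dimensions and data are placeholder assumptions.

```python
import numpy as np

def abnormality_scores(test_feats, normal_dict):
    """1 - max cosine similarity of each test feature to the anomaly-free dictionary."""
    t = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    d = normal_dict / np.linalg.norm(normal_dict, axis=1, keepdims=True)
    sims = t @ d.T                        # pairwise cosine similarities
    return 1.0 - sims.max(axis=1)

rng = np.random.default_rng(0)
normal_dict = rng.normal(size=(200, 512))                  # anomaly-free subregion features
ok = normal_dict[:5] + 0.05 * rng.normal(size=(5, 512))    # near the dictionary -> low scores
odd = rng.normal(size=(5, 512))                            # unrelated -> high scores
print(abnormality_scores(ok, normal_dict).round(2))
print(abnormality_scores(odd, normal_dict).round(2))
```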

  18. Accuracy and efficiency of computer-aided anatomical analysis using 3D visualization software based on semi-automated and automated segmentations.

    PubMed

    An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang

    2017-03-01

    We investigated and compared the functionality of two 3D visualization software packages, provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as a baseline, we evaluated the accuracy of the 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT-vendor-provided 3D visualization workstation (Syngo) and third-party 3D visualization software (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentation was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in the anatomical data measured by the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software from a third-party vendor possessed the functionality, efficiency and accuracy needed for computer-aided anatomical analysis.

  19. Visual detection of glial cell line-derived neurotrophic factor based on a molecular translator and isothermal strand-displacement polymerization reaction

    PubMed Central

    Zhang, Li-Yong; Xing, Tao; Du, Li-Xin; Li, Qing-Min; Liu, Wei-Dong; Wang, Ji-Yue; Cai, Jing

    2015-01-01

    Background: Glial cell line-derived neurotrophic factor (GDNF) is a small protein that potently promotes the survival of many types of neurons. Detection of GDNF is vital for monitoring the survival of sympathetic and sensory neurons. However, a specific method for GDNF detection has not yet been established. The purpose of this study is to explore a method for detecting the GDNF protein. Methods: A novel visual detection method based on a molecular translator and an isothermal strand-displacement polymerization reaction (ISDPR) has been proposed for the detection of GDNF. In this study, a molecular translator was employed to convert the input protein to an output deoxyribonucleic acid signal, which was further amplified by ISDPR. The product of ISDPR was detected by a lateral flow biosensor within 30 minutes. Results: This novel visual detection method based on a molecular translator and ISDPR has very high sensitivity and selectivity, with a dynamic response ranging from 1 pg/mL to 10 ng/mL and a detection limit of 1 pg/mL of GDNF. Conclusion: This novel visual detection method exhibits high sensitivity and selectivity, is very simple, and is universal for GDNF detection to help disease therapy in clinical practice. PMID:25848224

  20. Two-Way Gene Interaction From Microarray Data Based on Correlation Methods.

    PubMed

    Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh

    2016-06-01

Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a Gene Co-expression Network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. In the foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman's rank correlation coefficient and Blomqvist's measure, and compared them with Pearson's correlation coefficient. We surveyed six genes of venous thrombosis disease, made a matrix entry representing the score for the corresponding gene pair, and obtained two-way interactions using Pearson's correlation, Spearman's rank correlation, and Blomqvist's coefficient. Finally, these methods were compared with visual methods based on Cytoscape (using BIND) and Gene Ontology (using molecular function); R software version 3.2 and Bioconductor were used to perform the analyses. Based on the Pearson and Spearman correlations, the results were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist's coefficient was not confirmed by the visual methods. Some of the correlation-based results do not agree with the visualization methods, possibly because of the small number of data points.
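
    As a concrete illustration of the pair-scoring and thresholding steps described above (a sketch under assumptions, not the authors' R/Bioconductor code), the fragment below builds the edge list of a co-expression network from Pearson or Spearman correlations; the gene names, data and threshold are made up.

    ```python
    # Illustrative sketch: score all gene pairs and keep edges above a threshold.
    import itertools
    import numpy as np
    from scipy import stats

    def correlation_network(expr, gene_names, method="pearson", threshold=0.7):
        """expr: genes x samples matrix; returns (gene_i, gene_j, score) edges."""
        edges = []
        for i, j in itertools.combinations(range(len(gene_names)), 2):
            if method == "pearson":
                score, _ = stats.pearsonr(expr[i], expr[j])
            else:
                score, _ = stats.spearmanr(expr[i], expr[j])
            if abs(score) >= threshold:
                edges.append((gene_names[i], gene_names[j], score))
        return edges

    # Toy example with six hypothetical genes and eight samples.
    rng = np.random.default_rng(0)
    expr = rng.normal(size=(6, 8))
    genes = ["G1", "G2", "G3", "G4", "G5", "G6"]
    print(correlation_network(expr, genes, method="spearman", threshold=0.6))
    ```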

  1. Research on optimal DEM cell size for 3D visualization of loess terraces

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei

    2009-10-01

In order to represent complex artificial terrains such as the loess terraces of Shanxi Province in northwest China, a new 3D visualization method, the Terraces Elevation Incremental Visual Method (TEIVM), is put forward by the authors. A total of 406 elevation points and 14 enclosed constrained lines are sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs are converted to a Grid-based DEM (G-DEM) using different combinations of cell size and EIV with a linear interpolation technique, the Bilinear Interpolation Method (BIM). Our case study shows that the new method can visualize the loess terrace steps very well when the combination of cell size and EIV is reasonable. The optimal combination is a cell size of 1 m and an EIV of 6 m. The results also show that the cell size should be smaller than half of both the average terrace width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the average terrace height. The TEIVM and the results above are of great significance for the highly refined visualization of artificial terrains such as loess terraces.

  2. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues, but also contextually relevant visual features are proportionally incorporated in VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires transformation of a real-valued visual feature vector (e.g., color histogram, Gabor texture, etc.) into a discrete event (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering, and TSVQ are involved in transforming feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since the sparseness of sample data causes unstable frequency estimates of visual cues. The proposed method naturally allows the integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework, and can be easily integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
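
    The transformation of real-valued feature vectors into discrete "visual terms" can be sketched roughly as vector quantization followed by term counting. The fragment below is a simplified stand-in (plain k-means on toy features) for the good-features-to-track/TSVQ pipeline named above, not the paper's implementation.

    ```python
    # Hedged sketch: quantize local features into visual terms, count frequencies.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocabulary(training_features, n_terms=64, seed=0):
        """Cluster pooled local features (N x D) into a vocabulary of visual terms."""
        return KMeans(n_clusters=n_terms, n_init=10, random_state=seed).fit(training_features)

    def visual_term_histogram(image_features, vocabulary):
        """Map each local feature of one image to its nearest visual term and count."""
        terms = vocabulary.predict(image_features)
        hist = np.bincount(terms, minlength=vocabulary.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)    # normalized term frequencies

    # Toy usage with random 32-dimensional local features (made-up data).
    rng = np.random.default_rng(1)
    vocab = build_vocabulary(rng.normal(size=(1000, 32)), n_terms=16)
    print(visual_term_histogram(rng.normal(size=(50, 32)), vocab))
    ```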

  3. Knowledge Domain and Emerging Trends in Organic Photovoltaic Technology: A Scientometric Review Based on CiteSpace Analysis.

    PubMed

    Xiao, Fengjun; Li, Chengzhi; Sun, Jiangman; Zhang, Lianjie

    2017-01-01

To study the rapid growth of research on organic photovoltaic (OPV) technology, development trends in the relevant research are analyzed using CiteSpace, a text-mining and visualization tool for scientific literature. With this analytical method, the outputs and cooperation of authors, the hot research topics, the vital references, and the development trend of OPV are identified and visualized. Unlike traditional review articles written by OPV experts, this work provides a new, quantitative method of visualizing information about the development of OPV technology research over the past decade.

  4. Knowledge Domain and Emerging trends in Organic Photovoltaic Technology: A Scientometric Review Based on CiteSpace Analysis

    NASA Astrophysics Data System (ADS)

    Xiao, Fengjun; Li, Chengzhi; Sun, Jiangman; Zhang, Lianjie

    2017-09-01

To study the rapid growth of research on organic photovoltaic (OPV) technology, development trends in the relevant research are analyzed using CiteSpace, a text-mining and visualization tool for scientific literature. With this analytical method, the outputs and cooperation of authors, the hot research topics, the vital references, and the development trend of OPV are identified and visualized. Unlike traditional review articles written by OPV experts, this work provides a new, quantitative method of visualizing information about the development of OPV technology research over the past decade.

  5. Smartphone-Based Escalator Recognition for the Visually Impaired

    PubMed Central

    Nakamura, Daiki; Takizawa, Hotaka; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    It is difficult for visually impaired individuals to recognize escalators in everyday environments. If the individuals ride on escalators in the wrong direction, they will stumble on the steps. This paper proposes a novel method to assist visually impaired individuals in finding available escalators by the use of smartphone cameras. Escalators are recognized by analyzing optical flows in video frames captured by the cameras, and auditory feedback is provided to the individuals. The proposed method was implemented on an Android smartphone and applied to actual escalator scenes. The experimental results demonstrate that the proposed method is promising for helping visually impaired individuals use escalators. PMID:28481270
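
    A minimal sketch of the underlying idea, not the authors' Android implementation: dense optical flow between consecutive frames gives a dominant vertical motion whose sign indicates the escalator direction. The OpenCV Farneback parameters and the 0.1-pixel threshold are assumptions.

    ```python
    # Hedged sketch: infer escalator direction from mean vertical optical flow.
    import cv2
    import numpy as np

    def escalator_direction(prev_frame, next_frame):
        """Return 'up', 'down' or 'static' from the mean vertical flow in the frame."""
        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            pyr_scale=0.5, levels=3, winsize=15,
                                            iterations=3, poly_n=5, poly_sigma=1.2,
                                            flags=0)
        mean_dy = float(np.mean(flow[..., 1]))    # image y grows downward
        if abs(mean_dy) < 0.1:                    # threshold is an assumption
            return "static"
        return "down" if mean_dy > 0 else "up"
    ```

    In a real assistive application the result would drive auditory feedback, as the abstract describes.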

  6. A Customized Transportation Intervention for Persons with Visual Impairments

    ERIC Educational Resources Information Center

    Crudden, Adele; Antonelli, Karla; O'Mally, Jamie

    2017-01-01

    Introduction: Transportation can be an employment barrier for persons with disabilities, particularly those with visual impairments. A customized transportation intervention for people with visual impairments, based on concepts associated with customized employment, was devised, implemented, and evaluated. Methods: A pretest and posttest…

  7. An optimized content-aware image retargeting method: toward expanding the perceived visual field of the high-density retinal prosthesis recipients

    NASA Astrophysics Data System (ADS)

    Li, Heng; Zeng, Yajie; Lu, Zhuofan; Cao, Xiaofei; Su, Xiaofan; Sui, Xiaohong; Wang, Jing; Chai, Xinyu

    2018-04-01

Objective. Retinal prosthesis devices have shown great value in restoring some sight for individuals with profoundly impaired vision, but the visual acuity and visual field provided by prostheses greatly limit recipients’ visual experience. In this paper, we employ computer vision approaches to expand the perceptible visual field in patients potentially implanted with a high-density retinal prosthesis while maintaining visual acuity as much as possible. Approach. We propose an optimized content-aware image retargeting method that introduces salient object detection based on color and intensity-difference contrast, aiming to remap the important information of a scene into a small visual field while preserving its original scale as much as possible. It may improve prosthetic recipients’ perceived visual field and aid in performing some visual tasks (e.g. object detection and object recognition). To verify our method, psychophysical experiments, detecting the number of objects and recognizing objects, were conducted under simulated prosthetic vision. As controls, we used three other image retargeting techniques: Cropping, Scaling, and seam-assisted shrinkability. Main results. Results show that our method preserves more key features and achieves significantly higher recognition accuracy than the other three image retargeting methods under the conditions of a small visual field and low resolution. Significance. The proposed method helps expand the perceived visual field of prosthesis recipients and improve their object detection and recognition performance. It suggests that our method may provide an effective option for the image processing module in future high-density retinal implants.
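
    A rough sketch of a color/intensity-contrast saliency map of the kind the retargeting step relies on; the exact operator in the paper may differ, and the blur kernel size is an assumption. The retargeting stage would then shrink low-saliency regions more aggressively while preserving the scale of salient objects.

    ```python
    # Hedged sketch: per-pixel saliency as Lab-space distance to the mean color.
    import cv2
    import numpy as np

    def contrast_saliency(image_bgr):
        """Return a saliency map in [0, 1] for an 8-bit BGR image."""
        blurred = cv2.GaussianBlur(image_bgr, (5, 5), 0)
        lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB).astype(np.float32)
        mean_lab = lab.reshape(-1, 3).mean(axis=0)
        saliency = np.linalg.norm(lab - mean_lab, axis=2)
        return saliency / (saliency.max() + 1e-8)
    ```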

  8. Real-time biscuit tile image segmentation method based on edge detection.

    PubMed

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify the produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
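
    A simplified, CPU-only sketch of edge-detection-based tile/background separation in the spirit of BTS (not the GPU implementation): Canny edges, contour tracing, and a filled mask of the largest contour. The thresholds assume an 8-bit image, and the two-value return of findContours assumes OpenCV 4.

    ```python
    # Hedged sketch: separate tile pixels from background via edges and contours.
    import cv2
    import numpy as np

    def tile_mask(image_bgr):
        """Return a binary mask of the tile region found via edges and contours."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))       # close small gaps
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        mask = np.zeros(gray.shape, np.uint8)
        if contours:
            largest = max(contours, key=cv2.contourArea)           # assume tile is largest
            cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
        return mask
    ```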

  9. A Method to Quantify Visual Information Processing in Children Using Eye Tracking

    PubMed Central

    Kooiker, Marlou J.G.; Pel, Johan J.M.; van der Steen-Kant, Sanny P.; van der Steen, Johannes

    2016-01-01

    Visual problems that occur early in life can have major impact on a child's development. Without verbal communication and only based on observational methods, it is difficult to make a quantitative assessment of a child's visual problems. This limits accurate diagnostics in children under the age of 4 years and in children with intellectual disabilities. Here we describe a quantitative method that overcomes these problems. The method uses a remote eye tracker and a four choice preferential looking paradigm to measure eye movement responses to different visual stimuli. The child sits without head support in front of a monitor with integrated infrared cameras. In one of four monitor quadrants a visual stimulus is presented. Each stimulus has a specific visual modality with respect to the background, e.g., form, motion, contrast or color. From the reflexive eye movement responses to these specific visual modalities, output parameters such as reaction times, fixation accuracy and fixation duration are calculated to quantify a child's viewing behavior. With this approach, the quality of visual information processing can be assessed without the use of communication. By comparing results with reference values obtained in typically developing children from 0-12 years, the method provides a characterization of visual information processing in visually impaired children. The quantitative information provided by this method can be advantageous for the field of clinical visual assessment and rehabilitation in multiple ways. The parameter values provide a good basis to: (i) characterize early visual capacities and consequently to enable early interventions; (ii) compare risk groups and follow visual development over time; and (iii), construct an individual visual profile for each child. PMID:27500922
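
    For illustration, the sketch below computes two of the output parameters mentioned above (reaction time to the stimulus quadrant and total dwell time) from raw gaze samples; the data format and the use of the median sampling interval are assumptions, not the published protocol.

    ```python
    # Hedged sketch: reaction time and dwell time for one stimulus presentation.
    import numpy as np

    def reaction_time_and_dwell(t, gaze_xy, stimulus_rect, onset_time):
        """t: timestamps in seconds (N,); gaze_xy: gaze points (N, 2);
        stimulus_rect: (x0, y0, x1, y1) of the stimulus quadrant in screen pixels."""
        x0, y0, x1, y1 = stimulus_rect
        inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
                  (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
        hits = np.where(inside & (t >= onset_time))[0]
        if hits.size == 0:
            return None, 0.0                       # stimulus quadrant never looked at
        reaction_time = t[hits[0]] - onset_time    # time to first gaze on the stimulus
        dt = np.median(np.diff(t))                 # approximate sampling interval
        dwell_time = float(hits.size * dt)         # total gaze time within the quadrant
        return float(reaction_time), dwell_time
    ```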

  10. A Method to Quantify Visual Information Processing in Children Using Eye Tracking.

    PubMed

    Kooiker, Marlou J G; Pel, Johan J M; van der Steen-Kant, Sanny P; van der Steen, Johannes

    2016-07-09

    Visual problems that occur early in life can have major impact on a child's development. Without verbal communication and only based on observational methods, it is difficult to make a quantitative assessment of a child's visual problems. This limits accurate diagnostics in children under the age of 4 years and in children with intellectual disabilities. Here we describe a quantitative method that overcomes these problems. The method uses a remote eye tracker and a four choice preferential looking paradigm to measure eye movement responses to different visual stimuli. The child sits without head support in front of a monitor with integrated infrared cameras. In one of four monitor quadrants a visual stimulus is presented. Each stimulus has a specific visual modality with respect to the background, e.g., form, motion, contrast or color. From the reflexive eye movement responses to these specific visual modalities, output parameters such as reaction times, fixation accuracy and fixation duration are calculated to quantify a child's viewing behavior. With this approach, the quality of visual information processing can be assessed without the use of communication. By comparing results with reference values obtained in typically developing children from 0-12 years, the method provides a characterization of visual information processing in visually impaired children. The quantitative information provided by this method can be advantageous for the field of clinical visual assessment and rehabilitation in multiple ways. The parameter values provide a good basis to: (i) characterize early visual capacities and consequently to enable early interventions; (ii) compare risk groups and follow visual development over time; and (iii), construct an individual visual profile for each child.

  11. Two-Way Gene Interaction From Microarray Data Based on Correlation Methods

    PubMed Central

    Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh

    2016-01-01

Background Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. Objectives The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a Gene Co-expression Network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. Materials and Methods In the foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman’s rank correlation coefficient and Blomqvist’s measure, and compared them with Pearson’s correlation coefficient. We surveyed six genes of venous thrombosis disease, made a matrix entry representing the score for the corresponding gene pair, and obtained two-way interactions using Pearson’s correlation, Spearman’s rank correlation, and Blomqvist’s coefficient. Finally, these methods were compared with visual methods based on Cytoscape (using BIND) and Gene Ontology (using molecular function); R software version 3.2 and Bioconductor were used to perform the analyses. Results Based on the Pearson and Spearman correlations, the results were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist’s coefficient was not confirmed by the visual methods. Conclusions Some of the correlation-based results do not agree with the visualization methods, possibly because of the small number of data points. PMID:27621916

  12. A method for improved visual landscape compatibility of mobile home park

    Treesearch

    Daniel R. Jones

    1979-01-01

    This paper is a description of a research effort directed to improving the visual image of mobile home parks in the landscape. The study is an application of existing methodologies for measuring scenic quality and visual landscape compatibility to an unsolved problem. The paper summarizes two major areas of investigation: regional location factors based on visual...

  13. Laser Pencil Beam Based Techniques for Visualization and Analysis of Interfaces Between Media

    NASA Technical Reports Server (NTRS)

    Adamovsky, Grigory; Giles, Sammie, Jr.

    1998-01-01

    Traditional optical methods that include interferometry, Schlieren, and shadowgraphy have been used successfully for visualization and evaluation of various media. Aerodynamics and hydrodynamics are major fields where these methods have been applied. However, these methods have such major drawbacks as a relatively low power density and suppression of the secondary order phenomena. A novel method introduced at NASA Lewis Research Center minimizes disadvantages of the 'classical' methods. The method involves a narrow pencil-like beam that penetrates a medium of interest. The paper describes the laser pencil beam flow visualization methods in detail. Various system configurations are presented. The paper also discusses interfaces between media in general terms and provides examples of interfaces.

  14. A GPU-based mipmapping method for water surface visualization

    NASA Astrophysics Data System (ADS)

    Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan

    2018-03-01

Visualization of water surfaces is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide expanse of water surface with good image quality both near and far from the viewpoint. The method uses a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping is applied to the surface textures, adjusting their resolution with respect to the distance from the viewpoint and reducing the computing cost. The lighting effect is computed based on shadow mapping, Snell's law and the Fresnel term. The rendering pipeline uses a CPU-GPU shared memory structure, which improves rendering efficiency. Experimental results show that our approach visualizes the water surface with good image quality at real-time frame rates.

  15. Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.

    PubMed

    Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil

    2017-01-19

Visualizing data by dimensionality reduction is an important strategy in bioinformatics, which can help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods cannot be directly applied to biobrick datasets. Here, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Besides, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated, which better discriminates biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be identified, which helps to assess the quality of crowdsourcing-based synthetic biology databases and to guide biobrick selection.
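
    A compact sketch of the pipeline described above, under stated assumptions: normalized Levenshtein distance between sequences, Isomap with a precomputed metric (requires a recent scikit-learn), Laplacian Eigenmaps via SpectralEmbedding on a Gaussian affinity (the kernel width is chosen arbitrarily here), and K-means on the embedding. It is an illustration, not the authors' code.

    ```python
    # Hedged sketch: edit-distance-based manifold embedding and clustering.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.manifold import Isomap, SpectralEmbedding

    def normalized_edit_distance(a, b):
        """Levenshtein distance divided by the length of the longer sequence."""
        m, n = len(a), len(b)
        d = np.zeros((m + 1, n + 1), dtype=int)
        d[:, 0] = np.arange(m + 1)
        d[0, :] = np.arange(n + 1)
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                              d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
        return d[m, n] / max(m, n, 1)

    def embed_and_cluster(sequences, n_clusters=2, n_neighbors=5):
        dist = np.array([[normalized_edit_distance(s, t) for t in sequences]
                         for s in sequences])
        iso_2d = Isomap(n_neighbors=n_neighbors, n_components=2,
                        metric="precomputed").fit_transform(dist)
        affinity = np.exp(-dist ** 2 / (2 * 0.1 ** 2))   # Gaussian kernel (assumed width)
        lap_2d = SpectralEmbedding(n_components=2,
                                   affinity="precomputed").fit_transform(affinity)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(lap_2d)
        return iso_2d, lap_2d, labels
    ```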

  16. Data visualisation in surveillance for injury prevention and control: conceptual bases and case studies

    PubMed Central

    Martinez, Ramon; Ordunez, Pedro; Soliz, Patricia N; Ballesteros, Michael F

    2016-01-01

    Background The complexity of current injury-related health issues demands the usage of diverse and massive data sets for comprehensive analyses, and application of novel methods to communicate data effectively to the public health community, decision-makers and the public. Recent advances in information visualisation, availability of new visual analytic methods and tools, and progress on information technology provide an opportunity for shaping the next generation of injury surveillance. Objective To introduce data visualisation conceptual bases, and propose a visual analytic and visualisation platform in public health surveillance for injury prevention and control. Methods The paper introduces data visualisation conceptual bases, describes a visual analytic and visualisation platform, and presents two real-world case studies illustrating their application in public health surveillance for injury prevention and control. Results Application of visual analytic and visualisation platform is presented as solution for improved access to heterogeneous data sources, enhance data exploration and analysis, communicate data effectively, and support decision-making. Conclusions Applications of data visualisation concepts and visual analytic platform could play a key role to shape the next generation of injury surveillance. Visual analytic and visualisation platform could improve data use, the analytic capacity, and ability to effectively communicate findings and key messages. The public health surveillance community is encouraged to identify opportunities to develop and expand its use in injury prevention and control. PMID:26728006

  17. Visual Sensing for Urban Flood Monitoring

    PubMed Central

    Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han

    2015-01-01

    With the increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analyses of water level fluctuation were proposed as value-added intelligent sensing applications to turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras have the ability to sense and analyze the local situation of flood events. This can solve the current problem that image-based flood monitoring heavily relies on continuous manned monitoring. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way for determining the water fluctuation and measuring its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze the flood status, and therefore, it can serve as an active flood warning system. PMID:26287201

  18. Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity

    PubMed Central

    Schettini, Raimondo

    2018-01-01

    Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art. PMID:29329268

  19. A Novel Robot Visual Homing Method Based on SIFT Features

    PubMed Central

    Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao

    2015-01-01

    Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method. PMID:26473880

  20. Texture characterization for joint compression and classification based on human perception in the wavelet domain.

    PubMed

    Fahmy, Gamal; Black, John; Panchanathan, Sethuraman

    2006-06-01

    Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.

  1. Halftone visual cryptography.

    PubMed

    Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni

    2006-08-01

Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. Simulations show that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
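
    For context, the classical (2, 2) visual cryptography scheme that halftone visual cryptography builds on can be sketched as follows. This is the baseline construction with 2x2 pixel expansion, not the halftoning method of the paper.

    ```python
    # Baseline sketch of (2, 2) visual cryptography with 2x2 subpixel expansion.
    import numpy as np

    # Complementary 2x2 sub-patterns (1 = black subpixel). Stacking two identical
    # patterns leaves 2 white subpixels; stacking complementary patterns leaves none.
    PATTERNS = [np.array([[1, 0], [0, 1]], dtype=np.uint8),
                np.array([[0, 1], [1, 0]], dtype=np.uint8)]

    def make_shares(secret, rng=np.random.default_rng(0)):
        """secret: binary HxW array (1 = black). Returns two 2Hx2W binary shares."""
        h, w = secret.shape
        share1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
        share2 = np.zeros_like(share1)
        for i in range(h):
            for j in range(w):
                p = PATTERNS[rng.integers(2)]
                share1[2*i:2*i+2, 2*j:2*j+2] = p
                # white secret pixel: same pattern; black pixel: complementary pattern
                share2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
        return share1, share2

    def stack(share1, share2):
        """Simulate superimposing transparencies (a subpixel is black if either is)."""
        return np.maximum(share1, share2)
    ```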

  2. Investigating Surface and Near-Surface Bushfire Fuel Attributes: A Comparison between Visual Assessments and Image-Based Point Clouds

    PubMed Central

    Spits, Christine; Wallace, Luke; Reinke, Karin

    2017-01-01

    Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was observed within point cloud estimates but was, however, on average two to eight times less than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data is essential. PMID:28425957

  3. Web-based Visual Analytics for Extreme Scale Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Evans, Katherine J; Harney, John F

In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.

  4. Method for the reduction of image content redundancy in large image databases

    DOEpatents

    Tobin, Kenneth William; Karnowski, Thomas P.

    2010-03-02

    A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between features vectors of an incoming image being considered for entry into the database and feature vectors associated with a most similar of the stored images. Based on said visual similarity parameter value it is determined whether to store or how long to store the feature vectors associated with the incoming image in the database.

  5. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

This letter presents a novel computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly.

  6. Utterance independent bimodal emotion recognition in spontaneous communication

    NASA Astrophysics Data System (ADS)

    Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng

    2011-12-01

Emotion expressions are sometimes mixed with utterance expressions in spontaneous face-to-face communication, which creates difficulties for emotion recognition. This article introduces methods for reducing utterance influences in visual parameters for audio-visual emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). Then, the utterance reduction is accomplished by finding the residual between the real visual parameters and the outputs of the utterance-related visual parameters. To solve this problem, the article introduces a Fused Hidden Markov Model Inversion method trained on a neutrally expressed audio-visual corpus. To reduce the computational complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method gives better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when used in a live environment.

  7. Automatic face recognition in HDR imaging

    NASA Astrophysics Data System (ADS)

    Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.

    2014-05-01

The gaining popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping methods for appropriate visualization on conventional, non-expensive LDR displays. Different tone mapping methods can produce completely different visualizations, raising several privacy intrusion issues. In fact, some visualization methods allow perceptual recognition of the individuals, while others do not reveal any identity. Although perceptual recognition might be possible, a natural question is how computer-based recognition will perform on tone-mapped images. In this paper, we present a study in which automatic face recognition using sparse representation is tested on images that result from common tone mapping operators applied to HDR images, and we describe its ability to recognize face identity. Furthermore, typical LDR images are used for the face recognition training.
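
    A small sketch of generating differently tone-mapped LDR images from one HDR input with OpenCV, which could then be fed to any face recognizer; the file name and operator parameters are placeholders/assumptions, and the specific operators studied in the paper may differ.

    ```python
    # Hedged sketch: produce LDR images from an HDR input with two tone mappers.
    import cv2
    import numpy as np

    hdr = cv2.imread("scene.hdr", cv2.IMREAD_UNCHANGED)   # float32 HDR (placeholder path)

    reinhard = cv2.createTonemapReinhard(gamma=2.2, intensity=0.0,
                                         light_adapt=0.8, color_adapt=0.0)
    drago = cv2.createTonemapDrago(gamma=2.2, saturation=1.0, bias=0.85)

    for name, op in [("reinhard", reinhard), ("drago", drago)]:
        ldr = op.process(hdr)                              # roughly in [0, 1], may contain NaNs
        ldr8 = np.clip(np.nan_to_num(ldr) * 255, 0, 255).astype(np.uint8)
        cv2.imwrite(f"tonemapped_{name}.png", ldr8)        # input for the face recognizer
    ```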

  8. CLFs-based optimization control for a class of constrained visual servoing systems.

    PubMed

    Song, Xiulan; Miaomiao, Fu

    2017-03-01

In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and the depth of target changes, visual servo control laws (i.e. translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal values of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish closed-loop stability of the constrained visual servoing system with the optimized control law. One merit of the presented method is that there is no need to compute online the pseudo-inverse of the image Jacobian matrix or the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
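
    The Fibonacci method mentioned above is a one-dimensional line search; a generic sketch is shown below (for clarity it re-evaluates the cost at both interior points each iteration, whereas the classical scheme reuses one evaluation). The cost function here is a stand-in for whatever performance or constraint-violation measure the controller optimizes.

    ```python
    # Generic Fibonacci-section search for a unimodal scalar cost on [lo, hi].
    def fibonacci_search(cost, lo, hi, n=25):
        """Return an approximate minimizer of cost(x) on [lo, hi]."""
        fib = [1, 1]
        while len(fib) <= n:
            fib.append(fib[-1] + fib[-2])
        a, b = lo, hi
        for k in range(n, 2, -1):
            length = b - a
            x1 = a + (fib[k - 2] / fib[k]) * length
            x2 = a + (fib[k - 1] / fib[k]) * length
            if cost(x1) < cost(x2):
                b = x2            # minimum lies in [a, x2]
            else:
                a = x1            # minimum lies in [x1, b]
        return (a + b) / 2.0

    # Toy usage: minimize a quadratic "cost" over the parameter range [0, 5].
    print(fibonacci_search(lambda x: (x - 1.7) ** 2, 0.0, 5.0))
    ```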

  9. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  10. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

Color fusion images can be obtained by fusing infrared and low-light-level images, and they contain the information of both. Fusion images help observers understand multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images, while blindly applying target extraction seriously degrades the perception of scene information. To solve this problem, a new fusion method based on visual perception is proposed in this paper. Extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods. The infrared and low-light-level color fusion images are obtained based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the target detection rate, but also retain rich natural information about the scenes.

  11. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    PubMed

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the art diarization algorithms.

  12. Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning

    ERIC Educational Resources Information Center

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…

  13. Computer systems and methods for visualizing data

    DOEpatents

    Stolte, Chris; Hanrahan, Patrick

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
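
    A hedged, toy illustration of the idea in the claim (not the patented system): two levels of a dimension hierarchy are mapped to two components of the plot, and only the columns named in the "specification" are retrieved. The Region/City/Sales data are made up.

    ```python
    # Toy sketch: hierarchical dimension levels mapped to plot components.
    import matplotlib.pyplot as plt
    import pandas as pd

    # Made-up dataset: dimension hierarchy Region -> City, measure Sales.
    data = pd.DataFrame({
        "Region": ["East", "East", "West", "West"],
        "City":   ["A", "B", "C", "D"],
        "Sales":  [120, 90, 150, 60],
    })

    spec = {"level1": "Region", "level2": "City", "measure": "Sales"}

    # "Query" the dataset according to the specification.
    retrieved = data[[spec["level1"], spec["level2"], spec["measure"]]]

    # Level 1 -> one panel per region; level 2 -> bars within each panel.
    fig, axes = plt.subplots(1, retrieved[spec["level1"]].nunique(), sharey=True)
    for ax, (region, group) in zip(axes, retrieved.groupby(spec["level1"])):
        ax.bar(group[spec["level2"]], group[spec["measure"]])
        ax.set_title(region)
    fig.suptitle("Measure by dimension hierarchy (Region -> City)")
    plt.show()
    ```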

  14. Computer systems and methods for visualizing data

    DOEpatents

    Stolte, Chris; Hanrahan, Patrick

    2013-01-29

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.

  15. A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure.

    PubMed

    Zhou, Yang; Wu, Dewei

    2016-01-01

Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. According to this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant characteristics of images, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of firing field (AFFF) and the firing rate threshold (FRT).
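
    A minimal sketch of the recruitment rule described above: a Gaussian of the Euclidean distance between the current observation and each stored template acts as the firing rate, and a new cell is recruited when no existing cell fires above threshold. The sigma and threshold parameters stand in for the paper's AFFF and FRT; the feature representation is an assumption.

    ```python
    # Hedged sketch: Gaussian-similarity firing and recruitment of visual place cells.
    import numpy as np

    class VisualPlaceCells:
        def __init__(self, sigma=1.0, threshold=0.3):
            self.templates = []          # landmark feature vectors of existing cells
            self.sigma = sigma           # controls the firing-field width (AFFF-like)
            self.threshold = threshold   # firing-rate threshold (FRT-like)

        def firing_rates(self, observation):
            """Gaussian of the Euclidean distance to each stored template."""
            if not self.templates:
                return np.array([])
            d = np.linalg.norm(np.array(self.templates) - observation, axis=1)
            return np.exp(-d ** 2 / (2 * self.sigma ** 2))

        def update(self, observation):
            """Return (cell_index, firing_rate); recruit a new cell if all rates are low."""
            rates = self.firing_rates(observation)
            if rates.size and rates.max() >= self.threshold:
                return int(rates.argmax()), float(rates.max())
            self.templates.append(np.asarray(observation, dtype=float))
            return len(self.templates) - 1, 1.0   # a new cell fires maximally at its center
    ```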

  16. A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure

    PubMed Central

    2016-01-01

Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. According to this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant characteristics of images, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of firing field (AFFF) and the firing rate threshold (FRT). PMID:27597859

  17. Visual Contrast Enhancement Algorithm Based on Histogram Equalization

    PubMed Central

    Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching

    2015-01-01

    Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219
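
    For reference, plain histogram equalization (the baseline that VCEA modifies) can be written in a few lines; the gray-level spacing adjustment and texture enhancement steps of VCEA are not shown here.

    ```python
    # Baseline sketch of histogram equalization via the cumulative distribution.
    import numpy as np

    def histogram_equalize(gray):
        """gray: 2-D uint8 array. Returns the equalized uint8 image."""
        hist = np.bincount(gray.ravel(), minlength=256)
        cdf = hist.cumsum().astype(np.float64)
        cdf_min = cdf[hist > 0][0] if np.any(hist > 0) else 0.0
        # Map each gray value through the normalized cumulative distribution.
        lut = np.clip(np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255),
                      0, 255).astype(np.uint8)
        return lut[gray]
    ```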

  18. Using Embedded Visual Coding to Support Contextualization of Historical Texts

    ERIC Educational Resources Information Center

    Baron, Christine

    2016-01-01

    This mixed-method study examines the think-aloud protocols of 48 randomly assigned undergraduate students to understand what effect embedding a visual coding system, based on reliable visual cues for establishing historical time period, would have on novice history students' ability to contextualize historic documents. Results indicate that using…

  19. Conducting a wildland visual resources inventory

    Treesearch

    James F. Palmer

    1979-01-01

    This paper describes a procedure for systematically inventorying the visual resources of wildland environments. Visual attributes are recorded photographically using two separate sampling methods: one based on professional judgment and the other on random selection. The location and description of each inventoried scene are recorded on U.S. Geological Survey...

  20. Visualization of 3D CT-based anatomical models

    NASA Astrophysics Data System (ADS)

    Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.

    2018-04-01

Techniques for visualizing biomedical volumetric data for exploration purposes are well developed. However, most known methods are inappropriate for surgery simulation systems due to a lack of realism. Segmented data visualization is a well-known approach for visualizing structured volumetric data. This research focuses on improving the segmented data visualization technique by resolving aliasing problems and using material transparency modeling for better rendering of semitransparent structures.

  1. An RBF-based compression method for image-based relighting.

    PubMed

    Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung

    2006-04-01

    In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.
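
    A toy sketch of the first compression level under simplifying assumptions: a pixel's sampled radiance values over light directions are approximated by a small Gaussian RBF expansion fitted by least squares. The paper uses spherical RBFs and a constrained estimator; the chordal-distance Gaussian and unconstrained least squares below are stand-ins.

    ```python
    # Hedged sketch: fit per-pixel RBF weights to sampled radiance values.
    import numpy as np

    def fit_rbf_weights(light_dirs, radiances, centers, sigma=0.5):
        """light_dirs: N x 3 unit vectors; radiances: N values; centers: K x 3 unit vectors."""
        # Gaussian of the chordal distance between sample and center directions.
        d = np.linalg.norm(light_dirs[:, None, :] - centers[None, :, :], axis=2)
        phi = np.exp(-d ** 2 / (2 * sigma ** 2))                   # N x K design matrix
        weights, *_ = np.linalg.lstsq(phi, radiances, rcond=None)  # K weights per pixel
        return weights

    def evaluate_rbf(query_dirs, centers, weights, sigma=0.5):
        """Reconstruct radiance for arbitrary light directions from the RBF weights."""
        d = np.linalg.norm(query_dirs[:, None, :] - centers[None, :, :], axis=2)
        return np.exp(-d ** 2 / (2 * sigma ** 2)) @ weights
    ```

    In the full method, the per-pixel weight maps would then be compressed further with a wavelet coder, as described above.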

  2. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    PubMed Central

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for a robot-based-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration on the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system. PMID:24300597

  3. Modeling human comprehension of data visualizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie

This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.

  4. Pedestrian visual recommendation in Kertanegara - Semeru corridor in Malang City

    NASA Astrophysics Data System (ADS)

    Cosalia, V. B.

    2017-06-01

A streetscape can give the first impression of an urban area. One streetscape that deserves attention is the corridor of Jl. Kertanegara - Semeru, since this road corridor has a strong character and is one of the main axes of Malang city. This research aims to determine the visual quality of the corridor and to provide appropriate design recommendations for Jl. Kertanegara - Semeru based on the pedestrians' visual perception. The methods used in this research are Scenic Beauty Estimation (SBE) and a historical study. Several variables are used: spatial scale, visual flexibility, beauty, emphasis, balance and dominance. Pedestrians, acting as respondents, assessed the corridor with respect to these variables. The SBE results show that the visual quality of the Kertanegara - Semeru corridor is fairly good: 10 photos of Jl. Semeru have low visual quality, while 14 photos of Jl. Kertanegara, Jl. Tugu and Jl. Kahuripan have high visual quality. Design recommendations for the parts of the streetscape with low visual quality are then made based on the historical study and on the high-visual-quality references.

  5. Making data matter: Voxel printing for the digital fabrication of data across scales and domains.

    PubMed

    Bader, Christoph; Kolb, Dominik; Weaver, James C; Sharma, Sunanda; Hosny, Ahmed; Costa, João; Oxman, Neri

    2018-05-01

    We present a multimaterial voxel-printing method that enables the physical visualization of data sets commonly associated with scientific imaging. Leveraging voxel-based control of multimaterial three-dimensional (3D) printing, our method enables additive manufacturing of discontinuous data types such as point cloud data, curve and graph data, image-based data, and volumetric data. By converting data sets into dithered material deposition descriptions, through modifications to rasterization processes, we demonstrate that data sets frequently visualized on screen can be converted into physical, materially heterogeneous objects. Our approach alleviates the need to postprocess data sets to boundary representations, preventing alteration of data and loss of information in the produced physicalizations. Therefore, it bridges the gap between digital information representation and physical material composition. We evaluate the visual characteristics and features of our method, assess its relevance and applicability in the production of physical visualizations, and detail the conversion of data sets for multimaterial 3D printing. We conclude with exemplary 3D-printed data sets produced by our method pointing toward potential applications across scales, disciplines, and problem domains.
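
    The core conversion step (turning a continuous data field into per-voxel deposit/no-deposit decisions for a material) can be illustrated with ordinary error-diffusion dithering. The sketch below applies Floyd-Steinberg dithering to one normalised slice of a volume; it is a generic illustration of the dithering idea, not the authors' multimaterial rasterization pipeline.

    ```python
    import numpy as np

    def floyd_steinberg(slice2d):
        """Dither one normalised (0-1) data slice into a binary material-deposition map."""
        img = np.clip(np.asarray(slice2d, dtype=float), 0.0, 1.0).copy()
        h, w = img.shape
        out = np.zeros((h, w), dtype=np.uint8)
        for y in range(h):
            for x in range(w):
                out[y, x] = 1 if img[y, x] >= 0.5 else 0
                err = img[y, x] - out[y, x]
                # diffuse the quantisation error to unvisited neighbours
                if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
                if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
                if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
                if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
        return out

    # Example: one slice of a synthetic volume becomes droplet placements for one material.
    volume = np.random.rand(4, 64, 64)
    deposit_map = floyd_steinberg(volume[0])
    ```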

  6. Suggested Interactivity: Seeking Perceived Affordances for Information Visualization.

    PubMed

    Boy, Jeremy; Eveillard, Louis; Detienne, Françoise; Fekete, Jean-Daniel

    2016-01-01

    In this article, we investigate methods for suggesting the interactivity of online visualizations embedded with text. We first assess the need for such methods by conducting three initial experiments on Amazon's Mechanical Turk. We then present a design space for Suggested Interactivity (i. e., visual cues used as perceived affordances-SI), based on a survey of 382 HTML5 and visualization websites. Finally, we assess the effectiveness of three SI cues we designed for suggesting the interactivity of bar charts embedded with text. Our results show that only one cue (SI3) was successful in inciting participants to interact with the visualizations, and we hypothesize this is because this particular cue provided feedforward.

  7. Dual-Tree Complex Wavelet Transform and Image Block Residual-Based Multi-Focus Image Fusion in Visual Sensor Networks

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
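
    The focus measure that drives the fusion rule, the Sum-Modified-Laplacian (SML), is simple to compute; the sketch below builds an SML-based choose-max decision map directly in the spatial domain. It covers only the focus-measure idea, not the DTCWT sub-band fusion, the image-block-residual step, or the consistency verification of the full scheme; window size and step are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import convolve, uniform_filter

    def sml(img, window=7, step=1):
        """Sum-Modified-Laplacian focus measure (windowed sum of the modified Laplacian)."""
        img = np.asarray(img, dtype=float)
        kx = np.zeros((1, 2 * step + 1))
        kx[0, 0], kx[0, step], kx[0, -1] = -1.0, 2.0, -1.0      # |2I - I(x-s) - I(x+s)|
        ml = np.abs(convolve(img, kx, mode="nearest")) + \
             np.abs(convolve(img, kx.T, mode="nearest"))
        return uniform_filter(ml, size=window) * window * window

    def fuse_multifocus(img_a, img_b, window=7):
        """Pixel-wise choose-max fusion: keep the pixel from the better-focused source."""
        decision = sml(img_a, window) >= sml(img_b, window)
        return np.where(decision, img_a, img_b), decision
    ```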

  8. Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks.

    PubMed

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-11-26

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.

  9. "Seeing" into the Past and "Looking" Forward to the Future: Visual Methods and Gender and Education Research

    ERIC Educational Resources Information Center

    Allan, Alexandra; Tinkler, Penny

    2015-01-01

    A small number of attempts have been made to take stock of the field of gender and education, though very few have taken methodology as their explicit focus. We seek to stimulate such discussion in this article by taking stock of the use of visual methods in gender and education research (particularly participatory and image-based methods). We…

  10. Flow Charts: Visualization of Vector Fields on Arbitrary Surfaces

    PubMed Central

    Li, Guo-Shi; Tricoche, Xavier; Weiskopf, Daniel; Hansen, Charles

    2009-01-01

    We introduce a novel flow visualization method called Flow Charts, which uses a texture atlas approach for the visualization of flows defined over curved surfaces. In this scheme, the surface and its associated flow are segmented into overlapping patches, which are then parameterized and packed in the texture domain. This scheme allows accurate particle advection across multiple charts in the texture domain, providing a flexible framework that supports various flow visualization techniques. The use of surface parameterization enables flow visualization techniques requiring the global view of the surface over long time spans, such as Unsteady Flow LIC (UFLIC), particle-based Unsteady Flow Advection Convolution (UFAC), or dye advection. It also prevents visual artifacts normally associated with view-dependent methods. Represented as textures, Flow Charts can be naturally integrated into hardware accelerated flow visualization techniques for interactive performance. PMID:18599918

  11. PathFinder: reconstruction and dynamic visualization of metabolic pathways.

    PubMed

    Goesmann, Alexander; Haubrock, Martin; Meyer, Folker; Kalinowski, Jörn; Giegerich, Robert

    2002-01-01

    Beyond methods for gene-wise annotation and analysis of sequenced genomes, new automated methods for functional analysis at a higher level are needed. The identification of realized metabolic pathways provides valuable information on gene expression and regulation. Detection of incomplete pathways helps to improve a constantly evolving genome annotation or to discover alternative biochemical pathways. To utilize automated genome analysis at the level of metabolic pathways, new methods for the dynamic representation and visualization of pathways are needed. PathFinder is a tool for the dynamic visualization of metabolic pathways based on annotation data. Pathways are represented as directed acyclic graphs, and graph layout algorithms accomplish the dynamic drawing and visualization of the metabolic maps. A more detailed analysis of the input data on the level of biochemical pathways helps to identify genes and detect improper parts of annotations. As a Relational Database Management System (RDBMS)-based internet application, PathFinder reads a list of EC numbers or a given annotation in EMBL or GenBank format and dynamically generates pathway graphs.

  12. Inferring Interaction Force from Visual Information without Using Physical Force Sensors.

    PubMed

    Hwang, Wonjun; Lim, Soo-Chul

    2017-10-26

    In this paper, we present an interaction force estimation method that uses visual information rather than a force sensor. Specifically, we propose a novel deep learning-based method utilizing only sequential images for estimating the interaction force against a target object whose shape is changed by an external force. The force applied to the target can be estimated from the visual shape changes. However, the shape differences in the images are not very clear. To address this problem, we formulate a recurrent neural network-based deep model with fully-connected layers, which models complex temporal dynamics from the visual representations. Extensive evaluations show that the proposed learning models successfully estimate the interaction forces using only the corresponding sequential images, in particular for target objects made of different materials: a sponge, a PET bottle, a human arm, and a tube. The forces predicted by the proposed method are very similar to those measured by force sensors.
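
    A toy PyTorch sketch of the general architecture the abstract describes (per-frame visual features fed to a recurrent layer whose final state is regressed to a force value) is given below. Layer shapes, sequence length, and the plain LSTM are placeholder assumptions; the authors' exact network is not reproduced.

    ```python
    import torch
    import torch.nn as nn

    class VisualForceNet(nn.Module):
        """CNN encoder per frame + LSTM over the sequence + linear force head."""
        def __init__(self, feat_dim=128, hidden=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
            )
            self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, frames):                       # frames: (B, T, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
            out, _ = self.rnn(feats)
            return self.head(out[:, -1]).squeeze(-1)     # (B,) predicted force

    # One training step on dummy sequences of 8 RGB frames.
    model = VisualForceNet()
    frames, target = torch.randn(4, 8, 3, 64, 64), torch.randn(4)
    loss = nn.functional.mse_loss(model(frames), target)
    loss.backward()
    ```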

  13. Visualization of DNA in highly processed botanical materials.

    PubMed

    Lu, Zhengfei; Rubinsky, Maria; Babajanian, Silva; Zhang, Yanjun; Chang, Peter; Swanson, Gary

    2018-04-15

    DNA-based methods have been gaining recognition as a tool for botanical authentication in herbal medicine; however, their application to processed botanical materials is challenging due to the low quality and quantity of DNA left after extensive manufacturing processes. The low amount of DNA recovered from processed materials, especially extracts, is "invisible" to current technology, which has cast doubt on the presence of amplifiable botanical DNA. A method using adapter ligation and PCR amplification was successfully applied to visualize the "invisible" DNA in botanical extracts. The size of the "invisible" DNA fragments in botanical extracts was around 20-220 bp, compared to fragments of around 600 bp for the more easily visualized DNA in botanical powders. This technique is the first to allow characterization and visualization of small fragments of DNA in processed botanical materials and will provide key information to guide the development of appropriate DNA-based botanical authentication methods in the future. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.

    PubMed

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2014-01-01

    Most existing vision-based methods for gaze tracking require a tedious calibration process in which subjects must fixate on a specific point or several specific points in space. However, it is hard for subjects to cooperate, especially children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first time vision-based gaze tracking has been applied to the measurement of visual acuity. First, a polynomial of the pupil center-corneal reflection (PCCR) vector is presented as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust, and sufficient for the measurement of visual acuity in human infants.
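
    A small scikit-learn sketch of the two ingredients named above, a polynomial expansion of the PCCR vector as the gaze feature and Gaussian mixture models trained offline for gaze-behaviour classification, follows. The class labels, polynomial degree, and synthetic training data are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.mixture import GaussianMixture

    def pccr_features(pupil_xy, glint_xy, degree=2):
        """Polynomial expansion of the pupil-centre minus corneal-reflection vector."""
        v = np.asarray(pupil_xy, float) - np.asarray(glint_xy, float)   # (n, 2) PCCR vectors
        return PolynomialFeatures(degree, include_bias=False).fit_transform(v)

    # Offline training on labelled recordings (synthetic stand-in data here),
    # one mixture model per gaze behaviour.
    rng = np.random.default_rng(0)
    X_fix = pccr_features(rng.normal(0.0, 0.3, (200, 2)), np.zeros((200, 2)))
    X_off = pccr_features(rng.normal(2.0, 0.8, (200, 2)), np.zeros((200, 2)))
    models = {label: GaussianMixture(n_components=3, random_state=0).fit(X)
              for label, X in {"fixating": X_fix, "not_fixating": X_off}.items()}

    def classify(pupil_xy, glint_xy):
        """Assign each sample to the behaviour whose GMM gives the highest likelihood."""
        X = pccr_features(pupil_xy, glint_xy)
        scores = np.column_stack([m.score_samples(X) for m in models.values()])
        return np.array(list(models))[scores.argmax(axis=1)]

    # Example: classify a few new PCCR measurements.
    print(classify(rng.normal(0.0, 0.3, (5, 2)), np.zeros((5, 2))))
    ```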

  15. Flow visualization V; Proceedings of the 5th International Symposium, Prague, Czechoslovakia, Aug. 21-25, 1989

    NASA Astrophysics Data System (ADS)

    Reznicek, R.

    The present conference on flow visualization encompasses methods exploiting tracing particles, surface tracing methods, methods exploiting the effects of streaming fluid on passing radiation/field, computer-aided flow visualization, and applications to fluid mechanics, aerodynamics, flow devices, shock tubes, and heat/mass transfer. Specific issues include visualizing velocity distribution by stereo photography, dark-field Fourier quasiinterferometry, speckle tomography of an open flame, a fast eye for real-time image analysis, and velocity-field determination based on flow-image analysis. Also addressed are flows around rectangular prisms with oscillating flaps at the leading edges, the tomography of aerodynamic objects, the vapor-screen technique applied to a delta-wing aircraft, flash-lamp planar imaging, IR-thermography applications in convective heat transfer, and the visualization of marangoni effects in evaporating sessile drops.

  16. Research on flight stability performance of rotor aircraft based on visual servo control method

    NASA Astrophysics Data System (ADS)

    Yu, Yanan; Chen, Jing

    2016-11-01

    A control method based on visual servo feedback is proposed to improve the attitude of a quad-rotor aircraft and to enhance its flight stability. Ground target images are obtained by a visual platform fixed on the aircraft. The scale-invariant feature transform (SIFT) algorithm is used to extract image feature information. Based on the image feature analysis, fast motion estimation is performed and used as an input signal to a PID flight control system to realize real-time attitude adjustment during flight. Imaging tests and simulation results show that the proposed method performs well in terms of flight stability compensation and attitude adjustment. The response speed and control precision meet the requirements of actual use, and the method is able to reduce or even eliminate the influence of environmental disturbances. The proposed method therefore has research value for solving the problem of an aircraft's disturbance rejection.
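
    A minimal OpenCV sketch of the pipeline outlined above (SIFT matching between consecutive ground images to estimate image-plane drift, fed to a PID loop) follows. The gains, the ratio-test threshold, and the use of the median displacement are illustrative assumptions; the paper's estimator and controller details are not reproduced.

    ```python
    import cv2
    import numpy as np

    class PID:
        """Minimal PID controller turning image-plane drift into a correction signal."""
        def __init__(self, kp=0.8, ki=0.05, kd=0.2):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral, self.prev = 0.0, 0.0

        def step(self, error, dt):
            self.integral += error * dt
            deriv = (error - self.prev) / dt
            self.prev = error
            return self.kp * error + self.ki * self.integral + self.kd * deriv

    def estimate_shift(prev_gray, gray, sift, matcher):
        """Median SIFT keypoint displacement between two consecutive frames."""
        kp1, des1 = sift.detectAndCompute(prev_gray, None)
        kp2, des2 = sift.detectAndCompute(gray, None)
        if des1 is None or des2 is None:
            return np.zeros(2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]   # ratio test
        if not good:
            return np.zeros(2)
        disp = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in good]
        return np.median(disp, axis=0)

    sift = cv2.SIFT_create()                    # requires OpenCV >= 4.4
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pid_x, pid_y = PID(), PID()
    # Per frame: dx, dy = estimate_shift(prev_gray, gray, sift, matcher)
    #            u_roll, u_pitch = pid_x.step(-dx, dt), pid_y.step(-dy, dt)
    ```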

  17. Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.

    PubMed

    Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi

    2017-07-01

    We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos.

    PubMed

    Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions and is extracted based on frame differencing with the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods.
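
    The two saliency cues named in the abstract can be sketched compactly with scikit-image: temporal saliency from a Sauvola-thresholded frame difference and spatial saliency from the colour uniqueness of SLIC superpixels, combined into one map. The window size, segment count, and multiplicative combination are assumptions; the parallelisation and the online learning stage are omitted.

    ```python
    import numpy as np
    from skimage.color import rgb2gray
    from skimage.filters import threshold_sauvola
    from skimage.segmentation import slic

    def spatiotemporal_saliency(prev_rgb, curr_rgb, n_segments=300):
        """Combine frame-difference (temporal) and superpixel-uniqueness (spatial) cues."""
        diff = np.abs(rgb2gray(curr_rgb) - rgb2gray(prev_rgb))
        temporal = diff > threshold_sauvola(diff, window_size=25)     # moving regions

        segments = slic(curr_rgb, n_segments=n_segments, compactness=10)
        labels = np.unique(segments)
        means = np.array([curr_rgb[segments == s].mean(axis=0) for s in labels])
        # colour uniqueness: average distance of a superpixel's mean colour to all others
        uniq = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2).mean(axis=1)
        uniq = (uniq - uniq.min()) / (np.ptp(uniq) + 1e-9)
        spatial = uniq[np.searchsorted(labels, segments)]

        return temporal.astype(float) * spatial       # spatiotemporal saliency map
    ```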

  19. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos

    PubMed Central

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions and is extracted based on frame differencing with the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods. PMID:29438421

  20. An Approach for the Visualization of Temperature Distribution in Tissues According to Changes in Ultrasonic Backscattered Energy

    PubMed Central

    Li, Qiang; Liu, Hao-Li; Chen, Wen-Shiang

    2013-01-01

    Previous studies developed ultrasound temperature-imaging methods based on changes in backscattered energy (CBE) to monitor variations in temperature during hyperthermia. In conventional CBE imaging, tracking and compensation of the echo shift due to temperature increase need to be done. Moreover, the CBE image does not enable visualization of the temperature distribution in tissues during nonuniform heating, which limits its clinical application in guidance of tissue ablation treatment. In this study, we investigated a CBE imaging method based on the sliding window technique and the polynomial approximation of the integrated CBE (ICBEpa image) to overcome the difficulties of conventional CBE imaging. We conducted experiments with tissue samples of pork tenderloin ablated by microwave irradiation to validate the feasibility of the proposed method. During ablation, the raw backscattered signals were acquired using an ultrasound scanner for B-mode and ICBEpa imaging. The experimental results showed that the proposed ICBEpa image can visualize the temperature distribution in a tissue with a very good contrast. Moreover, tracking and compensation of the echo shift were not necessary when using the ICBEpa image to visualize the temperature profile. The experimental findings suggested that the ICBEpa image, a new CBE imaging method, has a great potential in CBE-based imaging of hyperthermia and other thermal therapies. PMID:24260041
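
    As a rough sketch of the sliding-window CBE computation with polynomial smoothing, the function below processes one RF scan line: windowed backscattered energies before and during heating are compared in dB and the resulting profile is fitted with a low-order polynomial. Window length, hop, and polynomial order are assumptions, and echo-shift compensation is deliberately absent, in line with the abstract.

    ```python
    import numpy as np

    def icbe_profile(rf_ref, rf_heated, window=64, hop=32, poly_order=4):
        """Sliding-window change in backscattered energy (dB) along one scan line,
        smoothed by a polynomial fit (an ICBEpa-style profile)."""
        def window_energy(rf):
            rf = np.asarray(rf, dtype=float)
            n = (len(rf) - window) // hop + 1
            return np.array([np.sum(rf[i * hop:i * hop + window] ** 2) for i in range(n)])

        e_ref, e_hot = window_energy(rf_ref), window_energy(rf_heated)
        cbe_db = 10.0 * np.log10((e_hot + 1e-12) / (e_ref + 1e-12))
        depth = np.arange(len(cbe_db))
        coeffs = np.polyfit(depth, cbe_db, poly_order)      # polynomial approximation
        return np.polyval(coeffs, depth)                    # smooth profile vs. depth

    # Repeating this per scan line yields a 2-D map that can be colour-coded
    # and overlaid on the B-mode image to show the heated region.
    ```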

  1. Interactive Particle Visualization

    NASA Astrophysics Data System (ADS)

    Gribble, Christiaan P.

    Particle-based simulation methods are used to model a wide range of complex phenomena and to solve time-dependent problems of various scales. Effective visualizations of the resulting state will communicate subtle changes in the three-dimensional structure, spatial organization, and qualitative trends within a simulation as it evolves. This chapter discusses two approaches to interactive particle visualization that satisfy these goals: one targeting desktop systems equipped with programmable graphics hardware, and the other targeting moderately sized multicore systems using packet-based ray tracing.

  2. Experiences of Individuals with Visual Impairments in Integrated Physical Education: A Retrospective Study

    ERIC Educational Resources Information Center

    Haegele, Justin A.; Zhu, Xihe

    2017-01-01

    Purpose: The purpose of this retrospective study was to examine the experiences of adults with visual impairments during school-based integrated physical education (PE). Method: An interpretative phenomenological analysis (IPA) research approach was used and 16 adults (ages 21-48 years; 10 women, 6 men) with visual impairments acted as…

  3. Up-conversion of MMW radiation to visual band using glow discharge detector and silicon detector

    NASA Astrophysics Data System (ADS)

    Aharon Akram, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.

    2016-10-01

    In this work we describe and demonstrate a method for up-conversion of millimeter wave (MMW) radiation to the visual band using a very inexpensive miniature glow discharge detector (GDD) and a silicon photodetector. We present 100 GHz up-conversion images based on measuring the visible light emitted by the GDD rather than its electrical current. The results showed a faster response time of 480 ns and better sensitivity compared to the electronic detection performed in our previous work. MMW imaging based on this method was carried out using a GDD lamp and a photodetector to measure the GDD light emission.

  4. Physically-based in silico light sheet microscopy for visualizing fluorescent brain models

    PubMed Central

    2015-01-01

    Background We present a physically-based computational model of the light sheet fluorescence microscope (LSFM). Based on Monte Carlo ray tracing and geometric optics, our method simulates the operational aspects and image formation process of the LSFM. This simulated, in silico LSFM creates synthetic images of digital fluorescent specimens that can resemble those generated by a real LSFM, as opposed to established visualization methods producing visually-plausible images. We also propose an accurate fluorescence rendering model which takes into account the intrinsic characteristics of fluorescent dyes to simulate the light interaction with fluorescent biological specimen. Results We demonstrate first results of our visualization pipeline to a simplified brain tissue model reconstructed from the somatosensory cortex of a young rat. The modeling aspects of the LSFM units are qualitatively analysed, and the results of the fluorescence model were quantitatively validated against the fluorescence brightness equation and characteristic emission spectra of different fluorescent dyes. AMS subject classification Modelling and simulation PMID:26329404

  5. Traffic Sign Detection Based on Biologically Visual Mechanism

    NASA Astrophysics Data System (ADS)

    Hu, X.; Zhu, X.; Li, D.

    2012-07-01

    Traffic sign recognition (TSR) is an important problem in intelligent transportation systems (ITS) and is receiving increasing attention for driver-assistance systems, unmanned vehicles, and related applications. TSR consists of two steps, detection and recognition, and this paper describes a new traffic sign detection method. Because traffic signs are designed to exploit the human visual attention mechanism, it is reasonable to use a visual attention mechanism to detect them. In our method, the whole scene is first analyzed by a visual attention model to find candidate areas where traffic signs might be located. These candidate areas are then analyzed according to the shape characteristics of traffic signs to detect the signs. Traffic sign detection experiments show that the proposed method is more effective and robust than other existing saliency detection methods.
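
    A compact stand-in for the attention stage, a centre-surround (difference-of-Gaussians) saliency map built on red/blue colour-opponent channels followed by thresholding and a rough shape filter on the connected components, is sketched below. The scales, threshold, and choice of opponent channels are assumptions; the paper's attention model and shape analysis are not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, label, find_objects

    def sign_candidates(rgb, sigma_c=2, sigma_s=12, thresh=0.5, min_area=100):
        """Colour-opponency saliency map plus blob filtering for candidate sign regions."""
        rgb = rgb.astype(float) / 255.0
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        intensity = rgb.mean(axis=2) + 1e-6
        red_opp = (r - (g + b) / 2) / intensity          # red signs pop out here
        blue_opp = (b - (r + g) / 2) / intensity         # blue signs pop out here
        sal = np.zeros_like(intensity)
        for chan in (red_opp, blue_opp):
            cs = gaussian_filter(chan, sigma_c) - gaussian_filter(chan, sigma_s)
            sal += np.maximum(cs, 0)                     # centre-surround response
        sal = (sal - sal.min()) / (np.ptp(sal) + 1e-9)

        labelled, _ = label(sal > thresh)
        boxes = []
        for sl in find_objects(labelled):
            h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
            if h * w >= min_area and 0.5 <= h / max(w, 1) <= 2.0:   # near-square blobs
                boxes.append(sl)
        return sal, boxes
    ```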

  6. A Survey of Colormaps in Visualization

    PubMed Central

    Zhou, Liang; Hansen, Charles D.

    2016-01-01

    Colormaps are a vital method for users to gain insights into data in a visualization. With a good choice of colormaps, users are able to acquire information in the data more effectively and efficiently. In this survey, we attempt to provide readers with a comprehensive review of colormap generation techniques and provide readers a taxonomy which is helpful for finding appropriate techniques to use for their data and applications. Specifically, we first briefly introduce the basics of color spaces including color appearance models. In the core of our paper, we survey colormap generation techniques, including the latest advances in the field by grouping these techniques into four classes: procedural methods, user-study based methods, rule-based methods, and data-driven methods; we also include a section on methods that are beyond pure data comprehension purposes. We then classify colormapping techniques into a taxonomy for readers to quickly identify the appropriate techniques they might use. Furthermore, a representative set of visualization techniques that explicitly discuss the use of colormaps is reviewed and classified based on the nature of the data in these applications. Our paper is also intended to be a reference of colormap choices for readers when they are faced with similar data and/or tasks. PMID:26513793

  7. The Effect of an Attachment-Based Behaviour Therapy for Children with Visual and Severe Intellectual Disabilities

    ERIC Educational Resources Information Center

    Sterkenburg, P. S.; Janssen, C. G. C.; Schuengel, C.

    2008-01-01

    Background: A combination of an attachment-based therapy and behaviour modification was investigated for children with persistent challenging behaviour. Method: Six clients with visual and severe intellectual disabilities, severe challenging behaviour and with a background of pathogenic care were treated. Challenging behaviour was recorded…

  8. A Computer-Based Program to Teach Braille Reading to Sighted Individuals

    ERIC Educational Resources Information Center

    Scheithauer, Mindy C.; Tiger, Jeffrey H.

    2012-01-01

    Instructors of the visually impaired need efficient braille-training methods. This study conducted a preliminary evaluation of a computer-based program intended to teach the relation between braille characters and English letters using a matching-to-sample format with 4 sighted college students. Each participant mastered matching visual depictions…

  9. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    PubMed

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and the visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of same size after shuffling it and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images, and is robust to withstand several image processing attacks. Comparison with the other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.

  10. A neotropical Miocene pollen database employing image-based search and semantic modeling

    PubMed Central

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-01-01

    • Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648

  11. Exploring social inclusion strategies for public health research and practice: The use of participatory visual methods to counter stigmas surrounding street-based substance abuse in Colombia.

    PubMed

    Ritterbusch, Amy E

    2016-01-01

    This paper presents the participatory visual research design and findings from a qualitative assessment of the social impact of bazuco and inhalant/glue consumption among street youth in Bogotá, Colombia. The paper presents the visual methodologies our participatory action research (PAR) team employed in order to identify and overcome the stigmas and discrimination that street youth experience in society and within state-sponsored drug rehabilitation programmes. I call for critical reflection regarding the broad application of the terms 'participation' and 'participatory' in visual research and urge scholars and public health practitioners to consider the transformative potential of PAR for both the research and practice of global public health in general and rehabilitation programmes for street-based substance abuse in Colombia in particular. The paper concludes with recommendations as to how participatory visual methods can be used to promote social inclusion practices and to work against stigma and discrimination in health-related research and within health institutions.

  12. A Simple Model-Based Approach to Inferring and Visualizing Cancer Mutation Signatures

    PubMed Central

    Shiraishi, Yuichi; Tremmel, Georg; Miyano, Satoru; Stephens, Matthew

    2015-01-01

    Recent advances in sequencing technologies have enabled the production of massive amounts of data on somatic mutations from cancer genomes. These data have led to the detection of characteristic patterns of somatic mutations or “mutation signatures” at an unprecedented resolution, with the potential for new insights into the causes and mechanisms of tumorigenesis. Here we present new methods for modelling, identifying and visualizing such mutation signatures. Our methods greatly simplify mutation signature models compared with existing approaches, reducing the number of parameters by orders of magnitude even while increasing the contextual factors (e.g. the number of flanking bases) that are accounted for. This improves both sensitivity and robustness of inferred signatures. We also provide a new intuitive way to visualize the signatures, analogous to the use of sequence logos to visualize transcription factor binding sites. We illustrate our new method on somatic mutation data from urothelial carcinoma of the upper urinary tract, and a larger dataset from 30 diverse cancer types. The results illustrate several important features of our methods, including the ability of our new visualization tool to clearly highlight the key features of each signature, the improved robustness of signature inferences from small sample sizes, and more detailed inference of signature characteristics such as strand biases and sequence context effects at the base two positions 5′ to the mutated site. The overall framework of our work is based on probabilistic models that are closely connected with “mixed-membership models” which are widely used in population genetic admixture analysis, and in machine learning for document clustering. We argue that recognizing these relationships should help improve understanding of mutation signature extraction problems, and suggests ways to further improve the statistical methods. Our methods are implemented in an R package pmsignature (https://github.com/friend1ws/pmsignature) and a web application available at https://friend1ws.shinyapps.io/pmsignature_shiny/. PMID:26630308

  13. Coupled binary embedding for large-scale image retrieval.

    PubMed

    Zheng, Liang; Wang, Shengjin; Tian, Qi

    2014-08-01

    Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated in our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when the global color feature is integrated, our method yields performance competitive with the state of the art.
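
    The indexing idea (couple a binary signature to every posting so that word-level matches are verified by Hamming distance and weighted by IDF) can be sketched as a small pure-Python inverted file. This shows the coupling of a single binary feature only; the paper's multi-IDF scheme over several features and its signature construction are not reproduced, and the threshold below is an assumption.

    ```python
    import math
    from collections import defaultdict

    class BinaryCoupledIndex:
        """Inverted file whose postings carry binary signatures for match verification."""
        def __init__(self, n_images, hamming_thresh=12):
            self.index = defaultdict(list)        # visual word -> [(image_id, signature)]
            self.n_images = n_images
            self.thresh = hamming_thresh

        def add(self, image_id, features):        # features: iterable of (word, int signature)
            for word, sig in features:
                self.index[word].append((image_id, sig))

        def query(self, features):
            scores = defaultdict(float)
            for word, sig in features:
                postings = self.index.get(word, [])
                if not postings:
                    continue
                df = len({img for img, _ in postings})
                idf = math.log(self.n_images / df)
                for image_id, other in postings:
                    if bin(sig ^ other).count("1") <= self.thresh:   # Hamming verification
                        scores[image_id] += idf
            return sorted(scores.items(), key=lambda kv: -kv[1])
    ```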

  14. A Double Take: The Practical and Ethical Dilemmas of Teaching the Visual Method of Photo Elicitation

    ERIC Educational Resources Information Center

    Wakefield, Caroline; Watt, Sal

    2014-01-01

    This paper advocates the teaching of photo elicitation in higher education as a valuable data collection technique and draws on our experience of teaching this visual method across two consecutive postgraduate cohorts. Building on previous work (Watt & Wakefield, 2014) and based on a former concern regarding student duty of care, a…

  15. A ganglion-cell-based primary image representation method and its contribution to object recognition

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song

    2016-10-01

    A visual stimulus is represented by the biological visual system at several levels, from low to high: photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells, and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, yet must meet a wide range of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings have attributed this universal adaptation to GCs' receptive field (RF) mechanisms. For the purpose of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be improved remarkably. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.

  16. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time, so the burden on physicians' disease-finding efforts is substantial. In fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per second), and captures very close-range images of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail. This paper presents a novel concept for real-time CE video stabilization and display. Instead of working directly on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, are presented. In addition, non-rigid panoramic image registration methods are discussed.

  17. Focus-on-form instructional methods promote deaf college students' improvement in English grammar.

    PubMed

    Berent, Gerald P; Kelly, Ronald R; Aldersley, Stephen; Schmitz, Kathryn L; Khalsa, Baldev Kaur; Panara, John; Keenan, Susan

    2007-01-01

    Focus-on-form English teaching methods are designed to facilitate second-language learners' noticing of target language input, where "noticing" is an acquisitional prerequisite for the comprehension, processing, and eventual integration of new grammatical knowledge. While primarily designed for teaching hearing second-language learners, many focus-on-form methods lend themselves to visual presentation. This article reports the results of classroom research on the visually based implementation of focus-on-form methods with deaf college students learning English. Two of 3 groups of deaf students received focus-on-form instruction during a 10-week remedial grammar course; a third control group received grammatical instruction that did not involve focus-on-form methods. The 2 experimental groups exhibited significantly greater improvement in English grammatical knowledge relative to the control group. These results validate the efficacy of visually based focus-on-form English instruction for deaf students of English and set the stage for the continual search for innovative and effective English teaching methodologies.

  18. TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.

    PubMed

    Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas

    2017-01-01

    Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
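
    For readers unfamiliar with the underlying machinery, the sketch below recomputes topics and a 2-D layout for the documents currently under a lens using off-the-shelf scikit-learn NMF and t-SNE. These stand in for (and are simpler than) the paper's improved topic model and semi-supervised embedding; the topic count and perplexity are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.manifold import TSNE

    def lens_topics(doc_term, n_topics=5, random_state=0):
        """Topic model + 2-D embedding for the documents under the lens."""
        nmf = NMF(n_components=n_topics, init="nndsvd",
                  random_state=random_state, max_iter=400)
        doc_topic = nmf.fit_transform(doc_term)            # document-topic weights
        coords = TSNE(n_components=2, perplexity=15,
                      random_state=random_state).fit_transform(doc_topic)
        return doc_topic, nmf.components_, coords          # topics and 2-D layout

    # Example lens: 200 documents over a 300-term vocabulary (random stand-in data).
    doc_term = np.random.default_rng(0).random((200, 300))
    doc_topic, topic_term, coords = lens_topics(doc_term)
    ```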

  19. Gradient-based interpolation method for division-of-focal-plane polarimeters.

    PubMed

    Gao, Shengkui; Gruev, Viktor

    2013-01-14

    Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.
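
    A simplified sketch of gradient-guided reconstruction of one polarization channel from a 2x2 DoFP mosaic: candidate estimates from row and column neighbours are built by normalised convolution, and the direction with the smaller local gradient wins, so interpolation runs along edges rather than across them. The mosaic layout and the two-pass filling strategy are assumptions, not the authors' exact algorithm.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    KH = np.array([[0.5, 0.0, 0.5]])      # horizontal neighbour average
    KV = KH.T                             # vertical neighbour average
    GH = np.array([[1.0, 0.0, -1.0]])     # horizontal difference
    GV = GH.T

    def fill_pass(values, known):
        """One gradient-guided fill pass over a sparsely sampled channel."""
        w = known.astype(float)
        den_h, den_v = convolve(w, KH, mode="mirror"), convolve(w, KV, mode="mirror")
        est_h = convolve(values * w, KH, mode="mirror") / np.maximum(den_h, 1e-9)
        est_v = convolve(values * w, KV, mode="mirror") / np.maximum(den_v, 1e-9)
        grad_h = np.abs(convolve(values * w, GH, mode="mirror"))
        grad_v = np.abs(convolve(values * w, GV, mode="mirror"))
        # prefer the direction with the smaller gradient; fall back to the only
        # direction that actually has same-channel neighbours
        pick_h = np.where(den_v == 0, True, np.where(den_h == 0, False, grad_h <= grad_v))
        fillable = ~known & ((den_h > 0) | (den_v > 0))
        out = np.where(fillable, np.where(pick_h, est_h, est_v), values)
        return out, known | fillable

    # Reconstruct the 0-degree channel (assumed to sit at even rows / even columns).
    rng = np.random.default_rng(0)
    mosaic = rng.random((8, 8))
    chan, mask = np.zeros_like(mosaic), np.zeros(mosaic.shape, dtype=bool)
    chan[0::2, 0::2], mask[0::2, 0::2] = mosaic[0::2, 0::2], True
    filled, mask = fill_pass(chan, mask)      # fills row/column neighbour sites
    filled, mask = fill_pass(filled, mask)    # fills the remaining diagonal sites
    ```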

  20. Web-Based Assessment of Visual and Visuospatial Symptoms in Parkinson's Disease

    PubMed Central

    Amick, Melissa M.; Miller, Ivy N.; Neargarder, Sandy; Cronin-Golomb, Alice

    2012-01-01

    Visual and visuospatial dysfunction is prevalent in Parkinson's disease (PD). To promote assessment of these often overlooked symptoms, we adapted the PD Vision Questionnaire for Internet administration. The questionnaire evaluates visual and visuospatial symptoms, impairments in activities of daily living (ADLs), and motor symptoms. PD participants of mild to moderate motor severity (n = 24) and healthy control participants (HC, n = 23) completed the questionnaire in paper and web-based formats. Reliability was assessed by comparing responses across formats. Construct validity was evaluated by reference to performance on measures of vision, visuospatial cognition, ADLs, and motor symptoms. The web-based format showed excellent reliability with respect to the paper format for both groups (all P′s < 0.001; HC completing the visual and visuospatial section only). Demonstrating the construct validity of the web-based questionnaire, self-rated ADL and visual and visuospatial functioning were significantly associated with performance on objective measures of these abilities (all P′s < 0.01). The findings indicate that web-based administration may be a reliable and valid method of assessing visual and visuospatial and ADL functioning in PD. PMID:22530162

  1. A visual model for object detection based on active contours and level-set method.

    PubMed

    Satoh, Shunji

    2006-09-01

    A visual model for object detection is proposed. In order to make the detection ability comparable with existing technical methods for object detection, an evolution equation of neurons in the model is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from the evolution equation. One drawback involved with initial values of active contours is alleviated by introducing and formulating convexity, which is a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and that it is tolerant of noise. A visual attention model is introduced into the proposed model. Other simulations show that the visual properties of the model are consistent with the results of psychological experiments that disclose the relation between figure-ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, which is a characteristic observed in human visual perception.

  2. Visual-area coding technique (VACT): optical parallel implementation of fuzzy logic and its visualization with the digital-halftoning process

    NASA Astrophysics Data System (ADS)

    Konishi, Tsuyoshi; Tanida, Jun; Ichioka, Yoshiki

    1995-06-01

    A novel technique, the visual-area coding technique (VACT), for the optical implementation of fuzzy logic with the capability of visualization of the results is presented. This technique is based on the microfont method and is considered to be an instance of digitized analog optical computing. Huge amounts of data can be processed in fuzzy logic with the VACT. In addition, real-time visualization of the processed result can be accomplished.

  3. Empirical Analysis of the Subjective Impressions and Objective Measures of Domain Scientists’ Visual Analytic Judgments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik

    2017-05-08

    Scientists often use specific data analysis and presentation methods familiar within their domain. But does high familiarity drive better analytical judgment? This question is especially relevant when familiar methods themselves can have shortcomings: many visualizations used conventionally for scientific data analysis and presentation do not follow established best practices. This necessitates new methods that might be unfamiliar yet prove to be more effective. But there is little empirical understanding of the relationships between scientists' subjective impressions about familiar and unfamiliar visualizations and objective measures of their visual analytic judgments. To address this gap and to study these factors, we focus on visualizations used for comparison of climate model performance. We report on a comprehensive survey-based user study with 47 climate scientists and present an analysis of: (i) relationships among scientists' familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and (ii) relationships among domain experience, visualization familiarity, and post-study preference.

  4. Annotation Graphs: A Graph-Based Visualization for Meta-Analysis of Data Based on User-Authored Annotations.

    PubMed

    Zhao, Jian; Glueck, Michael; Breslav, Simon; Chevalier, Fanny; Khan, Azam

    2017-01-01

    User-authored annotations of data can support analysts in the activity of hypothesis generation and sensemaking, where it is not only critical to document key observations, but also to communicate insights between analysts. We present annotation graphs, a dynamic graph visualization that enables meta-analysis of data based on user-authored annotations. The annotation graph topology encodes annotation semantics, which describe the content of and relations between data selections, comments, and tags. We present a mixed-initiative approach to graph layout that integrates an analyst's manual manipulations with an automatic method based on similarity inferred from the annotation semantics. Various visual graph layout styles reveal different perspectives on the annotation semantics. Annotation graphs are implemented within C8, a system that supports authoring annotations during exploratory analysis of a dataset. We apply principles of Exploratory Sequential Data Analysis (ESDA) in designing C8, and further link these to an existing task typology in the visualization literature. We develop and evaluate the system through an iterative user-centered design process with three experts, situated in the domain of analyzing HCI experiment data. The results suggest that annotation graphs are effective as a method of visually extending user-authored annotations to data meta-analysis for discovery and organization of ideas.

  5. Ergodic theory and visualization. II. Fourier mesochronic plots visualize (quasi)periodic sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levnajić, Zoran; Department of Mechanical Engineering, University of California Santa Barbara, Santa Barbara, California 93106; Mezić, Igor

    We present an application and analysis of a visualization method for measure-preserving dynamical systems introduced by I. Mezić and A. Banaszuk [Physica D 197, 101 (2004)], based on frequency analysis and Koopman operator theory. This extends our earlier work on visualization of the ergodic partition [Z. Levnajić and I. Mezić, Chaos 20, 033114 (2010)]. Our method employs the concept of the Fourier time average [I. Mezić and A. Banaszuk, Physica D 197, 101 (2004)] and is realized as a computational algorithm for the visualization of periodic and quasi-periodic sets in the phase space. The complement of the periodic phase space partition contains the chaotic zone, and we show how to identify it. The range of the method's applicability is illustrated using the well-known Chirikov standard map, while its potential in illuminating higher-dimensional dynamics is presented by studying the Froeschlé map and the Extended Standard Map.
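
    A small numerical sketch in the spirit of the construction described above: the Fourier time average of an observable along trajectories of the Chirikov standard map, evaluated on a grid of initial conditions and displayed as an image. The observable, frequency, grid size, and iteration count are arbitrary choices for illustration.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    def fourier_time_average(K=1.0, omega=0.0, n_iter=500, grid=150):
        """|(1/N) sum_n exp(-i*omega*n) f(x_n, p_n)| over standard-map trajectories,
        for the observable f(x, p) = cos(x) + sin(p), on a grid of initial conditions."""
        x0 = np.linspace(0.0, 2 * np.pi, grid, endpoint=False)
        x, p = np.meshgrid(x0, x0)
        acc = np.zeros_like(x, dtype=complex)
        for n in range(n_iter):
            acc += np.exp(-1j * omega * n) * (np.cos(x) + np.sin(p))
            p = np.mod(p + K * np.sin(x), 2 * np.pi)      # Chirikov standard map step
            x = np.mod(x + p, 2 * np.pi)
        return np.abs(acc) / n_iter

    img = fourier_time_average(K=1.0, omega=2 * np.pi / 3)   # highlights period-3 sets
    plt.imshow(img, origin="lower", extent=[0, 2 * np.pi, 0, 2 * np.pi])
    plt.xlabel("x"); plt.ylabel("p"); plt.colorbar(label="|Fourier time average|")
    plt.show()
    ```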

  6. Ergodic theory and visualization. II. Fourier mesochronic plots visualize (quasi)periodic sets.

    PubMed

    Levnajić, Zoran; Mezić, Igor

    2015-05-01

    We present an application and analysis of a visualization method for measure-preserving dynamical systems introduced by I. Mezić and A. Banaszuk [Physica D 197, 101 (2004)], based on frequency analysis and Koopman operator theory. This extends our earlier work on visualization of the ergodic partition [Z. Levnajić and I. Mezić, Chaos 20, 033114 (2010)]. Our method employs the concept of the Fourier time average [I. Mezić and A. Banaszuk, Physica D 197, 101 (2004)] and is realized as a computational algorithm for the visualization of periodic and quasi-periodic sets in the phase space. The complement of the periodic phase space partition contains the chaotic zone, and we show how to identify it. The range of the method's applicability is illustrated using the well-known Chirikov standard map, while its potential in illuminating higher-dimensional dynamics is presented by studying the Froeschlé map and the Extended Standard Map.

  7. Microelectrode Recordings Validate the Clinical Visualization of Subthalamic-Nucleus Based on 7T Magnetic Resonance Imaging and Machine Learning for Deep Brain Stimulation Surgery.

    PubMed

    Shamir, Reuben R; Duchin, Yuval; Kim, Jinyoung; Patriat, Remi; Marmor, Odeya; Bergman, Hagai; Vitek, Jerrold L; Sapiro, Guillermo; Bick, Atira; Eliahou, Ruth; Eitan, Renana; Israel, Zvi; Harel, Noam

    2018-05-24

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a proven and effective therapy for the management of the motor symptoms of Parkinson's disease (PD). While accurate positioning of the stimulating electrode is critical for the success of this therapy, precise identification of the STN based on imaging can be challenging. We developed a method to accurately visualize the STN on standard clinical magnetic resonance imaging (MRI). The method incorporates a database of 7-Tesla (T) MRIs of PD patients together with machine-learning methods (hereafter 7 T-ML). Our objective was to validate the clinical application accuracy of the 7 T-ML method by comparing it with identification of the STN based on intraoperative microelectrode recordings. Sixteen PD patients who underwent microelectrode-recording-guided STN DBS were included in this study (30 implanted leads and electrode trajectories). The length of the STN along the electrode trajectory and the classification of its contacts as dorsal to, inside, or ventral to the STN were compared between the microelectrode recordings and the 7 T-ML method computed from each patient's clinical 3T MRI. All 30 electrode trajectories that intersected the STN based on microelectrode recordings also intersected it when visualized with the 7 T-ML method. The average STN trajectory length was 6.2 ± 0.7 mm based on microelectrode recordings and 5.8 ± 0.9 mm for the 7 T-ML method. We observed 93% agreement regarding contact location between the microelectrode recordings and the 7 T-ML method. The 7 T-ML method is highly consistent with microelectrode-recording data and provides a reliable and accurate patient-specific prediction for targeting the STN.

  8. Fused methods for visual saliency estimation

    NASA Astrophysics Data System (ADS)

    Danko, Amanda S.; Lyu, Siwei

    2015-02-01

    In this work, we present a new model of visual saliency created by combining results from existing methods, improving upon their performance and accuracy. By fusing pre-attentive and context-aware methods, we highlight the abilities of state-of-the-art models while compensating for their deficiencies. We put this theory to the test in a series of experiments, comparatively evaluating the visual saliency maps and employing them for content-based image retrieval and thumbnail generation. We find that on average our model yields definitive improvements in recall and f-measure with comparable precision. In addition, we find that all image searches using our fused method return more correct images and rank them higher than searches using the original methods alone.

  9. Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.

    PubMed

    André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2011-01-01

    Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground-truth. When the available ground-truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study, we first present a tool to generate perceived similarity ground-truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate against the generated ground-truth a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived similarity ground-truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.

  10. Visual Thinking Routines: A Mixed Methods Approach Applied to Student Teachers at the American University in Dubai

    ERIC Educational Resources Information Center

    Gholam, Alain

    2017-01-01

    Visual thinking routines are principles based on several theories, approaches, and strategies. Such routines promote thinking skills, call for collaboration and sharing of ideas, and above all, make thinking and learning visible. Visual thinking routines were implemented in the teaching methodology graduate course at the American University in…

  11. A Virtual World of Visualization

    NASA Technical Reports Server (NTRS)

    1998-01-01

    In 1990, Sterling Software, Inc., developed the Flow Analysis Software Toolkit (FAST) for NASA Ames on contract. FAST is a workstation-based modular analysis and visualization tool. It is used to visualize and animate grids and grid-oriented data, typically generated by finite difference, finite element and other analytical methods. FAST is now available through COSMIC, NASA's software storehouse.

  12. Visualization of pass-by noise by means of moving frame acoustic holography.

    PubMed

    Park, S H; Kim, Y H

    2001-11-01

    The noise generated by the pass-by test (ISO 362) was visualized. Moving frame acoustic holography was improved to visualize the pass-by noise and predict its level. The proposed method allowed us to visualize tire and engine noise generated by the pass-by test based on the following assumption: the noise can be regarded as quasi-stationary. This is first because the speed change during the period of our interest is negligible and second because the frequency change of the noise is also negligible. The proposed method was verified by a controlled loudspeaker experiment. Effects of running condition, e.g., accelerating according to ISO 362, cruising at constant speed, and coasting down, on the radiated noise were also visualized. The visualized results show where the tire noise is generated and how it propagates.

  13. Web-based visual analysis for high-throughput genomics

    PubMed Central

    2013-01-01

    Background Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. Results We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Conclusions Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics experiments. PMID:23758618

  14. Research on Visual Analysis Methods of Terrorism Events

    NASA Astrophysics Data System (ADS)

    Guo, Wenyue; Liu, Haiyan; Yu, Anzhu; Li, Jing

    2016-06-01

    Given that terrorism events occur more and more frequently throughout the world, improving the response capability to social security incidents has become an important test of governments' ability to govern. Visual analysis has become an important method of event analysis because of its intuitiveness and effectiveness. To analyse events' spatio-temporal distribution characteristics, the correlations among event items and the development trend, the spatio-temporal characteristics of terrorism events are discussed. A suitable event data table structure based on the "5W" theory is designed. Then, six types of visual analysis are proposed, and how to use thematic maps and statistical charts to realize visual analysis of terrorism events is studied. Finally, experiments have been carried out using the data provided by the Global Terrorism Database, and the results of the experiments prove the applicability of the methods.

  15. Fostering Creative Thinking Within the U.S. Army Command and General Staff Officers’ Course Curriculum

    DTIC Science & Technology

    2016-06-10

    and complexity to their learning” that is not present in traditional teaching methods (James and Brookfield 2014, 4). In Engaging Imagination... method described is the use of visually based teaching and learning. James and Brookfield, delineate between looking and seeing (James and Brookfield...learning methods more applicable to some students as opposed to others. However, the exploration of visual teaching techniques through the use of pictures

  16. Visual Reading Method for Detection of Bacterial Tannase

    PubMed Central

    Osawa, R.; Walsh, T. P.

    1993-01-01

    Tannase activity of bacteria capable of degrading tannin-protein complexes was determined by a newly developed visual reading method. The method is based on two phenomena: (i) the ability of tannase to hydrolyze methyl gallate to release free gallic acid and (ii) the green to brown coloration of gallic acid after prolonged exposure to oxygen in an alkaline condition. The method has been successfully used to detect the presence of tannase in the cultures of bacteria capable of degrading tannin-protein complexes. PMID:16348918

  17. Evaluation of four novel isothermal amplification assays towards simple and rapid genotyping of chloroquine resistant Plasmodium falciparum.

    PubMed

    Chahar, Madhvi; Anvikar, Anup; Dixit, Rajnikant; Valecha, Neena

    2018-07-01

    The loop-mediated isothermal amplification (LAMP) assay is a sensitive, rapid, high-throughput and field-deployable technique for nucleic acid amplification under isothermal conditions. In this study, we developed and optimized four different visualization methods for a LAMP assay to detect Pfcrt K76T mutants of P. falciparum and compared their important features for one-pot in-field applications. Although all four tested LAMP methods could successfully detect K76T mutants of P. falciparum, the malachite green and HNB based methods were found to be more efficient when considering time, safety, sensitivity, cost and simplicity. Among the four visual dyes used to detect LAMP products, hydroxynaphthol blue and malachite green produced a long-lasting, stable color change and brightness in a closed-tube approach that prevents the risk of cross-contamination. Our results indicate that LAMP offers a novel, convenient, rapid, sensitive, cost-effective and fairly user-friendly tool for the detection of K76T mutants of P. falciparum and therefore presents an alternative to PCR-based assays. Based on our comparative analysis, a suitable field-based LAMP visualization method can easily be chosen for the monitoring of other important drug targets (e.g., the Kelch13 propeller region). Copyright © 2018 Elsevier Inc. All rights reserved.

  18. DNA Data Visualization (DDV): Software for Generating Web-Based Interfaces Supporting Navigation and Analysis of DNA Sequence Data of Entire Genomes.

    PubMed

    Neugebauer, Tomasz; Bordeleau, Eric; Burrus, Vincent; Brzezinski, Ryszard

    2015-01-01

    Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualizations of human and bacterial chromosomes. Examples of visually detectable features, such as short and long direct repeats, long terminal repeats, mobile genetic elements, and heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
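
    The GC skew reported through the interface follows the standard definition (G − C)/(G + C) over a window; a minimal sliding-window sketch is given below, with the window and step sizes as illustrative choices rather than the software's defaults.

```python
# Sliding-window GC skew, as commonly defined: (G - C) / (G + C) per window.
# The window size, step size, and the example sequence are illustrative.
def gc_skew(seq, window=1000, step=500):
    seq = seq.upper()
    skews = []
    for start in range(0, max(len(seq) - window + 1, 1), step):
        w = seq[start:start + window]
        g, c = w.count("G"), w.count("C")
        skews.append((g - c) / (g + c) if (g + c) > 0 else 0.0)
    return skews

print(gc_skew("ATGCGGGCCCATAT" * 200, window=500, step=250)[:5])
```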

  19. Visually scoring hirsutism

    PubMed Central

    Yildiz, Bulent O.; Bolour, Sheila; Woods, Keslie; Moore, April; Azziz, Ricardo

    2010-01-01

    BACKGROUND Hirsutism is the presence of excess body or facial terminal (coarse) hair growth in females in a male-like pattern, affects 5–15% of women, and is an important sign of underlying androgen excess. Different methods are available for the assessment of hair growth in women. METHODS We conducted a literature search and analyzed the published studies that reported methods for the assessment of hair growth. We review the basic physiology of hair growth, the development of methods for visually quantifying hair growth, the comparison of these methods with objective measurements of hair growth, how hirsutism may be defined using a visual scoring method, the influence of race and ethnicity on hirsutism, and the impact of hirsutism in diagnosing androgen excess and polycystic ovary syndrome. RESULTS Objective methods for the assessment of hair growth including photographic evaluations and microscopic measurements are available but these techniques have limitations for clinical use, including a significant degree of complexity and a high cost. Alternatively, methods for visually scoring or quantifying the amount of terminal body and facial hair growth have been in use since the early 1920s; these methods are semi-quantitative at best and subject to significant inter-observer variability. The most common visual method of scoring the extent of body and facial terminal hair growth in use today is based on a modification of the method originally described by Ferriman and Gallwey in 1961 (i.e. the mFG method). CONCLUSION Overall, the mFG scoring method is a useful visual instrument for assessing excess terminal hair growth, and the presence of hirsutism, in women. PMID:19567450

  20. Vision and quality-of-life.

    PubMed Central

    Brown, G C

    1999-01-01

    OBJECTIVE: To determine the relationship of visual acuity loss to quality of life. DESIGN: Three hundred twenty-five patients with visual loss to a minimum of 20/40 or greater in at least 1 eye were interviewed in a standardized fashion using a modified VF-14 questionnaire. Utility values were also obtained using both the time trade-off and standard gamble methods of utility assessment. MAIN OUTCOME MEASURES: Best-corrected visual acuity was correlated with the visual function score on the modified VF-14 questionnaire, as well as with utility values obtained using both the time trade-off and standard gamble methods. RESULTS: Decreasing levels of vision in the eye with better acuity correlated directly with decreasing visual function scores on the modified VF-14 questionnaire, as did decreasing utility values using the time trade-off method of utility evaluation. The standard gamble method of utility evaluation was not as directly correlated with vision as the time trade-off method. Age, level of education, gender, race, length of time of visual loss, and the number of associated systemic comorbidities did not significantly affect the time trade-off utility values associated with visual loss in the better eye. The level of reduced vision in the better eye, rather than the specific disease process causing reduced vision, was related to mean utility values. The average person with 20/40 vision in the better-seeing eye was willing to trade 2 of every 10 years of life in return for perfect vision (utility value of 0.8), while the average person with counting fingers vision in the better eye was willing to trade approximately 5 of every 10 remaining years of life (utility value of 0.52) in return for perfect vision. CONCLUSIONS: The time trade-off method of utility evaluation appears to be an effective method for assessing quality of life associated with visual loss. Time trade-off utility values decrease in direct conjunction with decreasing vision in the better-seeing eye. Unlike the modified VF-14 test and its counterparts, utility values allow the quality of life associated with visual loss to be more readily compared to the quality of life associated with other health (disease) states. This information can be employed for cost-effectiveness analyses that objectively compare evidence-based medicine, patient-based preferences and sound econometric principles across all specialties in health care. PMID:10703139
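
    The time trade-off arithmetic in the abstract can be made explicit: the utility is the fraction of remaining life expectancy the respondent would not trade for perfect vision. The tiny sketch below mirrors the two examples given (2 of 10 years traded, and roughly 5 of 10).

```python
# Time trade-off utility: the fraction of remaining life expectancy the
# respondent would NOT trade for perfect vision. Numbers mirror the abstract.
def time_tradeoff_utility(years_traded, years_remaining):
    return 1.0 - years_traded / years_remaining

print(time_tradeoff_utility(2, 10))    # 0.8  -> 20/40 vision in the better eye
print(time_tradeoff_utility(4.8, 10))  # 0.52 -> counting-fingers vision (~5 of 10 years)
```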

  1. ChemMaps: Towards an approach for visualizing the chemical space based on adaptive satellite compounds

    PubMed Central

    Naveja, J. Jesús; Medina-Franco, José L.

    2017-01-01

    We present a novel approach called ChemMaps for visualizing chemical space based on the similarity matrix of compound datasets generated with molecular fingerprints’ similarity. The method uses a ‘satellites’ approach, where satellites are, in principle, molecules whose similarity to the rest of the molecules in the database provides sufficient information for generating a visualization of the chemical space. Such an approach could help make chemical space visualizations more efficient. We hereby describe a proof-of-principle application of the method to various databases that have different diversity measures. Unsurprisingly, we found the method works better with databases that have low 2D diversity. 3D diversity played a secondary role, although it seems to be more relevant as 2D diversity increases. For less diverse datasets, taking as few as 25% satellites seems to be sufficient for a fair depiction of the chemical space. We propose to iteratively increase the satellites number by a factor of 5% relative to the whole database, and stop when the new and the prior chemical space correlate highly. This Research Note represents a first exploratory step, prior to the full application of this method for several datasets. PMID:28794856

  2. ChemMaps: Towards an approach for visualizing the chemical space based on adaptive satellite compounds.

    PubMed

    Naveja, J Jesús; Medina-Franco, José L

    2017-01-01

    We present a novel approach called ChemMaps for visualizing chemical space based on the similarity matrix of compound datasets generated with molecular fingerprints' similarity. The method uses a 'satellites' approach, where satellites are, in principle, molecules whose similarity to the rest of the molecules in the database provides sufficient information for generating a visualization of the chemical space. Such an approach could help make chemical space visualizations more efficient. We hereby describe a proof-of-principle application of the method to various databases that have different diversity measures. Unsurprisingly, we found the method works better with databases that have low 2D diversity. 3D diversity played a secondary role, although it seems to be more relevant as 2D diversity increases. For less diverse datasets, taking as few as 25% satellites seems to be sufficient for a fair depiction of the chemical space. We propose to iteratively increase the satellites number by a factor of 5% relative to the whole database, and stop when the new and the prior chemical space correlate highly. This Research Note represents a first exploratory step, prior to the full application of this method for several datasets.
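
    One plausible reading of the satellites idea, sketched below, is to describe every compound by its similarity to a small satellite subset and project that reduced similarity matrix to two dimensions. The synthetic binary fingerprints, the use of PCA for the projection, and the 25% satellite fraction are all assumptions for illustration.

```python
# Illustrative sketch of a "satellites" chemical-space map: each compound is
# described by its Tanimoto similarity to a small satellite subset, and that
# reduced similarity matrix is projected to 2D. Synthetic binary fingerprints,
# PCA as the projection, and a 25% satellite fraction are assumptions here.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
fps = rng.integers(0, 2, size=(400, 512)).astype(bool)  # stand-in fingerprints

def tanimoto(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

n_sat = int(0.25 * len(fps))
satellites = rng.choice(len(fps), size=n_sat, replace=False)

# Similarity of every compound to each satellite compound.
sim = np.array([[tanimoto(f, fps[s]) for s in satellites] for f in fps])

coords = PCA(n_components=2).fit_transform(sim)  # 2D chemical-space map
print(coords.shape)
```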

  3. Positive Contrast Visualization of Nitinol Devices using Susceptibility Gradient Mapping

    PubMed Central

    Vonken, Evert-jan P.A.; Schär, Michael; Stuber, Matthias

    2008-01-01

    MRI visualization of devices is traditionally based on the signal loss due to T2* effects originating from local susceptibility differences. To visualize nitinol devices with positive contrast, a recently introduced post-processing method is adapted to map the induced susceptibility gradients. This method operates on regular gradient-echo MR images and maps the shift in k-space in a (small) neighborhood of every voxel by Fourier analysis followed by a center-of-mass calculation. The quantitative map of the local shifts generates the positive-contrast image of the devices, while areas without susceptibility gradients render a background with noise only. The positive signal response of this method depends only on the choice of the voxel neighborhood size. The properties of the method are explained, and the visualization of a nitinol wire and two stents is shown for illustration. PMID:18727096
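
    The k-space shift mapping described above can be sketched as follows: for each voxel, Fourier-transform a small neighborhood of the complex gradient-echo image and take the centre of mass of the magnitude spectrum, measured from the spectrum centre, as the local shift. The patch size and the synthetic phase-ramp test image are illustrative assumptions.

```python
# Sketch of susceptibility-gradient mapping: per voxel, FFT a small neighborhood
# of the complex gradient-echo image and use the centre of mass of the magnitude
# spectrum (relative to DC) as the local k-space shift. Patch size and the
# synthetic test image are illustrative assumptions.
import numpy as np

def sg_map(img, half=4):
    """img: 2D complex array. Returns the magnitude of the local k-space shift."""
    ny, nx = img.shape
    out = np.zeros((ny, nx))
    size = 2 * half + 1
    ky, kx = np.indices((size, size)) - half           # coordinates relative to DC
    for y in range(half, ny - half):
        for x in range(half, nx - half):
            patch = img[y - half:y + half + 1, x - half:x + half + 1]
            spec = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
            total = spec.sum()
            cy = (ky * spec).sum() / total
            cx = (kx * spec).sum() / total
            out[y, x] = np.hypot(cy, cx)                # shift magnitude
    return out

# Synthetic complex image with a localized phase ramp mimicking a susceptibility gradient.
yy, xx = np.indices((64, 64))
phase = np.zeros((64, 64))
phase[28:36, 28:36] = 0.8 * (xx[28:36, 28:36] - 32)
img = np.exp(1j * phase)
print(float(sg_map(img).max()))
```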

  4. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

    This paper presents a new method for wood defect detection. It can solve the over-segmentation problem found in local threshold segmentation methods. The method effectively takes advantage of visual saliency and local threshold segmentation. Firstly, defect areas are coarsely located by using the spectral residual method to calculate their global visual saliency. Then, threshold segmentation with the maximum inter-class variance (Otsu) method is adopted to position and segment the wood surface defects precisely around the coarsely located areas. Lastly, we use mathematical morphology to process the binary images after segmentation, which reduces the noise and removes small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains ideal segmentation results and is superior to existing segmentation methods based on edge detection, Otsu and threshold segmentation.
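
    A minimal version of the spectral residual step used for coarse localization is sketched below (in the spirit of the Hou-Zhang formulation); the averaging and smoothing filter sizes follow common practice and are assumptions rather than the paper's settings.

```python
# Minimal spectral-residual saliency, as used for the coarse localization step.
# Filter sizes follow common practice and are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    spectrum = np.fft.fft2(gray.astype(float))
    log_amp = np.log(np.abs(spectrum) + 1e-12)              # log amplitude
    phase = np.angle(spectrum)
    residual = log_amp - uniform_filter(log_amp, size=3)    # spectral residual
    recon = np.fft.ifft2(np.exp(residual + 1j * phase))     # reconstruct with phase
    saliency = gaussian_filter(np.abs(recon) ** 2, sigma=2.5)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)

if __name__ == "__main__":
    img = np.random.rand(128, 128)
    img[40:50, 60:80] += 2.0          # stand-in for a wood surface defect
    sal = spectral_residual_saliency(img)
    print(sal.shape, float(sal.max()))
```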

  5. Visual affective classification by combining visual and text features.

    PubMed

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

    Affective analysis of images in social networks has drawn much attention, and the texts surrounding images are proven to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations along with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.

  6. Visual affective classification by combining visual and text features

    PubMed Central

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

    Affective analysis of images in social networks has drawn much attention, and the texts surrounding images are proven to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations along with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task. PMID:28850566
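
    The Dempster-Shafer combination step named above can be illustrated with Dempster's rule for two basic mass assignments over a two-class frame; the frame and the mass values below are invented for illustration and are not taken from the paper.

```python
# Dempster's rule of combination for two basic mass assignments, sketching the
# D-S fusion step. The frame {pos, neg} and the mass values are illustrative.
from itertools import product

def combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to masses summing to 1."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                      # mass assigned to conflicting pairs
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

POS, NEG = frozenset({"pos"}), frozenset({"neg"})
BOTH = POS | NEG                                     # ignorance (whole frame)
visual = {POS: 0.6, NEG: 0.1, BOTH: 0.3}             # evidence from visual features
text   = {POS: 0.5, NEG: 0.2, BOTH: 0.3}             # evidence from text features
print(combine(visual, text))
```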

  7. Visual information mining in remote sensing image archives

    NASA Astrophysics Data System (ADS)

    Pelizzari, Andrea; Descargues, Vincent; Datcu, Mihai P.

    2002-01-01

    The present article focuses on the development of interactive exploratory tools for visually mining the image content in large remote sensing archives. Two aspects are treated: the iconic visualization of the global information in the archive and the progressive visualization of the image details. The proposed methods are integrated into the Image Information Mining (I2M) system. The images and image structure in the I2M system are indexed based on a probabilistic approach. The resulting links are managed by a relational database. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the database. Thus, new tools have been designed to visualize, in iconic representation, the relationships created during a query or information-mining operation: visualization of the query results positioned on the geographical map, a quick-look gallery, visualization of the measure of goodness of the query, and visualization of the image space for statistical evaluation purposes. Additionally, the I2M system is enhanced with progressive detail visualization in order to allow better access for operator inspection. I2M is a three-tier Java architecture and is optimized for the Internet.

  8. Audio-Visual Situational Awareness for General Aviation Pilots

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Lodha, Suresh K.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Weather is one of the major causes of general aviation accidents. Researchers are addressing this problem from various perspectives including improving meteorological forecasting techniques, collecting additional weather data automatically via on-board sensors and "flight" modems, and improving weather data dissemination and presentation. We approach the problem from the improved presentation perspective and propose weather visualization and interaction methods tailored for general aviation pilots. Our system, Aviation Weather Data Visualization Environment (AWE), utilizes information visualization techniques, a direct manipulation graphical interface, and a speech-based interface to improve a pilot's situational awareness of relevant weather data. The system design is based on a user study and feedback from pilots.

  9. [Hybrid 3-D rendering of the thorax and surface-based virtual bronchoscopy in surgical and interventional therapy control].

    PubMed

    Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D

    2001-07-01

    The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of color-coded surface and volume rendering methods, together with the feasibility of performing surface-based virtual endoscopy with different representation models in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization uses the advantages of both the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and the complex topographical relationship of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates reasonable to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.

  10. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    PubMed

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
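
    The velocity-threshold classification described above can be sketched with a simple two-threshold rule on angular gaze velocity; the thresholds, sampling rate, and synthetic gaze trace below are assumptions, not the calibrated values from the study.

```python
# Simple velocity-threshold labelling of gaze samples into fixation, smooth
# pursuit, and saccade. Thresholds (deg/s), sampling rate, and the synthetic
# gaze trace are assumptions, not the paper's calibrated values.
import numpy as np

def classify_gaze(azimuth_deg, elevation_deg, fs=500.0,
                  pursuit_thr=30.0, saccade_thr=100.0):
    vel = np.hypot(np.gradient(azimuth_deg), np.gradient(elevation_deg)) * fs  # deg/s
    labels = np.where(vel >= saccade_thr, "saccade",
             np.where(vel >= pursuit_thr, "pursuit", "fixation"))
    return labels, vel

t = np.arange(0, 1, 1 / 500.0)
az = np.where(t < 0.5, 1.0, 6.0) + 0.01 * np.random.randn(t.size)  # step -> saccade
el = np.zeros_like(az)
labels, vel = classify_gaze(az, el)
print(labels[245:255])
```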

  11. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    PubMed

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this is quite difficult for computers. As a hot and difficult problem, object classification has been receiving extensive interest with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed. A convolutional neural network (CNN), as one of the methods of deep learning, can be used to solve classification problems. However, most deep learning methods, including CNNs, ignore the human visual information-processing mechanism that operates when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we put forward a new classification method that combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the processing of the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features and to extract the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method has apparent advantages from a biological standpoint. Experimental results demonstrate that our method significantly improves classification efficiency.

  12. A framework for interactive visual analysis of heterogeneous marine data in an integrated problem solving environment

    NASA Astrophysics Data System (ADS)

    Liu, Shuai; Chen, Ge; Yao, Shifeng; Tian, Fenglin; Liu, Wei

    2017-07-01

    This paper presents a novel integrated marine visualization framework that focuses on processing and analyzing multi-dimensional spatiotemporal marine data in one workflow. Effective marine data visualization is needed for extracting useful patterns, recognizing changes, and understanding physical processes in oceanographic research. However, the multi-source, multi-format, multi-dimensional characteristics of marine data pose a challenge for interactive and feasible (timely) marine data analysis and visualization in one workflow. In addition, a global multi-resolution virtual terrain environment is needed to give oceanographers and the public a real geographic background reference and to help them identify the geographical variation of ocean phenomena. This paper introduces a data integration and processing method to efficiently visualize and analyze the heterogeneous marine data. Based on the processed data, several GPU-based visualization methods are explored to interactively demonstrate marine data. GPU-tessellated global terrain rendering using ETOPO1 data is realized, and video memory usage is controlled to ensure high efficiency. A modified ray-casting algorithm for the uneven multi-section Argo volume data is also presented, and the transfer function is designed to analyze the 3D structure of ocean phenomena. Based on the framework we designed, an integrated visualization system is realized, and the effectiveness and efficiency of the framework are demonstrated. This system is expected to make a significant contribution to the demonstration and understanding of marine physical processes in a virtual global environment.

  13. Texture-specific bag of visual words model and spatial cone matching-based method for the retrieval of focal liver lesions using multiphase contrast-enhanced CT images.

    PubMed

    Xu, Yingying; Lin, Lanfen; Hu, Hongjie; Wang, Dan; Zhu, Wenchao; Wang, Jian; Han, Xian-Hua; Chen, Yen-Wei

    2018-01-01

    The bag of visual words (BoVW) model is a powerful tool for feature representation that can integrate various handcrafted features like intensity, texture, and spatial information. In this paper, we propose a novel BoVW-based method that incorporates texture and spatial information for content-based image retrieval to assist radiologists in clinical diagnosis. This paper presents a texture-specific BoVW method to represent focal liver lesions (FLLs). Pixels in the region of interest (ROI) are classified into nine texture categories using the rotation-invariant uniform local binary pattern method. The BoVW-based features are calculated for each texture category. In addition, a spatial cone matching (SCM)-based representation strategy is proposed to describe the spatial information of the visual words in the ROI. In a pilot study, eight radiologists with different levels of clinical experience performed diagnoses for 20 cases with and without the top six retrieved results. A total of 132 multiphase computed tomography volumes including five pathological types were collected. The texture-specific BoVW was compared to other BoVW-based methods using the constructed dataset of FLLs. The results show that our proposed model outperforms the other three BoVW methods in discriminating different lesions. The SCM method, which adds spatial information to the orderless BoVW model, impacted the retrieval performance. In the pilot trial, the average diagnosis accuracy of the radiologists was improved from 66% to 80% using the retrieval system. The preliminary results indicate that the texture-specific features and the SCM-based BoVW features can effectively characterize various liver lesions. The retrieval system has the potential to improve the diagnostic accuracy and the confidence of the radiologists.
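
    The pixel-level texture labelling step can be sketched with scikit-image's rotation-invariant uniform local binary pattern. With P = 8 neighbours this yields P + 2 raw pattern labels; how the paper groups pixels into exactly nine texture categories is not reproduced here, so that mapping is left open.

```python
# Sketch of the texture labelling step: rotation-invariant uniform LBP codes via
# scikit-image. With P = 8 neighbours the 'uniform' method yields P + 2 = 10 raw
# pattern labels; the paper's grouping into nine categories is not reproduced.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_codes_and_histogram(roi, P=8, R=1):
    img = (roi * 255).astype(np.uint8)            # integer image for LBP
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3))  # labels 0 .. P+1
    return codes, hist / hist.sum()

roi = np.random.rand(64, 64)                       # stand-in for a lesion ROI
codes, hist = lbp_codes_and_histogram(roi)
print(hist.round(3))
```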

  14. Development and Standardization of an Alienation Scale for Visually Impaired Students

    ERIC Educational Resources Information Center

    Punia, Poonam; Berwal, Sandeep

    2017-01-01

    Introduction: The present study was undertaken to develop a valid and reliable scale for measuring a feeling of alienation in students with visual impairments (that is, those who are blind or have low vision). Methods: In this study, a pool of 60 items was generated to develop an Alienation Scale for Visually Impaired Students (AL-VI) based on a…

  15. Implications of Sustained and Transient Channels for Theories of Visual Pattern Masking, Saccadic Suppression, and Information Processing

    ERIC Educational Resources Information Center

    Breitmeyer, Bruno G.; Ganz, Leo

    1976-01-01

    This paper reviewed briefly the major types of masking effects obtained with various methods and the major theories or models that have been proposed to account for these effects, and outlined a three-mechanism model of visual pattern masking based on psychophysical and neurophysiological properties of the visual system. (Author/RK)

  16. Exploration of SWRL Rule Bases through Visualization, Paraphrasing, and Categorization of Rules

    NASA Astrophysics Data System (ADS)

    Hassanpour, Saeed; O'Connor, Martin J.; Das, Amar K.

    Rule bases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rule bases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rule base. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rule bases, enabling their acquisition and management.

  17. Toward a perceptual video-quality metric

    NASA Astrophysics Data System (ADS)

    Watson, Andrew B.

    1998-07-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals, we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
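
    The general shape of such a blockwise-DCT metric can be sketched as follows: transform reference and test blocks, scale the coefficient differences by visibility thresholds, and pool with a Minkowski norm. This is only an illustrative stand-in, not the author's published metric; the threshold matrix and pooling exponent are placeholders.

```python
# Illustrative blockwise-DCT error metric in the spirit described (not the
# author's published metric). The per-coefficient visibility thresholds and the
# Minkowski pooling exponent are placeholder assumptions.
import numpy as np
from scipy.fft import dctn

def block_dct_error(ref, test, block=8, beta=4.0):
    h = (ref.shape[0] // block) * block
    w = (ref.shape[1] // block) * block
    thresholds = 1.0 + np.add.outer(np.arange(block), np.arange(block))  # crude CSF stand-in
    errors = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            d_ref = dctn(ref[y:y + block, x:x + block], norm="ortho")
            d_tst = dctn(test[y:y + block, x:x + block], norm="ortho")
            errors.append(np.abs(d_ref - d_tst) / thresholds)
    e = np.concatenate([b.ravel() for b in errors])
    return float(np.mean(e ** beta) ** (1.0 / beta))      # Minkowski pooling

ref = np.random.rand(64, 64)
test = ref + 0.02 * np.random.randn(64, 64)
print(block_dct_error(ref, test))
```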

  18. Atlas and feature based 3D pathway visualization enhancement for skull base pre-operative fast planning from head CT

    NASA Astrophysics Data System (ADS)

    Aghdasi, Nava; Li, Yangming; Berens, Angelique; Moe, Kris S.; Bly, Randall A.; Hannaford, Blake

    2015-03-01

    Minimally invasive neuroendoscopic surgery provides an alternative to open craniotomy for many skull base lesions. These techniques provide a great benefit to the patient through shorter ICU stays, decreased post-operative pain and quicker return to baseline function. However, the density of critical neurovascular structures at the skull base makes planning for these procedures highly complex. Furthermore, additional surgical portals are often used to improve visualization and instrument access, which adds to the complexity of pre-operative planning. Surgical approach planning is currently limited and typically involves review of 2D axial, coronal, and sagittal CT and MRI images. In addition, skull base surgeons manually change the visualization effect to review all possible approaches to the target lesion and achieve an optimal surgical plan. This cumbersome process relies heavily on surgeon experience and it does not allow for 3D visualization. In this paper, we describe a rapid pre-operative planning system for skull base surgery using the following two novel concepts: importance-based highlighting and mobile portals. With this innovation, critical areas in the 3D CT model are highlighted based on segmentation results. Mobile portals allow surgeons to review multiple potential entry portals in real-time with improved visualization of critical structures located inside the pathway. To achieve this, we used the following methods: (1) novel bone-only atlases were manually generated, (2) the orbits and the center of the skull serve as features to quickly pre-align the patient's scan with the atlas, (3) a deformable registration technique was used for fine alignment, (4) surgical importance was assigned to each voxel according to a surgical dictionary, and (5) a pre-defined transfer function was applied to the processed data to highlight important structures. The proposed idea was fully implemented as independent planning software, and additional data were used for verification and validation. The experimental results show: (1) the proposed methods provided greatly improved planning efficiency while optimal surgical plans were successfully achieved, (2) the proposed methods successfully highlighted important structures and facilitated planning, (3) the proposed methods require a shorter processing time than classical segmentation algorithms, and (4) these methods can be used to improve surgical safety for surgical robots.

  19. A component-based software environment for visualizing large macromolecular assemblies.

    PubMed

    Sanner, Michel F

    2005-03-01

    The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.

  20. Deep visual-semantic for crowded video understanding

    NASA Astrophysics Data System (ADS)

    Deng, Chunhua; Zhang, Junwen

    2018-03-01

    Visual-semantic features play a vital role in crowded video understanding. Convolutional Neural Networks (CNNs) have achieved a significant breakthrough in learning representations from images. However, the learning of visual-semantic features, and how they can be effectively extracted for video analysis, still remains a challenging task. In this study, we propose a novel visual-semantic method to capture both appearance and dynamic representations. In particular, we propose a spatial context method based on fractional Fisher vector (FV) encoding of CNN features, which can be regarded as our main contribution. In addition, to capture temporal context information, we also apply the fractional encoding method to dynamic images. Experimental results on the WWW crowd video dataset demonstrate that the proposed method outperforms the state of the art.

  1. Liquid-to-gel transition for visual and tactile detection of biological analytes.

    PubMed

    Fedotova, Tatiana A; Kolpashchikov, Dmitry M

    2017-11-23

    So far all visual and instrument-free methods have been based on a color change. However, colorimetric assays cannot be used by blind or color-blind people. Here we introduce a liquid-to-gel transition as a general output platform. The signal output (a piece of gel) can be unambiguously distinguished from liquid both visually and by touch. This approach promises to contribute to the development of an accessible environment for visually impaired persons.

  2. A neotropical Miocene pollen database employing image-based search and semantic modeling.

    PubMed

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-08-01

    Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.

  3. Drivers’ Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving

    PubMed Central

    Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan

    2016-01-01

    This paper describes a real-time motion planner based on the drivers’ visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers’ visual search behavior in the framework of RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve the robotic motion planning problems. However, RRT is often unreliable in a number of practical applications such as autonomous vehicles used for on-road driving because of the unnatural trajectory, useless sampling, and slow exploration. To address these problems, we present an interesting RRT algorithm that introduces an effective guided sampling strategy based on the drivers’ visual search behavior on road and a continuous-curvature smooth method based on B-spline. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. A large number of the experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, the comparative test and statistical analyses illustrate that its excellent performance is superior to other previous algorithms. PMID:26784203

  4. Drivers' Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving.

    PubMed

    Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan

    2016-01-15

    This paper describes a real-time motion planner based on the drivers' visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers' visual search behavior in the framework of RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve the robotic motion planning problems. However, RRT is often unreliable in a number of practical applications such as autonomous vehicles used for on-road driving because of the unnatural trajectory, useless sampling, and slow exploration. To address these problems, we present an interesting RRT algorithm that introduces an effective guided sampling strategy based on the drivers' visual search behavior on road and a continuous-curvature smooth method based on B-spline. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. A large number of the experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, the comparative test and statistical analyses illustrate that its excellent performance is superior to other previous algorithms.
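
    A minimal sketch of an RRT with a guided sampling step is given below: with some probability, samples are drawn near a guidance path standing in for the drivers' visual-search guidance, otherwise uniformly. The geometry, obstacle, probabilities, and step size are illustrative, and the B-spline smoothing stage is omitted.

```python
# Minimal 2D RRT with guided sampling: with probability p_guided, samples are
# drawn near a guidance path (a stand-in for visual-search guidance), otherwise
# uniformly. Geometry, obstacle, and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
START, GOAL = np.array([5.0, 5.0]), np.array([95.0, 95.0])
OBSTACLES = [(np.array([50.0, 50.0]), 12.0)]          # (centre, radius)
GUIDE = np.linspace(START, GOAL, 50)                  # stand-in guidance path

def collision_free(p):
    return all(np.linalg.norm(p - c) > r for c, r in OBSTACLES)

def sample(p_guided=0.7):
    if rng.random() < p_guided:
        return GUIDE[rng.integers(len(GUIDE))] + rng.normal(0, 8.0, 2)
    return rng.uniform(0, 100, 2)

def rrt(step=3.0, max_iter=4000, goal_tol=3.0):
    nodes, parents = [START], [-1]
    for _ in range(max_iter):
        q = sample()
        i = int(np.argmin([np.linalg.norm(q - n) for n in nodes]))
        direction = q - nodes[i]
        if np.linalg.norm(direction) < 1e-9:
            continue
        new = nodes[i] + step * direction / np.linalg.norm(direction)
        if not collision_free(new):
            continue
        nodes.append(new); parents.append(i)
        if np.linalg.norm(new - GOAL) < goal_tol:      # goal reached: backtrack path
            path, j = [], len(nodes) - 1
            while j != -1:
                path.append(nodes[j]); j = parents[j]
            return path[::-1]
    return None

path = rrt()
print("path nodes:", len(path) if path else "not found")
```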

  5. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussions are then concentrated on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for the compression of the tremendous amount of data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  6. Problem solving of student with visual impairment related to mathematical literacy problem

    NASA Astrophysics Data System (ADS)

    Pratama, A. R.; Saputro, D. R. S.; Riyadi

    2018-04-01

    Students with visual impairment of the total blind category depend on the senses of touch and hearing to obtain information. In fact, these two senses can receive less than 20% of the available information. Thus, students with visual impairment of the total blind category must encounter difficulty in the learning process, including in learning mathematics. This study aims to describe the problem-solving process of a student with visual impairment of the total blind category on mathematical literacy problems, based on the Polya phases. The research used a test with problems similar to the mathematical literacy problems in PISA, together with in-depth interviews. The subject of this study was a student with visual impairment of the total blind category. Based on the results of the research, problem-solving related to mathematical literacy based on the Polya phases was quite good. In the understanding-the-problem phase, the student read the text about twice by touch and was assisted three times by information obtained through hearing. In devising a plan, the student drew on knowledge and experience gained previously. In the carrying-out-the-plan phase, the student implemented the plan as devised. In the looking-back phase, the student checked the answers three times but was not able to find an alternative way.

  7. Tactile mental body parts representation in obesity.

    PubMed

    Scarpina, Federica; Castelnuovo, Gianluca; Molinari, Enrico

    2014-12-30

    Obese people's distortions in visually-based mental body-parts representations have been reported in previous studies, but other sensory modalities have largely been neglected. In the present study, we investigated possible differences in tactilely-based body-parts representation between an obese and a healthy-weight group; additionally, we explored the possible relationship between the tactile- and the visually-based body representation. Participants were asked to estimate the distance between two tactile stimuli that were simultaneously administered on the arm or on the abdomen, in the absence of visual input. The visually-based body-parts representation was investigated by a visual imagery method in which subjects were instructed to compare the horizontal extension of body part pairs. According to the results, the obese participants overestimated the size of the tactilely-perceived distances more than the healthy-weight group when the arm, and not the abdomen, was stimulated. Moreover, they reported a lower level of accuracy than did the healthy-weight group when estimating horizontal distances relative to their bodies, confirming an inappropriate visually-based mental body representation. Our results imply that body representation disturbance in obese people is not limited to the visual mental domain but extends to tactilely perceived distances. The inaccuracy was not a generalized tendency but was body-part related. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert knowledge-based automatic sleep stage determination system working on a multi-valued decision-making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for the various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with visual inspection for the sleep stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters and can be adapted to the variable sleep data found in hospitals. The developed automatic determination technique based on the expert knowledge of visual inspection can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
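
    The conditional-probability assignment can be sketched as a simple Bayes rule over per-stage parameter densities standing in for the expert-knowledge PDFs; the Gaussian parameters, priors, and the single feature used below are illustrative assumptions.

```python
# Sketch of stage assignment by conditional probability: per-stage Gaussian
# densities (standing in for the expert-knowledge PDFs) combined with stage
# priors via Bayes' rule. All numbers are illustrative assumptions.
from scipy.stats import norm

# (density over one EEG-derived parameter, prior probability) per stage.
STAGES = {
    "awake":       (norm(30.0, 8.0), 0.15),
    "REM":         (norm(18.0, 5.0), 0.20),
    "light sleep": (norm(12.0, 4.0), 0.45),
    "deep sleep":  (norm(5.0,  2.5), 0.20),
}

def classify_epoch(x):
    post = {s: dist.pdf(x) * prior for s, (dist, prior) in STAGES.items()}
    z = sum(post.values())
    return max(post, key=post.get), {s: v / z for s, v in post.items()}

stage, posterior = classify_epoch(10.0)
print(stage, {k: round(v, 3) for k, v in posterior.items()})
```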

  9. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and the comfort zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when the camera and background are static. Relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with traditional visual fatigue prediction models, a novel visual fatigue prediction model is presented. The visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be obtained according to the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
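
    The prediction step can be sketched as an ordinary least-squares fit from shot-level factors to subjective fatigue scores; the factor names and the synthetic data below are assumptions made for illustration.

```python
# Sketch of the prediction step: a multiple linear regression from shot-level
# factors (e.g., parallax, motion, comfort-zone violation) to a subjective
# fatigue score. The synthetic data and factor names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((40, 3))                                  # [parallax, motion, comfort] per shot
true_w = np.array([1.5, 0.8, 2.0])
y = X @ true_w + 0.5 + 0.05 * rng.standard_normal(40)    # synthetic subjective scores

A = np.column_stack([X, np.ones(len(X))])                # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("weights:", coef[:3].round(2), "intercept:", round(float(coef[3]), 2))
```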

  10. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between the estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We also propose a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to the existing GMM-based model, and the proposed AVSS algorithm improves speech separation quality compared to reference ICA- and AVSS-based methods.

  11. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian

    2018-01-01

    To improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images that combines the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. First, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Second, DCT is used to separate the significant details of the sub-images according to the energy of the different frequencies. Third, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to assess the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than conventional image fusion methods.
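
    A much-simplified sketch of multi-scale fusion in the stationary wavelet domain is shown below. It averages the approximation bands and keeps the larger-magnitude detail coefficients, standing in for the paper's DCT and local-spatial-frequency rules, and uses random arrays in place of registered infrared/visible images.

      import numpy as np
      import pywt

      def fuse_swt(img_a, img_b, wavelet="db2", level=2):
          """Fuse two registered grayscale images in the stationary wavelet domain.

          Approximation coefficients are averaged; detail coefficients are chosen
          by an absolute-maximum rule (a simplified stand-in for the DCT/LSF rule
          described in the abstract)."""
          ca = pywt.swt2(img_a, wavelet, level=level)
          cb = pywt.swt2(img_b, wavelet, level=level)
          fused = []
          for (a_lo, a_hi), (b_lo, b_hi) in zip(ca, cb):
              lo = 0.5 * (a_lo + b_lo)                      # average low-pass bands
              hi = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                         for a, b in zip(a_hi, b_hi))       # keep the stronger details
              fused.append((lo, hi))
          return pywt.iswt2(fused, wavelet)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          ir = rng.random((64, 64))      # stand-in for an infrared image
          vis = rng.random((64, 64))     # stand-in for a visible-light image
          print(fuse_swt(ir, vis).shape)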

  12. Calibration and comparison of the acoustic location methods used during the spring migration of the bowhead whale, Balaena mysticetus, off Pt. Barrow, Alaska, 1984-1993.

    PubMed

    Clark, C W; Ellison, W T

    2000-06-01

    Between 1984 and 1993, visual and acoustic methods were combined to census the Bering-Chukchi-Beaufort bowhead whale, Balaena mysticetus, population. Passive acoustic location was based on arrival-time differences of transient bowhead sounds detected on sparse arrays of three to five hydrophones distributed over distances of 1.5-4.5 km along the ice edge. Arrival-time differences were calculated from either digital cross correlation of spectrograms (old method), or digital cross correlation of time waveforms (new method). Acoustic calibration was conducted in situ in 1985 at five sites with visual site position determined by triangulation using two theodolites. The discrepancy between visual and acoustic locations was <1%-5% of visual range and less than 0.7 degrees of visual bearing for either method. Comparison of calibration results indicates that the new method yielded slightly more precise and accurate positions than the old method. Comparison of 217 bowhead whale call locations from both acoustic methods showed that the new method was more precise, with location errors 3-4 times smaller than the old method. Overall, low-frequency bowhead transients were reliably located out to ranges of 3-4 times array size. At these ranges in shallow water, signal propagation appears to be dominated by the fundamental mode and is not corrupted by multipath.
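
    The "new method" described above rests on estimating arrival-time differences by cross-correlating the time waveforms. A minimal sketch with a synthetic transient and an assumed sample rate follows; all values are illustrative only.

      import numpy as np

      def tdoa(sig_a, sig_b, fs):
          """Arrival-time difference (s) of sig_b relative to sig_a via
          full cross-correlation of the time waveforms."""
          corr = np.correlate(sig_b, sig_a, mode="full")
          lag = np.argmax(corr) - (len(sig_a) - 1)
          return lag / fs

      fs = 1000.0                                           # assumed sample rate, Hz
      t = np.arange(0, 1.0, 1 / fs)
      call = np.sin(2 * np.pi * 90 * t) * np.exp(-5 * t)    # synthetic low-frequency transient
      delay = 0.012                                         # true offset: 12 ms
      sig_a = np.concatenate([call, np.zeros(200)])
      sig_b = np.concatenate([np.zeros(int(delay * fs)), call,
                              np.zeros(200 - int(delay * fs))])
      print(tdoa(sig_a, sig_b, fs))                         # ~0.012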

  13. Using a visual programming language to bridge the cognitive gap between a novice's mental model and program code

    NASA Astrophysics Data System (ADS)

    Smith, Bryan J.

    Current research suggests that many students cannot program very well at the conclusion of their introductory programming course. We believe that one reason novices have such difficulty learning to program is that they are often taught through a lecture format: someone with programming knowledge lectures, the novices attempt to absorb the content, and they then reproduce it during exams. Focusing on programming novices who prefer to understand visually, we investigate whether novices understand programming better when computer science concepts are presented using a visual programming language rather than a text-based programming language. This work builds upon previous research suggesting that most engineering students are visual learners, and we propose that a flow-based visual programming language can address some of the topics that are most important and difficult for programming novices. We use an existing flow-model tool, RAPTOR, to test this method, and we report the resulting program-understanding results.

  14. Biomarkers and Surrogate Endpoints in Drug Development: A European Regulatory View.

    PubMed

    Wickström, Kerstin; Moseley, Jane

    2017-05-01

    To give a European regulatory overview of the requirements on and the use of biomarkers or surrogate endpoints in the development of drugs for ocular disease. Definitions, methods to validate new markers, and circumstances where surrogate endpoints can be appropriate are summarized. The key endpoints that have been used in registration studies so far are based on visual acuity, signs, and symptoms, or on surrogate endpoints. In some ocular conditions, established outcome measures such as those based on visual acuity or visual field are not feasible (as with slowly progressing diseases), or lack relevance (e.g., when central visual acuity may be preserved even though the patient is legally blind owing to a severely restricted visual field, or vice versa). There are several ocular conditions for which there is an unmet medical need. In some of these conditions, surrogate endpoints as well as new clinical endpoints are needed to help speed up patient access to new medicines. Interaction with European regulators through the pathway specific for the development of biomarkers or novel methods is encouraged.

  15. Region of interest extraction based on multiscale visual saliency analysis for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan

    2015-01-01

    Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually based on prior knowledge and depend on classification, segmentation, and a global search, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: the visual attention mechanism is used to extract the intensity feature with a difference-of-Gaussians template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that accounts for the different contributions of each feature map when calculating the weights used to combine them into the final saliency map. Qualitative and quantitative experimental results of the MVS model, compared with those of other models, show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
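
    For the intensity channel, a center-surround response can be computed with a difference-of-Gaussians filter as in the following sketch; the sigma values and toy image are illustrative and not taken from the MVS model.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dog_intensity_feature(image, sigma_center=2.0, sigma_surround=8.0):
          """Center-surround intensity response via a difference-of-Gaussians filter."""
          center = gaussian_filter(image.astype(float), sigma_center)
          surround = gaussian_filter(image.astype(float), sigma_surround)
          response = np.abs(center - surround)
          rng_span = response.max() - response.min()
          return (response - response.min()) / (rng_span + 1e-12)   # normalize to [0, 1]

      img = np.zeros((128, 128))
      img[48:80, 48:80] = 1.0                  # a bright square as a toy "region of interest"
      saliency = dog_intensity_feature(img)
      print(saliency.max(), saliency.shape)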

  16. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human-factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory-cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.

  17. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide it into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting output of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated with a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods that define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve-fitting approach, and both show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which makes finding stereo correspondences easier. In contrast to monocular visual odometry approaches, the calibration of the individual depth maps allows the scale of the scene to be observed, and the light-field information can be expected to provide better tracking capabilities than the monocular case. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
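
    The Kalman-like update of a virtual depth estimate and its variance can be summarized by inverse-variance weighting, as in this minimal sketch; the numbers are illustrative and not taken from the paper.

      def fuse_depth(d1, var1, d2, var2):
          """Inverse-variance (Kalman-like) fusion of two virtual-depth estimates."""
          k = var1 / (var1 + var2)          # gain: trust the lower-variance estimate more
          depth = d1 + k * (d2 - d1)
          var = (1.0 - k) * var1            # fused variance is never larger than either input
          return depth, var

      # Example: a noisy first estimate refined by a more confident one from another micro-image.
      print(fuse_depth(2.1, 0.30, 1.9, 0.10))   # -> depth close to 1.95, variance 0.075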

  18. 3D photo mosaicing of Tagiri shallow vent field by an autonomous underwater vehicle (3rd report) - Mosaicing method based on navigation data and visual features -

    NASA Astrophysics Data System (ADS)

    Maki, Toshihiro; Ura, Tamaki; Singh, Hanumant; Sakamaki, Takashi

    Large-area seafloor imaging will bring significant benefits to various fields such as academic research, resource surveys, marine development, security, and search-and-rescue. The authors have proposed a navigation method for an autonomous underwater vehicle for seafloor imaging and verified its performance by mapping tubeworm colonies over an area of 3,000 square meters using the AUV Tri-Dog 1 at the Tagiri vent field, Kagoshima Bay, Japan (Maki et al., 2008, 2009). This paper proposes a post-processing method to build a natural photo mosaic from a number of pictures taken by an underwater platform. The method first removes lens distortion and corrects variations in color and lighting in each image, and ortho-rectification is then performed based on the camera poses and seafloor geometry estimated from navigation data. The image alignment is based on both navigation data and visual characteristics, implemented as an extension of the image-based method of Pizarro et al. (2003). Using the two types of information yields an image alignment that is consistent both globally and locally and makes the method applicable to data sets with few visual cues. The method was evaluated using a data set obtained by the AUV Tri-Dog 1 at the vent field in September 2009. A seamless, uniformly illuminated photo mosaic covering an area of around 500 square meters was created from 391 pictures, capturing unique features of the field such as bacteria mats and tubeworm colonies.

  19. Skimming Digits: Neuromorphic Classification of Spike-Encoded Images

    PubMed Central

    Cohen, Gregory K.; Orchard, Garrick; Leng, Sio-Hoi; Tapson, Jonathan; Benosman, Ryad B.; van Schaik, André

    2016-01-01

    The growing demands placed upon the field of computer vision have renewed the focus on alternative visual scene representations and processing paradigms. Silicon retinae provide an alternative means of imaging the visual environment and produce frame-free spatio-temporal data. This paper presents an investigation into event-based digit classification using N-MNIST, a neuromorphic dataset created with a silicon retina, and the Synaptic Kernel Inverse Method (SKIM), a learning method based on principles of dendritic computation. As this work represents the first large-scale and multi-class classification task performed using the SKIM network, it explores the different training patterns and output determination methods necessary to extend the original SKIM method to multi-class problems. Making use of SKIM networks applied to real-world datasets, implementing the largest hidden layer sizes and simultaneously training the largest number of output neurons, the classification system achieved a best-case accuracy of 92.87% for a network containing 10,000 hidden layer neurons. These results represent the highest accuracies achieved against the dataset to date and serve to validate the application of the SKIM method to event-based visual classification tasks. Additionally, the study found that using a square pulse as the supervisory training signal produced the highest accuracy for most output determination methods, but the results also demonstrate that an exponential pattern is better suited to hardware implementations as it relies on the simplest output determination method, based on the maximum value. PMID:27199646

  20. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research, we build a content-based image retrieval system (CBIRS) using a learned distance/similarity function based on Linear Discriminant Analysis (LDA) and Histogram of Oriented Gradients (HoG) features. Our method is invariant to the depiction of the image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA decreases execution time compared to the state-of-the-art method, but it still needs improvement in terms of accuracy. The inaccuracy in our experiments arises because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.
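
    A rough sketch of this kind of retrieval pipeline, using scikit-image HoG descriptors and a scikit-learn LDA projection on synthetic data, is given below. The image sizes, cell sizes and labels are placeholders, not the authors' settings.

      import numpy as np
      from skimage.feature import hog
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      # Toy data: random "images" in three depiction classes (placeholders for
      # photos, sketches and paintings of the same object categories).
      rng = np.random.default_rng(0)
      images = rng.random((60, 64, 64))
      labels = np.repeat([0, 1, 2], 20)

      # HoG descriptors for every image.
      feats = np.array([hog(im, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
                        for im in images])

      # Learn a discriminative projection, then rank by Euclidean distance in LDA space.
      lda = LinearDiscriminantAnalysis(n_components=2).fit(feats, labels)
      proj = lda.transform(feats)

      query = proj[0]
      ranking = np.argsort(np.linalg.norm(proj - query, axis=1))
      print("top-5 matches for image 0:", ranking[:5])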

  1. Exploring the potential of analysing visual search behaviour data using FROC (free-response receiver operating characteristic) method: an initial study

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Dias, Sarah; Stone, William; Dias, Joseph; Rout, John; Gale, Alastair G.

    2017-03-01

    Visual search techniques and FROC analysis have been widely used in radiology to understand medical image perceptual behaviour and diagnostic performance. The potential of exploiting the advantages of both methodologies is of great interest to medical researchers. In this study, the eye tracking data of eight dental practitioners were investigated, and the visual search measures and their analyses are considered here. Each participant interpreted 20 dental radiographs chosen by an expert dental radiologist. Various eye movement measurements were obtained based on image area of interest (AOI) information. FROC analysis was then carried out using these eye movement measurements as a direct input source, and the performance of FROC methods using different input parameters was tested. The results showed significant differences in FROC measures, based on eye movement data, between groups with different experience levels: the area under the curve (AUC) was higher for the experienced group for the fixation and dwell time measurements. Positive correlations were also found between the AUC scores of the eye-movement-based FROC and the rating-based FROC. FROC analysis using eye movement measurements as input variables can therefore act as a performance indicator for assessing medical image interpretation and training procedures. Such visual search data analyses point to new ways of combining eye movement data and FROC methods to provide an alternative dimension for assessing performance and visual search behaviour in medical imaging perceptual tasks.

  2. Effects of Mental Load and Fatigue on Steady-State Evoked Potential Based Brain Computer Interface Tasks: A Comparison of Periodic Flickering and Motion-Reversal Based Visual Attention.

    PubMed

    Xie, Jun; Xu, Guanghua; Wang, Jing; Li, Min; Han, Chengcheng; Jia, Yaguang

    The steady-state visual evoked potential (SSVEP) paradigm is a conventional BCI method with the advantages of a high information transfer rate, high tolerance to artifacts and robust performance across users. However, the mental load and fatigue that occur when users stare at flickering stimuli are a critical problem in the implementation of SSVEP-based BCIs. Based on the electroencephalography (EEG) power indices α, θ, θ + α, the ratio index θ/α, and the response properties of amplitude and SNR, this study quantitatively evaluated mental load and fatigue in both the conventional flickering and the novel motion-reversal visual attention tasks. Results over nine subjects revealed significantly lower mental load in the motion-reversal task than in the flickering task. The interaction between the factors "stimulation type" and "fatigue level" also indicated that motion-reversal stimulation is a superior anti-fatigue solution for long-term BCI operation. Taken together, our work provides an objective method favorable for the design of more practically applicable steady-state evoked potential based BCIs.
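
    The EEG power indices mentioned above can be computed from Welch spectra. The following sketch derives a θ/α ratio from a synthetic signal; the sampling rate and band limits are common conventions rather than the study's exact settings.

      import numpy as np
      from scipy.signal import welch

      def band_power(x, fs, lo, hi):
          """Approximate power of x in the [lo, hi] Hz band using Welch's method."""
          f, pxx = welch(x, fs=fs, nperseg=fs * 2)
          band = (f >= lo) & (f <= hi)
          return np.sum(pxx[band]) * (f[1] - f[0])

      fs = 250                                   # assumed EEG sampling rate, Hz
      t = np.arange(0, 60, 1 / fs)
      rng = np.random.default_rng(2)
      eeg = (np.sin(2 * np.pi * 10 * t)          # alpha-band component
             + 0.5 * np.sin(2 * np.pi * 6 * t)   # theta-band component
             + 0.2 * rng.standard_normal(t.size))

      theta = band_power(eeg, fs, 4, 8)
      alpha = band_power(eeg, fs, 8, 13)
      print("theta/alpha fatigue index:", theta / alpha)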

  3. Lesion classification using clinical and visual data fusion by multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Kisilev, Pavel; Hashoul, Sharbell; Walach, Eugene; Tzadok, Asaf

    2014-03-01

    To overcome operator dependency and to increase diagnosis accuracy in breast ultrasound (US), much effort has been devoted to developing computer-aided diagnosis (CAD) systems for breast cancer detection and classification. Unfortunately, the efficacy of such CAD systems is limited because they rely on correct automatic lesion detection and localization, and on the robustness of features computed from the detected areas. In this paper we propose a new approach to boost the performance of a machine-learning-based CAD system by combining visual and clinical data from patient files. We compute a set of visual features from breast ultrasound images and construct a textual descriptor for each patient by extracting relevant keywords from the patient's clinical data files. We then use the Multiple Kernel Learning (MKL) framework to train an SVM-based classifier to discriminate between benign and malignant cases. We investigate different types of data fusion, namely early, late, and intermediate (MKL-based) fusion. Our database consists of 408 patient cases, each containing US images, a textual description of complaints and symptoms filled in by physicians, and confirmed diagnoses. We show experimentally that the proposed MKL-based approach is superior to other classification methods: even though the clinical data are sparse and noisy, their MKL-based fusion with visual features yields a significant improvement in classification accuracy compared to a classifier based on image features only.
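
    As a simplified stand-in for the MKL-based intermediate fusion, the sketch below combines one kernel per modality with fixed weights and trains an SVM on the precomputed kernel. The feature sizes, weights and labels are synthetic placeholders; full MKL would additionally learn the kernel weights.

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.svm import SVC

      # Toy visual and clinical feature blocks for the same patients.
      rng = np.random.default_rng(3)
      visual = rng.random((80, 20))
      clinical = rng.random((80, 8))
      labels = rng.integers(0, 2, 80)            # benign / malignant placeholder labels

      # One kernel per modality, combined with fixed weights (a simplified
      # stand-in for learning the weights as in MKL).
      K = 0.7 * rbf_kernel(visual) + 0.3 * rbf_kernel(clinical)

      clf = SVC(kernel="precomputed").fit(K, labels)
      print("training accuracy:", clf.score(K, labels))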

  4. Design of an off-axis visual display based on a free-form projection screen to realize stereo vision

    NASA Astrophysics Data System (ADS)

    Zhao, Yuanming; Cui, Qingfeng; Piao, Mingxu; Zhao, Lidong

    2017-10-01

    A free-form projection screen is designed for an off-axis visual display, which shows great potential in applications such as flight training by providing both accommodation and convergence cues for pilots. A point-cloud-based method is proposed for the design of the free-form surface, and the generation of the point cloud is controlled by a program written in the macro language. In the visual display based on the free-form projection screen, when the error of the screen along the Z-axis is 1 mm, the error of the visual distance at each field is less than 1%. The resolution of the design over the full field is better than 1′, which meets the resolution requirement of the human eye.

  5. Kristi Potter | NREL

    Science.gov Websites

    on methods to improve visualization techniques by adding qualitative information regarding sketch-based methods for conveying levels of reliability in architectural renderings. Dr. Potter

  6. On the Ground or in the Air? A Methodological Experiment on Crop Residue Cover Measurement in Ethiopia

    NASA Astrophysics Data System (ADS)

    Kosmowski, Frédéric; Stevenson, James; Campbell, Jeff; Ambel, Alemayehu; Haile Tsegay, Asmelash

    2017-10-01

    Maintaining permanent coverage of the soil using crop residues is an important and commonly recommended practice in conservation agriculture. Measuring this practice is an essential step in improving knowledge about the adoption and impact of conservation agriculture. Different data collection methods can be implemented to capture the field-level crop residue coverage for a given plot, each with its own implications for survey budget, implementation speed, and respondent and interviewer burden. In this paper, six alternative methods of crop residue coverage measurement are tested among the same sample of rural households in Ethiopia. The relative accuracy of these methods is compared against a benchmark, the line-transect method. The alternative methods compared against the benchmark include: (i) interviewee (respondent) estimation; (ii) enumerator estimation visiting the field; (iii) interviewee with visual-aid without visiting the field; (iv) enumerator with visual-aid visiting the field; (v) field picture collected with a drone and analyzed with image-processing methods and (vi) satellite picture of the field analyzed with remote sensing methods. Results of the methodological experiment show that survey-based methods tend to underestimate field residue cover. When quantitative data on cover are needed, the best estimates are provided by visual-aid protocols. For categorical analysis (i.e., >30% cover or not), visual-aid protocols and remote sensing methods perform equally well. Among survey-based methods, the strongest correlates of measurement errors are total farm size, field size, distance, and slope. The results deliver a ranking of measurement options that can inform survey practitioners and researchers.

  7. On the Ground or in the Air? A Methodological Experiment on Crop Residue Cover Measurement in Ethiopia.

    PubMed

    Kosmowski, Frédéric; Stevenson, James; Campbell, Jeff; Ambel, Alemayehu; Haile Tsegay, Asmelash

    2017-10-01

    Maintaining permanent coverage of the soil using crop residues is an important and commonly recommended practice in conservation agriculture. Measuring this practice is an essential step in improving knowledge about the adoption and impact of conservation agriculture. Different data collection methods can be implemented to capture the field-level crop residue coverage for a given plot, each with its own implications for survey budget, implementation speed, and respondent and interviewer burden. In this paper, six alternative methods of crop residue coverage measurement are tested among the same sample of rural households in Ethiopia. The relative accuracy of these methods is compared against a benchmark, the line-transect method. The alternative methods compared against the benchmark include: (i) interviewee (respondent) estimation; (ii) enumerator estimation visiting the field; (iii) interviewee with visual-aid without visiting the field; (iv) enumerator with visual-aid visiting the field; (v) field picture collected with a drone and analyzed with image-processing methods and (vi) satellite picture of the field analyzed with remote sensing methods. Results of the methodological experiment show that survey-based methods tend to underestimate field residue cover. When quantitative data on cover are needed, the best estimates are provided by visual-aid protocols. For categorical analysis (i.e., >30% cover or not), visual-aid protocols and remote sensing methods perform equally well. Among survey-based methods, the strongest correlates of measurement errors are total farm size, field size, distance, and slope. The results deliver a ranking of measurement options that can inform survey practitioners and researchers.

  8. InteGO2: A web tool for measuring and visualizing gene semantic similarities using Gene Ontology

    DOE PAGES

    Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang; ...

    2016-08-31

    Here, the Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities and to provide pure text-based outputs; without a graphical visualization interface, result interpretation is difficult. As a result, we present InteGO2, a web tool that allows researchers to calculate GO-based gene semantic similarities using seven widely used GO-based similarity measurements. We also provide an integrative measurement that synergistically integrates all the individual measurements to improve overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. In conclusion, InteGO2 is an easy-to-use HTML5-based web tool. With it, researchers can conveniently measure gene or gene product functional similarity and visualize the network of functional interactions in a graphical interface.

  9. InteGO2: a web tool for measuring and visualizing gene semantic similarities using Gene Ontology.

    PubMed

    Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang; Juan, Liran; Jiang, Qinghua; Wang, Yadong; Chen, Jin

    2016-08-31

    The Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities and to provide pure text-based outputs; without a graphical visualization interface, result interpretation is difficult. We present InteGO2, a web tool that allows researchers to calculate GO-based gene semantic similarities using seven widely used GO-based similarity measurements. We also provide an integrative measurement that synergistically integrates all the individual measurements to improve overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. InteGO2 is an easy-to-use HTML5-based web tool. With it, researchers can conveniently measure gene or gene product functional similarity and visualize the network of functional interactions in a graphical interface. InteGO2 can be accessed via http://mlg.hit.edu.cn:8089/ .

  10. InteGO2: A web tool for measuring and visualizing gene semantic similarities using Gene Ontology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang

    Here, the Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities and to provide pure text-based outputs; without a graphical visualization interface, result interpretation is difficult. As a result, we present InteGO2, a web tool that allows researchers to calculate GO-based gene semantic similarities using seven widely used GO-based similarity measurements. We also provide an integrative measurement that synergistically integrates all the individual measurements to improve overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. In conclusion, InteGO2 is an easy-to-use HTML5-based web tool. With it, researchers can conveniently measure gene or gene product functional similarity and visualize the network of functional interactions in a graphical interface.

  11. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    PubMed

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

    Current retinal prostheses can generate only low-resolution visual percepts, constituted of a limited number of phosphenes elicited by an electrode array, with uncontrollable color and restricted grayscale. With such visual perception, prosthetic recipients can complete only simple visual tasks; more complex tasks such as face identification and object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest under simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was positively affected by paired, interrelated objects in the scene. The use of the saliency segmentation method and these image processing strategies can automatically extract and enhance foreground objects and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
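
    The grabCut-based foreground extraction step can be illustrated with OpenCV as below; here a fixed rectangle replaces the saliency-derived initialization used in the paper, and the image is synthetic.

      import numpy as np
      import cv2

      # Synthetic scene: a bright "object of interest" on a darker, noisy background.
      rng = np.random.default_rng(4)
      img = np.full((120, 160, 3), 40, np.uint8)
      cv2.circle(img, (80, 60), 25, (200, 180, 160), -1)
      img = np.clip(img.astype(int) + rng.integers(-10, 10, img.shape), 0, 255).astype(np.uint8)

      mask = np.zeros(img.shape[:2], np.uint8)
      bgd_model = np.zeros((1, 65), np.float64)
      fgd_model = np.zeros((1, 65), np.float64)
      rect = (40, 20, 80, 80)   # rough box around the object; in the paper this comes from a saliency model

      cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
      foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
      print("foreground pixels:", int(np.count_nonzero(foreground)))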

  12. Towards a Visual Quality Metric for Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1998-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  13. Automated Assessment of Visual Quality of Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  14. Toward Mixed Method Evaluations of Scientific Visualizations and Design Process as an Evaluation Tool.

    PubMed

    Jackson, Bret; Coffey, Dane; Thorson, Lauren; Schroeder, David; Ellingson, Arin M; Nuckley, David J; Keefe, Daniel F

    2012-10-01

    In this position paper we discuss successes and limitations of current evaluation strategies for scientific visualizations and argue for embracing a mixed methods strategy of evaluation. The most novel contribution of the approach that we advocate is a new emphasis on employing design processes as practiced in related fields (e.g., graphic design, illustration, architecture) as a formalized mode of evaluation for data visualizations. To motivate this position we describe a series of recent evaluations of scientific visualization interfaces and computer graphics strategies conducted within our research group. Complementing these more traditional evaluations our visualization research group also regularly employs sketching, critique, and other design methods that have been formalized over years of practice in design fields. Our experience has convinced us that these activities are invaluable, often providing much more detailed evaluative feedback about our visualization systems than that obtained via more traditional user studies and the like. We believe that if design-based evaluation methodologies (e.g., ideation, sketching, critique) can be taught and embraced within the visualization community then these may become one of the most effective future strategies for both formative and summative evaluations.

  15. Toward Mixed Method Evaluations of Scientific Visualizations and Design Process as an Evaluation Tool

    PubMed Central

    Jackson, Bret; Coffey, Dane; Thorson, Lauren; Schroeder, David; Ellingson, Arin M.; Nuckley, David J.

    2017-01-01

    In this position paper we discuss successes and limitations of current evaluation strategies for scientific visualizations and argue for embracing a mixed methods strategy of evaluation. The most novel contribution of the approach that we advocate is a new emphasis on employing design processes as practiced in related fields (e.g., graphic design, illustration, architecture) as a formalized mode of evaluation for data visualizations. To motivate this position we describe a series of recent evaluations of scientific visualization interfaces and computer graphics strategies conducted within our research group. Complementing these more traditional evaluations our visualization research group also regularly employs sketching, critique, and other design methods that have been formalized over years of practice in design fields. Our experience has convinced us that these activities are invaluable, often providing much more detailed evaluative feedback about our visualization systems than that obtained via more traditional user studies and the like. We believe that if design-based evaluation methodologies (e.g., ideation, sketching, critique) can be taught and embraced within the visualization community then these may become one of the most effective future strategies for both formative and summative evaluations. PMID:28944349

  16. Augmentation of French grunt diet description using combined visual and DNA-based analyses

    USGS Publications Warehouse

    Hargrove, John S.; Parkyn, Daryl C.; Murie, Debra J.; Demopoulos, Amanda W.J.; Austin, James D.

    2012-01-01

    Trophic linkages within a coral-reef ecosystem may be difficult to discern in fish species that reside on, but do not forage on, coral reefs. Furthermore, dietary analysis of fish can be difficult in situations where prey is thoroughly macerated, resulting in many visually unrecognisable food items. The present study examined whether the inclusion of a DNA-based method could improve the identification of prey consumed by French grunt, Haemulon flavolineatum, a reef fish that possesses pharyngeal teeth and forages on soft-bodied prey items. Visual analysis indicated that crustaceans were most abundant numerically (38.9%), followed by sipunculans (31.0%) and polychaete worms (5.2%), with a substantial number of unidentified prey (12.7%). For the subset of prey with both visual and molecular data, there was a marked reduction in the number of unidentified sipunculans (visual – 31.1%, combined – 4.4%), unidentified crustaceans (visual – 15.6%, combined – 6.7%), and unidentified taxa (visual – 11.1%, combined – 0.0%). Utilising results from both methodologies resulted in an increased number of prey placed at the family level (visual – 6, combined – 33) and species level (visual – 0, combined – 4). Although more costly than visual analysis alone, our study demonstrated the feasibility of DNA-based identification of visually unidentifiable prey in the stomach contents of fish.

  17. Improving data retention in EEG research with children using child-centered eye tracking

    PubMed Central

    Maguire, Mandy J.; Magnon, Grant; Fitzhugh, Anna E.

    2014-01-01

    Background: Event-related potentials (ERPs) elicited by visual stimuli have increased our understanding of developmental disorders and adult cognitive abilities for decades; however, these studies are very difficult with populations who cannot sustain visual attention, such as infants and young children. Current methods for studying such populations include requiring a button response, which may be impossible for some participants, and experimenter monitoring, which is subject to error, highly variable, and spatially imprecise. New method: We developed a child-centered methodology to integrate EEG data acquisition and eye-tracking technologies that uses "attention-getters" in which stimulus display is contingent upon the child's gaze. The goal was to increase the number of trials retained. Additionally, we used the eye-tracker to categorize and analyze the EEG data based on gaze to specific areas of the visual display, compared to analyzing based on stimulus presentation. Results: Compared with existing methods, the number of trials retained was substantially improved using the child-centered methodology relative to a button-press response in 7–8 year olds. In contrast, analyzing the EEG based on eye gaze to specific points within the visual display, as opposed to stimulus presentation, provided too few trials for reliable interpretation. Conclusions: By using the linked EEG-eye-tracker we significantly increased data retention. With this method, studies can be completed with fewer participants and a wider range of populations. However, caution should be used when epoching based on participants' eye gaze because, in this case, this technique provided substantially fewer trials. PMID:25251555

  18. EAPhy: A Flexible Tool for High-throughput Quality Filtering of Exon-alignments and Data Processing for Phylogenetic Methods.

    PubMed

    Blom, Mozes P K

    2015-08-05

    Recently developed molecular methods enable geneticists to target and sequence thousands of orthologous loci and infer evolutionary relationships across the tree of life. Large numbers of genetic markers benefit species tree inference, but visual inspection of alignment quality, as traditionally conducted, is challenging with thousands of loci. Furthermore, because repeated visual inspection with alternative filtering criteria is impractical, the potential consequences of using datasets with different degrees of missing data remain only nominally explored in most empirical phylogenomic studies. In this short communication, I describe a flexible high-throughput pipeline designed to assess alignment quality and filter exonic sequence data for subsequent inference. The stringency criteria for alignment quality and missing data can be adapted based on the expected level of sequence divergence. Each alignment is automatically evaluated against the specified stringency criteria, significantly reducing the number of alignments that require visual inspection. By developing a rapid method for alignment filtering and quality assessment, the consistency of phylogenetic estimation based on exonic sequence alignments can be further explored across distinct inference methods, while accounting for different degrees of missing data.
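
    A minimal sketch of this kind of stringency filter, written in plain Python, is shown below: it reads a FASTA alignment and rejects it when too few taxa are present or the overall proportion of gaps and ambiguous sites exceeds a chosen threshold. The thresholds and file name are hypothetical, and EAPhy itself applies additional criteria.

      def read_fasta(path):
          """Minimal FASTA reader returning {name: sequence}."""
          seqs, name = {}, None
          with open(path) as fh:
              for line in fh:
                  line = line.strip()
                  if line.startswith(">"):
                      name = line[1:].split()[0]
                      seqs[name] = []
                  elif name:
                      seqs[name].append(line)
          return {k: "".join(v).upper() for k, v in seqs.items()}

      def passes_filter(alignment, max_missing=0.3, min_taxa=4):
          """Keep an exon alignment only if enough taxa are present and the overall
          proportion of gaps/ambiguities stays below the chosen stringency level."""
          if len(alignment) < min_taxa:
              return False
          total = sum(len(s) for s in alignment.values())
          missing = sum(s.count("-") + s.count("N") + s.count("?") for s in alignment.values())
          return missing / total <= max_missing

      # Example call (hypothetical file name):
      # print(passes_filter(read_fasta("exon_0001.fasta")))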

  19. Initial Validation Study for a Scale Used to Determine Service Intensity for Itinerant Teachers of Students with Visual Impairments

    ERIC Educational Resources Information Center

    Pogrund, Rona L.; Darst, Shannon; Munro, Michael P.

    2015-01-01

    Introduction: The purpose of this study was to begin validation of a scale that will be used by teachers of students with visual impairments to determine appropriate recommended type and frequency of services for their students based on identified student need. Methods: Validity and reliability of the Visual Impairment Scale of Service Intensity…

  20. Using Visualization in Cockpit Decision Support Systems

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.

    2005-01-01

    In order to safely operate their aircraft, pilots must make rapid decisions based on integrating and processing large amounts of heterogeneous information. Visual displays are often the most efficient method of presenting safety-critical data to pilots in real time. However, care must be taken to ensure the pilot is provided with the appropriate amount of information to make effective decisions and not become cognitively overloaded. The results of two usability studies of a prototype airflow hazard visualization cockpit decision support system are summarized. The studies demonstrate that such a system significantly improves the performance of helicopter pilots landing under turbulent conditions. Based on these results, design principles and implications for cockpit decision support systems using visualization are presented.

  1. Mobile Visual Search Based on Histogram Matching and Zone Weight Learning

    NASA Astrophysics Data System (ADS)

    Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong

    2018-01-01

    In this paper, we propose a novel image retrieval algorithm for mobile visual search. First, a short visual codebook is generated based on the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging tf-idf weighted histogram matching with the weighting strategy of compact descriptors for visual search (CDVS). Finally, the global descriptor matching score and the local descriptor similarity score are summed to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.
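
    The tf-idf weighted histogram matching step can be sketched as follows with synthetic visual-word histograms; the codebook size and the cosine-similarity scoring are illustrative choices, not the CDVS specification.

      import numpy as np

      def tfidf_weight(histograms):
          """Apply tf-idf weighting to a database of visual-word histograms
          (rows = images, columns = codebook words) and L2-normalize the rows."""
          histograms = np.asarray(histograms, float)
          df = np.count_nonzero(histograms > 0, axis=0)            # document frequency per word
          idf = np.log((1 + len(histograms)) / (1 + df)) + 1.0
          tf = histograms / np.maximum(histograms.sum(axis=1, keepdims=True), 1)
          w = tf * idf
          return w / np.maximum(np.linalg.norm(w, axis=1, keepdims=True), 1e-12)

      rng = np.random.default_rng(5)
      db = rng.integers(0, 5, (100, 64))          # 100 images, 64-word codebook
      weighted = tfidf_weight(db)

      query = weighted[0]
      scores = weighted @ query                   # cosine similarity after normalization
      print("top-5 matches:", np.argsort(-scores)[:5])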

  2. Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?

    PubMed

    McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh

    2014-05-01

    Imagination of movement can be used as a control method for a brain-computer interface (BCI) allowing communication for the physically impaired. Visual feedback within such a closed loop system excludes those with visual problems and hence there is a need for alternative sensory feedback pathways. In the context of substituting the visual channel for the auditory channel, this study aims to add to the limited evidence that it is possible to substitute visual feedback for its auditory equivalent and assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time if the type of auditory feedback method influences motor imagery performance significantly. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only with runs of each type of feedback presentation method applied in each session. Results from seven subjects across five sessions of each feedback type (visual, auditory) (10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistical differences in the type of auditory feedback presented across five sessions.

  3. A visual servo-based teleoperation robot system for closed diaphyseal fracture reduction.

    PubMed

    Li, Changsheng; Wang, Tianmiao; Hu, Lei; Zhang, Lihai; Du, Hailong; Zhao, Lu; Wang, Lifeng; Tang, Peifu

    2015-09-01

    Common fracture treatments include open reduction and intramedullary nailing technology. However, these methods have disadvantages such as intraoperative X-ray radiation, delayed union or nonunion and postoperative rotation. Robots provide a novel solution to the aforementioned problems while posing new challenges. Against this scientific background, we develop a visual servo-based teleoperation robot system. In this article, we present a robot system, analyze the visual servo-based control system in detail and develop path planning for fracture reduction, inverse kinematics, and output forces of the reduction mechanism. A series of experimental tests is conducted on a bone model and an animal bone. The experimental results demonstrate the feasibility of the robot system. The robot system uses preoperative computed tomography data to realize high precision and perform minimally invasive teleoperation for fracture reduction via the visual servo-based control system while protecting surgeons from radiation. © IMechE 2015.

  4. A Study on Attention Guidance to Driver by Subliminal Visual Information

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroshi; Honda, Hirohiko

    This paper presents a new warning method for increasing drivers' sensitivity to hazardous factors in the driving environment. The method is based on a subliminal effect. The results of many experiments performed with a three-dimensional head-mounted display show that the response time for detecting a flashing mark tended to decrease when a subliminal mark was shown in advance; priming effects are thus observed with subliminal visual information. This paper also proposes a scenario for implementing the method in real vehicles.

  5. Methods of visualizing graphs

    DOEpatents

    Wong, Pak C.; Mackey, Patrick S.; Perrine, Kenneth A.; Foote, Harlan P.; Thomas, James J.

    2008-12-23

    Methods for visualizing a graph by automatically drawing elements of the graph as labels are disclosed. In one embodiment, the method comprises receiving node information and edge information from an input device and/or communication interface, constructing a graph layout based at least in part on that information, wherein the edges are automatically drawn as labels, and displaying the graph on a display device according to the graph layout. In some embodiments, the nodes are automatically drawn as labels instead of, or in addition to, the label-edges.

  6. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2015-03-03

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes multiple operand names, each operand corresponding to one or more fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first operands with the columns shelf and to associate one or more second operands with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first operands, and each pane has a y-axis defined based on data for the one or more second operands.

  7. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2015-11-10

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes a plurality of fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first fields with the columns shelf and to associate one or more second fields with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first fields, and each pane has a y-axis defined based on data for the one or more second fields.
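
    A rough pandas analogue of the rows/columns-shelf idea in the two patent records above: fields placed on the "shelves" become the index and columns of a pivot table, whose cells correspond to panes. The data and field names are invented for illustration and are not drawn from the patents.

      import pandas as pd

      # Hypothetical multi-dimensional data with a hierarchy (region > city) and a measure.
      df = pd.DataFrame({
          "region":  ["East", "East", "West", "West", "West"],
          "city":    ["Boston", "NYC", "LA", "LA", "Seattle"],
          "quarter": ["Q1", "Q2", "Q1", "Q2", "Q1"],
          "sales":   [100, 120, 90, 110, 70],
      })

      # "Rows shelf" = region/city, "columns shelf" = quarter; each cell plays the
      # role of a pane whose value is the aggregated measure.
      visual_table = pd.pivot_table(df, index=["region", "city"],
                                    columns="quarter", values="sales", aggfunc="sum")
      print(visual_table)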

  8. Visual function and fitness to drive.

    PubMed

    Kotecha, Aachal; Spratt, Alexander; Viswanathan, Ananth

    2008-01-01

    Driving is recognized to be a visually intensive task and accordingly there is a legal minimum standard of vision required for all motorists. The purpose of this paper is to review the current United Kingdom (UK) visual requirements for driving and discuss the evidence base behind these legal rules. The role of newer, alternative tests of visual function that may be better indicators of driving safety will also be considered. Finally, the implications of ageing on driving ability are discussed. A search of Medline and PubMed databases was performed using the following keywords: driving, vision, visual function, fitness to drive and ageing. In addition, papers from the Department of Transport website and UK Royal College of Ophthalmologists guidelines were studied. Current UK visual standards for driving are based upon historical concepts, but recent advances in technology have brought about more sophisticated methods for assessing the status of the binocular visual field and examining visual attention. These tests appear to be better predictors of driving performance. Further work is required to establish whether these newer tests should be incorporated in the current UK visual standards when examining an individual's fitness to drive.

  9. New insight in spiral drawing analysis methods - Application to action tremor quantification.

    PubMed

    Legrand, André Pierre; Rivals, Isabelle; Richard, Aliénor; Apartis, Emmanuelle; Roze, Emmanuel; Vidailhet, Marie; Meunier, Sabine; Hainque, Elodie

    2017-10-01

    Spiral drawing is one of the standard tests used to assess tremor severity in the clinical evaluation of medical treatments. Tremor severity is estimated through visual rating of the drawings by movement disorders experts. Different approaches based on mathematical signal analysis of the recorded spiral drawings have been proposed to replace this rater-dependent estimate. The objective of the present study is to propose new numerical methods and to evaluate them in terms of agreement with visual rating and reproducibility. Series of spiral drawings from patients with essential tremor were visually rated by a board of experts. In addition to the usual velocity analysis, three new numerical methods were tested and compared, namely static and dynamic unraveling and empirical mode decomposition. The reproducibility of both visual and numerical ratings was estimated, and their agreement was evaluated. The statistical analysis demonstrated excellent agreement between visual and numerical ratings, and more reproducible results with numerical methods than with visual ratings. The velocity method and the new numerical methods are in good agreement; among the latter, static and dynamic unraveling both display a smaller dispersion and are easier to automate. The reliable scores obtained with the proposed numerical methods suggest that their implementation on a digitizing tablet, whether connected to a computer or standalone, provides an efficient automatic tool for tremor severity assessment. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
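
    The static unraveling idea can be sketched as converting the drawn spiral to radius versus unwrapped angle and measuring the residual around a fitted Archimedean trend. The synthetic spiral and 5 Hz tremor below are illustrative only; the published methods work on clinical recordings and include further processing.

      import numpy as np

      def unravel(x, y):
          """Static unraveling: express a drawn spiral as radius versus unwrapped
          polar angle, so tremor appears as oscillation around a smooth trend."""
          theta = np.unwrap(np.arctan2(y, x))
          r = np.hypot(x, y)
          return theta, r

      def tremor_index(theta, r):
          """Standard deviation of the radius residual after removing the ideal
          Archimedean trend r = a + b*theta (least-squares fit)."""
          A = np.column_stack([np.ones_like(theta), theta])
          coef, *_ = np.linalg.lstsq(A, r, rcond=None)
          return np.std(r - A @ coef)

      # Synthetic spiral with an added 5 Hz tremor component.
      t = np.linspace(0, 10, 2000)
      angle = 2 * np.pi * t
      radius = 0.5 * angle + 0.3 * np.sin(2 * np.pi * 5 * t)
      x, y = radius * np.cos(angle), radius * np.sin(angle)
      print("tremor index:", tremor_index(*unravel(x, y)))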

  10. Two-stage Framework for a Topology-Based Projection and Visualization of Classified Document Collections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oesterling, Patrick; Scheuermann, Gerik; Teresniak, Sven

    During the last decades, electronic textual information has become the world's largest and most important information source available. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve and combine existing individual approaches into an overall framework that supports topological analysis of high dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high dimensional information space to both two dimensional (2-D) and three dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to methods introduced recently and apply it to complex document and patent collections.
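
    The claim that distance-based approaches degrade in very high dimensional tf-idf spaces can be illustrated with a small sketch: vectorize a toy corpus with tf-idf and compare the nearest and farthest pairwise distances, which tend to converge as dimensionality grows (distance concentration). The corpus, vectorizer settings, and metric below are assumptions for illustration only, not the framework described in the paper.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import pairwise_distances

    # Toy corpus standing in for a large document collection.
    docs = [
        "ocean eddies transport heat across the basin",
        "wavelet fusion of infrared and visible images",
        "eye movement biometrics in a visual search task",
        "topic extraction from newspaper articles",
        "tremor severity rated from spiral drawings",
    ]

    tfidf = TfidfVectorizer().fit_transform(docs)          # documents as tf-idf vectors
    dist = pairwise_distances(tfidf, metric="euclidean")   # high-dimensional distances

    upper = dist[np.triu_indices_from(dist, k=1)]
    # In very high dimensions the nearest and farthest neighbours start to look
    # alike (distance concentration), which is the failure mode the paper's
    # topology-based projection is meant to avoid.
    print(f"min/max pairwise distance ratio: {upper.min() / upper.max():.2f}")
    ```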

  11. Infrared and visible image fusion scheme based on NSCT and low-level visual features

    NASA Astrophysics Data System (ADS)

    Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei

    2016-05-01

    Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential application in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high frequency subbands and one low frequency subband. To improve the fusion performance, we designed two new activity measures for fusion of the lowpass subbands and the highpass subbands. These measures are developed based on the fact that the human visual system (HVS) perceives image quality mainly according to some of its low-level features. Then, the selection principles of different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
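
    Since NSCT implementations are not commonly available in general-purpose libraries, the sketch below illustrates the overall MST fusion pattern with an ordinary discrete wavelet transform (via `pywt`) as a stand-in: the lowpass band is averaged and the larger-magnitude coefficient is kept in each highpass band. These simple rules replace the paper's HVS-inspired activity measures and are assumptions made only to show the structure of the pipeline.

    ```python
    import numpy as np
    import pywt

    def fuse_images(img_a, img_b, wavelet="db2", level=2):
        """Generic multi-scale fusion: average the lowpass band, keep the
        larger-magnitude coefficient in each highpass band."""
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                      # lowpass: simple average
        for a_bands, b_bands in zip(ca[1:], cb[1:]):
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(a_bands, b_bands)))
        return pywt.waverec2(fused, wavelet)

    rng = np.random.default_rng(7)
    visible = rng.random((128, 128))       # toy stand-ins for the source images
    infrared = rng.random((128, 128))
    print(fuse_images(visible, infrared).shape)   # (128, 128)
    ```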

  12. Visual feedback in stuttering therapy

    NASA Astrophysics Data System (ADS)

    Smolka, Elzbieta

    1997-02-01

    The aim of this paper is to present the results concerning the influence of visual echo and reverberation on the speech process of stutterers. Visual stimuli along with the influence of acoustic and visual-acoustic stimuli have been compared. Following this, the methods of implementing visual feedback with the aid of electroluminescent diodes directed by speech signals have been presented. The concept of a computerized visual echo based on the acoustic recognition of Polish syllabic vowels has also been presented. All the research and trials carried out at our center, aside from cognitive aims, generally aim at the development of new speech correctors to be utilized in stuttering therapy.

  13. Plug-and-play web-based visualization of mobile air monitoring data

    EPA Science Inventory

    The collection of air measurements in real-time on moving platforms, such as wearable, bicycle-mounted, or vehicle-mounted air sensors, is becoming an increasingly common method to investigate local air quality. However, visualizing and analyzing geospatial air monitoring data r...

  14. Visual attention distracter insertion for improved EEG rapid serial visual presentation (RSVP) target stimuli detection

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Martin, Kevin

    2017-05-01

    This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP sequence based on the desired sequence length and expected number of targets and insert the distracters into the RSVP sequence, and then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (Az).
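
    A minimal sketch of the first step is given below, under the assumption that the target-to-nontarget ratio is controlled purely by padding the sequence with distracter frames; the `pad_with_distracters` helper and the plain shuffle standing in for the paper's reordering step are illustrative assumptions rather than the published method.

    ```python
    import math
    import random

    def pad_with_distracters(sequence, n_targets, target_ratio=0.1, distracter="DISTRACTER"):
        """Pad an RSVP sequence so targets make up at most `target_ratio` of it.

        `sequence` is a list of image identifiers containing `n_targets` targets.
        Returns a shuffled sequence with the extra distracter images inserted.
        """
        needed_length = math.ceil(n_targets / target_ratio)   # total frames required
        n_extra = max(0, needed_length - len(sequence))       # distracters to insert
        padded = list(sequence) + [distracter] * n_extra
        random.shuffle(padded)                                 # simple reordering stand-in
        return padded

    # 5 expected targets among 20 frames -> pad to 50 frames so targets are 10%.
    seq = [f"img_{i}" for i in range(20)]
    print(len(pad_with_distracters(seq, n_targets=5, target_ratio=0.1)))  # 50
    ```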

  15. EnsembleGraph: Interactive Visual Analysis of Spatial-Temporal Behavior for Ensemble Simulation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Qingya; Guo, Hanqi; Che, Limei

    We present a novel visualization framework, EnsembleGraph, for analyzing ensemble simulation data, in order to help scientists understand behavior similarities between ensemble members over space and time. A graph-based representation is used to visualize individual spatiotemporal regions with similar behaviors, which are extracted by hierarchical clustering algorithms. A user interface with multiple linked views is provided, which enables users to explore, locate, and compare regions that have similar behaviors between ensemble members, and then investigate and analyze the selected regions in detail. The driving application of this paper is the study of regional emission influences on tropospheric ozone, which is based on ensemble simulations conducted with different anthropogenic emission absences using the MOZART-4 (model of ozone and related tracers, version 4) model. We demonstrate the effectiveness of our method by visualizing the MOZART-4 ensemble simulation data and evaluating the relative regional emission influences on tropospheric ozone concentrations. Positive feedback from domain experts and two case studies demonstrate the efficiency of our method.
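
    The region-grouping step can be approximated with off-the-shelf hierarchical clustering. The sketch below clusters toy per-region behaviour time series by correlation distance, which is only a rough analogue of the extraction stage feeding a graph view of this kind; the data, distance metric, and cluster count are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # Toy stand-in: 50 spatial regions, each with a 24-step behaviour time series
    # (e.g. ozone concentration) averaged over the ensemble members.
    rng = np.random.default_rng(0)
    region_series = rng.normal(size=(50, 24))

    # Group regions whose temporal behaviour is similar, as a rough analogue of
    # the hierarchical clustering that feeds the graph-based view.
    distances = pdist(region_series, metric="correlation")
    tree = linkage(distances, method="average")
    labels = fcluster(tree, t=5, criterion="maxclust")   # cut into 5 behaviour groups
    print(np.bincount(labels)[1:])                        # region count per group
    ```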

  16. MAVEN-SA: Model-Based Automated Visualization for Enhanced Situation Awareness

    DTIC Science & Technology

    2005-11-01

    ...methods. But historically, as arts evolve, these how-to methods become systematized and codified (e.g., the development and refinement of color theory)... schema (as necessary); 3. Draw inferences from new knowledge to support the decision-making process... Visual language theory suggests that humans process... informed by theories of learning. Over the years, many types of software have been developed to support student learning. The various types of...

  17. Visualizing Mobility of Public Transportation System.

    PubMed

    Zeng, Wei; Fu, Chi-Wing; Arisona, Stefan Müller; Erath, Alexander; Qu, Huamin

    2014-12-01

    Public transportation systems (PTSs) play an important role in modern cities, providing shared/massive transportation services that are essential for the general public. However, due to their increasing complexity, designing effective methods to visualize and explore PTS is highly challenging. Most existing techniques employ network visualization methods and focus on showing the network topology across stops while ignoring various mobility-related factors such as riding time, transfer time, waiting time, and round-the-clock patterns. This work aims to visualize and explore passenger mobility in a PTS with a family of analytical tasks based on inputs from transportation researchers. After exploring different design alternatives, we come up with an integrated solution with three visualization modules: isochrone map view for geographical information, isotime flow map view for effective temporal information comparison and manipulation, and OD-pair journey view for detailed visual analysis of mobility factors along routes between specific origin-destination pairs. The isotime flow map linearizes a flow map into a parallel isoline representation, maximizing the visualization of mobility information along the horizontal time axis while presenting clear and smooth pathways from origin to destinations. Moreover, we devise several interactive visual query methods for users to easily explore the dynamics of PTS mobility over space and time. Lastly, we also construct a PTS mobility model from millions of real passenger trajectories, and evaluate our visualization techniques with assorted case studies with the transportation researchers.

  18. Good Features to Correlate for Visual Tracking

    NASA Astrophysics Data System (ADS)

    Gundogdu, Erhan; Alatan, A. Aydin

    2018-05-01

    In recent years, correlation filters have shown dominant and spectacular results for visual object tracking. The types of features employed in this family of trackers significantly affect the performance of visual tracking. The ultimate goal is to utilize robust features invariant to any kind of appearance change of the object, while predicting the object location as accurately as in the case of no appearance change. As deep learning based methods have emerged, the study of learning features for specific tasks has accelerated. For instance, discriminative visual tracking methods based on deep architectures have been studied with promising performance. Nevertheless, correlation filter based (CFB) trackers confine themselves to the use of pre-trained networks which are trained for the object classification problem. To this end, in this manuscript the problem of learning deep fully convolutional features for CFB visual tracking is formulated. In order to learn the proposed model, a novel and efficient backpropagation algorithm is presented based on the loss function of the network. The proposed learning framework enables the network model to be flexible for a custom design. Moreover, it alleviates the dependency on the network trained for classification. Extensive performance analysis shows the efficacy of the proposed custom design in the CFB tracking framework. By fine-tuning the convolutional parts of a state-of-the-art network and integrating this model into a CFB tracker, which is the top-performing one of VOT2016, an 18% increase is achieved in terms of expected average overlap, and tracking failures are decreased by 25%, while maintaining the superiority over state-of-the-art methods on the OTB-2013 and OTB-2015 tracking datasets.
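
    For readers unfamiliar with correlation filter based tracking, the sketch below shows a simplified single-channel, MOSSE-style filter trained in closed form in the Fourier domain. It uses raw pixel patches rather than the learned deep features proposed in the paper, and the Gaussian target response and regularisation constant are illustrative assumptions.

    ```python
    import numpy as np

    def train_correlation_filter(patches, sigma=2.0, lam=1e-3):
        """Closed-form single-channel correlation filter (MOSSE-style).

        patches : (N, H, W) grayscale training patches centred on the target.
        Returns the filter (conjugate form) in the Fourier domain.
        """
        n, h, w = patches.shape
        # Desired response: a Gaussian peak at the patch centre.
        ys, xs = np.mgrid[0:h, 0:w]
        g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
        G = np.fft.fft2(g)
        A = np.zeros((h, w), complex)
        B = np.zeros((h, w), complex)
        for p in patches:
            F = np.fft.fft2(p)
            A += G * np.conj(F)          # correlation of desired output with patch
            B += F * np.conj(F)          # patch energy spectrum
        return A / (B + lam)             # regularised ratio

    def respond(filter_fft, patch):
        """Correlation response map; its peak gives the predicted target offset."""
        return np.real(np.fft.ifft2(filter_fft * np.fft.fft2(patch)))

    rng = np.random.default_rng(1)
    train = rng.normal(size=(8, 64, 64))   # toy training patches
    H = train_correlation_filter(train)
    print(respond(H, train[0]).shape)      # (64, 64)
    ```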

  19. Global motion compensated visual attention-based video watermarking

    NASA Astrophysics Data System (ADS)

    Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions resulting in poor visual quality in host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using motion compensated wavelet-based visual attention model (VAM). The proposed VAM includes spatial cues for visual saliency as well as temporal cues. The spatial modeling uses the spatial wavelet coefficients while the temporal modeling accounts for both local and global motion to arrive at the spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm, where a two-level watermarking weighting parameter map is generated from the VAM saliency maps using the saliency model and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher strength watermarking in the visually attentive region, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms the state-of-the-art video visual attention methods in joint saliency detection and low computational complexity performance. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use the VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and a robustness similar to those of high-strength watermarking.
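
    The idea of modulating embedding strength by attentiveness can be illustrated with a toy two-level weighting map: salient pixels receive a weaker additive watermark and non-salient pixels a stronger one. The thresholds, strengths, and spatial-domain embedding below are assumptions for illustration; the paper itself embeds in the wavelet domain using a motion-compensated saliency model.

    ```python
    import numpy as np

    def embedding_strength_map(saliency, alpha_attentive=0.02, alpha_background=0.05,
                               threshold=0.5):
        """Two-level watermark-strength map from a saliency map in [0, 1].

        Pixels the viewer is likely to attend to (saliency >= threshold) get the
        weaker strength for imperceptibility; the rest get the stronger strength
        for robustness.
        """
        saliency = np.asarray(saliency, float)
        return np.where(saliency >= threshold, alpha_attentive, alpha_background)

    # Embed a bipolar watermark additively with the per-pixel strength.
    rng = np.random.default_rng(2)
    frame = rng.random((4, 4))
    saliency = rng.random((4, 4))
    watermark = rng.choice([-1.0, 1.0], size=(4, 4))
    marked = frame + embedding_strength_map(saliency) * watermark
    print(np.abs(marked - frame).max())    # largest per-pixel distortion
    ```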

  20. A method for real-time visual stimulus selection in the study of cortical object perception.

    PubMed

    Leeds, Daniel D; Tarr, Michael J

    2016-06-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm(3) brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. A method for real-time visual stimulus selection in the study of cortical object perception

    PubMed Central

    Leeds, Daniel D.; Tarr, Michael J.

    2016-01-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit’s image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across predetermined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) Real-time estimation of cortical responses to stimuli are reasonably consistent; 3) Search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. PMID:26973168

  2. Chemical labelling for visualizing native AMPA receptors in live neurons

    NASA Astrophysics Data System (ADS)

    Wakayama, Sho; Kiyonaka, Shigeki; Arai, Itaru; Kakegawa, Wataru; Matsuda, Shinji; Ibata, Keiji; Nemoto, Yuri L.; Kusumi, Akihiro; Yuzaki, Michisuke; Hamachi, Itaru

    2017-04-01

    The location and number of neurotransmitter receptors are dynamically regulated at postsynaptic sites. However, currently available methods for visualizing receptor trafficking require the introduction of genetically engineered receptors into neurons, which can disrupt the normal functioning and processing of the original receptor. Here we report a powerful method for visualizing native α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA)-type glutamate receptors (AMPARs), which are essential for cognitive functions, without any genetic manipulation. This is based on a covalent chemical labelling strategy driven by selective ligand-protein recognition to tether small fluorophores to AMPARs using chemical AMPAR modification (CAM) reagents. The high penetrability of CAM reagents enables visualization of native AMPARs deep in brain tissues without affecting receptor function. Moreover, CAM reagents are used to characterize the diffusion dynamics of endogenous AMPARs in both cultured neurons and hippocampal slices. This method will help clarify the involvement of AMPAR trafficking in various neuropsychiatric and neurodevelopmental disorders.

  3. FPV: fast protein visualization using Java 3D.

    PubMed

    Can, Tolga; Wang, Yujun; Wang, Yuan-Fang; Su, Jianwen

    2003-05-22

    Many tools have been developed to visualize protein structures. Tools that are based on Java 3D(TM) are compatible among different systems and can be run remotely through web browsers. However, using Java 3D for visualization has some performance issues. The primary concerns about molecular visualization tools based on Java 3D are that they are slow in terms of interaction speed and that they cannot load large molecules. This behavior is especially apparent when the number of atoms to be displayed is huge, or when several proteins are to be displayed simultaneously for comparison. In this paper we present techniques for organizing a Java 3D scene graph to tackle these problems. We have developed a protein visualization system based on Java 3D and these techniques. We demonstrate the effectiveness of the proposed method by comparing the visualization component of our system with two other Java 3D based molecular visualization tools. In particular, for the van der Waals display mode, with the efficient organization of the scene graph, we could achieve up to eight times improvement in rendering speed and could load molecules three times as large as the previous systems could. FPV is freely available with source code at the following URL: http://www.cs.ucsb.edu/~tcan/fpv/

  4. Learning to rank using user clicks and visual features for image retrieval.

    PubMed

    Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong

    2015-04-01

    The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in justifying the relevance between a query and clicked images, are adopted in the image ranking model. However, the existing ranking model cannot integrate visual features, which are efficient in refining the click-based search results. In this paper, we propose a novel ranking model based on the learning to rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large margin structured output learning, and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning to rank model based on visual features and user clicks outperforms state-of-the-art algorithms.

  5. Data visualisation in surveillance for injury prevention and control: conceptual bases and case studies.

    PubMed

    Martinez, Ramon; Ordunez, Pedro; Soliz, Patricia N; Ballesteros, Michael F

    2016-04-01

    The complexity of current injury-related health issues demands the use of diverse and massive data sets for comprehensive analyses, and the application of novel methods to communicate data effectively to the public health community, decision-makers and the public. Recent advances in information visualisation, the availability of new visual analytic methods and tools, and progress in information technology provide an opportunity for shaping the next generation of injury surveillance. To introduce data visualisation conceptual bases, and propose a visual analytic and visualisation platform in public health surveillance for injury prevention and control. The paper introduces data visualisation conceptual bases, describes a visual analytic and visualisation platform, and presents two real-world case studies illustrating their application in public health surveillance for injury prevention and control. Application of the visual analytic and visualisation platform is presented as a solution to improve access to heterogeneous data sources, enhance data exploration and analysis, communicate data effectively, and support decision-making. Applications of data visualisation concepts and the visual analytic platform could play a key role in shaping the next generation of injury surveillance. The visual analytic and visualisation platform could improve data use, analytic capacity, and the ability to effectively communicate findings and key messages. The public health surveillance community is encouraged to identify opportunities to develop and expand its use in injury prevention and control. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  6. Evaluation of Barrier Skin Cream Effectiveness Against JP-8 Jet Fuel Absorption and Irritation

    DTIC Science & Technology

    2009-04-01

    ...quantify the colorimeter measurements. This system uses spectral chromaticity coordinates and corresponding color-matching functions based on...in the caudal thigh or lumbar muscles and the rabbit was monitored throughout the procedure. Once anesthetized, a baseline visual and colorimeter...Visual Scoring Technique: All barrier creams were scored in 3 ways: by visual scoring as described in the Draize method, by colorimeter, and by

  7. Tools and Methods for Visualization of Mesoscale Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Liu, L.; Silver, D.; Kang, D.; Curchitser, E.

    2017-12-01

    Mesoscale ocean eddies form in the Gulf Stream and transport heat and nutrients across the ocean basin. The internal structure of these three-dimensional eddies and the kinematics with which they move are critical to a full understanding of their transport capacity. A series of visualization tools have been developed to extract, characterize, and track ocean eddies from 3D modeling results, to visually show the ocean eddy story by applying various illustrative visualization techniques, and to interactively view results stored on a server from a conventional browser. In this work, we apply a feature-based method to track instances of ocean eddies through the time steps of a high-resolution multidecadal regional ocean model and generate a series of eddy paths which reflect the life cycle of individual eddy instances. The basic method uses the Okubo-Weiss parameter to define eddy cores but could be adapted to alternative specifications of an eddy. Stored results include pixel-lists for each eddy instance, tracking metadata for eddy paths, and physical and geometric properties. In the simplest view, isosurfaces are used to display eddies along an eddy path. Individual eddies can then be selected and viewed independently or an eddy path can be viewed in the context of all eddy paths (longer than a specified duration) and the ocean basin. To tell the story of mesoscale ocean eddies, we combined illustrative visualization techniques, including visual effectiveness enhancement, focus+context, and smart visibility, with the extracted volume features to explore eddy characteristics at multiple scales from ocean basin to individual eddy. An evaluation by domain experts indicates that combining our feature-based techniques with illustrative visualization techniques provides insight into the role eddies play in ocean circulation. A web-based GUI is under development to facilitate easy viewing of stored results. The GUI provides the user control to choose amongst available datasets, to specify the variables (such as temperature or salinity) to display on the isosurfaces, and to choose the scale and orientation of the view. These techniques allow an oceanographer to browse the data based on eddy paths and individual eddies rather than slices or volumes of data.
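
    The eddy-core extraction step rests on the Okubo-Weiss parameter, W = s_n^2 + s_s^2 - omega^2, with strongly negative W marking rotation-dominated cores. A minimal sketch on a toy solid-body vortex is given below; the grid, threshold fraction, and `okubo_weiss` helper are assumptions for illustration, not the authors' tracking code.

    ```python
    import numpy as np

    def okubo_weiss(u, v, dx=1.0, dy=1.0):
        """Okubo-Weiss parameter W = s_n^2 + s_s^2 - omega^2 on a u[y, x] grid."""
        dudy, dudx = np.gradient(u, dy, dx)
        dvdy, dvdx = np.gradient(v, dy, dx)
        s_n = dudx - dvdy          # normal strain
        s_s = dvdx + dudy          # shear strain
        omega = dvdx - dudy        # relative vorticity
        return s_n**2 + s_s**2 - omega**2

    # Idealised solid-body vortex: rotation dominates, so W < 0 inside the core.
    y, x = np.mgrid[-1:1:64j, -1:1:64j]
    u, v = -y, x
    W = okubo_weiss(u, v, dx=2 / 63, dy=2 / 63)
    cores = W < -0.2 * W.std()     # commonly used fraction-of-std threshold
    print(f"fraction flagged as eddy core: {cores.mean():.2f}")
    ```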

  8. Web-Based Geospatial Visualization of GPM Data with CesiumJS

    NASA Technical Reports Server (NTRS)

    Lammers, Matt

    2018-01-01

    Advancements in the capabilities of JavaScript frameworks and web browsing technology have made online visualization of large geospatial datasets, such as those coming from precipitation satellites, viable. These data benefit from being visualized on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS (http://cesiumjs.org), developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. This presentation will describe how CesiumJS has been used in three-dimensional visualization products developed as part of the NASA Precipitation Processing System (PPS) STORM data-order website. Existing methods of interacting with Global Precipitation Measurement (GPM) Mission data primarily focus on two-dimensional static images, whether displaying vertical slices or horizontal surface/height-level maps. These methods limit interactivity with the robust three-dimensional data coming from the GPM core satellite. Integrating the data with CesiumJS in a web-based user interface has allowed us to create the following products. We have linked an on-the-fly visualization tool for any GPM/partner satellite orbit with the data-order interface. A version of this tool also focuses on high-impact weather events. It enables viewing of combined radar and microwave-derived precipitation data on mobile devices and in a way that can be embedded into other websites. We also have used CesiumJS to visualize a method of integrating gridded precipitation data with modeled wind speeds that animates over time. Emphasis in the presentation will be placed on how a variety of technical methods were used to create these tools, and how the flexibility of the CesiumJS framework facilitates creative approaches to interact with the data.

  9. Interactive visual exploration and refinement of cluster assignments.

    PubMed

    Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R

    2017-09-12

    With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms don't properly account for ambiguity in the source data, as records are often assigned to discrete clusters, even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.
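
    Two of the ingredients mentioned above, comparing clusterings and scoring how ambiguous individual assignments are, can be sketched with standard tools: the adjusted Rand index for agreement between two clusterings, and per-record silhouette values as a simple stand-in for an assignment-quality score. The toy data, k-means clusterings, and 0.1 ambiguity cutoff below are assumptions, not the StratomeX implementation.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import adjusted_rand_score, silhouette_samples

    # Toy matrix data standing in for, e.g., gene-expression profiles.
    X, _ = make_blobs(n_samples=300, centers=4, cluster_std=2.5, random_state=0)

    # Two clusterings to compare (different k), as an analyst might.
    labels_a = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    labels_b = KMeans(n_clusters=5, n_init=10, random_state=1).fit_predict(X)
    print(f"agreement (adjusted Rand index): {adjusted_rand_score(labels_a, labels_b):.2f}")

    # Per-record silhouette as a simple stand-in for an assignment-quality score;
    # low values flag ambiguous records a user might want to re-assign by hand.
    quality = silhouette_samples(X, labels_a)
    print(f"ambiguous records (silhouette < 0.1): {(quality < 0.1).sum()}")
    ```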

  10. Typograph: Multiscale Spatial Exploration of Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Burtner, Edwin R.; Cramer, Nicholas O.

    2013-10-06

    Visualizing large document collections using a spatial layout of terms can enable quick overviews of information. These visual metaphors (e.g., word clouds, tag clouds, etc.) traditionally show a series of terms organized by space-filling algorithms. However, often lacking in these views is the ability to interactively explore the information to gain more detail, and the location and rendering of the terms are often not based on mathematical models that maintain relative distances from other information based on similarity metrics. In this paper, we present Typograph, a multi-scale spatial exploration visualization for large document collections. Building on term-based visualization methods, Typograph enables multiple levels of detail (terms, phrases, snippets, and full documents) within a single spatialization. Further, the information is placed based on its relative similarity to other information to create the “near = similar” geographic metaphor. This paper discusses the design principles and functionality of Typograph and presents a use case analyzing Wikipedia to demonstrate usage.

  11. Population-based survey of prevalence, causes, and risk factors for blindness and visual impairment in an aging Chinese metropolitan population

    PubMed Central

    Hu, Jian-Yan; Yan, Liang; Chen, Yong-Dong; Du, Xin-Hua; Li, Ting-Ting; Liu, De-An; Xu, Dong-Hong; Huang, Yi-Min; Wu, Qiang

    2017-01-01

    AIM To assess the prevalence, causes, and risk factors for blindness and visual impairment among elderly (≥60 years of age) Chinese people in a metropolitan area of Shanghai, China. METHODS Random cluster sampling was conducted to identify participants among residents ≥60 years of age living in the Xietu Block, Xuhui District, Shanghai, China. Presenting visual acuity (PVA) and best-corrected visual acuity (BCVA) were checked by the Early Treatment Diabetic Retinopathy Study (ETDRS) visual chart. All eligible participants underwent a comprehensive eye examination. Blindness and visual impairment were defined according to World Health Organization (WHO) criteria. RESULTS A total of 4190 persons (1688 men and 2502 women) participated in the study, and the response rate was 91.1%. Based on PVA, the prevalence of blindness was 1.1% and that of visual impairment was 7.6%. Based on BCVA, the prevalence of blindness and visual impairment decreased to 0.9% and 3.9%, respectively. Older (≥80 years of age) women with low educational levels and smoking habits exhibited a significantly greater chance of blindness and visual impairment than did those with high educational levels and no smoking habits (P<0.05). Based on PVA and BCVA, the main causes of blindness were cataract, myopic maculopathy, and age-related macular degeneration (AMD). CONCLUSION Our findings help to identify the population in need of intervention, and highlight the need for additional eye healthcare services in urban China. PMID:28149791

  12. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

    PubMed

    Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.

  13. Advanced Image Processing for Defect Visualization in Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Plotnikov, Yuri A.; Winfree, William P.

    1997-01-01

    Results of a defect visualization process based on pulse infrared thermography are presented. Algorithms have been developed to reduce the amount of operator participation required in the process of interpreting thermographic images. The algorithms determine the defect's depth and size from the temporal and spatial thermal distributions that exist on the surface of the investigated object following thermal excitation. A comparison of the results from thermal contrast, time derivative, and phase analysis methods for defect visualization are presented. These comparisons are based on three dimensional simulations of a test case representing a plate with multiple delaminations. Comparisons are also based on experimental data obtained from a specimen with flat bottom holes and a composite panel with delaminations.

  14. Spatiotemporal video deinterlacing using control grid interpolation

    NASA Astrophysics Data System (ADS)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.

  15. Feasibility and performance evaluation of generating and recording visual evoked potentials using ambulatory Bluetooth based system.

    PubMed

    Ellingson, Roger M; Oken, Barry

    2010-01-01

    This report contains the design overview and key performance measurements demonstrating the feasibility of generating and recording ambulatory visual stimulus evoked potentials using the previously reported custom Complementary and Alternative Medicine physiologic data collection and monitoring system, CAMAS. The methods used to generate visual stimuli on a PDA device and the design of an optical coupling device to convert the display to an electrical waveform recorded by the CAMAS base unit are presented. The optical sensor signal, synchronized to the visual stimulus, emulates the brain's synchronized EEG signal input to CAMAS, normally reviewed for the evoked potential response. Most importantly, the PDA also sends a marker message over the wireless Bluetooth connection to the CAMAS base unit, synchronized to the visual stimulus, which is the critical averaging reference component for obtaining VEP results. Results show that the variance in the latency of the wireless marker messaging link is small enough to support the generation and recording of visual evoked potentials. The averaged sensor waveforms at multiple CPU speeds are presented and demonstrate the suitability of the Bluetooth interface for portable ambulatory visual evoked potential implementation on our CAMAS platform.

  16. Semantics by analogy for illustrative volume visualization

    PubMed Central

    Gerl, Moritz; Rautek, Peter; Isenberg, Tobias; Gröller, Eduard

    2012-01-01

    We present an interactive graphical approach for the explicit specification of semantics for volume visualization. This explicit and graphical specification of semantics for volumetric features allows us to visually assign meaning to both input and output parameters of the visualization mapping. This is in contrast to the implicit way of specifying semantics using transfer functions. In particular, we demonstrate how to realize a dynamic specification of semantics which allows to flexibly explore a wide range of mappings. Our approach is based on three concepts. First, we use semantic shader augmentation to automatically add rule-based rendering functionality to static visualization mappings in a shader program, while preserving the visual abstraction that the initial shader encodes. With this technique we extend recent developments that define a mapping between data attributes and visual attributes with rules, which are evaluated using fuzzy logic. Second, we let users define the semantics by analogy through brushing on renderings of the data attributes of interest. Third, the rules are specified graphically in an interface that provides visual clues for potential modifications. Together, the presented methods offer a high degree of freedom in the specification and exploration of rule-based mappings and avoid the limitations of a linguistic rule formulation. PMID:23576827

  17. Novel Texture-based Visualization Methods for High-dimensional Multi-field Data Sets

    DTIC Science & Technology

    2013-07-06

    project: In standard format showing authors, title, journal, issue, pages, and date, for each category list the following: b) papers published...visualisation [18]. Novel image acquisition and simulation techniques have made it possible to record a large number of co-located data fields...function, structure, anatomical changes, metabolic activity, blood perfusion, and cellular remodelling. In this paper we investigate texture-based

  18. Dynamic sequence analysis of a decision making task of multielement target tracking and its usage as a learning method

    NASA Astrophysics Data System (ADS)

    Kang, Ziho

    This dissertation is divided into four parts: 1) development of effective methods for comparing visual scanning paths (or scanpaths) for a dynamic task of multiple moving targets, 2) application of the methods to compare the scanpaths of experts and novices for a conflict detection task of multiple aircraft on a radar screen, 3) a post-hoc analysis of other eye movement characteristics of experts and novices, and 4) finding out whether the scanpaths of experts can be used to teach novices. In order to compare experts' and novices' scanpaths, two methods are developed. The first proposed method is matrix comparison using the Mantel test. The second proposed method is maximum transition-based agglomerative hierarchical clustering (MTAHC), in which comparisons of multi-level visual groupings are carried out. The matrix comparison method was useful for a small number of targets during the preliminary experiment, but turned out to be inapplicable to a realistic case when tens of aircraft were presented on screen; however, MTAHC was effective with a large number of aircraft on screen. The experiments with experts and novices on the aircraft conflict detection task showed that their scanpaths are different. The MTAHC result was able to explicitly show how experts visually grouped multiple aircraft based on similar altitudes while novices tended to group them based on convergence. Also, the MTAHC results showed that novices paid much attention to converging aircraft groups even if they were safely separated by altitude; therefore, less attention was given to the actual conflicting pairs, resulting in low correct conflict detection rates. Since the analysis showed the scanpath differences, experts' scanpaths were shown to novices in order to find out their effectiveness. The scanpath treatment group showed indications that they changed their visual movements from trajectory-based to altitude-based movements. Between the treatment and the non-treatment group, there were no significant differences in terms of the number of correct detections; however, the treatment group made significantly fewer false alarms.
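
    A permutation-based Mantel test, as used for the matrix comparisons above, can be sketched in a few lines: correlate the off-diagonal entries of two distance matrices and build a null distribution by jointly permuting the rows and columns of one of them. The toy "scanpath" matrices and permutation count below are illustrative assumptions.

    ```python
    import numpy as np

    def mantel_test(d1, d2, permutations=999, rng=None):
        """Simple permutation Mantel test between two square distance matrices.

        Returns the Pearson correlation of the off-diagonal entries and a
        one-sided permutation p-value obtained by shuffling rows and columns
        of `d2` together.
        """
        rng = np.random.default_rng(rng)
        d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
        n = d1.shape[0]
        iu = np.triu_indices(n, k=1)

        def corr(a, b):
            return np.corrcoef(a[iu], b[iu])[0, 1]

        observed = corr(d1, d2)
        count = 0
        for _ in range(permutations):
            p = rng.permutation(n)
            if corr(d1, d2[np.ix_(p, p)]) >= observed:
                count += 1
        return observed, (count + 1) / (permutations + 1)

    # Two toy transition-distance matrices standing in for a pair of scanpaths.
    rng = np.random.default_rng(3)
    pts = rng.random((10, 2))
    d_a = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    d_b = d_a + rng.normal(scale=0.05, size=d_a.shape)
    d_b = (d_b + d_b.T) / 2                     # keep the matrix symmetric
    np.fill_diagonal(d_b, 0.0)
    print(mantel_test(d_a, d_b, permutations=499))
    ```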

  19. Investigation of two methods to quantify noise in digital images based on the perception of the human eye

    NASA Astrophysics Data System (ADS)

    Kleinmann, Johanna; Wueller, Dietmar

    2007-01-01

    Since the signal to noise measuring method as standardized in the normative part of ISO 15739:2002(E) [1] does not quantify noise in a way that matches the perception of the human eye, two alternative methods have been investigated which may be appropriate to quantify the noise perception in a physiological manner: - the model of visual noise measurement proposed by Hung et al. [2] (as described in the informative annex of ISO 15739:2002 [1]), which tries to simulate the process of human vision by using the opponent space and contrast sensitivity functions and uses the CIE L*u*v* 1976 colour space for the determination of a so-called visual noise value. - The S-CIELab model and CIEDE2000 colour difference proposed by Fairchild et al. [3], which simulates human vision in approximately the same way as Hung et al. [2] but uses an image comparison afterwards based on CIEDE2000. With a psychophysical experiment based on just noticeable difference (JND), threshold images could be defined, with which the two approaches mentioned above were tested. The assumption is that if the method is valid, the different threshold images should get the same 'noise value'. The visual noise measurement model results in similar visual noise values for all the threshold images. The method is reliable to quantify at least the JND for noise in uniform areas of digital images. While the visual noise measurement model can only evaluate uniform colour patches in images, the S-CIELab model can be used on images with spatial content as well. The S-CIELab model also results in similar colour difference values for the set of threshold images, but with some limitations: for images which contain spatial structures besides the noise, the colour difference varies depending on the contrast of the spatial content.
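
    As a rough sketch of an S-CIELab-style comparison (not the published pipeline, which filters opponent-colour channels with contrast sensitivity functions), one can blur both images to mimic the eye's limited spatial acuity and then average a CIEDE2000 difference. The Gaussian blur, sigma value, and toy images below are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.color import rgb2lab, deltaE_ciede2000

    def scielab_like_difference(rgb_ref, rgb_test, sigma=2.0):
        """Very rough S-CIELab-style comparison: spatially blur both images to
        mimic limited visual acuity, then average the CIEDE2000 difference."""
        blur = lambda im: gaussian_filter(im, sigma=(sigma, sigma, 0))
        lab_ref = rgb2lab(blur(rgb_ref))
        lab_test = rgb2lab(blur(rgb_test))
        return float(np.mean(deltaE_ciede2000(lab_ref, lab_test)))

    # A uniform gray patch and a noisy version of it, as a stand-in for a
    # threshold image pair.
    rng = np.random.default_rng(6)
    clean = np.full((64, 64, 3), 0.5)
    noisy = np.clip(clean + rng.normal(scale=0.02, size=clean.shape), 0, 1)
    print(f"mean CIEDE2000 after blurring: {scielab_like_difference(clean, noisy):.2f}")
    ```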

  20. Visual enhancement of unmixed multispectral imagery using adaptive smoothing

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2004-01-01

    Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.
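
    The underlying anisotropic diffusion idea can be sketched with a classic Perona-Malik-style iteration on a single band: uniform regions are smoothed while strong gradients block diffusion, so edges are retained. The conductance function, kappa, step size, and iteration count below are illustrative assumptions and do not reproduce the paper's multi-band modification.

    ```python
    import numpy as np

    def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
        """Perona-Malik style smoothing: flattens uniform regions, keeps edges.

        kappa controls edge sensitivity; larger values smooth across weaker edges.
        """
        u = np.asarray(img, float).copy()
        for _ in range(n_iter):
            # Finite differences to the four neighbours.
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # Edge-stopping conductance: small where the gradient is large.
            cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
            ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
            u += step * (cn * dn + cs * ds + ce * de + cw * dw)
        return u

    # Noisy step edge: the flat halves get smoothed, the edge stays sharp.
    rng = np.random.default_rng(4)
    img = np.zeros((64, 64)); img[:, 32:] = 1.0
    noisy = img + rng.normal(scale=0.05, size=img.shape)
    print(f"residual noise std: {anisotropic_diffusion(noisy)[:, :30].std():.3f}")
    ```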

  1. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    PubMed

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods are proposed to improve the performance of eye movement biometric system. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement of this visual searching task being used in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was also compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.

  2. Biometric recognition via texture features of eye movement trajectories in a visual searching task

    PubMed Central

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods are proposed to improve the performance of eye movement biometric system. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers’ temporal and spatial resolution are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement of this visual searching task being used in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was also compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases. PMID:29617383

  3. Visual detection of nucleic acids based on Mie scattering and the magnetophoretic effect.

    PubMed

    Zhao, Zichen; Chen, Shan; Ho, John Kin Lim; Chieng, Ching-Chang; Chen, Ting-Hsuan

    2015-12-07

    Visual detection of nucleic acid biomarkers is a simple and convenient approach for point-of-care applications. However, issues of sensitivity and the handling of complex bio-fluids have posed challenges. Here we report a visual method for detecting nucleic acids using Mie scattering of polystyrene microparticles and the magnetophoretic effect. Magnetic microparticles (MMPs) and polystyrene microparticles (PMPs) were surface-functionalised with oligonucleotide probes, which can hybridise with target oligonucleotides in juxtaposition and lead to the formation of MMPs-targets-PMPs sandwich structures. Using an externally applied magnetic field, the magnetophoretic effect attracts the sandwich structures to the sidewall, which reduces the suspended PMPs and leads to a change in light transmission via Mie scattering. Based on the high extinction coefficient of Mie scattering (∼3 orders of magnitude greater than that of the commonly used gold nanoparticles), our results showed the limit of detection to be 4 pM using a UV-Vis spectrometer or 10 pM by direct visual inspection. Meanwhile, we also demonstrated that this method is compatible with multiplex assays and detection in complex bio-fluids, such as whole blood or a pool of nucleic acids, without purification in advance. With a simplified operation procedure, low instrumentation requirements, high sensitivity and compatibility with complex bio-fluids, this method provides an ideal solution for visual detection of nucleic acids in resource-limited settings.

  4. Assessment of visual landscape quality using IKONOS imagery.

    PubMed

    Ozkan, Ulas Yunus

    2014-07-01

    The assessment of visual landscape quality is of importance to the management of urban woodlands. Satellite remote sensing may be used for this purpose as a substitute for traditional survey techniques that are both labour-intensive and time-consuming. This study examines the association between the quality of the perceived visual landscape in urban woodlands and texture measures extracted from IKONOS satellite data, which features 4-m spatial resolution and four spectral bands. The study was conducted in the woodlands of Istanbul (the most important element of the urban mosaic) lying along both shores of the Bosporus Strait. The visual quality assessment applied in this study is based on the perceptual approach and was performed via a survey of expressed preferences. For this purpose, representative photographs of real scenery were used to elicit observers' preferences. A slide show comprising 33 images was presented to a group of 153 volunteers (all undergraduate students), and they were asked to rate the visual quality of each on a 10-point scale (1 for very low visual quality, 10 for very high). Average visual quality scores were calculated for each landscape. Texture measures were acquired using two methods: pixel-based and object-based. Pixel-based texture measures were extracted from the first principal component (PC1) image. Object-based texture measures were extracted by using the original four bands. The association between image texture measures and perceived visual landscape quality was tested via Pearson's correlation coefficient. The analysis found a strong linear association between image texture measures and visual quality. The highest correlation coefficient was calculated between the standard deviation of gray levels (SDGL) (one of the pixel-based texture measures) and visual quality (r = 0.82, P < 0.05). The results showed that the perceived visual quality of urban woodland landscapes can be estimated by using texture measures extracted from satellite data in combination with appropriate modelling techniques.
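
    The pixel-based SDGL measure is simply the standard deviation of gray levels in a patch, and its association with preference scores is tested with Pearson's correlation. The sketch below does exactly that on synthetic patches and mock scores; both the patches and the scores are assumptions for illustration, not the IKONOS data.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    def sdgl(gray_image):
        """Standard deviation of gray levels (SDGL), a simple texture measure."""
        return float(np.std(np.asarray(gray_image, float)))

    # Toy stand-in: 12 image patches with increasing texture, and mock visual
    # quality scores loosely tracking that texture.
    rng = np.random.default_rng(5)
    patches = [rng.normal(scale=1 + 0.5 * i, size=(50, 50)) for i in range(12)]
    scores = np.array([3 + 0.5 * i + rng.normal(scale=0.4) for i in range(12)])

    texture = np.array([sdgl(p) for p in patches])
    r, p = pearsonr(texture, scores)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")
    ```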

  5. Using Interactive Data Visualizations for Exploratory Analysis in Undergraduate Genomics Coursework: Field Study Findings and Guidelines

    NASA Astrophysics Data System (ADS)

    Mirel, Barbara; Kumar, Anuj; Nong, Paige; Su, Gang; Meng, Fan

    2016-02-01

    Life scientists increasingly use visual analytics to explore large data sets and generate hypotheses. Undergraduate biology majors should be learning these same methods. Yet visual analytics is one of the most underdeveloped areas of undergraduate biology education. This study sought to determine the feasibility of undergraduate biology majors conducting exploratory analysis using the same interactive data visualizations as practicing scientists. We examined 22 upper level undergraduates in a genomics course as they engaged in a case-based inquiry with an interactive heat map. We qualitatively and quantitatively analyzed students' visual analytic behaviors, reasoning and outcomes to identify student performance patterns, commonly shared efficiencies and task completion. We analyzed students' successes and difficulties in applying knowledge and skills relevant to the visual analytics case and related gaps in knowledge and skill to associated tool designs. Findings show that undergraduate engagement in visual analytics is feasible and could be further strengthened through tool usability improvements. We identify these improvements. We speculate, as well, on instructional considerations that our findings suggested may also enhance visual analytics in case-based modules.

  6. Using Interactive Data Visualizations for Exploratory Analysis in Undergraduate Genomics Coursework: Field Study Findings and Guidelines

    PubMed Central

    Kumar, Anuj; Nong, Paige; Su, Gang; Meng, Fan

    2016-01-01

    Life scientists increasingly use visual analytics to explore large data sets and generate hypotheses. Undergraduate biology majors should be learning these same methods. Yet visual analytics is one of the most underdeveloped areas of undergraduate biology education. This study sought to determine the feasibility of undergraduate biology majors conducting exploratory analysis using the same interactive data visualizations as practicing scientists. We examined 22 upper level undergraduates in a genomics course as they engaged in a case-based inquiry with an interactive heat map. We qualitatively and quantitatively analyzed students’ visual analytic behaviors, reasoning and outcomes to identify student performance patterns, commonly shared efficiencies and task completion. We analyzed students’ successes and difficulties in applying knowledge and skills relevant to the visual analytics case and related gaps in knowledge and skill to associated tool designs. Findings show that undergraduate engagement in visual analytics is feasible and could be further strengthened through tool usability improvements. We identify these improvements. We speculate, as well, on instructional considerations that our findings suggested may also enhance visual analytics in case-based modules. PMID:26877625

  7. Literature review of visual representation of the results of benefit-risk assessments of medicinal products.

    PubMed

    Hallgreen, Christine E; Mt-Isa, Shahrul; Lieftucht, Alfons; Phillips, Lawrence D; Hughes, Diana; Talbot, Susan; Asiimwe, Alex; Downey, Gerald; Genov, Georgy; Hermann, Richard; Noel, Rebecca; Peters, Ruth; Micaleff, Alain; Tzoulaki, Ioanna; Ashby, Deborah

    2016-03-01

    The PROTECT Benefit-Risk group is dedicated to research in methods for continuous benefit-risk monitoring of medicines, including the presentation of the results, with a particular emphasis on graphical methods. A comprehensive review was performed to identify visuals used for medical risk and benefit-risk communication. The identified visual displays were grouped into visual types, and each visual type was appraised on five criteria: intended audience, intended message, knowledge required to understand the visual, unintentional messages that may be derived from the visual, and missing information that may be needed to understand the visual. Sixty-six examples of visual formats were identified from the literature and classified into 14 visual types. We found that no single visual format is consistently superior to others for the communication of benefit-risk information. In addition, most of the drawbacks identified could be considered general to visual communication, although some appear more relevant to specific formats and should be considered when creating visuals for different audiences, depending on the exact message to be communicated. We arrived at recommendations for the use of visual displays for benefit-risk communication; these recommendations concern the creation of visuals. We outline four criteria to determine audience-visual compatibility and consider these a key task in creating any visual. Next, we propose specific visual formats of interest, to be explored further for their ability to address nine different types of benefit-risk analysis information. Copyright © 2015 John Wiley & Sons, Ltd.

  8. Applying open source data visualization tools to standard based medical data.

    PubMed

    Kopanitsa, Georgy; Taranik, Maxim

    2014-01-01

    Presentation of medical data in personal health records (PHRs) requires flexible, platform-independent tools to ensure easy access to the information. The different backgrounds of patients, especially elderly people, call for simple graphical presentation of the data. Data in PHRs can be collected from heterogeneous sources. The use of standard-based medical data allows the development of generic visualization methods. Focusing on the deployment of open source tools, in this paper we applied JavaScript libraries to create data presentations for standard-based medical data.

  9. Sexing sirenians: Validation of visual and molecular sex determination in both wild dugongs (Dugong dugon) and Florida manatees (Trichechus manatus latirostris)

    USGS Publications Warehouse

    Lanyon, J.M.; Sneath, H.L.; Ovenden, J.R.; Broderick, D.; Bonde, R.K.

    2009-01-01

    Sexing wild marine mammals that show little to no sexual dimorphism is challenging. For sirenians that are difficult to catch or approach closely, molecular sexing from tissue biopsies offers an alternative method to visual discrimination. This paper reports the results of a field study to validate the use of two sexing methods: (1) visual discrimination of sex vs (2) molecular sexing based on a multiplex PCR assay which amplifies the male-specific SRY gene and differentiates ZFX and ZFY gametologues. Skin samples from 628 dugongs (Dugong dugon) and 100 Florida manatees (Trichechus manatus latirostris) were analysed and assigned as male or female based on molecular sex. These individuals were also assigned a sex based on either direct observation of the genitalia and/or the association of the individual with a calf. Individuals of both species showed 93 to 96% congruence between visual and molecular sexing. For the remaining 4 to 7%, the discrepancies could be explained by human error. To mitigate this error rate, we recommend using both of these robust techniques, with routine inclusion of sex primers into microsatellite panels employed for identity, along with trained field observers and stringent sample handling.

  10. TRAPR: R Package for Statistical Analysis and Visualization of RNA-Seq Data.

    PubMed

    Lim, Jae Hyun; Lee, Soo Youn; Kim, Ju Han

    2017-03-01

    High-throughput transcriptome sequencing, also known as RNA sequencing (RNA-Seq), is a standard technology for measuring gene expression with unprecedented accuracy. Numerous Bioconductor packages have been developed for the statistical analysis of RNA-Seq data. However, these tools focus on specific aspects of the data analysis pipeline and are difficult to integrate with one another because of their disparate data structures and processing methods. They also lack visualization methods to confirm the integrity of the data and of the process. In this paper, we propose an R-based RNA-Seq analysis pipeline called TRAPR, an integrated tool that facilitates the statistical analysis and visualization of RNA-Seq expression data. TRAPR provides various functions for data management, the filtering of low-quality data, normalization, transformation, statistical analysis, data visualization, and result visualization, allowing researchers to build customized analysis pipelines.
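
    TRAPR itself is an R package; as a rough, hedged illustration of two of the pipeline steps it automates (low-count filtering and normalization), a minimal Python sketch might look like the following. The count matrix, sample names and thresholds are invented.

    ```python
    # Generic sketch of two early RNA-Seq pipeline steps (low-count filtering and
    # CPM normalization); this is not TRAPR, only an illustration of the idea.
    import numpy as np
    import pandas as pd

    # Hypothetical raw count matrix: rows = genes, columns = samples
    counts = pd.DataFrame(
        {"ctrl_1": [0, 15, 320, 2], "ctrl_2": [1, 12, 290, 0],
         "case_1": [0, 48, 150, 3], "case_2": [2, 55, 170, 1]},
        index=["geneA", "geneB", "geneC", "geneD"],
    )

    # Filter barely expressed genes: keep genes with >= 10 counts in >= 2 samples
    keep = (counts >= 10).sum(axis=1) >= 2
    filtered = counts.loc[keep]

    # Counts-per-million normalization, then log2 transformation
    cpm = filtered.div(filtered.sum(axis=0), axis=1) * 1e6
    log_cpm = np.log2(cpm + 1)
    print(log_cpm.round(2))
    ```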

  11. Cross-Modal Retrieval With CNN Visual Features: A New Baseline.

    PubMed

    Wei, Yunchao; Zhao, Yao; Lu, Canyi; Wei, Shikui; Liu, Luoqi; Zhu, Zhenfeng; Yan, Shuicheng

    2017-02-01

    Recently, convolutional neural network (CNN) visual features have demonstrated their power as a universal representation for various recognition tasks. In this paper, cross-modal retrieval with CNN visual features is implemented with several classic methods. Specifically, off-the-shelf CNN visual features are extracted from a CNN model pretrained on ImageNet, with more than one million images from 1000 object categories, as a generic image representation for cross-modal retrieval. To further enhance the representational ability of the CNN visual features, a fine-tuning step starting from the pretrained ImageNet model is performed with the open source Caffe CNN library for each target data set. In addition, we propose a deep semantic matching method to address cross-modal retrieval for samples annotated with one or multiple labels. Extensive experiments on five popular publicly available data sets demonstrate the superiority of CNN visual features for cross-modal retrieval.
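
    As a hedged sketch of the off-the-shelf feature-extraction step (using PyTorch/torchvision and ResNet-50 as a stand-in rather than the paper's Caffe models), a generic image representation could be extracted as follows; the image path is a placeholder.

    ```python
    # Minimal sketch of extracting off-the-shelf CNN features from a pretrained
    # ImageNet model (torchvision >= 0.13 weights API assumed).
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = torch.nn.Identity()      # drop the classifier, keep 2048-d features
    model.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("example.jpg").convert("RGB")     # hypothetical image path
    with torch.no_grad():
        feature = model(preprocess(img).unsqueeze(0))  # shape: (1, 2048)
    print(feature.shape)
    ```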

  12. Making Visual Arts Learning Visible in a Generalist Elementary School Classroom

    ERIC Educational Resources Information Center

    Wright, Susan; Watkins, Marnee; Grant, Gina

    2017-01-01

    This article presents the story of one elementary school teacher's shift in art praxis through her involvement in a research project aimed at facilitating participatory arts-based communities of practice. Qualitative methods and social constructivism informed Professional Learning Interventions (PLIs) involving: (1) a visual arts workshop, (2)…

  13. Methods & Strategies: Unlocking the Power of Visual Communication

    ERIC Educational Resources Information Center

    Coleman, Julianne; McTigue, Erin

    2013-01-01

    This article reports on the use of interactive read-alouds to help students decode science diagrams and other visual information. Three short vignettes from a second-grade teacher are featured, illustrating research-based recommendations for introducing students to the graphics of science within an authentic classroom activity--the…

  14. A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image

    NASA Astrophysics Data System (ADS)

    Barat, Christian; Phlypo, Ronald

    2010-12-01

    We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.
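
    A minimal sketch of the general idea of a color-contrast saliency map thresholded to provide an initialization for a segmentation is shown below; it is not the authors' exact attention model or active-contour formulation, and the file name is hypothetical.

    ```python
    # Rough sketch: color-contrast saliency map + Otsu threshold as a coarse
    # object mask that could initialize an active contour.
    import numpy as np
    from skimage import io, color, filters

    img = io.imread("underwater.png")[..., :3]       # hypothetical image path
    lab = color.rgb2lab(img)

    # Saliency as distance of each pixel's Lab color from the mean image color
    mean_color = lab.reshape(-1, 3).mean(axis=0)
    saliency = np.linalg.norm(lab - mean_color, axis=-1)
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-9)

    # Otsu threshold on the saliency map gives a coarse object mask
    mask = saliency > filters.threshold_otsu(saliency)
    print("salient pixels:", int(mask.sum()))
    ```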

  15. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This paper presents an automatic command recognition system using audio-visual information. The system is intended to control the da Vinci laparoscopic robot. The audio signal is processed using the Mel Frequency Cepstral Coefficients (MFCC) parametrization method. In addition, features based on the points that define the mouth's outer contour according to the MPEG-4 standard are used to extract the visual speech information.
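
    For the audio channel, MFCC parametrization can be sketched in a few lines (here with the librosa library; the file name, sampling rate and frame settings are assumptions, not the system's actual configuration):

    ```python
    # Minimal sketch of MFCC extraction for a spoken command recording.
    import librosa

    signal, sr = librosa.load("command.wav", sr=16000)    # hypothetical recording
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13,
                                n_fft=400, hop_length=160)  # 25 ms frames, 10 ms hop
    print(mfcc.shape)   # (13, number_of_frames)
    ```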

  16. A novel anisotropic fast marching method and its application to blood flow computation in phase-contrast MRI.

    PubMed

    Schwenke, M; Hennemuth, A; Fischer, B; Friman, O

    2012-01-01

    Phase-contrast MRI (PC MRI) can be used to assess blood flow dynamics noninvasively inside the human body. The acquired images can be reconstructed into flow vector fields. Traditionally, streamlines can be computed based on the vector fields to visualize flow patterns and particle trajectories. The traditional methods may give a false impression of precision, as they do not consider the measurement uncertainty in the PC MRI images. In our prior work, we incorporated the uncertainty of the measurement into the computation of particle trajectories. As a major part of the contribution, a novel numerical scheme for solving the anisotropic Fast Marching problem is presented. A computing time comparison to state-of-the-art methods is conducted on artificial tensor fields. A visual comparison of healthy to pathological blood flow patterns is given. The comparison shows that the novel anisotropic Fast Marching solver outperforms previous schemes in terms of computing time. The visual comparison of flow patterns directly visualizes large deviations of pathological flow from healthy flow. The novel anisotropic Fast Marching solver efficiently resolves even strongly anisotropic path costs. The visualization method enables the user to assess the uncertainty of particle trajectories derived from PC MRI images.
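
    The "traditional" streamline computation mentioned above can be sketched as simple Euler integration of particle positions through the reconstructed vector field; the sketch below uses a synthetic 2D field and is not the authors' uncertainty-aware anisotropic fast marching solver.

    ```python
    # Sketch: Euler integration of a particle path through a 2D flow field.
    import numpy as np

    def trace_streamline(vx, vy, start, step=0.5, n_steps=200):
        """vx, vy: velocity components on a regular grid; start: (row, col)."""
        pos = np.array(start, dtype=float)
        path = [pos.copy()]
        for _ in range(n_steps):
            r, c = int(round(pos[0])), int(round(pos[1]))
            if not (0 <= r < vx.shape[0] and 0 <= c < vx.shape[1]):
                break                       # left the measured volume
            v = np.array([vy[r, c], vx[r, c]])   # (d_row, d_col)
            if np.linalg.norm(v) < 1e-6:
                break                       # stagnation point
            pos = pos + step * v / np.linalg.norm(v)
            path.append(pos.copy())
        return np.array(path)

    # Hypothetical rotational flow field for illustration
    yy, xx = np.mgrid[0:64, 0:64]
    vx, vy = -(yy - 32.0), (xx - 32.0)
    print(trace_streamline(vx, vy, start=(32, 48))[:3])
    ```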

  17. Stereoscopic visual fatigue assessment and modeling

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Wang, Tingting; Gong, Yue

    2014-03-01

    Evaluation of stereoscopic visual fatigue is one of the focuses of user experience research. It is measured by either subjective or objective methods. Objective measures are preferred for their capability to quantify the degree of human visual fatigue without being affected by individual variation. However, little research has been conducted on the integration of objective indicators or on the sensitivity of each objective indicator in reflecting subjective fatigue. This paper proposes a simple, effective method to evaluate visual fatigue more objectively. The stereoscopic viewing process is divided into a series of sessions, after each of which viewers rate their visual fatigue with subjective scores (SS) on a five-grade scale, followed by tests of the punctum maximum accommodation (PMA) and visual reaction time (VRT). Throughout the entire viewing process, their eye movements are recorded by an infrared camera. The pupil size (PS) and the percentage of eyelid closure over the pupil over time (PERCLOS) are extracted from the videos by the processing algorithm. Based on this method, an experiment with 14 subjects was conducted to assess visual fatigue induced by 3D images on a polarized 3D display. The experiment consisted of 10 sessions (5 min per session), each containing the same 75 images displayed in random order. The results show that PMA, VRT and PERCLOS are the most efficient indicators of subjective visual fatigue, and a predictive model is finally derived from stepwise multiple regression.
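
    A hedged sketch of the final modeling step, fitting a linear model that predicts subjective scores from PMA, VRT and PERCLOS, is shown below with invented numbers; the original work used stepwise multiple regression rather than a plain least-squares fit.

    ```python
    # Illustrative sketch: linear model predicting subjective fatigue scores (SS)
    # from PMA, VRT and PERCLOS. All values are made up.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical per-session measurements for one viewer
    X = np.array([[9.5, 310, 0.05],
                  [9.1, 325, 0.07],
                  [8.6, 350, 0.11],
                  [8.0, 380, 0.16]])   # columns: PMA, VRT (ms), PERCLOS
    y = np.array([1.0, 2.0, 3.0, 4.0]) # subjective scores on the 5-grade scale

    model = LinearRegression().fit(X, y)
    print("coefficients:", model.coef_, "intercept:", model.intercept_)
    print("predicted SS:", model.predict([[7.8, 395, 0.20]]))
    ```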

  18. Diffusion-Based Density-Equalizing Maps: an Interdisciplinary Approach to Visualizing Homicide Rates and Other Georeferenced Statistical Data

    NASA Astrophysics Data System (ADS)

    Mazzitello, Karina I.; Candia, Julián

    2012-12-01

    In every country, public and private agencies allocate extensive funding to collect large-scale statistical data, which in turn are studied and analyzed in order to determine local, regional, national, and international policies regarding all aspects relevant to the welfare of society. One important aspect of that process is the visualization of statistical data with embedded geographical information, which most often relies on archaic methods such as maps colored according to graded scales. In this work, we apply nonstandard visualization techniques based on physical principles. We illustrate the method with recent statistics on homicide rates in Brazil and their correlation to other publicly available data. This physics-based approach provides a novel tool that can be used by interdisciplinary teams investigating statistics and model projections in a variety of fields such as economics and gross domestic product research, public health and epidemiology, sociodemographics, political science, business and marketing, and many others.
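
    As a rough illustration of the underlying physics-based idea (density-equalizing maps in the spirit of diffusion cartograms), the sketch below diffuses a density field and advects tracked map points with the velocity field v = -∇ρ/ρ; the grid, time step and data are invented and boundary handling is deliberately simplified.

    ```python
    # Very small sketch of a diffusion-based density-equalizing step.
    import numpy as np

    rng = np.random.default_rng(0)
    rho = 1.0 + 4.0 * rng.random((50, 50))           # hypothetical rate-density grid
    points = np.array([[25.0, 25.0], [10.0, 40.0]])  # map positions to displace

    dt = 0.1
    for _ in range(200):
        # One explicit heat-equation step (edge-padded boundaries)
        padded = np.pad(rho, 1, mode="edge")
        lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * rho)
        rho = rho + dt * lap

        # Advect tracked points with v = -grad(rho) / rho (nearest-cell lookup)
        gy, gx = np.gradient(rho)
        for p in points:
            i = int(np.clip(p[0], 0, rho.shape[0] - 1))
            j = int(np.clip(p[1], 0, rho.shape[1] - 1))
            p -= dt * np.array([gy[i, j], gx[i, j]]) / rho[i, j]

    print(points.round(2))
    ```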

  19. GPU-based Efficient Realistic Techniques for Bleeding and Smoke Generation in Surgical Simulators

    PubMed Central

    Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu

    2010-01-01

    Background: In actual surgery, smoke and bleeding due to cautery processes provide important visual cues to the surgeon and have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated effects of bleeding and smoke generation, these effects are not realistic because of the requirement for real-time performance. To be interactive, the visual display must be updated at least at 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or handled with highly simplified techniques, since other computationally intensive processes compete for the available CPU resources. Methods: In this work, we develop a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators that outsources the computations to the graphics processing unit (GPU), thus freeing the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed with 20 subjects to determine the visual quality of the simulations compared with real surgical videos. Results: The smoke and bleeding simulations were implemented as part of a Laparoscopic Adjustable Gastric Banding (LAGB) simulator. For the bleeding simulation, the original shader-based implementation did not incur noticeable overhead. However, for smoke generation, an I/O (input/output) bottleneck was observed, and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations was as good as in real surgery (median rating of 4 on a 5-point Likert scale). Conclusions: Based on the performance results and the subject study, both the bleeding and smoke simulations were concluded to be efficient, highly realistic and well suited to VR-based surgical simulators. PMID:20878651

  20. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    NASA Astrophysics Data System (ADS)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced H I astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy-to-use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.

  1. Interactive and coordinated visualization approaches for biological data analysis.

    PubMed

    Cruz, António; Arrais, Joel P; Machado, Penousal

    2018-03-26

    The field of computational biology has become largely dependent on data visualization tools to analyze the increasing quantities of data gathered through the use of new and growing technologies. Aside from the volume, which often results in large amounts of noise and complex relationships with no clear structure, the visualization of biological data sets is hindered by their heterogeneity, as data are obtained from different sources and contain a wide variety of attributes, including spatial and temporal information. This requires visualization approaches that are able to not only represent various data structures simultaneously but also provide exploratory methods that allow the identification of meaningful relationships that would not be perceptible through data analysis algorithms alone. In this article, we present a survey of visualization approaches applied to the analysis of biological data. We focus on graph-based visualizations and tools that use coordinated multiple views to represent high-dimensional multivariate data, in particular time series gene expression, protein-protein interaction networks and biological pathways. We then discuss how these methods can be used to help solve the current challenges surrounding the visualization of complex biological data sets.

  2. A robust pointer segmentation in biomedical images toward building a visual ontology for biomedical article retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Pointers (arrows and symbols) are frequently used in biomedical images to highlight specific image regions of interest (ROIs) that are mentioned in figure captions and/or text discussion. Detection of pointers is the first step toward extracting relevant visual features from ROIs and combining them with textual descriptions for a multimodal (text and image) biomedical article retrieval system. Recently, we developed a pointer recognition algorithm based on an edge-based pointer segmentation method, and subsequently reported improvements to our initial approach involving the use of Active Shape Models (ASM) for pointer recognition and a region-growing-based method for pointer segmentation. These methods improved the recall of pointer recognition but contributed little to the precision. The method discussed in this article is our recent effort to improve the precision rate. Evaluation performed on two datasets and comparison with other pointer segmentation methods show significantly improved precision and the highest F1 score.

  3. Selecting islands and shoals for conservation based on biological and aesthetic criteria

    USGS Publications Warehouse

    Knutson, M.G.; Leopold, D.J.; Smardon, R.C.

    1993-01-01

    Consideration of biological quality has long been an important component of rating areas for conservation. Often these same areas are highly valued by people for aesthetic reasons, creating demands for housing and recreation that may conflict with protection plans for these habitats. Most methods of selecting land for conservation purposes use biological factors alone. For some land areas, analysis of aesthetic qualities is also important in describing the scenic value of undisturbed land. A method for prioritizing small islands and shoals based on both biological and visual quality factors is presented here. The study included 169 undeveloped islands and shoals ≥0.8 ha in the Thousand Islands Region of the St. Lawrence River, New York. Criteria such as critical habitat for uncommon plant and animal species were considered together with visual quality and incorporated into a rating system that ranked the islands and shoals according to their priority for conservation management and protection from development. Biological factors were determined based on previous research and a field survey. Visual quality was determined by visual diagnostic criteria developed from public responses to photographs of a sample of islands. Variables such as elevation, soil depth, and type of plant community can be used to classify islands into different categories of visual quality but are unsuccessful in classifying islands into categories of overall biological quality.

  4. Generational Learning Style Preferences Based on Computer-Based Healthcare Training

    ERIC Educational Resources Information Center

    Knight, Michaelle H.

    2016-01-01

    Purpose. The purpose of this mixed-method study was to determine the degree of perceived differences for auditory, visual and kinesthetic learning styles of Traditionalist, Baby Boomers, Generation X and Millennial generational healthcare workers participating in technology-assisted healthcare training. Methodology. This mixed-method research…

  5. Regional Principal Color Based Saliency Detection

    PubMed Central

    Lou, Jing; Ren, Mingwu; Wang, Huan

    2014-01-01

    Saliency detection is widely used in many visual applications such as image segmentation, object recognition and classification. In this paper, we introduce a new method to detect salient objects in natural images. The approach is based on a regional principal color contrast model, which incorporates low-level and medium-level visual cues. The method combines simply computed color features and two categories of spatial relationships into a saliency map, achieving higher F-measure rates. At the same time, we present an interpolation approach to evaluate the resulting curves and analyze parameter selection. Our method enables effective computation on images of arbitrary resolution. Experimental results on a saliency database show that our approach produces high-quality saliency maps and performs favorably against ten saliency detection algorithms. PMID:25379960

  6. Automatic transfer function design for medical visualization using visibility distributions and projective color mapping.

    PubMed

    Cai, Lile; Tay, Wei-Liang; Nguyen, Binh P; Chui, Chee-Kong; Ong, Sim-Heng

    2013-01-01

    Transfer functions play a key role in volume rendering of medical data, but transfer function manipulation is unintuitive and can be time-consuming; achieving an optimal visualization of patient anatomy or pathology is difficult. To overcome this problem, we present a system for automatic transfer function design based on visibility distribution and projective color mapping. Instead of assigning opacity directly based on voxel intensity and gradient magnitude, the opacity transfer function is automatically derived by matching the observed visibility distribution to a target visibility distribution. An automatic color assignment scheme based on projective mapping is proposed to assign colors that allow for the visual discrimination of different structures, while also reflecting the degree of similarity between them. When our method was tested on several medical volumetric datasets, the key structures within the volume were clearly visualized with minimal user intervention. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Evaluation of in-vehicle HMI using occlusion techniques: experimental results and practical implications.

    PubMed

    Baumann, Martin; Keinath, Andreas; Krems, Josef F; Bengler, Klaus

    2004-05-01

    Despite the usefulness of new on-board information systems, one has to be concerned about the potential distraction effects they impose on the driver. Therefore, methods and procedures are necessary to assess the visual demand connected with the use of an on-board system. The occlusion method is considered a strong candidate procedure for evaluating display designs with regard to their visual demand. This paper reports results from two experimental studies conducted to further evaluate this method. In the first study, performance in using an in-car navigation system was measured under three conditions: static (parking lot), occlusion (shutter glasses), and driving. The results show that the occlusion procedure can be used to simulate the visual requirements of real traffic conditions. In the second study the occlusion method was compared with a global evaluation criterion based on total task time. It is demonstrated that the occlusion method can identify tasks that meet this criterion but nevertheless cannot be resolved under driving conditions. It is concluded that the occlusion technique is a reliable and valid method for evaluating visual and dialogue aspects of in-car information systems.

  8. Overlay design method based on visual pavement distress.

    DOT National Transportation Integrated Search

    1978-01-01

    A method for designing the thickness of overlays for bituminous concrete pavements in Virginia is described. In this method the thickness is calculated by rating the amount and severity of observed pavement distress and determining the total accumula...

  9. Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.

    PubMed

    Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo

    2015-12-01

    In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet in actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not share the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted with the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; in contrast, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method, and the differences match those found in the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate the human neuromuscular and visual responses in cases where the classic method fails.
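
    The classic single-loop building block, estimating a frequency response from cross- and auto-spectral density estimates as H(f) = S_yu(f)/S_uu(f), can be sketched as follows with synthetic signals; the multiloop decomposition and the noninterference issue discussed in the paper are not reproduced here.

    ```python
    # Sketch of a cross-spectral frequency-response estimate on synthetic data.
    import numpy as np
    from scipy import signal

    fs = 100.0
    t = np.arange(0, 60, 1 / fs)
    u = np.random.randn(t.size)                 # hypothetical input (target signal)
    # Hypothetical "pilot" response: low-pass dynamics plus remnant noise
    b, a = signal.butter(2, 0.1)
    y = signal.lfilter(b, a, u) + 0.1 * np.random.randn(t.size)

    f, S_uu = signal.welch(u, fs=fs, nperseg=1024)
    _, S_yu = signal.csd(u, y, fs=fs, nperseg=1024)
    H = S_yu / S_uu                             # estimated frequency response
    print(np.abs(H[:5]))
    ```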

  10. Towards human-controlled, real-time shape sensing based flexible needle steering for MRI-guided percutaneous therapies.

    PubMed

    Li, Meng; Li, Gang; Gonenc, Berk; Duan, Xingguang; Iordachita, Iulian

    2017-06-01

    Accurate needle placement into soft tissue is essential for percutaneous prostate cancer diagnosis and treatment procedures. This paper discusses the steering of a 20-gauge (G) needle integrated with three sets of fiber Bragg grating (FBG) sensors. A fourth-order polynomial shape reconstruction method is introduced and compared with previous approaches. To control the needle, a bicycle-model-based navigation method is developed to provide visual guidance lines for clinicians. A real-time model-updating method is proposed for needle steering inside inhomogeneous tissue. A series of experiments was performed to evaluate the proposed needle shape reconstruction, visual guidance and real-time model-updating methods. Targeting experiments were performed in soft plastic phantoms and in vitro tissues with insertion depths ranging between 90 and 120 mm. Average targeting errors calculated from the acquired camera images were 0.40 ± 0.35 mm in homogeneous plastic phantoms, 0.61 ± 0.45 mm in multilayer plastic phantoms and 0.69 ± 0.25 mm in ex vivo tissue. The results endorse the feasibility and accuracy of the needle shape reconstruction and visual guidance methods developed in this work. The approach implemented for the multilayer phantom study could facilitate accurate needle placement in real inhomogeneous tissues. Copyright © 2016 John Wiley & Sons, Ltd.
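
    The idea behind polynomial shape reconstruction can be illustrated by fitting a fourth-order polynomial to deflection estimates at the sensor locations, with two points near the clamped base approximating zero deflection and zero slope; the numbers below are invented, and this is not the authors' exact FBG processing chain.

    ```python
    # Hedged sketch: fourth-order polynomial fit to per-sensor deflection estimates.
    import numpy as np

    sensor_pos = np.array([30.0, 60.0, 90.0])   # mm from needle base (3 FBG sets)
    deflection = np.array([0.2, 0.9, 2.1])      # estimated lateral deflection (mm)

    # Two closely spaced zero points near the base stand in for the clamp
    # conditions (zero deflection and approximately zero slope).
    s = np.concatenate(([0.0, 5.0], sensor_pos))
    d = np.concatenate(([0.0, 0.0], deflection))
    coeffs = np.polyfit(s, d, deg=4)

    shape = np.polyval(coeffs, np.linspace(0, 100, 5))  # deflection along the shaft
    print(shape.round(3))
    ```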

  11. Torque-onset determination: Unintended consequences of the threshold method.

    PubMed

    Dotan, Raffy; Jenkins, Glenn; O'Brien, Thomas D; Hansen, Steve; Falk, Bareket

    2016-12-01

    Compared with visual torque-onset detection (TOD), threshold-based TOD produces an onset bias that increases with lower torques or rates of torque development (RTD). The aim was to compare the effects of differential TOD bias on common contractile parameters in two torque-disparate groups. Fifteen boys and 12 men performed maximal, explosive, isometric knee extensions. Torque and EMG were recorded for each contraction. The best contractions were selected by peak torque (MVC) and peak RTD. Visual-TOD-based torque-time traces, electromechanical delays (EMD), and times to peak RTD (tRTD) were compared with corresponding data derived from a fixed 4-Nm threshold and a relative 5%-MVC threshold. The 5%-MVC TOD biases were similar for boys and men, but the corresponding 4-Nm-based biases were markedly different (40.3 ± 14.1 vs. 18.4 ± 7.1 ms, respectively; p < 0.001). Boys-men EMD differences were affected most, increasing from 5.0 ms (visual) to 26.9 ms (4 Nm; p < 0.01). The men's visually based torque kinetics tended to be faster than the boys' (NS), but the 4-Nm-based kinetics erroneously depicted the boys as being much faster to any given %MVC (p < 0.001). When comparing contractile properties of dissimilar groups, e.g., children vs. adults, threshold-based TOD methods can misrepresent reality and lead to erroneous conclusions. Relative thresholds (e.g., 5% MVC) still introduce error, but group comparisons are not confounded. Copyright © 2016 Elsevier Ltd. All rights reserved.
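
    The mechanism of the threshold-induced bias is easy to reproduce on synthetic torque traces: the detected onset lags the true onset by roughly threshold/RTD, so the bias grows as RTD falls. A minimal sketch with invented ramps:

    ```python
    # Why a fixed-threshold onset detector is biased: the lag equals the time
    # taken to reach the threshold, which grows as RTD falls.
    import numpy as np

    def threshold_onset(torque, t, threshold):
        """Return the first time at which torque reaches the threshold."""
        idx = np.argmax(torque >= threshold)
        return t[idx]

    t = np.arange(0, 0.5, 0.001)           # s
    true_onset = 0.100                      # s, known in this synthetic example
    for rtd in (2000.0, 500.0):             # Nm/s, fast vs. slow contraction
        torque = np.clip((t - true_onset) * rtd, 0, None)
        detected = threshold_onset(torque, t, threshold=4.0)   # 4-Nm threshold
        print(f"RTD {rtd:6.0f} Nm/s: bias = {(detected - true_onset) * 1000:.1f} ms")
    ```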

  12. Automatic annotation of histopathological images using a latent topic model based on non-negative matrix factorization

    PubMed Central

    Cruz-Roa, Angel; Díaz, Gloria; Romero, Eduardo; González, Fabio A.

    2011-01-01

    Histopathological images are an important resource for clinical diagnosis and biomedical research. From an image-understanding point of view, the automatic annotation of these images is a challenging problem. This paper presents a new method for automatic histopathological image annotation based on three complementary strategies: first, a part-based image representation, called the bag of features, which takes advantage of the natural redundancy of histopathological images for capturing the fundamental patterns of biological structures; second, a latent topic model, based on non-negative matrix factorization, which captures the high-level visual patterns hidden in the image; and third, a probabilistic annotation model that links the visual appearance of morphological and architectural features to 10 histopathological image annotations. The method was evaluated using 1,604 annotated images of skin tissues, which included normal and pathological architectural and morphological features, obtaining a recall of 74% and a precision of 50%, improving on a baseline annotation method based on support vector machines by 64% and 24%, respectively. PMID:22811960
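
    The latent-topic step can be illustrated with an off-the-shelf non-negative matrix factorization of a bag-of-features matrix (scikit-learn shown here; the counts are random placeholders rather than real histopathology data):

    ```python
    # Minimal sketch: NMF of a bag-of-features matrix into latent topics.
    import numpy as np
    from sklearn.decomposition import NMF

    # Rows = images, columns = visual-word counts (bag-of-features histograms)
    rng = np.random.default_rng(0)
    V = rng.poisson(lam=2.0, size=(20, 50)).astype(float)

    nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(V)   # image-by-topic weights
    H = nmf.components_        # topic-by-visual-word patterns
    print(W.shape, H.shape)    # (20, 5) (5, 50)
    ```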

  13. Different Strokes for Different Folks: Visual Presentation Design between Disciplines

    PubMed Central

    Gomez, Steven R.; Jianu, Radu; Ziemkiewicz, Caroline; Guo, Hua; Laidlaw, David H.

    2015-01-01

    We present an ethnographic study of design differences in visual presentations between academic disciplines. Characterizing design conventions between users and data domains is an important step in developing hypotheses, tools, and design guidelines for information visualization. In this paper, disciplines are compared at a coarse scale between four groups of fields: social, natural, and formal sciences; and the humanities. Two commonplace presentation types were analyzed: electronic slideshows and whiteboard “chalk talks”. We found design differences in slideshows using two methods – coding and comparing manually-selected features, like charts and diagrams, and an image-based analysis using PCA called eigenslides. In whiteboard talks with controlled topics, we observed design behaviors, including using representations and formalisms from a participant’s own discipline, that suggest authors might benefit from novel assistive tools for designing presentations. Based on these findings, we discuss opportunities for visualization ethnography and human-centered authoring tools for visual information. PMID:26357149

  14. Different Strokes for Different Folks: Visual Presentation Design between Disciplines.

    PubMed

    Gomez, S R; Jianu, R; Ziemkiewicz, C; Guo, Hua; Laidlaw, D H

    2012-12-01

    We present an ethnographic study of design differences in visual presentations between academic disciplines. Characterizing design conventions between users and data domains is an important step in developing hypotheses, tools, and design guidelines for information visualization. In this paper, disciplines are compared at a coarse scale between four groups of fields: social, natural, and formal sciences; and the humanities. Two commonplace presentation types were analyzed: electronic slideshows and whiteboard "chalk talks". We found design differences in slideshows using two methods - coding and comparing manually-selected features, like charts and diagrams, and an image-based analysis using PCA called eigenslides. In whiteboard talks with controlled topics, we observed design behaviors, including using representations and formalisms from a participant's own discipline, that suggest authors might benefit from novel assistive tools for designing presentations. Based on these findings, we discuss opportunities for visualization ethnography and human-centered authoring tools for visual information.
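
    The "eigenslides" analysis amounts to a principal component analysis over rasterized slide images; a minimal sketch with invented data follows (the thumbnail size and number of slides are assumptions).

    ```python
    # Illustrative "eigenslides": PCA over flattened slide images.
    import numpy as np
    from sklearn.decomposition import PCA

    # Suppose each slide has been rasterized to a 64x48 grayscale thumbnail
    rng = np.random.default_rng(1)
    slides = rng.random((120, 64 * 48))    # 120 slides, flattened pixel vectors

    pca = PCA(n_components=10)
    scores = pca.fit_transform(slides)     # each slide as a 10-d point
    eigenslides = pca.components_.reshape(10, 64, 48)   # principal "slide" images
    print(scores.shape, pca.explained_variance_ratio_[:3].round(3))
    ```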

  15. In-Gel Determination of L-Amino Acid Oxidase Activity Based on the Visualization of Prussian Blue-Forming Reaction

    PubMed Central

    Zhou, Ning; Zhao, Chuntian

    2013-01-01

    L-amino acid oxidase (LAAO) is attracting increasing attention due to its important functions. Diverse detection methods with their own properties have been developed for characterization of LAAO. In the present study, a simple, rapid, sensitive, cost-effective and reproducible method for quantitative in-gel determination of LAAO activity based on the visualization of Prussian blue-forming reaction is described. Coupled with SDS-PAGE, this Prussian blue agar assay can be directly used to determine the numbers and approximate molecular weights of LAAO in one step, allowing straightforward application for purification and sequence identification of LAAO from diverse samples. PMID:23383337

  16. Content-Based Medical Image Retrieval

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Deserno, Thomas M.

    This chapter details the necessity for alternative access concepts to the currently mainly text-based methods in medical information retrieval. This need is partly due to the large amount of visual data produced, the increasing variety of medical imaging data and changing user patterns. The stored visual data contain large amounts of unused information that, if well exploited, can help diagnosis, teaching and research. The chapter briefly reviews the history of image retrieval and its general methods before focusing on technologies developed in the medical domain. We also discuss the evaluation of medical content-based image retrieval (CBIR) systems and conclude by pointing out their strengths, gaps, and further developments. As examples, the MedGIFT project and the Image Retrieval in Medical Applications (IRMA) framework are presented.

  17. Objective automated quantification of fluorescence signal in histological sections of rat lens.

    PubMed

    Talebizadeh, Nooshin; Hagström, Nanna Zhou; Yu, Zhaohua; Kronschläger, Martin; Söderberg, Per; Wählby, Carolina

    2017-08-01

    Visual quantification and classification of fluorescent signals is the gold standard in microscopy. The purpose of this study was to develop an automated method to delineate cells and to quantify the expression of fluorescent biomarker signal in the nucleus and cytoplasm of each lens epithelial cell in a histological section. A region of interest representing the lens epithelium was manually demarcated in each input image. Thereafter, individual cell nuclei within the region of interest were automatically delineated based on watershed segmentation and thresholding, with an algorithm developed in Matlab™. The fluorescence signal was quantified within nuclei, cytoplasms and juxtaposed backgrounds. The classification of cells as labelled or not labelled was based on comparison of the fluorescence signal within cells with the local background. The classification rule was then optimized against visual classification of a limited dataset. The performance of the automated classification was evaluated by asking 11 independent blinded observers to classify all cells (n = 395) in one lens image. The time consumed by the automatic algorithm and by visual classification was recorded. On average, 77% of the cells were correctly classified as compared with the majority vote of the visual observers. The average agreement among visual observers was 83%. However, variation among visual observers was high, and agreement between two visual observers was as low as 71% in the worst case. Automated classification was on average 10 times faster than visual scoring. The presented method enables objective and fast detection of lens epithelial cells and quantification of fluorescent signal expression with an accuracy comparable to the variability among visual observers. © 2017 International Society for Advancement of Cytometry.
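
    A rough Python counterpart to the delineation step (the original was implemented in Matlab) is thresholding followed by a distance-transform watershed to split touching nuclei; the file name, channel and parameters below are assumptions.

    ```python
    # Sketch: threshold the nuclear channel, then split touching nuclei with a
    # distance-transform watershed.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage import io, filters, feature, segmentation

    nuc = io.imread("lens_section_nuclei.tif")          # hypothetical nuclear channel
    mask = nuc > filters.threshold_otsu(nuc)            # foreground vs. background

    distance = ndi.distance_transform_edt(mask)
    peaks = feature.peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros_like(nuc, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    nuclei = segmentation.watershed(-distance, markers, mask=mask)
    print("nuclei found:", nuclei.max())
    ```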

  18. Noise removing in encrypted color images by statistical analysis

    NASA Astrophysics Data System (ADS)

    Islam, N.; Puech, W.

    2012-03-01

    Cryptographic techniques are used to secure confidential data from unauthorized access, but these techniques are very sensitive to noise. A single bit change in encrypted data can have a catastrophic impact on the decrypted data. This paper addresses the problem of removing bit errors in visual data encrypted with the AES algorithm in CBC mode. In order to remove the noise, a method is proposed that is based on the statistical analysis of each block during decryption. The proposed method exploits local statistics of the visual data and the confusion/diffusion properties of the encryption algorithm to remove the errors. Experimental results show that the proposed method can be used at the receiving end as a possible solution for noise removal from visual data in the encrypted domain.

  19. A visual tracking method based on deep learning without online model updating

    NASA Astrophysics Data System (ADS)

    Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei

    2018-02-01

    This paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep detection model SSD (Single Shot MultiBox Detector) is used as the object extractor in the tracking model. Simultaneously, a color histogram feature and a HOG (Histogram of Oriented Gradients) feature are combined to select the tracking object. In the process of tracking, a multi-scale object searching map is built to improve the detection performance of the deep model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging tracking factors such as deformation, scale variation, rotation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six tracking methods.
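
    The hand-crafted part of the appearance model, combining a color histogram with a HOG descriptor into one candidate feature vector, can be sketched as follows; the bin counts and HOG parameters are arbitrary choices, not those of the paper.

    ```python
    # Sketch: concatenating a color histogram and a HOG descriptor for a patch.
    import numpy as np
    from skimage import io, color
    from skimage.feature import hog

    patch = io.imread("candidate_patch.png")[..., :3]   # hypothetical image patch

    # Color histogram over the RGB channels (8 bins per channel, concatenated)
    color_hist = np.concatenate(
        [np.histogram(patch[..., c], bins=8, range=(0, 256), density=True)[0]
         for c in range(3)])

    # HOG descriptor on the grayscale patch
    gray = color.rgb2gray(patch)
    hog_feat = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    descriptor = np.concatenate([color_hist, hog_feat])
    print(descriptor.shape)
    ```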

  20. Instruction-based clinical eye-tracking study on the visual interpretation of divergence: How do students look at vector field plots?

    NASA Astrophysics Data System (ADS)

    Klein, P.; Viiri, J.; Mozaffari, S.; Dengel, A.; Kuhn, J.

    2018-06-01

    Relating mathematical concepts to graphical representations is a challenging task for students. In this paper, we introduce two visual strategies to qualitatively interpret the divergence of graphical vector field representations. One strategy is based on the graphical interpretation of partial derivatives, while the other is based on the flux concept. We test the effectiveness of both strategies in an instruction-based eye-tracking study with N =41 physics majors. We found that students' performance improved when both strategies were introduced (74% correct) instead of only one strategy (64% correct), and students performed best when they were free to choose between the two strategies (88% correct). This finding supports the idea of introducing multiple representations of a physical concept to foster student understanding. Relevant eye-tracking measures demonstrate that both strategies imply different visual processing of the vector field plots, therefore reflecting conceptual differences between the strategies. Advanced analysis methods further reveal significant differences in eye movements between the best and worst performing students. For instance, the best students performed predominantly horizontal and vertical saccades, indicating correct interpretation of partial derivatives. They also focused on smaller regions when they balanced positive and negative flux. This mixed-method research leads to new insights into student visual processing of vector field representations, highlights the advantages and limitations of eye-tracking methodologies in this context, and discusses implications for teaching and for future research. The introduction of saccadic direction analysis expands traditional methods, and shows the potential to discover new insights into student understanding and learning difficulties.
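
    The quantity being interpreted graphically is div F = ∂F_x/∂x + ∂F_y/∂y; a small numerical counterpart on a sampled 2D field is shown below for reference.

    ```python
    # Numerical divergence of a 2D vector field from its partial derivatives.
    import numpy as np

    x = np.linspace(-2, 2, 81)
    y = np.linspace(-2, 2, 81)
    X, Y = np.meshgrid(x, y, indexing="xy")

    Fx, Fy = X, Y                     # a radially spreading field with divergence 2
    dFx_dx = np.gradient(Fx, x, axis=1)
    dFy_dy = np.gradient(Fy, y, axis=0)
    divergence = dFx_dx + dFy_dy

    print(divergence[40, 40])         # ~2.0 everywhere for this field
    ```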

  1. Image-Based Participatory Pedagogies: Reimagining Social Justice

    ERIC Educational Resources Information Center

    Powell, Kimberly; Serriere, Stephanie

    2013-01-01

    As educators and scholars in social studies and art education respectively, we describe two visual methods from our own research and teaching in pre-K to university settings that are embedded in visual practices. We underscore their transformative potential by using Maxine Greene's (1995) ideas of the education of perception as a critical means…

  2. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

    Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.

  3. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing

    NASA Astrophysics Data System (ADS)

    Ou, Meiying; Li, Shihua; Wang, Chaoli

    2013-12-01

    This paper investigates the finite-time tracking control problem for multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interaction. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and using a finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. A rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.

  4. Probe Scanning Support System by a Parallel Mechanism for Robotic Echography

    NASA Astrophysics Data System (ADS)

    Aoki, Yusuke; Kaneko, Kenta; Oyamada, Masami; Takachi, Yuuki; Masuda, Kohji

    We propose a probe scanning support system based on force/visual servoing control for robotic echography. First, we designed the mechanism and formulated its inverse kinematics. Next, we developed a method for scanning the ultrasound probe over the body surface and constructed a visual servo system, based on the acquired echogram, that allows the standalone medical robot to move the ultrasound probe over the patient's abdomen in three dimensions. The visual servo system detects local changes of brightness in the time-series echogram, while the probe position is stabilized by the robot's conventional force servo system, to compensate not only for periodic respiratory motion but also for body motion. We then integrated the visual servo with the force servo as a hybrid control of both position and force. To confirm applicability to an actual abdomen, we tested the complete system by following the gallbladder as a moving target, keeping its position in the echogram while minimizing the variation of the reaction force on the abdomen. The results show that the system has the potential to be applied to automatic detection of human internal organs.

  5. Statistical modeling for visualization evaluation through data fusion.

    PubMed

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

    There is a high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference is lacking, which leads to an inefficient visualization and user interface design process. Recently, the advancement of interactive and sensing technologies has made electroencephalogram (EEG) signals, eye movements and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and an application procedure for quantitative and online visualization evaluation. Fifteen participants joined the study, which was based on three different visualization designs. The results provide a regularized regression model that can accurately predict the user's evaluation of task complexity, and they indicate the significance of all three types of sensing data for visualization evaluation. This model can be widely applied to data visualization evaluation, as well as to other user-centered design evaluation and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
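
    A hedged sketch of the fusion-plus-regularized-regression idea (ridge regression with cross-validated regularization via scikit-learn, on invented feature summaries) might look like this; the feature definitions are assumptions, not the paper's.

    ```python
    # Sketch: fuse three sensing streams and fit a regularized regression that
    # predicts rated task complexity. All data are invented.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 45                                   # e.g., 15 participants x 3 designs
    eeg = rng.random((n, 4))                 # band-power summaries (hypothetical)
    eye = rng.random((n, 3))                 # fixation/saccade summaries (hypothetical)
    logs = rng.random((n, 2))                # interaction-log summaries (hypothetical)
    X = np.hstack([eeg, eye, logs])
    y = rng.integers(1, 8, size=n).astype(float)   # rated task complexity

    model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
    model.fit(X, y)
    print("in-sample R^2:", round(model.score(X, y), 2))
    ```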

  6. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.

  7. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  8. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.
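
    For reference, the conventional multi-image OpenCV calibration that the single-image tool was compared against typically looks like the sketch below; the chessboard geometry and file paths are assumptions.

    ```python
    # Sketch: OpenCV chessboard detection followed by cv2.calibrateCamera.
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                                     # inner corners of the board
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points, size = [], [], None
    for path in glob.glob("calib_frames/*.png"):         # ~30 laparoscope frames
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            size = gray.shape[::-1]

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                     size, None, None)
    print("RMS reprojection error:", rms)
    print("intrinsic matrix:\n", K)
    ```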

  9. [Research on Three-dimensional Medical Image Reconstruction and Interaction Based on HTML5 and Visualization Toolkit].

    PubMed

    Gao, Peng; Liu, Peng; Su, Hongsen; Qiao, Liang

    2015-04-01

    Integrating the Visualization Toolkit with the interaction, bidirectional communication, and graphics rendering capabilities provided by HTML5, we explored and experimented on the feasibility of remote medical image reconstruction and interaction purely in the Web. We proposed a server-centric method that did not require downloading large medical data sets to the client and avoided having to consider network transmission pressure and the three-dimensional (3D) rendering capability of client hardware. The method integrated remote medical image reconstruction and interaction seamlessly into the Web, which makes it applicable to lower-end computers and mobile devices. Finally, we tested this method over the Internet and achieved real-time performance. This Web-based 3D reconstruction and interaction method, which works across Internet terminals and performance-limited devices, may be useful for remote medical assistance.

  10. An optimized web-based approach for collaborative stereoscopic medical visualization

    PubMed Central

    Kaspar, Mathias; Parsad, Nigel M; Silverstein, Jonathan C

    2013-01-01

    Objective Medical visualization tools have traditionally been constrained to tethered imaging workstations or proprietary client viewers, typically part of hospital radiology systems. To improve accessibility to real-time, remote, interactive, stereoscopic visualization and to enable collaboration among multiple viewing locations, we developed an open source approach requiring only a standard web browser with no added client-side software. Materials and Methods Our collaborative, web-based, stereoscopic, visualization system, CoWebViz, has been used successfully for the past 2 years at the University of Chicago to teach immersive virtual anatomy classes. It is a server application that streams server-side visualization applications to client front-ends, comprised solely of a standard web browser with no added software. Results We describe optimization considerations, usability, and performance results, which make CoWebViz practical for broad clinical use. We clarify technical advances including: enhanced threaded architecture, optimized visualization distribution algorithms, a wide range of supported stereoscopic presentation technologies, and the salient theoretical and empirical network parameters that affect our web-based visualization approach. Discussion The implementations demonstrate usability and performance benefits of a simple web-based approach for complex clinical visualization scenarios. Using this approach overcomes technical challenges that require third-party web browser plug-ins, resulting in the most lightweight client. Conclusions Compared to special software and hardware deployments, unmodified web browsers enhance remote user accessibility to interactive medical visualization. Whereas local hardware and software deployments may provide better interactivity than remote applications, our implementation demonstrates that a simplified, stable, client approach using standard web browsers is sufficient for high quality three-dimensional, stereoscopic, collaborative and interactive visualization. PMID:23048008

  11. Computer aided system for segmentation and visualization of microcalcifications in digital mammograms.

    PubMed

    Reljin, Branimir; Milosević, Zorica; Stojić, Tomislav; Reljin, Irini

    2009-01-01

    Two methods for segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement followed by significant suppression of background tissue, irrespective of its radiological density. Through an iterative procedure, this method strongly emphasizes only small bright details, the possible microcalcifications. In the multifractal approach, corresponding multifractal "images" are created from the initial mammogram, from which the radiologist is free to change the level of segmentation. A user-friendly computer-aided visualization (CAV) system embedding the two methods has been realized. The interactive approach enables the physician to control the level and the quality of segmentation. The suggested methods were tested on mammograms from the MIAS database as a gold standard and on images from clinical practice, using digitized films and digital images from a full-field digital mammography unit.
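
    A white top-hat transform is one classical morphological operation that emphasizes small bright details against a slowly varying background, in the spirit of the first method described above. The sketch below (Python with OpenCV) is illustrative only and is not the authors' exact operator combination; the file path and kernel size are placeholders.

    ```python
    # Sketch: emphasizing small bright details (candidate microcalcifications) with a
    # white top-hat transform. Generic morphological approach, not the paper's exact
    # operator combination; file name and kernel size are placeholders.
    import cv2
    import numpy as np

    def tophat_enhance(mammogram_path, kernel_size=15):
        img = cv2.imread(mammogram_path, cv2.IMREAD_GRAYSCALE)
        # Structuring element slightly larger than the bright details to be kept.
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
        # White top-hat = original - morphological opening: suppresses the slowly
        # varying background tissue and keeps small bright structures.
        tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, se)
        # Simple global (Otsu) threshold on the enhanced image to segment candidates.
        _, mask = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return tophat, mask
    ```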

  12. Performance enhancement for audio-visual speaker identification using dynamic facial muscle model.

    PubMed

    Asadpour, Vahid; Towhidkhah, Farzad; Homayounpour, Mohammad Mehdi

    2006-10-01

    The science of human identification using physiological characteristics, or biometry, has been of great concern in security systems. However, robust multimodal identification systems based on audio-visual information have not been thoroughly investigated yet. Therefore, the aim of this work is to propose a model-based feature extraction method that employs the physiological characteristics of the facial muscles producing lip movements. This approach adopts intrinsic muscle properties such as viscosity, elasticity, and mass, which are extracted from a dynamic lip model. These parameters are exclusively dependent on the neuro-muscular properties of the speaker; consequently, imitation of valid speakers could be reduced to a large extent. These parameters are applied to a hidden Markov model (HMM) audio-visual identification system. In this work, a combination of audio and video features has been employed by adopting a multistream pseudo-synchronized HMM training method. Noise-robust audio features such as Mel-frequency cepstral coefficients (MFCC), spectral subtraction (SS), and relative spectra perceptual linear prediction (J-RASTA-PLP) have been used to evaluate the performance of the multimodal system once efficient audio feature extraction methods have been utilized. The superior performance of the proposed system is demonstrated on a large multispeaker database of continuously spoken digits, along with a sentence that is phonetically rich. To evaluate the robustness of the algorithms, some experiments were performed on genetically identical twins. Furthermore, changes in speaker voice were simulated with drug inhalation tests. At a 3 dB signal-to-noise ratio (SNR), the dynamic muscle model improved the identification rate of the audio-visual system from 91 to 98%. Results on identical twins revealed an apparent performance improvement for the dynamic muscle model-based system, with the identification rate of the audio-visual system enhanced from 87 to 96%.

  13. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter.

    PubMed

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-17

    Research on hand gestures has attracted many image processing-related studies, as a hand gesture intuitively conveys the motional intention of a human. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.

  14. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter

    PubMed Central

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-01

    Research on hand gestures has attracted many image processing-related studies, as a hand gesture intuitively conveys the motional intention of a human. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor’s stability against luminance and the visual sensor’s textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity. PMID:28106716

  15. High-quality and interactive animations of 3D time-varying vector fields.

    PubMed

    Helgeland, Anders; Elboth, Thomas

    2006-01-01

    In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, such that it does not suffer from the same perceptual issues as is the case for visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, these particles are used as seed points to generate field lines using any vector field such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
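
    The core seed-and-advect idea described above can be sketched in a few lines: particles seeded throughout the domain are advected through a sampled, time-varying velocity field. The sketch below (Python/NumPy) uses simple Euler steps and nearest-neighbour field lookup; the paper's evenly-spaced field-line generation and texture-based rendering are not reproduced, and the velocity(t) data source is an assumption.

    ```python
    # Minimal sketch of particle advection through a time-varying velocity field
    # sampled on a regular grid. Illustrative only: explicit Euler integration with
    # nearest-neighbour lookup; `velocity(t)` returning an (nx, ny, nz, 3) array is
    # an assumed data source.
    import numpy as np

    def advect(seeds, velocity, t0, t1, dt, spacing=1.0):
        """Advect an (N, 3) array of seed positions from t0 to t1."""
        pos = seeds.astype(float).copy()
        t = t0
        while t < t1:
            v = velocity(t)                                # field at current time step
            idx = np.clip(np.round(pos / spacing).astype(int), 0,
                          np.array(v.shape[:3]) - 1)       # nearest grid cell
            pos += dt * v[idx[:, 0], idx[:, 1], idx[:, 2]]
            t += dt
        return pos
    ```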

  16. A procedure for Alcian blue staining of mucins on polyvinylidene difluoride membranes.

    PubMed

    Dong, Weijie; Matsuno, Yu-ki; Kameyama, Akihiko

    2012-10-16

    The isolation and characterization of mucins are critically important for obtaining insight into the molecular pathology of various diseases, including cancers and cystic fibrosis. Recently, we developed a novel membrane electrophoretic method, supported molecular matrix electrophoresis (SMME), which separates mucins on a polyvinylidene difluoride (PVDF) membrane impregnated with a hydrophilic polymer. Alcian blue staining is widely used to visualize mucopolysaccharides and acidic mucins on both blotted membranes and SMME membranes; however, this method cannot be used to stain mucins with a low acidic glycan content. Meanwhile, periodic acid-Schiff staining can selectively visualize glycoproteins, including mucins, but is incompatible with glycan analysis, which is indispensable for mucin characterizations. Here we describe a novel staining method, designated succinylation-Alcian blue staining, for visualizing mucins on a PVDF membrane. This method can visualize mucins regardless of the acidic residue content and shows a sensitivity 2-fold higher than that of Pro-Q Emerald 488, a fluorescent periodate Schiff-base stain. Furthermore, we demonstrate the compatibility of this novel staining procedure with glycan analysis using porcine gastric mucin as a model mucin.

  17. Visual tracking using objectness-bounding box regression and correlation filters

    NASA Astrophysics Data System (ADS)

    Mbelwa, Jimmy T.; Zhao, Qingjie; Lu, Yao; Wang, Fasheng; Mbise, Mercy

    2018-03-01

    Visual tracking is a fundamental problem in computer vision with extensive application domains in surveillance and intelligent systems. Recently, correlation filter-based tracking methods have shown great achievements in terms of robustness, accuracy, and speed. However, such methods struggle with fast motion (FM), motion blur (MB), illumination variation (IV), and drift caused by occlusion (OCC). To solve this problem, a tracking method that integrates an objectness-bounding box regression (O-BBR) model and a scheme based on the kernelized correlation filter (KCF) is proposed. The KCF-based scheme is used to improve tracking performance under FM and MB. To handle the drift problem caused by OCC and IV, we propose objectness proposals trained with bounding box regression as prior knowledge to provide candidates and background suppression. Finally, the KCF scheme as a base tracker and O-BBR are fused to obtain the state of the target object. Extensive experimental comparisons of the developed tracking method with other state-of-the-art trackers are performed on several challenging video sequences. The comparison results show that our proposed tracking method outperforms other state-of-the-art tracking methods in terms of effectiveness, accuracy, and robustness.

  18. Unsupervised and self-mapping category formation and semantic object recognition for mobile robot vision used in an actual environment

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Tsukada, M.; Sato, K.

    2013-07-01

    This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using the scale-invariant feature transform (SIFT), selection of target feature points using one-class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter propagation networks (CPNs) for visualizing spatial relations between categories. Classification results for dynamic, time-series images obtained with two robots of different sizes and with different movements demonstrate that our method can visualize spatial relations among categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category formation under appearance changes of objects.

  19. Chemical labelling for visualizing native AMPA receptors in live neurons

    PubMed Central

    Wakayama, Sho; Kiyonaka, Shigeki; Arai, Itaru; Kakegawa, Wataru; Matsuda, Shinji; Ibata, Keiji; Nemoto, Yuri L.; Kusumi, Akihiro; Yuzaki, Michisuke; Hamachi, Itaru

    2017-01-01

    The location and number of neurotransmitter receptors are dynamically regulated at postsynaptic sites. However, currently available methods for visualizing receptor trafficking require the introduction of genetically engineered receptors into neurons, which can disrupt the normal functioning and processing of the original receptor. Here we report a powerful method for visualizing, without any genetic manipulation, native α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA)-type glutamate receptors (AMPARs), which are essential for cognitive functions. This is based on a covalent chemical labelling strategy driven by selective ligand-protein recognition to tether small fluorophores to AMPARs using chemical AMPAR modification (CAM) reagents. The high penetrability of CAM reagents enables visualization of native AMPARs deep in brain tissues without affecting receptor function. Moreover, CAM reagents are used to characterize the diffusion dynamics of endogenous AMPARs in both cultured neurons and hippocampal slices. This method will help clarify the involvement of AMPAR trafficking in various neuropsychiatric and neurodevelopmental disorders. PMID:28387242

  20. Visual accumulation tube for size analysis of sands

    USGS Publications Warehouse

    Colby, B.C.; Christensen, R.P.

    1956-01-01

    The visual-accumulation-tube method was developed primarily for making size analyses of the sand fractions of suspended-sediment and bed-material samples. Because the fundamental property governing the motion of a sediment particle in a fluid is believed to be its fall velocity, the analysis is designed to determine the fall-velocity-frequency distribution of the individual particles of the sample. The analysis is based on a stratified sedimentation system in which the sample is introduced at the top of a transparent settling tube containing distilled water. The procedure involves the direct visual tracing of the height of sediment accumulation in a contracted section at the bottom of the tube. A pen records the height on a moving chart. The method is simple and fast, provides a continuous and permanent record, gives highly reproducible results, and accurately determines the fall-velocity characteristics of the sample. The apparatus, procedure, results, and accuracy of the visual-accumulation-tube method for determining the sedimentation-size distribution of sands are presented in this paper.

  1. LEA Detection and Tracking Method for Color-Independent Visual-MIMO

    PubMed Central

    Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo

    2016-01-01

    Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is degraded by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563

  2. LEA Detection and Tracking Method for Color-Independent Visual-MIMO.

    PubMed

    Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo

    2016-07-02

    Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is degraded by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.
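
    Two building blocks named in the abstract, Harris corner detection inside a colour-based ROI and Kalman-filter tracking of the LEA position, can be sketched with OpenCV as below. This is an illustrative sketch only, not the full visual-MIMO pipeline; ROI extraction and perspective correction are omitted, and the filter tuning is an assumption.

    ```python
    # Sketch of two building blocks from the abstract: Harris corner detection inside
    # a colour-based ROI, and a constant-velocity Kalman filter tracking the LEA
    # centre. Illustrative only; thresholds and noise covariances are placeholders.
    import cv2
    import numpy as np

    def detect_corners(gray_roi, quality=0.01):
        response = cv2.cornerHarris(np.float32(gray_roi), 2, 3, 0.04)
        ys, xs = np.where(response > quality * response.max())
        return np.column_stack([xs, ys])            # candidate LEA corner points

    def make_kalman():
        kf = cv2.KalmanFilter(4, 2)                 # state: x, y, vx, vy; measurement: x, y
        kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                        [0, 1, 0, 1],
                                        [0, 0, 1, 0],
                                        [0, 0, 0, 1]], np.float32)
        kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
        kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
        return kf

    # Usage per frame: predict, then correct with the detected LEA centre (x, y):
    #   kf = make_kalman()
    #   predicted = kf.predict()
    #   kf.correct(np.float32([[x], [y]]))
    ```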

  3. A Laboratory-Based Nonlinear Dynamics Course for Science and Engineering Students.

    ERIC Educational Resources Information Center

    Sungar, N.; Sharpe, J. P.; Moelter, M. J.; Fleishon, N.; Morrison, K.; McDill, J.; Schoonover, R.

    2001-01-01

    Describes the implementation of a new laboratory-based, interdisciplinary undergraduate course on nonlinear dynamical systems. Focuses on geometrical methods and data visualization techniques. (Contains 20 references.) (Author/YDS)

  4. Combined visualization for noise mapping of industrial facilities based on ray-tracing and thin plate splines

    NASA Astrophysics Data System (ADS)

    Ovsiannikov, Mikhail; Ovsiannikov, Sergei

    2017-01-01

    The paper presents a combined approach to noise mapping and visualization of industrial facilities' sound pollution using the forward ray-tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for computing the extent of sanitary zones based on the ray-tracing algorithm. Sound pressure levels within the clustered zones are computed by two-dimensional spline interpolation of data measured on the perimeter and inside each zone.
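
    The thin-plate spline step on its own can be illustrated with SciPy's RBFInterpolator, which supports a thin-plate-spline kernel: scattered sound-level measurements on the perimeter and inside a zone are interpolated onto a regular grid. The ray-tracing computation and the zone clustering are not shown, and the input arrays are assumed.

    ```python
    # Minimal sketch of the thin-plate-spline interpolation step only: scattered
    # sound pressure levels (dB) are interpolated onto a regular grid for one zone.
    # Input arrays are assumed; ray tracing and clustering are not shown.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def interpolate_noise(points_xy, levels_db, grid_x, grid_y):
        """points_xy: (N, 2) measurement locations; levels_db: (N,) measured levels."""
        tps = RBFInterpolator(points_xy, levels_db, kernel="thin_plate_spline")
        gx, gy = np.meshgrid(grid_x, grid_y)
        grid_points = np.column_stack([gx.ravel(), gy.ravel()])
        return tps(grid_points).reshape(gx.shape)   # noise map for the zone
    ```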

  5. Spatiotemporal Visualization of Time-Series Satellite-Derived CO2 Flux Data Using Volume Rendering and Gpu-Based Interpolation on a Cloud-Driven Digital Earth

    NASA Astrophysics Data System (ADS)

    Wu, S.; Yan, Y.; Du, Z.; Zhang, F.; Liu, R.

    2017-10-01

    The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.

  6. A comparative study of multi-focus image fusion validation metrics

    NASA Astrophysics Data System (ADS)

    Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael

    2016-05-01

    Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires evaluating the fused images to determine whether the focused regions of each image have been properly fused. Many no-reference metrics, such as information theory-based, image feature-based, and structural similarity-based metrics, have been developed to accomplish these comparisons. However, accurately assessing visual quality is hard to scale, and it requires validating these metrics for different types of applications. In order to do this, human perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal to noise ratio (PSNR).
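
    Of the metrics compared, spatial frequency is among the simplest: it is the root-mean-square of the horizontal and vertical first differences of the fused image, combined in quadrature. A minimal NumPy sketch is given below; it illustrates the standard definition rather than the authors' evaluation pipeline.

    ```python
    # Sketch of the spatial frequency no-reference metric: SF = sqrt(RF^2 + CF^2),
    # where RF and CF are the RMS of horizontal and vertical first differences of
    # the fused image. Higher SF indicates more detail/activity.
    import numpy as np

    def spatial_frequency(img):
        img = img.astype(float)
        rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency
        cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency
        return np.sqrt(rf ** 2 + cf ** 2)
    ```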

  7. A foreground object features-based stereoscopic image visual comfort assessment model

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.

    2014-11-01

    Since stereoscopic images provide observers with both a realistic and a potentially uncomfortable viewing experience, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws the most attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. First, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one having the largest average disparity. Second, three visual features, namely the average disparity, average width, and spatial complexity of the foreground object, are computed from the perspective of visual attention. However, the object's width and complexity do not influence perceived visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of disparity and width and, third, apply four different models to predict visual comfort more precisely. Experimental results show that the proposed VCA metric outperforms other existing metrics and can achieve high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.

  8. Create and Publish a Hierarchical Progressive Survey (HiPS)

    NASA Astrophysics Data System (ADS)

    Fernique, P.; Boch, T.; Pineau, F.; Oberto, A.

    2014-05-01

    Since 2009, the CDS has promoted a visualization method based on the HEALPix sky tessellation. This method, called the “Hierarchical Progressive Survey” or HiPS, allows one to display a survey progressively. It is particularly suited for all-sky surveys or deep fields. This visualization method is now integrated in several applications, notably Aladin, the SiTools/MIZAR CNES framework, and the recent HTML5 “Aladin Lite”. More than one hundred surveys are already available in this view mode. In this article, we present the progress concerning this method and its recent adaptation to astronomical catalogues such as the GAIA simulation.
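
    The underlying HEALPix tessellation can be illustrated with the healpy package (assumed available): a given sky position maps to a nested pixel index at successively higher orders, which is the indexing that HiPS tiles build on. The sketch shows the pixelization only, not the actual HiPS generation performed by the CDS tools; the example coordinates are arbitrary.

    ```python
    # Illustration of the HEALPix indexing that underlies HiPS tiles: the same sky
    # position maps to a nested pixel index at successively higher orders (finer
    # tiles). Requires the healpy package; example coordinates are arbitrary.
    import healpy as hp
    import numpy as np

    ra, dec = 83.633, 22.014                 # example position (degrees)
    theta = np.radians(90.0 - dec)           # colatitude
    phi = np.radians(ra)

    for order in range(0, 6):                # finer resolution as nside = 2**order grows
        nside = 2 ** order
        ipix = hp.ang2pix(nside, theta, phi, nest=True)
        print(f"order {order}: nside={nside}, nested pixel index={ipix}")
    ```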

  9. Automated visualization of rule-based models

    PubMed Central

    Tapia, Jose-Juan; Faeder, James R.

    2017-01-01

    Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models. PMID:29131816

  10. Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots.

    PubMed

    Chen, Jian; Jia, Bingxi; Zhang, Kaixiang

    2017-11-01

    In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track the desired trajectory using visual feedback. The trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of the trifocal tensor. In previous works, the start, current, and final images are required to share enough visual information to estimate the trifocal tensor. However, this requirement can be easily violated for perspective cameras with a limited field of view. In this paper, a key frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (installing position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works for almost all practical circumstances, including both trajectory tracking and pose regulation tasks. Simulations based on the virtual experimentation platform (V-REP) are performed to evaluate the effectiveness of the proposed approach.

  11. Typograph: Multiscale Spatial Exploration of Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Burtner, Edwin R.; Cramer, Nicholas O.

    2013-12-01

    Visualizing large document collections using a spatial layout of terms can enable quick overviews of information. However, these metaphors (e.g., word clouds, tag clouds, etc.) often lack interactivity to explore the information, and the location and rendering of the terms are often not based on mathematical models that maintain relative distances from other information based on similarity metrics. Further, transitioning between levels of detail (i.e., from terms to full documents) can be challenging. In this paper, we present Typograph, a multi-scale spatial exploration visualization for large document collections. Building on term-based visualization methods, Typograph enables multiple levels of detail (terms, phrases, snippets, and full documents) within a single spatialization. Further, information is placed based on its relative similarity to other information to create the “near = similar” geography metaphor. This paper discusses the design principles and functionality of Typograph and presents a use case analyzing Wikipedia to demonstrate usage.

  12. Comparison of amplitude-decorrelation, speckle-variance and phase-variance OCT angiography methods for imaging the human retina and choroid

    PubMed Central

    Gorczynska, Iwona; Migacz, Justin V.; Zawadzki, Robert J.; Capps, Arlie G.; Werner, John S.

    2016-01-01

    We compared the performance of three OCT angiography (OCTA) methods: speckle variance, amplitude decorrelation and phase variance for imaging of the human retina and choroid. Two averaging methods, split spectrum and volume averaging, were compared to assess the quality of the OCTA vascular images. All data were acquired using a swept-source OCT system at 1040 nm central wavelength, operating at 100,000 A-scans/s. We performed a quantitative comparison using a contrast-to-noise ratio (CNR) metric to assess the capability of the three methods to visualize the choriocapillaris layer. For evaluation of static tissue noise suppression in OCTA images, we proposed to calculate the CNR between the photoreceptor/RPE complex and the choriocapillaris layer. Finally, we demonstrated that implementation of intensity-based OCT imaging and OCT angiography methods allows for visualization of retinal and choroidal vascular layers known from anatomic studies in retinal preparations. OCT projection imaging of data flattened to selected retinal layers was implemented to visualize retinal and choroidal vasculature. User-guided vessel tracing was applied to segment the retinal vasculature. The results were visualized in the form of a skeletonized 3D model. PMID:27231598
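
    The speckle-variance and amplitude-decorrelation contrasts can both be sketched from a stack of repeated B-scans acquired at the same location: flow shows up as high inter-frame variance or decorrelation of the OCT amplitude. The NumPy sketch below illustrates these two standard definitions only; the split-spectrum processing and volume averaging evaluated in the paper are not shown, and the B-scan stack is an assumed input.

    ```python
    # Sketch of two OCTA contrasts from N repeated B-scans at the same location
    # (stack shape: N x depth x width, OCT amplitude). Static tissue gives low
    # values; flowing blood gives high variance/decorrelation.
    import numpy as np

    def speckle_variance(bscan_stack):
        return np.var(bscan_stack, axis=0)

    def amplitude_decorrelation(bscan_stack, eps=1e-9):
        a, b = bscan_stack[:-1], bscan_stack[1:]            # consecutive frame pairs
        decorr = 1.0 - (2.0 * a * b) / (a ** 2 + b ** 2 + eps)
        return np.mean(decorr, axis=0)
    ```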

  13. Sexing sirenians: validation of visual and molecular sex determination in both wild dugongs (Dugong dugon) and Florida manatees (Trichechus manatus latirostris). Aquatic Mammals 35(2):187-192.

    USGS Publications Warehouse

    Bonde, Robert K.; Lanyon, J.; Sneath, H.; Ovenden, J.; Broderick, D.

    2009-01-01

    Sexing wild marine mammals that show little to no sexual dimorphism is challenging. For sirenians that are difficult to catch or approach closely, molecular sexing from tissue biopsies offers an alternative method to visual discrimination. This paper reports the results of a field study to validate the use of two sexing methods: (1) visual discrimination of sex vs (2) molecular sexing based on a multiplex PCR assay which amplifies the male-specific SRY gene and differentiates ZFX and ZFY gametologues. Skin samples from 628 dugongs (Dugong dugon) and 100 Florida manatees (Trichechus manatus latirostris) were analysed and assigned as male or female based on molecular sex. These individuals were also assigned a sex based on either direct observation of the genitalia and/or the association of the individual with a calf. Individuals of both species showed 93 to 96% congruence between visual and molecular sexing. For the remaining 4 to 7%, the discrepancies could be explained by human error. To mitigate this error rate, we recommend using both of these robust techniques, with routine inclusion of sex primers into microsatellite panels employed for identity, along with trained field observers and stringent sample handling.

  14. Visualizations of Travel Time Performance Based on Vehicle Reidentification Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Stanley Ernest; Sharifi, Elham; Day, Christopher M.

    This paper provides a visual reference of the breadth of arterial performance phenomena based on travel time measures obtained from reidentification technology that has proliferated in the past 5 years. These graphical performance measures are presented through overlay charts and statistical distributions in the form of cumulative frequency diagrams (CFDs). With overlays of vehicle travel times from multiple days, dominant traffic patterns over a 24-h period are reinforced and reveal the traffic behavior induced primarily by the operation of traffic control at signalized intersections. A cumulative distribution function, as used in the statistical literature, provides a method for comparing traffic patterns from various time frames or locations in a compact visual format that provides intuitive feedback on arterial performance. The CFD may be accumulated hourly, by peak periods, or by time periods specific to the signal timing plans that are in effect. Combined, overlay charts and CFDs provide visual tools with which to assess the quality and consistency of traffic movement for various periods throughout the day efficiently, without sacrificing detail, which is a typical byproduct of numeric-based performance measures. These methods are particularly effective for comparing before-and-after median travel times, as well as changes in interquartile range, to assess travel time reliability.

  15. Visual Persons Behavior Diary Generation Model based on Trajectories and Pose Estimation

    NASA Astrophysics Data System (ADS)

    Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li

    2018-03-01

    The behavior patterns of persons are an important output of surveillance analysis. This paper focuses on a generation model for a visual diary of person behavior. The pipeline includes person detection, tracking, and behavior classification. The deep convolutional neural network model YOLO (You Only Look Once) v2 is adopted for the person detection module, and multi-person tracking is built on top of the detection framework. The person appearance model integrates an HSV color model and a hash-code model, and person motion is estimated with a Kalman filter. Detected objects are matched to existing tracklets through combined appearance and motion-location distances using the Hungarian assignment algorithm. A long continuous trajectory for each person is obtained by a spatial-temporal continuity linking algorithm, and face recognition information is used to identify the trajectory. Trajectories with identification information can then be used to generate a visual diary of person behavior based on scene context information and person action estimation. The relevant modules were tested on public data sets and on our own captured video sets. The test results show that the method can generate a visual diary of person behavior with reasonable accuracy.
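
    The matching step described above, assigning current detections to existing tracklets by a combined appearance and motion cost, can be sketched with SciPy's Hungarian solver. The HSV/hash appearance model and the Kalman-predicted motion distance are abstracted into assumed cost matrices; the weighting and gating threshold are placeholders.

    ```python
    # Sketch of the tracklet-detection matching step: a cost matrix combining an
    # appearance distance and a motion (predicted-position) distance is solved with
    # the Hungarian algorithm. Cost matrices are assumed inputs; alpha and max_cost
    # are placeholder tuning parameters.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match(appearance_cost, motion_cost, alpha=0.5, max_cost=0.8):
        cost = alpha * appearance_cost + (1.0 - alpha) * motion_cost   # (tracks x detections)
        rows, cols = linear_sum_assignment(cost)
        # Reject assignments that are too costly; those detections spawn new tracklets.
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    ```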

  16. User-centered evaluation of Arizona BioPathway: an information extraction, integration, and visualization system.

    PubMed

    Quiñones, Karin D; Su, Hua; Marshall, Byron; Eggers, Shauna; Chen, Hsinchun

    2007-09-01

    Explosive growth in biomedical research has made automated information extraction, knowledge integration, and visualization increasingly important and critically needed. The Arizona BioPathway (ABP) system extracts and displays biological regulatory pathway information from the abstracts of journal articles. This study uses relations extracted from more than 200 PubMed abstracts presented in a tabular and graphical user interface with built-in search and aggregation functionality. This paper presents a task-centered assessment of the usefulness and usability of the ABP system focusing on its relation aggregation and visualization functionalities. Results suggest that our graph-based visualization is more efficient in supporting pathway analysis tasks and is perceived as more useful and easier to use as compared to a text-based literature-viewing method. Relation aggregation significantly contributes to knowledge-acquisition efficiency. Together, the graphic and tabular views in the ABP Visualizer provide a flexible and effective interface for pathway relation browsing and analysis. Our study contributes to pathway-related research and biological information extraction by assessing the value of a multiview, relation-based interface that supports user-controlled exploration of pathway information across multiple granularities.

  17. A comparison of visual and quantitative methods to identify interstitial lung abnormalities.

    PubMed

    Kliment, Corrine R; Araki, Tetsuro; Doyle, Tracy J; Gao, Wei; Dupuis, Josée; Latourelle, Jeanne C; Zazueta, Oscar E; Fernandez, Isis E; Nishino, Mizuki; Okajima, Yuka; Ross, James C; Estépar, Raúl San José; Diaz, Alejandro A; Lederer, David J; Schwartz, David A; Silverman, Edwin K; Rosas, Ivan O; Washko, George R; O'Connor, George T; Hatabu, Hiroto; Hunninghake, Gary M

    2015-10-29

    Evidence suggests that individuals with interstitial lung abnormalities (ILA) on a chest computed tomogram (CT) may have an increased risk of developing a clinically significant interstitial lung disease (ILD). Although methods used to identify individuals with ILA on chest CT have included both automated quantitative and qualitative visual inspection methods, there has been no direct comparison between these two methods. To investigate this relationship, we created lung density metrics and compared these to visual assessments of ILA. To provide a comparison with ILA detection based on visual assessment, we generated measures of high attenuation areas (HAAs, defined by attenuation values between -600 and -250 Hounsfield units) in >4500 participants from both the COPDGene and Framingham Heart studies (FHS). Linear and logistic regressions were used for analyses. Increased measures of HAAs (in ≥ 10 % of the lung) were significantly associated with ILA defined by visual inspection in both cohorts (P < 0.0001); however, the positive predictive values were not very high (19 % in COPDGene and 13 % in the FHS). In COPDGene, the association between HAAs and ILA defined by visual assessment was modified by the percentage of emphysema and body mass index. Although increased HAAs were associated with reductions in total lung capacity in both cohorts, there was no evidence for an association between measurement of HAAs and MUC5B promoter genotype in the FHS. Our findings demonstrate that increased measures of lung density may be helpful in determining the severity of lung volume reduction, but alone are not strongly predictive of ILA defined by visual assessment. Moreover, HAAs were not associated with MUC5B promoter genotype.
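
    The quantitative measure itself is straightforward: the percentage of lung voxels whose attenuation falls in the -600 to -250 HU range. A minimal NumPy sketch is given below, assuming the CT volume is already in Hounsfield units and a binary lung mask is available from prior segmentation.

    ```python
    # Sketch of the high attenuation area (HAA) measure: the percentage of lung
    # voxels with attenuation between -600 and -250 HU. CT volume (in HU) and a
    # binary lung mask are assumed available from prior segmentation.
    import numpy as np

    def haa_percent(ct_hu, lung_mask, lo=-600, hi=-250):
        lung_voxels = ct_hu[lung_mask > 0]
        haa = np.logical_and(lung_voxels >= lo, lung_voxels <= hi)
        return 100.0 * haa.sum() / lung_voxels.size
    ```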

  18. Research on strategy marine noise map based on i4ocean platform: Constructing flow and key approach

    NASA Astrophysics Data System (ADS)

    Huang, Baoxiang; Chen, Ge; Han, Yong

    2016-02-01

    Noise levels in the marine environment have raised extensive concern in the scientific community. The research is carried out on the i4Ocean platform, following a workflow of ocean noise model integration; noise data extraction, processing, visualization, and interpretation; and ocean noise map construction and publication. For convenient numerical computation, a hybrid propagation model that depends on spatial location is suggested, based on the characteristics of the ocean noise field: the normal-mode K/I model is used for the far field and the ray-method CANARY model for the near field. Visualizing marine ambient noise data is critical to understanding and predicting marine noise for relevant decision making, and a marine noise map can be constructed on the virtual ocean scene. The systematic marine noise visualization framework includes preprocessing, coordinate transformation, interpolation, and rendering. Because the simulation of ocean noise depends on a realistic sea surface, the dynamic water simulation grid was improved with GPU fusion to combine seamlessly with the visualized noise field. Profile and spherical visualizations covering the spatial and temporal dimensions are also provided for the vertical characteristics of the ocean ambient noise field. Finally, the marine noise map can be published using grid pre-processing and multistage cache technology to better serve the public.

  19. The Chinese American Eye Study: Design and Methods

    PubMed Central

    Varma, Rohit; Hsu, Chunyi; Wang, Dandan; Torres, Mina; Azen, Stanley P.

    2016-01-01

    Purpose To summarize the study design, operational strategies and procedures of the Chinese American Eye Study (CHES), a population-based assessment of the prevalence of visual impairment, ocular disease, and visual functioning in Chinese Americans. Methods This population-based, cross-sectional study included 4,570 Chinese Americans, 50 years of age and older, residing in the city of Monterey Park, California. Each eligible participant completed a detailed interview and eye examination. The interview included an assessment of demographic, behavioral, and ocular risk factors and health-related and vision-related quality of life. The eye examination included measurements of visual acuity, intraocular pressure, visual fields, fundus and optic disc photography, a detailed anterior and posterior segment examination, and measurements of blood pressure, glycosylated hemoglobin levels, and blood glucose levels. Results The objectives of the CHES are to obtain prevalence estimates of visual impairment, refractive error, diabetic retinopathy, open-angle and angle-closure glaucoma, lens opacities, and age-related macular degeneration in Chinese Americans. In addition, outcomes include effect estimates for risk factors associated with eye diseases. Lastly, CHES will investigate the genetic determinants of myopia and glaucoma. Conclusion The CHES will provide information about the prevalence and risk factors of ocular diseases in one of the fastest growing minority groups in the United States. PMID:24044409

  20. Comparison of usual and alternative methods to measure height in mechanically ventilated patients: potential impact on protective ventilation.

    PubMed

    Bojmehrani, Azadeh; Bergeron-Duchesne, Maude; Bouchard, Carmelle; Simard, Serge; Bouchard, Pierre-Alexandre; Vanderschuren, Abel; L'Her, Erwan; Lellouche, François

    2014-07-01

    Protective ventilation implementation requires the calculation of predicted body weight (PBW), determined by a formula based on gender and height. Consequently, height inaccuracy may be a limiting factor to correctly set tidal volumes. The objective of this study was to evaluate the accuracy of different methods in measuring heights in mechanically ventilated patients. Before cardiac surgery, actual height was measured with a height gauge while subjects were standing upright (reference method); the height was also estimated by alternative methods based on lower leg and forearm measurements. After cardiac surgery, upon ICU admission, a subject's height was visually estimated by a clinician and then measured with a tape measure while the subject was supine and undergoing mechanical ventilation. One hundred subjects (75 men, 25 women) were prospectively included. Mean PBW was 61.0 ± 9.7 kg, and mean actual weight was 30.3% higher. In comparison with the reference method, estimating the height visually and using the tape measure were less accurate than both lower leg and forearm measurements. Errors above 10% in calculating the PBW were present in 25 and 40 subjects when the tape measure or visual estimation of height was used in the formula, respectively. With lower leg and forearm measurements, 15 subjects had errors above 10% (P < .001). Our results demonstrate that significant variability exists between the different methods used to measure height in bedridden patients on mechanical ventilation. Alternative methods based on lower leg and forearm measurements are potentially interesting solutions to facilitate the accurate application of protective ventilation. Copyright © 2014 by Daedalus Enterprises.
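
    For reference, predicted body weight for protective ventilation is commonly calculated with the ARDS Network formula from sex and height, so any height error propagates linearly into the tidal-volume setting. The sketch below states that commonly used formula; it is not taken from the paper itself.

    ```python
    # Predicted body weight (PBW) as commonly calculated for protective ventilation
    # (ARDS Network convention), illustrating how a height error propagates directly
    # into the tidal-volume setting. Not taken from the paper itself.
    def predicted_body_weight(height_cm, sex):
        base = 50.0 if sex == "male" else 45.5
        return base + 0.91 * (height_cm - 152.4)

    def tidal_volume_ml(height_cm, sex, ml_per_kg=6.0):
        return ml_per_kg * predicted_body_weight(height_cm, sex)

    # Example: a 5 cm overestimate of height adds about 27 mL at 6 mL/kg PBW:
    #   tidal_volume_ml(175, "male") - tidal_volume_ml(170, "male")  ->  ~27.3 mL
    ```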

  1. The seam visual tracking method for large structures

    NASA Astrophysics Data System (ADS)

    Bi, Qilin; Jiang, Xiaomin; Liu, Xiaoguang; Cheng, Taobo; Zhu, Yulong

    2017-10-01

    In this paper, a compact and flexible weld-seam visual tracking method is proposed. First, because interference occurs between the vision device and the work-piece to be welded when the tracking height cannot be changed, a weld vision system with a compact structure and an adjustable tracking height is developed. Second, by analyzing the relative spatial pose between the camera, the laser, and the work-piece to be welded, and applying the theory of relative geometric imaging, a mathematical model relating the image feature parameters to the three-dimensional trajectory of the assembly gap to be welded is established. Third, the visual imaging parameters of the line-structured light are optimized through experiments on the weld structure. Fourth, interference appears in the imaging because the line-structured light scatters in bright metal areas and because surface scratches appear bright; these disturbances seriously affect computational efficiency. An algorithm based on the human visual attention mechanism is therefore used to extract the weld features efficiently and stably. Finally, experiments verify that the compact and flexible weld tracking method achieves a tracking accuracy of 0.5 mm on large structural parts, indicating broad prospects for industrial application.

  2. Smart-system of distance learning of visually impaired people based on approaches of artificial intelligence

    NASA Astrophysics Data System (ADS)

    Samigulina, Galina A.; Shayakhmetova, Assem S.

    2016-11-01

    The research objective is the creation of an intellectual innovative technology and an information Smart-system of distance learning for visually impaired people. Organizing an accessible environment in which visually impaired people can receive a quality education, and supporting their social adaptation in society, are important and topical issues of modern education. The proposed Smart-system of distance learning for visually impaired people can significantly improve the efficiency and quality of education for this category of learners. The scientific novelty of the proposed Smart-system lies in its use of intelligent and statistical methods for processing multi-dimensional data and in taking into account the psycho-physiological characteristics of how visually impaired people perceive and comprehend learning information.

  3. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    DTIC Science & Technology

    2001-10-25

    Dynamic Chest Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18, 5, 17, 6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for

  4. A brief review of other notable protein detection methods on acrylamide gels.

    PubMed

    Kurien, Biji T; Scofield, R Hal

    2012-01-01

    Several methods have been described to stain proteins analyzed on acrylamide gels. These include ultrasensitive protein detection in one-dimensional and two-dimensional gel electrophoresis using a fluorescent product from the fungus Epicoccum nigrum; a fluorescence-based Coomassie Blue protein staining; visualization of proteins in acrylamide gels using ultraviolet illumination; fluorescence visualization of proteins in sodium dodecyl sulfate-polyacrylamide gels using environmentally benign, nonfixative, saline solution; and increasing the sensitivity four- to sixfold for detecting trace proteins in dye or silver stained polyacrylamide gels using polyethylene glycol 6000. All these methods are reviewed briefly in this chapter.

  5. A fast non-contact imaging photoplethysmography method using a tissue-like model

    NASA Astrophysics Data System (ADS)

    McDuff, Daniel J.; Blackford, Ethan B.; Estepp, Justin R.; Nishidate, Izumi

    2018-02-01

    Imaging photoplethysmography (iPPG) allows non-contact, concomitant measurement and visualization of peripheral blood flow using just an RGB camera. Most iPPG methods require a window of temporal data and complex computation, which makes real-time measurement and spatial visualization impossible. We present a fast, "window-less", non-contact imaging photoplethysmography method, based on a tissue-like model of the skin, that allows accurate measurement of heart rate and heart rate variability parameters. The error in heart rate estimates is equivalent to state-of-the-art techniques and computation is much faster.

  6. Fluorescence Visual Detection of Herbal Product Substitutions at Terminal Herbal Markets by CCP-based FRET technique.

    PubMed

    Jiang, Chao; Yuan, Yuan; Yang, Guang; Jin, Yan; Liu, Libing; Zhao, Yuyang; Huang, Luqi

    2016-10-21

    Inaccurate labeling of materials used in herbal products may compromise the therapeutic efficacy and may pose a threat to medicinal safety. In this paper, a rapid (within 3 h), sensitive and visual colorimetric method for identifying substitutions in terminal market products was developed using cationic conjugated polymer-based fluorescence resonance energy transfer (CCP-based FRET). Chinese medicinal materials with similar morphology and chemical composition were clearly distinguished by the single-nucleotide polymorphism (SNP) genotyping method. Assays using CCP-based FRET technology showed a high frequency of adulterants in Lu-Rong (52.83%) and Chuan-Bei-Mu (67.8%) decoction pieces, and patented Chinese drugs (71.4%, 5/7) containing Chuan-Bei-Mu ingredients were detected in the terminal herbal market. In comparison with DNA sequencing, this protocol simplifies procedures by eliminating the cumbersome workups and sophisticated instruments, and only a trace amount of DNA is required. The CCP-based method is particularly attractive because it can detect adulterants in admixture samples with high sensitivity. Therefore, the CCP-based detection system shows great potential for routine terminal market checks and drug safety controls.

  7. Exploration of complex visual feature spaces for object perception

    PubMed Central

    Leeds, Daniel D.; Pyles, John A.; Tarr, Michael J.

    2014-01-01

    The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm3 brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features associated with cortical object representation. PMID:25309408

  8. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    DOEpatents

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA

    2008-05-13

    A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.

  9. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    DOEpatents

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA

    2012-03-06

    A method of displaying correlations among information objects includes receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.

  10. A user-friendly SSVEP-based brain-computer interface using a time-domain classifier.

    PubMed

    Luo, An; Sullivan, Thomas J

    2010-04-01

    We introduce a user-friendly steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) system. Single-channel EEG is recorded using a low-noise dry electrode. Compared to traditional gel-based multi-sensor EEG systems, a dry sensor proves to be more convenient, comfortable and cost effective. A hardware system was built that displays four LED light panels flashing at different frequencies and synchronizes with EEG acquisition. The visual stimuli have been carefully designed such that potential risk to photosensitive people is minimized. We describe a novel stimulus-locked inter-trace correlation (SLIC) method for SSVEP classification using EEG time-locked to stimulus onsets. We studied how the performance of the algorithm is affected by different selection of parameters. Using the SLIC method, the average light detection rate is 75.8% with very low error rates (an 8.4% false positive rate and a 1.3% misclassification rate). Compared to a traditional frequency-domain-based method, the SLIC method is more robust (resulting in less annoyance to the users) and is also suitable for irregular stimulus patterns.
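
    A minimal sketch of the stimulus-locked inter-trace idea in Python, assuming single-channel EEG epochs cut at the onsets of each candidate stimulus; the function names, epoch layout, and mean-pairwise-correlation score are illustrative assumptions, not the authors' SLIC implementation.

```python
import numpy as np

def inter_trace_correlation(epochs):
    """Mean pairwise Pearson correlation across stimulus-locked EEG epochs.

    epochs : ndarray, shape (n_trials, n_samples), one row per onset-locked trace.
    """
    r = np.corrcoef(epochs)              # correlation matrix between traces
    n = r.shape[0]
    return (r.sum() - n) / (n * (n - 1)) # average of the off-diagonal entries

def detect_target(eeg, onsets_per_stimulus, window):
    """Pick the stimulus whose onset-locked traces correlate most strongly.

    eeg                 : 1-D array of single-channel EEG samples
    onsets_per_stimulus : dict mapping stimulus id -> list of onset sample indices
    window              : number of samples to keep after each onset
    """
    scores = {}
    for stim, onsets in onsets_per_stimulus.items():
        epochs = np.stack([eeg[o:o + window] for o in onsets
                           if o + window <= len(eeg)])
        scores[stim] = inter_trace_correlation(epochs)
    return max(scores, key=scores.get), scores
```

    In a practical detector one would additionally require the winning score to exceed a threshold before reporting a command, which is how false positives would be kept low.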

  11. The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures

    NASA Astrophysics Data System (ADS)

    Stephenson, W. Kirk

    2009-08-01

    A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes.

  12. A visual analysis of multi-attribute data using pixel matrix displays

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel; Schreck, Tobias

    2007-01-01

    Charts and tables are commonly used to visually analyze data. These graphics are simple and easy to understand, but charts show only highly aggregated data and present only a limited number of data values, while tables often show too many data values. As a consequence, these graphics may either lose or obscure important information, so different techniques are required to monitor complex datasets. Users need more powerful visualization techniques to digest and compare detailed multi-attribute data to analyze the health of their business. This paper proposes an innovative solution based on the use of pixel-matrix displays to represent transaction-level information. With pixel-matrices, users can visualize areas of importance at a glance, a capability not provided by common charting techniques. We present our solutions to use colored pixel-matrices in (1) charts for visualizing data patterns and discovering exceptions, (2) tables for visualizing correlations and finding root-causes, and (3) time series for visualizing the evolution of long-running transactions. The solutions have been applied with success to product sales, Internet network performance analysis, and service contract applications, demonstrating the benefits of our method over conventional graphics. The method is especially useful when detailed information is a key part of the analysis.
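
    As a rough illustration of the pixel-matrix idea, the sketch below packs one colored pixel per transaction into a rectangular matrix that could be drawn inside a chart bar or table cell; the layout, column width, and synthetic data are assumptions for demonstration only.

```python
import numpy as np
import matplotlib.pyplot as plt

def pixel_matrix(values, width):
    """Arrange transaction-level values into a dense pixel matrix (row-major)."""
    n_rows = int(np.ceil(len(values) / width))
    padded = np.full(n_rows * width, np.nan)   # pad the last row with blanks
    padded[:len(values)] = values
    return padded.reshape(n_rows, width)

# Example: one colored pixel per (synthetic) transaction amount.
amounts = np.random.default_rng(1).gamma(2.0, 50.0, size=500)
plt.imshow(pixel_matrix(amounts, width=25), cmap="viridis", aspect="auto")
plt.colorbar(label="transaction amount")
plt.show()
```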

  13. Enviroplan—a summary methodology for comprehensive environmental planning and design

    Treesearch

    Robert Allen Jr.; George Nez; Fred Nicholson; Larry Sutphin

    1979-01-01

    This paper will discuss a comprehensive environmental assessment methodology that includes a numerical method for visual management and analysis. This methodology employs resource and human activity units as a means to produce a visual form unit which is the fundamental unit of the perceptual environment. The resource unit is based on the ecosystem as the fundamental...

  14. The Abacus: Instruction by Teachers of Students with Visual Impairments

    ERIC Educational Resources Information Center

    Amato, Sheila; Hong, Sunggye; Rosenblum, L. Penny

    2013-01-01

    Introduction: This article, based on a study of 196 teachers of students with visual impairments, reports on the experiences with and opinions related to their decisions about instructing their students who are blind or have low vision in the abacus. Methods: The participants completed an online survey on how they decide which students should be…

  15. Art Therapy: The Visual Spatial Factor and the Hippocampus. 2nd Revision

    ERIC Educational Resources Information Center

    Del Giacco, Maureen

    2010-01-01

    In this writing related to neuro-plasticity, we are shown that changes in the brain can occur with repeated use of sensory stimuli, with both visual and motor interventions. Keeping these important scientific contributions in mind, I will briefly summarize why the choice of the arts-based DAT method of psychotherapy over traditional verbally based…

  16. Role of Enhancing Visual Effects Education Delivery to Encounter Career Challenges in Malaysia

    ERIC Educational Resources Information Center

    Ng, Lynn-Sze

    2017-01-01

    Problem-based Learning (PBL) is one of the most effective methods of instruction that helps Visual Effects (VFX) students to be more adaptable at encountering career challenges in Malaysia. These challenges are; lack of several important requirements such as, the basic and fundamental knowledge of VFX concepts, the ability to understand real-world…

  17. Performance Measurement and Accommodation: Students with Visual Impairments on Pennsylvania's Alternate Assessment

    ERIC Educational Resources Information Center

    Zebehazy, Kim T.; Zigmond, Naomi; Zimmerman, George J.

    2012-01-01

    Introduction: This study investigated the use of accommodations and the performance of students with visual impairments and severe cognitive disabilities on the Pennsylvania Alternate System of Assessment (PASA), an alternate performance-based assessment. Methods: Differences in test scores on the most basic level (level A) of the PASA of 286…

  18. Effects of Video-Feedback Interaction Training for Professional Caregivers of Children and Adults with Visual and Intellectual Disabilities

    ERIC Educational Resources Information Center

    Damen, S.; Kef, S.; Worm, M.; Janssen, M. J.; Schuengel, C.

    2011-01-01

    Background: Individuals in group homes may experience poor quality of social interaction with their professional caregivers, limiting their quality of life. The video-based Contact programme may help caregivers to improve their interaction with clients. Method: Seventy-two caregivers of 12 individuals with visual and intellectual disabilities…

  19. Visualization of volumetric seismic data

    NASA Astrophysics Data System (ADS)

    Spickermann, Dela; Böttinger, Michael; Ashfaq Ahmed, Khawar; Gajewski, Dirk

    2015-04-01

    Mostly driven by demands of high-quality subsurface imaging, highly specialized tools and methods have been developed to support the processing, visualization and interpretation of seismic data. 3D seismic data acquisition and 4D time-lapse seismic monitoring are well-established techniques in academia and industry, producing large amounts of data to be processed, visualized and interpreted. In this context, interactive 3D visualization methods proved to be valuable for the analysis of 3D seismic data cubes - especially for sedimentary environments with continuous horizons. In crystalline and hard rock environments, where hydraulic stimulation techniques may be applied to produce geothermal energy, interpretation of the seismic data is a more challenging problem. Instead of continuous reflection horizons, the imaging targets are often steeply dipping faults, causing a lot of diffractions. Without further preprocessing these geological structures are often hidden behind the noise in the data. In this PICO presentation we will present a workflow consisting of data processing steps, which enhance the signal-to-noise ratio, followed by a visualization step based on the use of the commercially available general-purpose 3D visualization system Avizo. Specifically, we have used Avizo Earth, an extension to Avizo, which supports the import of seismic data in SEG-Y format and offers easy access to state-of-the-art 3D visualization methods at interactive frame rates, even for large seismic data cubes. In seismic interpretation using visualization, interactivity is a key requirement for understanding complex 3D structures. In order to enable an easy communication of the insights gained during the interactive visualization process, animations of the visualized data were created that support the spatial understanding of the data.

  20. Local Homing Navigation Based on the Moment Model for Landmark Distribution and Features

    PubMed Central

    Lee, Changmin; Kim, DaeEun

    2017-01-01

    For local homing navigation, an agent is supposed to return home based on the surrounding environmental information. According to the snapshot model, the home snapshot and the current view are compared to determine the homing direction. In this paper, we propose a novel homing navigation method using the moment model. The suggested moment model also follows the snapshot theory to compare the home snapshot and the current view, but the moment model defines a moment of landmark inertia as the sum of the products of each landmark particle's feature value with the square of its distance. The method thus uses range values of landmarks in the surrounding view and the visual features. The center of the moment can be estimated as the reference point, which is the unique convergence point in the moment potential from any view. The homing vector can easily be extracted from the centers of the moment measured at the current position and the home location. The method effectively guides homing direction in real environments, as well as in the simulation environment. In this paper, we take a holistic approach to use all pixels in the panoramic image as landmarks and use the RGB color intensity for the visual features in the moment model, in which a set of three moment functions is encoded to determine the homing vector. We also tested visual homing using the moment model with only visual features, but the suggested moment model with both the visual feature and the landmark distance shows superior performance. We demonstrate homing performance with various methods classified by the status of the feature, the distance and the coordinate alignment. PMID:29149043
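
    The moment definition above can be written down compactly; the sketch below is a hypothetical formulation in which each landmark pixel contributes its feature value times its squared distance, the weighted centroid serves as the moment center, and the homing vector is taken as the difference of the two centers. Array shapes and the exact weighting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def moment_center(features, positions):
    """Feature-weighted 'center of moment' of landmark particles.

    features  : (N,) feature value per landmark pixel (e.g. one RGB channel)
    positions : (N, 2) landmark positions relative to the viewing point
    """
    d2 = np.sum(positions ** 2, axis=1)      # squared landmark distances
    weights = features * d2                  # per-landmark moment contributions
    moment = weights.sum()                   # moment of landmark inertia
    return (weights[:, None] * positions).sum(axis=0) / moment

def homing_vector(feat_home, pos_home, feat_cur, pos_cur):
    """Homing direction from the two moment centers (hypothetical formulation)."""
    return moment_center(feat_home, pos_home) - moment_center(feat_cur, pos_cur)
```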

  1. Quantifying and visualizing variations in sets of images using continuous linear optimal transport

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil; Rohde, Gustavo K.

    2014-03-01

    Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes of variations, in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which otherwise cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.
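
    Once each image has been mapped to a vector in the linear embedding (for example, a flattened transport map to a reference image), standard linear analyses apply directly; the sketch below assumes such embedding vectors are already computed and simply extracts principal and discriminant directions with scikit-learn. The function names and the use of LDA's coefficient vector are illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def principal_modes(embeddings, n_modes=3):
    """Principal modes of variation in the linearly embedded (transport) space.

    embeddings : (n_images, n_features) array, one embedding vector per image,
                 assumed to be precomputed by the linear OT embedding.
    """
    pca = PCA(n_components=n_modes)
    coords = pca.fit_transform(embeddings)   # per-image coordinates along each mode
    return pca.components_, coords

def discriminant_direction(embeddings, labels):
    """Direction best separating two classes (e.g. FHB vs. normal nuclei)."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(embeddings, labels)
    return lda.coef_[0]
```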

  2. Inclusive Competitive Game Play Through Balanced Sensory Feedback.

    PubMed

    Westin, Thomas; Söderström, David; Karlsson, Olov; Peiris, Ranil

    2017-01-01

    While game accessibility has improved significantly over the last few years, there are still barriers to equal participation, and multiplayer issues have been less researched. Game balance here means making a player-versus-player competitive game fair. One difficult design task is to balance the game to be fair regardless of visual or hearing capabilities, which have clearly different requirements. This paper explores a tentative design method for enabling inclusive competitive game-play without individual adaptations of game rules that could spoil the game. The method involved applying a unified design method to design an unbalanced game, then modifying visual feedback as a hypothetical balanced design, and testing the game with a total of 52 people with and without visual or hearing disabilities in three workshops. Game balance was evaluated based on score differences and less structured qualitative data, and a redesign of the game was made. The conclusions are a tentative method for balancing a multiplayer competitive game without changing game rules, and guidance on how the method can be applied.

  3. Imaging shear wave propagation for elastic measurement using OCT Doppler variance method

    NASA Astrophysics Data System (ADS)

    Zhu, Jiang; Miao, Yusi; Qu, Yueqiao; Ma, Teng; Li, Rui; Du, Yongzhao; Huang, Shenghai; Shung, K. Kirk; Zhou, Qifa; Chen, Zhongping

    2016-03-01

    In this study, we have developed an acoustic radiation force orthogonal excitation optical coherence elastography (ARFOE-OCE) method for the visualization of the shear wave and the calculation of the shear modulus based on the OCT Doppler variance method. The vibration perpendicular to the OCT detection direction is induced by the remote acoustic radiation force (ARF) and the shear wave propagating along the OCT beam is visualized by the OCT M-scan. The homogeneous agar phantom and two-layer agar phantom are measured using the ARFOE-OCE system. The results show that the ARFOE-OCE system has the ability to measure the shear modulus beyond the OCT imaging depth. The OCT Doppler variance method, instead of the OCT Doppler phase method, is used for vibration detection without the need of high phase stability and phase wrapping correction. An M-scan instead of the B-scan for the visualization of the shear wave also simplifies the data processing.

  4. Airplane detection based on fusion framework by combining saliency model with Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen

    2018-03-01

    Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems still exist: 1) how to extract the high-level features of aircraft; 2) locating objects within such large images is difficult and time consuming; 3) satellite images come in multiple resolutions. In this paper, inspired by the biological visual mechanism, a fusion detection framework is proposed, which fuses the top-down visual mechanism (a deep CNN model) and the bottom-up visual mechanism (GBVS) to detect aircraft. Besides, we use a multi-scale training method for the deep CNN model to handle the problem of multiple resolutions. Experimental results demonstrate that our method achieves better detection results than the other methods.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Fangyan; Zhang, Song; Chung Wong, Pak

    Effectively visualizing large graphs and capturing the statistical properties are two challenging tasks. To aid in these two tasks, many sampling approaches for graph simplification have been proposed, falling into three categories: node sampling, edge sampling, and traversal-based sampling. It is still unknown which approach is the best. We evaluate commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates. We conduct our evaluation on three graph models: random graphs, small-world graphs, and scale-free graphs. Initial results indicate that the effectiveness of a sampling method is dependent on the graph model, the size of the graph, and the desired statistical property. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task, and the results presented can be incorporated into graph visualization and analysis tools.
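
    For orientation, the three sampling families compared in the study can be prototyped in a few lines with NetworkX; the sampling rates, graph generator, and printed statistics below are illustrative choices, not the benchmark's actual configuration.

```python
import random
import networkx as nx

def node_sample(G, rate, seed=0):
    """Uniform node sampling: keep a fraction of nodes and their induced edges."""
    rng = random.Random(seed)
    kept = rng.sample(list(G.nodes()), int(rate * G.number_of_nodes()))
    return G.subgraph(kept).copy()

def edge_sample(G, rate, seed=0):
    """Uniform edge sampling: keep a fraction of edges and their endpoints."""
    rng = random.Random(seed)
    kept = rng.sample(list(G.edges()), int(rate * G.number_of_edges()))
    return nx.Graph(kept)

def snowball_sample(G, rate, seed=0):
    """Traversal-based (BFS/snowball) sampling from a random start node."""
    rng = random.Random(seed)
    target = int(rate * G.number_of_nodes())
    start = rng.choice(list(G.nodes()))
    visited, frontier = {start}, [start]
    while frontier and len(visited) < target:
        node = frontier.pop(0)
        for nbr in G.neighbors(node):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(nbr)
                if len(visited) >= target:
                    break
    return G.subgraph(visited).copy()

# Example: compare basic statistics of samples from a small-world graph.
G = nx.watts_strogatz_graph(1000, 6, 0.1, seed=1)
for sampler in (node_sample, edge_sample, snowball_sample):
    S = sampler(G, 0.2)
    print(sampler.__name__, S.number_of_nodes(), S.number_of_edges())
```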

  6. Changes of Visual Pathway and Brain Connectivity in Glaucoma: A Systematic Review

    PubMed Central

    Nuzzi, Raffaele; Dallorto, Laura; Rolle, Teresa

    2018-01-01

    Background: Glaucoma is a leading cause of irreversible blindness worldwide. The increasing interest in the involvement of the cortical visual pathway in glaucomatous patients is due to the implications for recent therapies, such as neuroprotection and neuroregeneration. Objective: In this review, we outline the current understanding of brain structural, functional, and metabolic changes detected with modern neuroimaging techniques in glaucomatous subjects. Methods: We screened MEDLINE, EMBASE, CINAHL, CENTRAL, LILACS, Trip Database, and NICE for original contributions published until 31 October 2017. Studies with at least six patients affected by any type of glaucoma were considered. We included studies using the following neuroimaging techniques: functional magnetic resonance imaging (fMRI), resting-state fMRI (rs-fMRI), magnetic resonance spectroscopy (MRS), voxel-based morphometry (VBM), surface-based morphometry (SBM), and diffusion tensor MRI (DTI). Results: Out of a total of 1,901 studies, 56 case series with a total of 2,381 patients were included. Evidence of a neurodegenerative process in glaucomatous patients was found both within and beyond the visual system. Structural alterations in the visual cortex (mainly reduced cortical thickness and volume) have been demonstrated with SBM and VBM; these changes were not limited to the primary visual cortex but also involved association visual areas. Other brain regions associated with visual function demonstrated a certain degree of increased or decreased gray matter volume. Functional and metabolic abnormalities were found within the primary visual cortex in all studies with fMRI and MRS. Studies with rs-fMRI found disrupted connectivity between the primary and higher visual cortex and between the visual cortex and associative visual areas in the task-free state of glaucomatous patients. Conclusions: This review contributes to a better understanding of brain abnormalities in glaucoma. It may stimulate further speculation about brain plasticity at a later age and therapeutic strategies, such as the prevention of cortical degeneration in patients with glaucoma. Structural, functional, and metabolic neuroimaging methods provided evidence of changes throughout the visual pathway in glaucomatous patients. Other brain areas, not directly involved in the processing of visual information, also showed alterations. PMID:29896087

  7. A visual tracking method based on improved online multiple instance learning

    NASA Astrophysics Data System (ADS)

    He, Xianhui; Wei, Yuxing

    2016-09-01

    Visual tracking is an active research topic in the field of computer vision and has been well studied in the last decades. The method based on multiple instance learning (MIL) was recently introduced into the tracking task, and it handles the problem of template drift well. However, the MIL method has relatively poor running efficiency and accuracy, because its strong-classifier updating strategy is complicated and the speed of the classifier update does not always match the change of the targets' appearance. In this paper, we present a novel online effective MIL (EMIL) tracker. A new update strategy for the strong classifier is proposed to improve the running efficiency of the MIL method. In addition, to improve the tracking accuracy and stability of the MIL method, a new dynamic mechanism for renewing the classifier's learning rate and a variable search window are proposed. Experimental results show that our method performs well in complex scenes, with strong stability and high efficiency.

  8. Nocturnal Visual Orientation in Flying Insects: A Benchmark for the Design of Vision-Based Sensors in Micro-Aerial Vehicles

    DTIC Science & Technology

    2011-03-09

    Technical horizon sensors: Over the past few years, a remarkable proliferation of designs for micro-aerial vehicles (MAVs) has occurred... possible elevations, it may severely degrade the performance of sensors by local saturation. Therefore it is necessary to find a method whereby the effect...

  9. Imaging Stem Cells Implanted in Infarcted Myocardium

    PubMed Central

    Zhou, Rong; Acton, Paul D.; Ferrari, Victor A.

    2008-01-01

    Stem cell–based cellular cardiomyoplasty represents a promising therapy for myocardial infarction. Noninvasive imaging techniques would allow the evaluation of survival, migration, and differentiation status of implanted stem cells in the same subject over time. This review describes methods for cell visualization using several corresponding noninvasive imaging modalities, including magnetic resonance imaging, positron emission tomography, single-photon emission computed tomography, and bioluminescent imaging. Reporter-based cell visualization is compared with direct cell labeling for short- and long-term cell tracking. PMID:17112999

  10. Application of Visual Attention in Seismic Attribute Analysis

    NASA Astrophysics Data System (ADS)

    He, M.; Gu, H.; Wang, F.

    2016-12-01

    It has been proved that seismic attributes can be used to predict reservoir properties. Combining multiple attributes with geological statistics, data mining, and artificial intelligence has further promoted the development of seismic attribute analysis. However, existing methods tend to have multiple solutions and insufficient generalization ability, which is mainly due to the complex relationship between seismic data and geological information, and partly due to the methods applied. Visual attention is a mechanism model of the human visual system that can rapidly concentrate on a few significant visual objects, even in a mixed scene, and it exhibits good target detection and recognition ability. In our study, the targets to be predicted are treated as visual objects, and an object representation based on well data is built in the attribute dimensions. Then, in the same attribute space, the representation serves as a criterion to search for potential targets outside the wells. This method does not need to predict properties by building a complicated relation between attributes and reservoir properties; instead it refers to the previously determined standard. Thus it has good generalization ability, and the problem of multiple solutions can be mitigated by defining a similarity threshold.

  11. Visualization of simulated urban spaces: inferring parameterized generation of streets, parcels, and aerial imagery.

    PubMed

    Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich; Waddell, Paul

    2009-01-01

    Urban simulation models and their visualization are used to help regional planning agencies evaluate alternative transportation investments, land use regulations, and environmental protection policies. Typical urban simulations provide spatially distributed data about number of inhabitants, land prices, traffic, and other variables. In this article, we build on a synergy of urban simulation, urban visualization, and computer graphics to automatically infer an urban layout for any time step of the simulation sequence. In addition to standard visualization tools, our method gathers data of the original street network, parcels, and aerial imagery and uses the available simulation results to infer changes to the original urban layout and produce a new and plausible layout for the simulation results. In contrast with previous work, our approach automatically updates the layout based on changes in the simulation data and thus can scale to a large simulation over many years. The method in this article offers a substantial step forward in building integrated visualization and behavioral simulation systems for use in community visioning, planning, and policy analysis. We demonstrate our method on several real cases using a 200 GB database for a 16,300 km2 area surrounding Seattle.

  12. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. Firstly, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits. That is, object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric average of the gray values of the input visual and infrared images together with a weighted average algorithm. To determine the control parameters in the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new fusion method achieves a near-natural appearance of the fused image, and enhances color contrasts and highlights infrared-bright objects compared with the traditional TNO algorithm. Moreover, it has low complexity and is easy to realize as real-time processing, so it is well suited for nighttime imaging apparatus.
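
    The mapping model itself is not reproduced in the abstract, so the sketch below is only a schematic false-color fusion in the same spirit: a geometric average of the two gray images supplies common scene content, and weighted averages steer the red and green channels; the weight value and channel assignments are assumptions, not the paper's parameters.

```python
import numpy as np

def fuse_false_color(vis, ir, w=0.6):
    """Illustrative nonlinear false-color fusion of visual and IR gray images.

    vis, ir : float arrays in [0, 1] of identical shape
    w       : band weighting (assumed parameter, not from the paper)
    """
    g = np.sqrt(vis * ir)                        # geometric average of the two inputs
    r = np.clip(w * ir + (1 - w) * g, 0, 1)      # red channel emphasizes infrared detail
    gr = np.clip(w * vis + (1 - w) * g, 0, 1)    # green channel emphasizes visual detail
    b = g                                        # blue carries the common scene content
    return np.dstack([r, gr, b])
```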

  13. A New MI-Based Visualization Aided Validation Index for Mining Big Longitudinal Web Trial Data

    PubMed Central

    Zhang, Zhaoyang; Fang, Hua; Wang, Honggang

    2016-01-01

    Web-delivered clinical trials generate big complex data. To help untangle the heterogeneity of treatment effects, unsupervised learning methods have been widely applied. However, identifying valid patterns is a priority but challenging issue for these methods. This paper, built upon our previous research on multiple imputation (MI)-based fuzzy clustering and validation, proposes a new MI-based visualization-aided validation index (MIVOOS) to determine the optimal number of clusters for big incomplete longitudinal Web-trial data with inflated zeros. Different from a recently developed fuzzy clustering validation index, MIVOOS uses more suitable overlap and separation measures for Web-trial data and does not depend on the choice of fuzzifiers, as the widely used Xie and Beni (XB) index does. Through optimizing the view angles of 3-D projections using Sammon mapping, the optimal 2-D projection-guided MIVOOS is obtained to better visualize and verify the patterns in conjunction with trajectory patterns. Compared with XB and VOS, our newly proposed MIVOOS shows its robustness in validating big Web-trial data under different missing data mechanisms using real and simulated Web-trial data. PMID:27482473

  14. Complex sparse spatial filter for decoding mixed frequency and phase coded steady-state visually evoked potentials.

    PubMed

    Morikawa, Naoki; Tanaka, Toshihisa; Islam, Md Rabiul

    2018-07-01

    Mixed frequency and phase coding (FPC) can achieve a significant increase in the number of commands in a steady-state visual evoked potential-based brain-computer interface (SSVEP-BCI). However, the inconsistent phases of the SSVEP over channels in a trial and the existence of non-contributing channels due to noise effects can reduce the accuracy of stimulus frequency detection. We propose a novel command detection method based on a complex sparse spatial filter (CSSF) by solving ℓ1- and ℓ2,1-regularization problems for a mixed-coded SSVEP-BCI. In particular, ℓ2,1-regularization (aka group sparsification) can lead to the rejection of electrodes that are not contributing to the SSVEP detection. A calibration-data-based canonical correlation analysis (CCA) and CSSF with ℓ1- and ℓ2,1-regularization were demonstrated for 16-target stimuli with eleven subjects. The results of a statistical test suggest that the proposed method with ℓ1- and ℓ2,1-regularization achieved the significantly highest ITR. The proposed approaches do not need any reference signals, automatically select prominent channels, and reduce the computational cost compared to other mixed frequency-phase coding (FPC)-based BCIs. The experimental results suggest that the proposed method can be used to implement a BCI effectively with reduced visual fatigue. Copyright © 2018 Elsevier B.V. All rights reserved.
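
    The channel-rejection behavior of ℓ2,1-regularization comes from group soft-thresholding, sketched below for a real-valued weight matrix with one row per electrode; this is a generic proximal step, not the authors' CSSF solver, and the complex-valued case would apply the same shrinkage to the row norms of complex weights.

```python
import numpy as np

def group_soft_threshold(W, lam):
    """Proximal operator of lam * sum_c ||W[c, :]||_2 (the l2,1 norm).

    Rows of W correspond to channels; rows whose l2 norm falls below lam are
    zeroed out, which is what allows non-contributing electrodes to be dropped.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return W * scale
```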

  15. PERSPECTIVE: Is acuity enough? Other considerations in clinical investigations of visual prostheses

    NASA Astrophysics Data System (ADS)

    Lepri, Bernard P.

    2009-06-01

    Visual impairing eye diseases are the major frontier facing ophthalmic research today in light of our rapidly aging population. The visual skills necessary for improving the quality of daily function and life are inextricably linked to these impairing diseases. Both research and reimbursement programs are emphasizing outcome-based results. Is improvement in visual acuity alone enough to improve the function and quality of life of visually impaired persons? This perspective summarizes the types of effectiveness endpoints for clinical investigations of visual prostheses that go beyond visual acuity. The clinical investigation of visual prostheses should include visual function, functional vision and quality of life measures. Specifically, they encompass contrast sensitivity, orientation and mobility, activities of daily living and quality of life assessments. The perspective focuses on the design of clinical trials for visual prostheses and the methods of determining effectiveness above and beyond visual acuity that will yield outcomes that are measured by improved function in the visual world and quality of life. The visually impaired population is the primary consideration in this presentation with particular emphases on retinitis pigmentosa and age-related macular degeneration. Clinical trials for visual prostheses cannot be isolated from the need for medical rehabilitation in order to obtain measurements of effectiveness that produce outcomes/evidence-based success. This approach will facilitate improvement in daily function and quality of life of patients with diseases that cause chronic vision impairment. The views and opinions are those of the author and do not necessarily reflect those of the US Food and Drug Administration, the US Department of Health and Human Services or the Public Health Service.

  16. Probabilistic Modeling and Visualization of the Flexibility in Morphable Models

    NASA Astrophysics Data System (ADS)

    Lüthi, M.; Albrecht, T.; Vetter, T.

    Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis task, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility, when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.
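
    Under the Gaussian view of a PPCA shape model, the "remaining flexibility" when part of the shape is fixed is the conditional covariance of the free coordinates given the fixed ones; the sketch below computes that conditional distribution with plain NumPy as a generic Gaussian-conditioning step, not the authors' implementation. The eigenvectors of the conditional covariance are the residual modes one would visualize.

```python
import numpy as np

def conditional_gaussian(mu, Sigma, idx_fixed, x_fixed):
    """Mean and covariance of the free shape coordinates given fixed ones.

    mu, Sigma : mean vector and covariance matrix of the full shape model
    idx_fixed : indices of coordinates held fixed
    x_fixed   : their observed values
    """
    n = len(mu)
    idx_free = np.setdiff1d(np.arange(n), idx_fixed)
    S_ff = Sigma[np.ix_(idx_free, idx_free)]
    S_fo = Sigma[np.ix_(idx_free, idx_fixed)]
    S_oo = Sigma[np.ix_(idx_fixed, idx_fixed)]
    K = S_fo @ np.linalg.solve(S_oo, np.eye(len(idx_fixed)))  # gain matrix
    mu_cond = mu[idx_free] + K @ (np.asarray(x_fixed) - mu[idx_fixed])
    Sigma_cond = S_ff - K @ S_fo.T   # remaining flexibility of the free part
    return mu_cond, Sigma_cond
```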

  17. Short communication: Semiquantitative assessment of 99mTc-EDDA/HYNIC-TOC scintigraphy in differentiation of solitary pulmonary nodules--a complementary role to visual analysis.

    PubMed

    Płachcińska, Anna; Mikołajczak, Renata; Kozak, Józef; Rzeszutek, Katarzyna; Kuśmierek, Jacek

    2006-02-01

    The aim of this study was the assessment of a value of a semiquantitative analysis of scintigrams obtained with (99m)Tc-EDDA/HYNIC-TOC as a radiopharmaceutical (RPH) in differential diagnosis of solitary pulmonary nodules (SPNs), as a method complementary to visual evaluation of scintigrams. Scintigraphic images of 59 patients (33 males and 26 females between 34 and 78 years of age, mean value 57) with SPN on chest radiographs (39 malignant and 20 benign) were retrospectively assessed semiquantitatively. Visual scintigram analysis was made earlier, prospectively. Nodule diameters ranged from 1 to 4 (mean 2.2) cm. A single photon emission computed tomography (SPECT) acquisition was performed at 2-4 hours after administration of 740 to 925 MBq of a RPH. Verification of scintigraphic results was based on a pathological examination of tumor samples (histopathology or cytology) and, in some cases, on bacteriological studies. As an additional criterion for tumor benignity, its stable size in a time interval not shorter than 3 years was accepted. A simple, semiquantitative method for assessment of radiopharmaceutical uptake in SPNs was used, based on a "count sample" taken from the tumor center (T) in relation to radiopharmaceutical concentration in the background (B) measured in the contralateral lung. A criterion for optimal differentiation between malignant and benign nodules (T/B ratio threshold value) was introduced, based on a receiver operating characteristic (ROC) curve. Additionally, a value of the T/B ratio was searched for that excludes tumor benignity with high probability. Visual analysis of scintigrams revealed enhanced uptake of RPH at 36 of 39 (92%) sites, corresponding to locations of malignant nodules (including 34 of 35 (97%) cases of lung cancer). In 13 of 20 (65%) benign nodules, true negative results were obtained. Accuracy of the method equalled 83%. Optimal differentiation between malignant and benign nodules was found for a T/B ratio value of 2. The semiquantitative method gave true positive results in 35 of 39 (90%) malignant nodules (as in the visual method, in 34 of 35 (97%) cases of lung cancer). True negative results were obtained in 17 of 20 (85%) benign cases. Accuracy of the method reached 88%. A T/B ratio exceeding 4 excluded tumor benignity with high probability. A simple method of quantitatively assessing 99mTc-EDDA/HYNIC-TOC uptake in solitary pulmonary nodules by means of a T/B ratio can play a role that is complementary to the visual evaluation of scintigrams. It improves the low specificity of the method in the detection of malignant nodules, without significant reduction of its sensitivity, and provides a T/B ratio excluding tumor benignity with high probability.
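
    The semiquantitative rule reduces to simple arithmetic on the two count samples; the sketch below applies the reported cut-offs (T/B of 2 as the optimal malignant/benign threshold, T/B above 4 as excluding benignity), with the function name and return labels being illustrative only.

```python
def classify_nodule(tumor_counts, background_counts, threshold=2.0):
    """Classify a solitary pulmonary nodule from its tumor-to-background ratio."""
    ratio = tumor_counts / background_counts
    if ratio > 4.0:
        return ratio, "benignity excluded with high probability"
    if ratio >= threshold:
        return ratio, "suspicious for malignancy"
    return ratio, "likely benign"
```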

  18. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Data Analysis and Visualization; International Research Training Group "Visualization of Large and Unstructured Data Sets," University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
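
    A common way to make the choice of k explicit, in the spirit of the cluster-number evaluation described above, is to score candidate values with an internal validity index; the sketch below uses k-means and the silhouette score from scikit-learn as a stand-in, which is an assumption rather than the framework's actual criterion.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_k(expression, k_values=range(2, 11), seed=0):
    """Score candidate cluster counts for per-cell expression vectors.

    expression : (n_cells, n_genes) matrix of expression measurements
    Returns a dict mapping k to its silhouette score; higher means better separated.
    """
    scores = {}
    for k in k_values:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(expression)
        scores[k] = silhouette_score(expression, labels)
    return scores
```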

  19. Is the Charcot and Bernard case (1883) of loss of visual imagery really based on neurological impairment?

    PubMed

    Zago, Stefano; Allegri, Nicola; Cristoffanini, Marta; Ferrucci, Roberta; Porta, Mauro; Priori, Alberto

    2011-11-01

    INTRODUCTION. The Charcot and Bernard case of visual imagery, Monsieur X, is a classic case in the history of neuropsychology. Published in 1883, it has been considered the first case of visual imagery loss due to brain injury, and even in recent times a neurological valence has been given to it. However, the presence of analogous cases of loss of visual imagery in the psychiatric field has led us to hypothesise functional rather than organic origins. METHODS. In order to assess the validity of such an inference, we have compared the symptomatology of Monsieur X with that found in cases of loss of visual mental images, both psychiatric and neurological, presented in the literature. RESULTS. The clinical findings show strong similarities between the Monsieur X case and the symptoms manifested over time by patients with functionally based loss of visual imagery. CONCLUSION. Although Monsieur X's damage was initially interpreted as neurological, reports of similar symptoms in the psychiatric field lead us to postulate a functional cause for his impairment as well.

  20. Evaluation of two methods for quantifying passeriform lice

    PubMed Central

    Koop, Jennifer A. H.; Clayton, Dale H.

    2013-01-01

    Two methods commonly used to quantify ectoparasites on live birds are visual examination and dust-ruffling. Visual examination provides an estimate of ectoparasite abundance based on an observer’s timed inspection of various body regions on a bird. Dust-ruffling involves application of insecticidal powder to feathers that are then ruffled to dislodge ectoparasites onto a collection surface where they can then be counted. Despite the common use of these methods in the field, the proportion of actual ectoparasites they account for has only been tested with Rock Pigeons (Columba livia), a relatively large-bodied species (238–302 g) with dense plumage. We tested the accuracy of the two methods using European Starlings (Sturnus vulgaris; ~75 g). We first quantified the number of lice (Brueelia nebulosa) on starlings using visual examination, followed immediately by dust-ruffling. Birds were then euthanized and the proportion of lice accounted for by each method was compared to the total number of lice on each bird as determined with a body-washing method. Visual examination and dust-ruffling each accounted for a relatively small proportion of total lice (14% and 16%, respectively), but both were still significant predictors of abundance. The number of lice observed by visual examination accounted for 68% of the variation in total abundance. Similarly, the number of lice recovered by dust-ruffling accounted for 72% of the variation in total abundance. Our results show that both methods can be used to reliably quantify the abundance of lice on European Starlings and other similar-sized passerines. PMID:24039328

  1. Electron microscopic visualization of complementary labeled DNA with platinum-containing guanine derivative.

    PubMed

    Loukanov, Alexandre; Filipov, Chavdar; Mladenova, Polina; Toshev, Svetlin; Emin, Saim

    2016-04-01

    The object of the present report is to provide a method for the visualization of DNA in TEM by complementary labeling of cytosine with a guanine derivative, which contains platinum as a contrast-enhancing heavy element. The stretched single-chain DNA was obtained by modifying double-stranded DNA. The labeling method comprises the following steps: (i) stretching and adsorption of DNA on the support film of an electron microscope grid (the hydrophobic carbon film holding negatively charged DNA); (ii) complementary labeling of the cytosine bases from the stretched single-stranded DNA pieces on the support film with the platinum-containing guanine derivative to form base-specific hydrogen bonds; and (iii) producing a magnified image of the base-specific labeled DNA. Stretched single-stranded DNA on a support film is obtained by a rapid elongation of DNA pieces on the surface between air and aqueous buffer solution. The attached platinum-containing guanine derivative serves as a high-density marker and it can be discriminated from the surrounding background of the support carbon film and visualized by use of conventional TEM observation at 100 kV accelerating voltage. This method allows examination of specific nucleic macromolecules through atom-by-atom analysis and it is a promising way toward future DNA sequencing or molecular diagnostics of nucleic acids by electron microscopic observation. © 2016 Wiley Periodicals, Inc.

  2. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    NASA Astrophysics Data System (ADS)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide applications in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically inspired visual features. The model first selects salient candidate regions across large-scale images by using a mechanism based on biologically inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is fast and helps focus on suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in images. These features are then employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After obtaining the suspicious areas, some false alarms such as small waves and ribbon clouds remain, so simple shape and texture analysis is adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
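
    The chip-classification stage can be prototyped with off-the-shelf components; the sketch below pairs uniform LBP histograms (scikit-image) with an SVM (scikit-learn) and stands in for the paper's CVLBP pipeline only schematically, with the LBP parameters and kernel chosen for illustration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(chip, P=8, R=1):
    """Uniform LBP histogram of one grayscale image chip."""
    lbp = local_binary_pattern(chip, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_chip_classifier(chips, labels):
    """Train an SVM that flags chips as containing ship candidates or not."""
    X = np.array([lbp_histogram(c) for c in chips])
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, labels)
    return clf
```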

  3. Hypothesis exploration with visualization of variance

    PubMed Central

    2014-01-01

    Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was in moving from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666

  4. GPU-based efficient realistic techniques for bleeding and smoke generation in surgical simulators.

    PubMed

    Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu

    2010-12-01

    In actual surgery, smoke and bleeding due to cauterization processes provide important visual cues to the surgeon, which have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated the effects of bleeding and smoke generation, they are not realistic due to the requirement of real-time performance. To be interactive, visual update must be performed at at least 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or simulated using highly simplified techniques, since other computationally intensive processes compete for the available Central Processing Unit (CPU) resources. In this study we developed a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators, which outsources the computations to the graphical processing unit (GPU), thus freeing up the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed using 20 subjects to determine the visual quality of the simulations compared to real surgical videos. The smoke and bleeding simulation were implemented as part of a laparoscopic adjustable gastric banding (LAGB) simulator. For the bleeding simulation, the original implementation using the shader did not incur noticeable overhead. However, for smoke generation, an input/output (I/O) bottleneck was observed and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations were as good as in real surgery (median rating of 4 on a 5-point Likert scale). Based on the performance results and subject study, both bleeding and smoke simulations were concluded to be efficient, highly realistic and well suited to VR-based surgical simulators. Copyright © 2010 John Wiley & Sons, Ltd.

  5. Research ethics and the use of visual images in research with people with intellectual disability.

    PubMed

    Boxall, Kathy; Ralph, Sue

    2009-03-01

    The aim of this paper is to encourage debate about the use of creative visual approaches in intellectual disability research and discussion about Internet publication of photographs. Image-based research with people with intellectual disability is explored within the contexts of tighter ethical regulation of social research, increased interest in the use of visual methodologies, and rapid escalation in the numbers of digital images posted on the World Wide Web. Concern is raised about the possibility that tighter ethical regulation of social research, combined with the multitude of ethical issues raised by the use of image-based approaches may be discouraging the use of creative visual approaches in intellectual disability research. Inclusion in research through the use of accessible research methods is also an ethical issue, particularly in relation to those people who have hitherto been underrepresented in research. Visual approaches which have the potential to include people with profound and multiple intellectual disabilities are also discussed.

  6. Using component technologies for web based wavelet enhanced mammographic image visualization.

    PubMed

    Sakellaropoulos, P; Costaridou, L; Panayiotakis, G

    2000-01-01

    The poor contrast detectability of mammography can be dealt with by domain specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing in web or PACS image servers. This effort is also motivated by the fact that at present, web browsers do not support domain-specific medical image visualization. To deal with desktop client access the tool was redesigned by exploring component technologies, enabling the integration of stand alone domain specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real time wavelet based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving visualization of diagnostic mammographic features. Web adaptation and real time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.

  7. Denoising imaging polarimetry by adapted BM3D method.

    PubMed

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
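
    For context on what is being denoised, the degree of linear polarization is derived from Stokes parameters estimated from intensity images taken at several polarizer angles; the sketch below uses the standard four-angle estimate and is independent of the PBM3D algorithm itself.

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Degree of linear polarization from four polarizer-angle intensity images.

    i0..i135 : images captured behind linear polarizers at 0, 45, 90 and 135 degrees
               (a common division-of-focal-plane layout).
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal components
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
```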

  8. Spatial Visualization Learning in Engineering: Traditional Methods vs. a Web-Based Tool

    ERIC Educational Resources Information Center

    Pedrosa, Carlos Melgosa; Barbero, Basilio Ramos; Miguel, Arturo Román

    2014-01-01

    This study compares an interactive learning manager for graphic engineering to develop spatial vision (ILMAGE_SV) to traditional methods. ILMAGE_SV is an asynchronous web-based learning tool that allows the manipulation of objects with a 3D viewer, self-evaluation, and continuous assessment. In addition, student learning may be monitored, which…

  9. Visualizing the deep end of sound: plotting multi-parameter results from infrasound data analysis

    NASA Astrophysics Data System (ADS)

    Perttu, A. B.; Taisne, B.

    2016-12-01

    Infrasound is sound below the threshold of human hearing, approximately 20 Hz. The field of infrasound research, like other waveform-based fields, relies on several standard processing methods and data visualizations, including waveform plots and spectrograms. The installation of the International Monitoring System (IMS) global network of infrasound arrays contributed to the resurgence of infrasound research. Array processing is an important method used in infrasound research; however, it produces data sets with a large number of parameters and requires innovative plotting techniques. The goal in designing new figures is to present easily comprehensible, information-rich plots through careful selection of data density and plotting methods.

  10. Development of a speech autocuer

    NASA Astrophysics Data System (ADS)

    Bedles, R. L.; Kizakvich, P. N.; Lawson, D. T.; McCartney, M. L.

    1980-12-01

    A wearable, visually based prosthesis for the deaf based upon the proven method for removing lipreading ambiguity known as cued speech was fabricated and tested. Both software and hardware developments are described, including a microcomputer, display, and speech preprocessor.

  11. Development of a speech autocuer

    NASA Technical Reports Server (NTRS)

    Bedles, R. L.; Kizakvich, P. N.; Lawson, D. T.; Mccartney, M. L.

    1980-01-01

    A wearable, visually based prosthesis for the deaf based upon the proven method for removing lipreading ambiguity known as cued speech was fabricated and tested. Both software and hardware developments are described, including a microcomputer, display, and speech preprocessor.

  12. The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures

    ERIC Educational Resources Information Center

    Stephenson, W. Kirk

    2009-01-01

    A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes. (Contains 4 notes.)

  13. Results of a National Survey of Biochemistry Instructors to Determine the Prevalence and Types of Representations Used during Instruction and Assessment

    ERIC Educational Resources Information Center

    Linenberger, Kimberly J.; Holme, Thomas A.

    2014-01-01

    Chemists and chemistry educators have long sought meaningful ways to visualize fundamentally abstract components, such as atoms and molecules, of their trade. As technology has improved, computer-based visualization methods have infused both research and education in chemistry. Biochemistry, in particular, has become highly dependent on ways that…

  14. A Field Study of a Standardized Tangible Symbol System for Learners Who Are Visually Impaired and Have Multiple Disabilities

    ERIC Educational Resources Information Center

    Trief, Ellen; Cascella, Paul W.; Bruce, Susan M.

    2013-01-01

    Introduction: The study reported in this article tracked the learning rate of 43 children with multiple disabilities and visual impairments who had limited to no verbal language across seven months of classroom-based intervention using a standardized set of tangible symbols. Methods: The participants were introduced to tangible symbols on a daily…

  15. Three Approaches to Teaching Art Methods Courses: Child Art, Visual Culture, and Issues-Based Art Education

    ERIC Educational Resources Information Center

    Chang, EunJung; Lim, Maria; Kim, Minam

    2012-01-01

    In this article, three art educators reflect on their ideas and experiences in developing and implementing innovative projects for their courses focusing on art for elementary education majors. They explore three different approaches. The three areas that are discussed in depth include: (1) understanding child art; (2) visual culture; and (3)…

  16. Basic to Applied Research: The Benefits of Audio-Visual Speech Perception Research in Teaching Foreign Languages

    ERIC Educational Resources Information Center

    Erdener, Dogu

    2016-01-01

    Traditionally, second language (L2) instruction has emphasised auditory-based instruction methods. However, this approach is restrictive in the sense that speech perception by humans is not just an auditory phenomenon but a multimodal one, and specifically, a visual one as well. In the past decade, experimental studies have shown that the…

  17. Ability or Access-Ability: Differential Item Functioning of Items on Alternate Performance-Based Assessment Tests for Students with Visual Impairments

    ERIC Educational Resources Information Center

    Zebehazy, Kim T.; Zigmond, Naomi; Zimmerman, George J.

    2012-01-01

    Introduction: This study investigated differential item functioning (DIF) of test items on Pennsylvania's Alternate System of Assessment (PASA) for students with visual impairments and severe cognitive disabilities and what the reasons for the differences may be. Methods: The Wilcoxon signed ranks test was used to analyze differences in the scores…
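
    As a small illustration of the paired, non-parametric comparison named in this abstract, here is a sketch using SciPy's Wilcoxon signed-rank test on made-up paired item scores; the data and grouping are hypothetical and are not the PASA results:

        import numpy as np
        from scipy.stats import wilcoxon

        # Hypothetical paired item scores for two comparison groups
        # (e.g., the same items administered under two presentation formats).
        scores_group_a = np.array([12, 15, 9, 14, 10, 13, 11, 16, 8, 12])
        scores_group_b = np.array([10, 14, 9, 11, 9, 12, 10, 15, 7, 11])

        # Wilcoxon signed-rank test on the paired differences.
        stat, p_value = wilcoxon(scores_group_a, scores_group_b)
        print(f"W = {stat:.1f}, p = {p_value:.3f}")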

  18. Visual feature discrimination versus compression ratio for polygonal shape descriptors

    NASA Astrophysics Data System (ADS)

    Heuer, Joerg; Sanahuja, Francesc; Kaup, Andre

    2000-10-01

    In the last decade, several methods for low-level indexing of visual features have appeared. Most often these were evaluated with respect to their discrimination power using measures like precision and recall. Accordingly, the targeted application was indexing of visual data within databases. During the standardization process of MPEG-7, the view on indexing of visual data changed, also taking communication aspects into account, where coding efficiency is important. Even if the descriptors used for indexing are small compared to the size of images, it is recognized that several descriptors can be linked to an image, characterizing different features and regions. Besides the importance of a small memory footprint for transmitting the descriptor and storing it in a database, the search and filtering can also be sped up by reducing the dimensionality of the descriptor if the matching metric is adjusted accordingly. Based on a polygon shape descriptor presented for MPEG-7, this paper compares the discrimination power of the descriptor against its memory consumption. Different methods based on quantization are presented, and their effect on retrieval performance is measured. Finally, an optimized computation of the descriptor is presented.
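
    A rough sketch of the trade-off discussed above, assuming a made-up descriptor database: coarser uniform quantization shrinks the descriptor, and retrieval precision/recall can be re-measured on the quantized version. This is not the MPEG-7 polygon descriptor itself; the database, query, and relevance set are illustrative:

        import numpy as np

        def quantize(descriptor, bits):
            """Uniformly quantize a descriptor in [0, 1] to 2**bits levels."""
            levels = 2 ** bits
            return np.round(descriptor * (levels - 1)) / (levels - 1)

        def precision_recall(retrieved, relevant):
            """Precision and recall for one query, given sets of item ids."""
            hits = len(retrieved & relevant)
            return hits / len(retrieved), hits / len(relevant)

        rng = np.random.default_rng(1)
        database = rng.random((100, 16))          # 100 items, 16-D shape descriptors
        query = database[0] + rng.normal(0, 0.02, 16)
        relevant = {0, 3, 7}                      # hypothetical ground truth

        for bits in (8, 4, 2):
            q_db = quantize(database, bits)
            q_query = quantize(query, bits)
            dists = np.linalg.norm(q_db - q_query, axis=1)
            retrieved = set(np.argsort(dists)[:5])
            p, r = precision_recall(retrieved, relevant)
            print(f"{bits}-bit descriptor: precision={p:.2f}, recall={r:.2f}")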

  19. Design and implementation of information visualization system on science and technology industry based on GIS

    NASA Astrophysics Data System (ADS)

    Wu, Xiaofang; Jiang, Liushi

    2011-02-01

    In traditional science and technology information systems, data are usually managed only in text and table form and analyzed with basic statistical methods, so spatial analysis and management of the data are lacking. Therefore, GIS technology is introduced to visualize and analyze information on the science and technology industry. First, using Microsoft Visual Studio 2005 and ArcGIS Engine as the development platform, a GIS-based information visualization system for the science and technology industry is built. It implements functions such as data storage and management, query, statistics, chart analysis, and thematic map representation, and it can intuitively show how science and technology information changes across space and time. Then, science and technology data for Guangdong province are applied to the system as experimental data. By also considering humanistic, geographic, and economic factors, the situation and trends of science and technology information in different regions are analyzed, and corresponding suggestions and methods are put forward to provide auxiliary support for the development of the science and technology industry in Guangdong province.

  20. Gps-Denied Geo-Localisation Using Visual Odometry

    NASA Astrophysics Data System (ADS)

    Gupta, Ashish; Chang, Huan; Yilmaz, Alper

    2016-06-01

    The primary method for geo-localization is based on GPS which has issues of localization accuracy, power consumption, and unavailability. This paper proposes a novel approach to geo-localization in a GPS-denied environment for a mobile platform. Our approach has two principal components: public domain transport network data available in GIS databases or OpenStreetMap; and a trajectory of a mobile platform. This trajectory is estimated using visual odometry and 3D view geometry. The transport map information is abstracted as a graph data structure, where various types of roads are modelled as graph edges and typically intersections are modelled as graph nodes. A search for the trajectory in real time in the graph yields the geo-location of the mobile platform. Our approach uses a simple visual sensor and it has a low memory and computational footprint. In this paper, we demonstrate our method for trajectory estimation and provide examples of geolocalization using public-domain map data. With the rapid proliferation of visual sensors as part of automated driving technology and continuous growth in public domain map data, our approach has the potential to completely augment, or even supplant, GPS based navigation since it functions in all environments.
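
    A minimal sketch of the graph abstraction described above, using networkx: intersections become nodes, road segments become edges, and an odometry-derived sequence of segment lengths is matched against candidate paths. The toy map, the fixed source/target, and the length-tolerance rule are illustrative simplifications, not the authors' search algorithm:

        import networkx as nx

        # Toy transport network: intersections are nodes, road segments are edges
        # annotated with approximate length in metres (hypothetical values).
        road_graph = nx.Graph()
        road_graph.add_edge("A", "B", length=120)
        road_graph.add_edge("B", "C", length=80)
        road_graph.add_edge("B", "D", length=200)
        road_graph.add_edge("C", "D", length=150)

        def match_trajectory(graph, segment_lengths, tol=0.2):
            """Return node paths whose edge lengths match the observed segments."""
            matches = []
            for path in nx.all_simple_paths(graph, source="A", target="D"):
                edges = list(zip(path, path[1:]))
                if len(edges) != len(segment_lengths):
                    continue
                ok = all(abs(graph[u][v]["length"] - obs) <= tol * obs
                         for (u, v), obs in zip(edges, segment_lengths))
                if ok:
                    matches.append(path)
            return matches

        # Segment lengths estimated from visual odometry between detected turns.
        observed = [115, 78, 160]
        print(match_trajectory(road_graph, observed))  # e.g. [['A', 'B', 'C', 'D']]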

  1. The Eye Phone Study: reliability and accuracy of assessing Snellen visual acuity using smartphone technology

    PubMed Central

    Perera, C; Chakrabarti, R; Islam, F M A; Crowston, J

    2015-01-01

    Purpose Smartphone-based Snellen visual acuity charts have become popular; however, their accuracy has not been established. This study aimed to evaluate the equivalence of a smartphone-based visual acuity chart with a standard 6-m Snellen visual acuity (6SVA) chart. Methods First, a review of available Snellen chart applications on iPhone was performed to determine the most accurate application based on optotype size. Subsequently, a prospective comparative study was performed by measuring conventional 6SVA and then iPhone visual acuity using the 'Snellen' application on an Apple iPhone 4. Results Eleven applications were identified, with accuracy of optotype size ranging from 4.4% to 39.9%. Eighty-eight patients from general medical and surgical wards in a tertiary hospital took part in the second part of the study. The mean difference in logMAR visual acuity between the two charts was 0.02 logMAR (95% limits of agreement −0.332 to 0.372 logMAR). The largest mean difference in logMAR acuity was noted in the subgroup of patients with 6SVA worse than 6/18 (n=5), who had a mean difference of two Snellen visual acuity lines between the charts (0.276 logMAR). Conclusion At the time of the study, we did not identify a Snellen visual acuity app that could predict a patient's standard Snellen visual acuity within one line. There was considerable variability in the optotype accuracy of apps. Further validation is required for assessment of acuity in patients with severe vision impairment. PMID:25931170
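
    The mean difference and 95% limits of agreement quoted above follow the usual Bland-Altman calculation; a short sketch on hypothetical paired logMAR readings:

        import numpy as np

        # Hypothetical paired logMAR acuities: standard 6 m chart vs. smartphone chart.
        chart_6m = np.array([0.00, 0.10, 0.30, 0.48, 0.18, 0.60, 0.00, 0.30])
        smartphone = np.array([0.02, 0.10, 0.40, 0.40, 0.20, 0.70, -0.08, 0.30])

        diff = smartphone - chart_6m
        mean_diff = diff.mean()
        # 95% limits of agreement: mean difference +/- 1.96 standard deviations.
        low = mean_diff - 1.96 * diff.std(ddof=1)
        high = mean_diff + 1.96 * diff.std(ddof=1)
        print(f"mean difference = {mean_diff:+.3f} logMAR, "
              f"limits of agreement = ({low:+.3f}, {high:+.3f})")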

  2. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    NASA Astrophysics Data System (ADS)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

    Visualization of geoscientific data is a challenging task, especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING, Fraunhofer ITWM (Kaiserslautern, Germany), in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany), developed commercial software for intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization, experts can more easily convey their findings to non-professional audiences. In STRING, pathlets moving with the flow provide an intuition of the velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow, which means that the pathlets move along the directions given by pathlines. In order to capture every detail of the flow, an advanced method for intelligent, time-dependent seeding of the pathlets is implemented, based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, the same method removes pathlets during the visualization to avoid visual clutter. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly onto the pathlets or displayed behind the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focuses on the graphical presentation of flow data for non-professional audiences, its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for an extension to a full 3D tool. Currently STRING can generate animations of single 2D cuts, either planar or curved surfaces, through 3D simulation domains. To provide a general tool for experts that also enables direct exploration and analysis of large 3D flow fields, the software needs to be extended to intuitive as well as interactive visualizations of entire 3D flow domains. The current research concerning this project, which is funded by the Federal Ministry for Economic Affairs and Energy (Germany), is presented.
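
    A minimal sketch of the Lagrangian idea behind pathlets: seed particles and advect them through a time-dependent velocity field by numerical integration. The analytic toy field and plain Euler stepping stand in for the simulation data and the FPM-based seeding and removal used in STRING:

        import numpy as np

        def velocity(p, t):
            """Toy time-dependent 2D flow field (stand-in for simulation output)."""
            x, y = p
            return np.array([-y + 0.3 * np.sin(t), x])

        def advect(seed, t0, t1, dt=0.01):
            """Integrate one pathline with explicit Euler steps."""
            p, t = np.asarray(seed, float), t0
            path = [p.copy()]
            while t < t1:
                p = p + dt * velocity(p, t)   # follow the local, time-varying velocity
                t += dt
                path.append(p.copy())
            return np.array(path)

        # Seed a few pathlets and advect them over one time unit.
        seeds = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
        pathlines = [advect(s, 0.0, 1.0) for s in seeds]
        print([pl[-1].round(3) for pl in pathlines])  # end points of each pathlet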

  3. Automatic segmentation of the lateral geniculate nucleus: Application to control and glaucoma patients.

    PubMed

    Wang, Jieqiong; Miao, Wen; Li, Jing; Li, Meng; Zhen, Zonglei; Sabel, Bernhard; Xian, Junfang; He, Huiguang

    2015-11-30

    The lateral geniculate nucleus (LGN) is a key relay center of the visual system. Because LGN morphology is affected by different diseases, it is of interest to analyze this morphology by segmentation. However, existing LGN segmentation methods are non-automatic, inefficient and prone to experimenter bias. To address these problems, we proposed an automatic LGN segmentation algorithm based on T1-weighted imaging. First, prior information about the LGN was used to create a prior mask. Then region growing was applied to delineate the LGN. We evaluated this automatic LGN segmentation method by (1) comparison with manually segmented LGN, (2) anatomically locating the LGN in the visual system via LGN-based tractography, and (3) application to controls and glaucoma patients. The similarity coefficients between the automatically segmented LGN and the manually segmented one are 0.72 (0.06) for the left LGN and 0.77 (0.07) for the right LGN. LGN-based tractography shows that the subcortical pathway seeded from the LGN passes through the optic tract and also reaches V1 through the optic radiation, which is consistent with the LGN location in the visual system. In addition, LGN asymmetry, as well as LGN atrophy with age, is observed in normal controls. The investigation of glaucoma effects on LGN volumes demonstrates that the bilateral LGN volumes shrink in patients. The automatic LGN segmentation is objective, efficient, valid and applicable. Experimental results proved the validity and applicability of the algorithm. Our method will speed up research on the visual system and greatly enhance studies of different vision-related diseases. Copyright © 2015 Elsevier B.V. All rights reserved.
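
    A minimal sketch of prior-masked region growing on a 3D volume, in the spirit of the pipeline described above; the toy volume, seed, and intensity tolerance are illustrative, not the authors' parameters:

        import numpy as np
        from collections import deque

        def region_grow(volume, prior_mask, seed, tol=0.15):
            """Grow a region from `seed`, staying inside `prior_mask` and within
            `tol` of the seed intensity (6-connectivity)."""
            grown = np.zeros(volume.shape, dtype=bool)
            seed_val = volume[seed]
            queue = deque([seed])
            grown[seed] = True
            offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            while queue:
                z, y, x = queue.popleft()
                for dz, dy, dx in offsets:
                    n = (z + dz, y + dy, x + dx)
                    if all(0 <= c < s for c, s in zip(n, volume.shape)) \
                            and prior_mask[n] and not grown[n] \
                            and abs(volume[n] - seed_val) <= tol:
                        grown[n] = True
                        queue.append(n)
            return grown

        # Toy T1-like volume with a brighter blob inside an anatomical prior mask.
        vol = np.zeros((20, 20, 20)) + 0.2
        vol[8:12, 8:12, 8:12] = 0.8
        prior = np.zeros_like(vol, dtype=bool)
        prior[6:14, 6:14, 6:14] = True
        print(region_grow(vol, prior, seed=(10, 10, 10)).sum())  # 64 voxels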

  4. Analysis, Mining and Visualization Service at NCSA

    NASA Astrophysics Data System (ADS)

    Wilhelmson, R.; Cox, D.; Welge, M.

    2004-12-01

    NCSA's goal is to create a balanced system that fully supports high-end computing as well as: 1) high-end data management and analysis; 2) visualization of massive, highly complex data collections; 3) large databases; 4) geographically distributed Grid computing; and 5) collaboratories, all based on a secure computational environment and driven with workflow-based services. To this end NCSA has defined a new technology path that includes the integration and provision of cyberservices in support of data analysis, mining, and visualization. NCSA has begun to develop and apply a data mining system, NCSA Data-to-Knowledge (D2K), in conjunction with both the application and research communities. NCSA D2K will enable the formation of model-based application workflows and visual programming interfaces for rapid data analysis. The Java-based D2K framework, which integrates analytical data mining methods with data management, data transformation, and information visualization tools, will be configurable from the cyberservices (web and grid services, tools, etc.) viewpoint to solve a wide range of important data mining problems. This effort will use modules, such as new classification methods for the detection of high-risk geoscience events, and existing D2K data management, machine learning, and information visualization modules. A D2K cyberservices interface will be developed to seamlessly connect client applications with remote back-end D2K servers, providing computational resources for data mining and integration with local or remote data stores. This work is being coordinated with SDSC's data and services efforts. The new NCSA Visualization embedded workflow environment (NVIEW) will be integrated with D2K functionality to tightly couple informatics and scientific visualization with the data analysis and management services. Visualization services will access and filter disparate data sources, simplifying tasks such as fusing related data from distinct sources into a coherent visual representation. This approach enables collaboration among geographically dispersed researchers via portals and front-end clients, and the coupling with data management services enables recording associations among datasets and building annotation systems into visualization tools and portals, giving scientists a persistent, shareable, virtual lab notebook. To facilitate provision of these cyberservices to the national community, NCSA will be providing a computational environment for large-scale data assimilation, analysis, mining, and visualization. This will be initially implemented on the new 512-processor shared-memory SGIs recently purchased by NCSA. In addition to standard batch capabilities, NCSA will provide on-demand capabilities for those projects requiring rapid response (e.g., development of severe weather, earthquake events) for decision makers. It will also be used for non-sequential interactive analysis of data sets where it is important to have access to large data volumes over space and time.

  5. Viewing behavior and related clinical characteristics in a population of children with visual impairments in the Netherlands.

    PubMed

    Kooiker, M J G; Pel, J J M; van der Steen, J

    2014-06-01

    Children with visual impairments are very heterogeneous in terms of the extent of visual and developmental etiology. The aim of the present study was to investigate a possible correlation between prevalence of clinical risk factors of visual processing impairments and characteristics of viewing behavior. We tested 149 children with visual information processing impairments (90 boys, 59 girls; mean age (SD)=7.3 (3.3)) and 127 children without visual impairments (63 boys and 64 girls, mean age (SD)=7.9 (2.8)). Visual processing impairments were classified based on the time it took to complete orienting responses to various visual stimuli (form, contrast, motion detection, motion coherence, color and a cartoon). Within the risk group, children were divided into a fast, medium or slow group based on the response times to a highly salient stimulus. The relationship between group specific response times and clinical risk factors was assessed. The fast responding children in the risk group were significantly slower than children in the control group. Within the risk group, the prevalence of cerebral visual impairment, brain damage and intellectual disabilities was significantly higher in slow responding children compared to faster responding children. The presence of nystagmus, perceptual dysfunctions, mean visual acuity and mean age did not significantly differ between the subgroups. Orienting responses are related to risk factors for visual processing impairments known to be prevalent in visual rehabilitation practice. The proposed method may contribute to assessing the effectiveness of visual information processing in children. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    PubMed Central

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object’s surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand’s fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID:27164102
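
    A minimal sketch of the kind of depth-based deformation check described above: compare the current depth patch of the grasped surface against a reference and raise an event when the change exceeds a threshold. The patches, threshold, and event handling are illustrative stand-ins for the RGBD stream and the robot-controller interface:

        import numpy as np

        def deformation_event(reference_depth, current_depth, threshold_m=0.005):
            """Return (deformed, magnitude): deformed is True when the largest
            absolute depth change on the object's surface exceeds `threshold_m` metres."""
            change = np.abs(current_depth - reference_depth)
            magnitude = float(change.max())
            return magnitude > threshold_m, magnitude

        # Hypothetical 32x32 depth patches (metres) of the grasped object's surface.
        rng = np.random.default_rng(7)
        reference = 0.40 + rng.normal(0.0, 0.0005, (32, 32))   # shape at first contact
        pressed = reference.copy()
        pressed[12:20, 12:20] -= 0.01                          # fingers dent the surface by 1 cm

        for name, frame in (("stable grasp", reference), ("over-pressure", pressed)):
            deformed, mag = deformation_event(reference, frame)
            status = "send event to robot controller" if deformed else "no significant deformation"
            print(f"{name}: {status} (max change {mag * 1000:.1f} mm)")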

  7. Clustering of samples and variables with mixed-type data

    PubMed Central

    Edelmann, Dominic; Kopp-Schneider, Annette

    2017-01-01

    Analysis of data measured on different scales is a relevant challenge. Biomedical studies often focus on high-throughput datasets of, e.g., quantitative measurements. However, the need for integration of other features possibly measured on different scales, e.g. clinical or cytogenetic factors, becomes increasingly important. The analysis results (e.g. a selection of relevant genes) are then visualized, while adding further information, like clinical factors, on top. However, a more integrative approach is desirable, where all available data are analyzed jointly, and where also in the visualization different data sources are combined in a more natural way. Here we specifically target integrative visualization and present a heatmap-style graphic display. To this end, we develop and explore methods for clustering mixed-type data, with special focus on clustering variables. Clustering of variables does not receive as much attention in the literature as does clustering of samples. We extend the variables clustering methodology by two new approaches, one based on the combination of different association measures and the other on distance correlation. With simulation studies we evaluate and compare different clustering strategies. Applying specific methods for mixed-type data proves to be comparable and in many cases beneficial as compared to standard approaches applied to corresponding quantitative or binarized data. Our two novel approaches for mixed-type variables show similar or better performance than the existing methods ClustOfVar and bias-corrected mutual information. Further, in contrast to ClustOfVar, our methods provide dissimilarity matrices, which is an advantage, especially for the purpose of visualization. Real data examples aim to give an impression of various kinds of potential applications for the integrative heatmap and other graphical displays based on dissimilarity matrices. We demonstrate that the presented integrative heatmap provides more information than common data displays about the relationship among variables and samples. The described clustering and visualization methods are implemented in our R package CluMix available from https://cran.r-project.org/web/packages/CluMix. PMID:29182671
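
    CluMix itself is an R package; as a rough Python illustration of the underlying idea, the sketch below builds a Gower-style dissimilarity matrix for mixed numeric/categorical variables and feeds it to hierarchical clustering. It does not reproduce the paper's distance-correlation or combined-association measures, and the data are made up:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        def gower_dissimilarity(numeric, categorical):
            """Gower-style dissimilarity between samples: range-normalized absolute
            differences for numeric columns, simple mismatch for categorical ones."""
            n = numeric.shape[0]
            ranges = numeric.max(axis=0) - numeric.min(axis=0)
            ranges[ranges == 0] = 1.0
            d = np.zeros((n, n))
            for i in range(n):
                num_part = np.abs(numeric - numeric[i]) / ranges          # (n, p_num)
                cat_part = (categorical != categorical[i]).astype(float)  # (n, p_cat)
                d[i] = np.hstack([num_part, cat_part]).mean(axis=1)
            return d

        # Hypothetical mixed-type data: two quantitative and one categorical variable.
        numeric = np.array([[1.0, 50], [1.1, 52], [5.0, 90], [5.2, 95]])
        categorical = np.array([["wt"], ["wt"], ["mut"], ["mut"]])

        d = gower_dissimilarity(numeric, categorical)
        labels = fcluster(linkage(squareform(d, checks=False), method="average"),
                          t=2, criterion="maxclust")
        print(labels)  # two clusters of samples, e.g. [1 1 2 2]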

  8. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    PubMed

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-05-05

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.

  9. Detecting glaucomatous change in visual fields: Analysis with an optimization framework.

    PubMed

    Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher

    2015-12-01

    Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.
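
    A minimal sketch of the post-hoc step described above: project each visit's visual field onto a population-level progression direction and test the trend of the projected scores over time. The direction vector and field series are synthetic, and the convex optimization that learns the direction is not shown:

        import numpy as np
        from scipy.stats import linregress

        rng = np.random.default_rng(3)

        # Hypothetical progression direction over 52 visual field locations
        # (in practice learned from a population, e.g. via convex optimization).
        direction = rng.normal(size=52)
        direction /= np.linalg.norm(direction)

        # Longitudinal series for one eye: 8 visits, each a 52-point field (dB).
        baseline = 28 + rng.normal(0, 1, 52)
        visits = np.array([baseline - 0.4 * k * np.clip(direction, 0, None)
                           + rng.normal(0, 0.3, 52) for k in range(8)])

        # Project each visit onto the progression direction and test the slope.
        scores = visits @ direction
        years = np.arange(8) * 0.5
        slope, _, _, p_value, _ = linregress(years, scores)
        print(f"progression score slope = {slope:.3f} per year (p = {p_value:.3f})")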

  10. Study on high-resolution representation of terraces in Shanxi Loess Plateau area

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ma, Lei

    2008-10-01

    A new elevation-point sampling method, the TIN-based Sampling Method (TSM), and a new visualization method, the Elevation Addition Method (EAM), are put forward for representing the typical terraces of the Shanxi Loess Plateau area. The DEM Feature Points and Lines Classification (DEPLC) proposed by the authors in 2007 is refined for depicting the main path in the study area, and the EAM is used to visualize the terraces and the path. A total of 406 key elevation points and 15 feature constrained lines sampled by this method are used to construct CD-TINs, which depict the terraces and the path correctly and effectively. Our case study shows that the new sampling method, TSM, is reasonable and feasible. Complicated micro-terrains such as terraces and paths can be represented with high resolution and high efficiency by use of the refined DEPLC, TSM and CD-TINs, and both the terraces and the main path are visualized well with the EAM even when the terrace height is no more than 1 m.
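
    A minimal sketch of building a TIN from sampled elevation points with SciPy's Delaunay triangulation; the terrace-like samples are synthetic, and the constrained feature lines used in the paper's CD-TINs would require a constrained-triangulation library rather than plain Delaunay:

        import numpy as np
        from scipy.spatial import Delaunay

        # Hypothetical terrace-like sample: (x, y) positions with stepped elevations.
        rng = np.random.default_rng(4)
        xy = rng.uniform(0, 100, size=(200, 2))
        z = np.floor(xy[:, 0] / 20) * 1.0        # 1 m terrace steps along x

        tin = Delaunay(xy)                       # plain (unconstrained) TIN
        print(f"{len(tin.simplices)} triangles over {len(xy)} sampled points")

        # Elevation lookup by locating the containing triangle and averaging its
        # node elevations (a crude stand-in for proper barycentric interpolation).
        query = xy.mean(axis=0, keepdims=True)   # centroid, guaranteed inside the hull
        tri = int(tin.find_simplex(query)[0])
        print(f"approximate elevation near {query[0].round(1)}: "
              f"{z[tin.simplices[tri]].mean():.2f} m")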

  11. An interactive visualization tool for mobile objects

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tetsuo

    Recent advancements in mobile devices, such as the Global Positioning System (GPS), cellular phones, car navigation systems, and radio-frequency identification (RFID), have greatly influenced the nature and volume of data about individual-based movement in space and time. Due to the prevalence of mobile devices, vast amounts of mobile objects data are being produced and stored in databases, overwhelming the capacity of traditional spatial analytical methods. There is a growing need for discovering unexpected patterns, trends, and relationships that are hidden in the massive mobile objects data. Geographic visualization (GVis) and knowledge discovery in databases (KDD) are two major research fields that are associated with knowledge discovery and construction. Their major research challenges are the integration of GVis and KDD, enhancing the ability to handle large-volume mobile objects data, and high interactivity between the computer and users of GVis and KDD tools. This dissertation proposes a visualization toolkit to enable highly interactive visual data exploration for mobile objects datasets. Vector algebraic representation and online analytical processing (OLAP) are utilized for managing and querying the mobile object data to accomplish high interactivity of the visualization tool. In addition, reconstructing trajectories at user-defined levels of temporal granularity with time aggregation methods allows exploration of the individual objects at different levels of movement generality. At a given level of generality, individual paths can be combined into synthetic summary paths based on three similarity measures, namely, locational similarity, directional similarity, and geometric similarity functions. A visualization toolkit based on the space-time cube concept exploits these functionalities to create a user-interactive environment for exploring mobile objects data. Furthermore, the characteristics of visualized trajectories are exported to be utilized for data mining, which leads to the integration of GVis and KDD. Case studies using three movement datasets (personal travel data survey in Lexington, Kentucky, wild chicken movement data in Thailand, and self-tracking data in Utah) demonstrate the potential of the system to extract meaningful patterns from otherwise difficult-to-comprehend collections of space-time trajectories.

  12. Interactive visualization and analysis of multimodal datasets for surgical applications.

    PubMed

    Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James

    2012-12-01

    Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.

  13. Efficient in-situ visualization of unsteady flows in climate simulation

    NASA Astrophysics Data System (ADS)

    Vetter, Michael; Olbrich, Stephan

    2017-04-01

    The simulation of climate data tends to produce very large data sets, which can hardly be processed in classical post-processing visualization applications. Typically, the visualization pipeline, consisting of the processes of data generation, visualization mapping, and rendering, is distributed into two parts over the network or separated via file transfer. Within most traditional post-processing scenarios the simulation is done on a supercomputer whereas the data analysis and visualization are done on a graphics workstation. That way temporary data sets with huge volume have to be transferred over the network, which leads to bandwidth bottlenecks and volume limitations. The solution to this issue is the avoidance of temporary storage, or at least significant reduction of data complexity. Within the Climate Visualization Lab - as part of the Cluster of Excellence "Integrated Climate System Analysis and Prediction" (CliSAP) at the University of Hamburg, in cooperation with the German Climate Computing Center (DKRZ) - we develop and integrate an in-situ approach. Our software framework DSVR is based on the separation of the process chain between the mapping and the rendering processes. It couples the mapping process directly to the simulation by calling methods of a parallelized data extraction library, which creates a time-based sequence of geometric 3D scenes. This sequence is stored on a special streaming server with an interactive post-filtering option and then played out asynchronously in a separate 3D viewer application. Since the rendering is part of this viewer application, the scenes can be navigated interactively. In contrast to other in-situ approaches where 2D images are created as part of the simulation or synchronous co-visualization takes place, our method supports interaction in 3D space and in time, as well as fixed frame rates. To integrate in-situ processing based on our DSVR framework and methods in the ICON climate model, we are continuously evolving the data structures and mapping algorithms of the framework to support the ICON model's native grid structures, since DSVR originally was designed for rectilinear grids only. We have now implemented a new output module for ICON to take advantage of the DSVR visualization. The visualization can be configured, like most output modules, via a specific namelist, and is integrated as an example within the non-hydrostatic atmospheric model time loop. With the integration of a DSVR-based in-situ pathline extraction within ICON, a further milestone is reached. The pathline algorithm as well as the grid data structures have been optimized for the domain decomposition used for the parallelization of ICON based on MPI and OpenMP. The software implementation and evaluation is done on the supercomputers at DKRZ. In principle, the data complexity is reduced from O(n^3) to O(m), where n is the grid resolution and m is the number of supporting points of all pathlines. The stability and scalability evaluation is done using Atmospheric Model Intercomparison Project (AMIP) runs. We will give a short introduction to our software framework, as well as a short overview of the implementation and usage of DSVR within ICON. Furthermore, we will present visualization and evaluation results of sample applications.

  14. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
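
    A minimal sketch of the MST step on a 3D point catalog using SciPy; the positions are random stand-ins for a galaxy catalog, and the Blender rendering stage is not shown:

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree
        from scipy.spatial.distance import pdist, squareform

        # Hypothetical 3D galaxy positions (e.g., Mpc), stand-ins for a real catalog.
        rng = np.random.default_rng(5)
        positions = rng.uniform(0, 100, size=(50, 3))

        # Dense pairwise distance matrix -> sparse minimum spanning tree.
        dist = squareform(pdist(positions))
        mst = minimum_spanning_tree(dist)

        # Edges of the tree as (i, j, separation) triples, ready for export to a renderer.
        ii, jj = mst.nonzero()
        edges = [(int(i), int(j), float(mst[i, j])) for i, j in zip(ii, jj)]
        print(f"{len(edges)} MST edges, total length {sum(e[2] for e in edges):.1f}")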

  15. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2006-08-08

    A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.

  16. Computer systems and methods for the query and visualization of multidimensional database

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2010-05-11

    A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alekseev, I. S.; Ivanov, I. E.; Strelkov, P. S., E-mail: strelkov@fpl.gpi.ru

    A method based on the detection of emission of a dielectric screen with metal microinclusions in open air is applied to visualize the transverse structure of a high-power microwave beam. In contrast to other visualization techniques, the results obtained in this work provide qualitative information not only on the electric field strength, but also on the structure of electric field lines in the microwave beam cross section. The interpretation of the results obtained with this method is confirmed by numerical simulations of the structure of electric field lines in the microwave beam cross section by means of the CARAT code.

  18. Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    1990-01-01

    A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
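
    A minimal sketch of the ordered-dither variant mentioned at the end of this abstract: the same Bayer threshold matrix drives every channel, but it is inverted for one channel so that channel's quantization error is negatively correlated with the others'. The test image, bit depth, and choice of channel are illustrative:

        import numpy as np

        # 4x4 Bayer threshold matrix, normalized to (0, 1).
        BAYER4 = (np.array([[ 0,  8,  2, 10],
                            [12,  4, 14,  6],
                            [ 3, 11,  1,  9],
                            [15,  7, 13,  5]]) + 0.5) / 16.0

        def ordered_dither(channel, thresholds):
            """Binarize a channel in [0, 1] against a tiled threshold matrix."""
            h, w = channel.shape
            tiled = np.tile(thresholds, (h // 4 + 1, w // 4 + 1))[:h, :w]
            return (channel > tiled).astype(float)

        rng = np.random.default_rng(6)
        rgb = rng.uniform(0.3, 0.7, size=(64, 64, 3))   # hypothetical smooth color image

        # Red and green use the matrix as-is; blue uses the inverted matrix, so its
        # quantization error tends to cancel the others' in the luminance direction.
        out = np.dstack([ordered_dither(rgb[..., 0], BAYER4),
                         ordered_dither(rgb[..., 1], BAYER4),
                         ordered_dither(rgb[..., 2], 1.0 - BAYER4)])

        err = out - rgb
        corr = np.corrcoef(err[..., 0].ravel(), err[..., 2].ravel())[0, 1]
        print("R/B error correlation:", corr)   # negative by construction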

  19. Visual Aggregate Analysis of Eligibility Features of Clinical Trials

    PubMed Central

    He, Zhe; Carini, Simona; Sim, Ida; Weng, Chunhua

    2015-01-01

    Objective To develop a method for profiling the collective populations targeted for recruitment by multiple clinical studies addressing the same medical condition using one eligibility feature each time. Methods Using a previously published database COMPACT as the backend, we designed a scalable method for visual aggregate analysis of clinical trial eligibility features. This method consists of four modules for eligibility feature frequency analysis, query builder, distribution analysis, and visualization, respectively. This method is capable of analyzing (1) frequently used qualitative and quantitative features for recruiting subjects for a selected medical condition, (2) distribution of study enrollment on consecutive value points or value intervals of each quantitative feature, and (3) distribution of studies on the boundary values, permissible value ranges, and value range widths of each feature. All analysis results were visualized using Google Charts API. Five recruited potential users assessed the usefulness of this method for identifying common patterns in any selected eligibility feature for clinical trial participant selection. Results We implemented this method as a Web-based analytical system called VITTA (Visual Analysis Tool of Clinical Study Target Populations). We illustrated the functionality of VITTA using two sample queries involving quantitative features BMI and HbA1c for conditions “hypertension” and “Type 2 diabetes”, respectively. The recruited potential users rated the user-perceived usefulness of VITTA with an average score of 86.4/100. Conclusions We contributed a novel aggregate analysis method to enable the interrogation of common patterns in quantitative eligibility criteria and the collective target populations of multiple related clinical studies. A larger-scale study is warranted to formally assess the usefulness of VITTA among clinical investigators and sponsors in various therapeutic areas. PMID:25615940

  20. Exploring neighborhoods in the metagenome universe.

    PubMed

    Aßhauer, Kathrin P; Klingenberg, Heiner; Lingner, Thomas; Meinicke, Peter

    2014-07-14

    The variety of metagenomes in current databases provides a rapidly growing source of information for comparative studies. However, the quantity and quality of supplementary metadata are still lagging behind. It is therefore important to be able to identify related metagenomes by means of the available sequence data alone. We have studied efficient sequence-based methods for large-scale identification of similar metagenomes within a database retrieval context. In a broad comparison of different profiling methods we found that vector-based distance measures are well suited to the detection of metagenomic neighbors. Our evaluation on more than 1700 publicly available metagenomes indicates that for a query metagenome from a particular habitat on average nine out of ten nearest neighbors represent the same habitat category, independent of the utilized profiling method or distance measure. While for well-defined labels a neighborhood accuracy of 100% can be achieved, in general the neighbor detection is severely affected by a natural overlap of manually annotated categories. In addition, we present results of a novel visualization method that is able to reflect the similarity of metagenomes in a 2D scatter plot. The visualization method shows a similarly high accuracy in the reduced space as compared with the high-dimensional profile space. Our study suggests that for inspection of metagenome neighborhoods the profiling methods and distance measures can be chosen to provide a convenient interpretation of results in terms of the underlying features. Furthermore, supplementary metadata of metagenome samples in the future needs to comply with readily available ontologies for fine-grained and standardized annotation. To make profile-based k-nearest-neighbor search and the 2D-visualization of the metagenome universe available to the research community, we included the proposed methods in our CoMet-Universe server for comparative metagenome analysis.
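
    A minimal sketch of profile-based neighbor retrieval as described above: each metagenome is reduced to a normalized feature-profile vector and queried against the collection with a vector distance. The profiles and habitat labels are synthetic, not CoMet-Universe data:

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(8)

        # Synthetic functional profiles (rows = metagenomes, columns = feature counts),
        # normalized to relative abundances; habitats are the metadata labels.
        profiles = rng.poisson(5, size=(300, 40)).astype(float)
        profiles /= profiles.sum(axis=1, keepdims=True)
        habitats = rng.choice(["marine", "soil", "gut"], size=300)

        nn = NearestNeighbors(n_neighbors=10, metric="euclidean").fit(profiles)
        query = profiles[0:1]
        _, idx = nn.kneighbors(query)

        neighbors = habitats[idx[0][1:]]          # drop the query itself
        agreement = (neighbors == habitats[0]).mean()
        print(f"fraction of neighbors sharing the query habitat: {agreement:.2f}")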

  1. Exploring Neighborhoods in the Metagenome Universe

    PubMed Central

    Aßhauer, Kathrin P.; Klingenberg, Heiner; Lingner, Thomas; Meinicke, Peter

    2014-01-01

    The variety of metagenomes in current databases provides a rapidly growing source of information for comparative studies. However, the quantity and quality of supplementary metadata are still lagging behind. It is therefore important to be able to identify related metagenomes by means of the available sequence data alone. We have studied efficient sequence-based methods for large-scale identification of similar metagenomes within a database retrieval context. In a broad comparison of different profiling methods we found that vector-based distance measures are well suited to the detection of metagenomic neighbors. Our evaluation on more than 1700 publicly available metagenomes indicates that for a query metagenome from a particular habitat on average nine out of ten nearest neighbors represent the same habitat category, independent of the utilized profiling method or distance measure. While for well-defined labels a neighborhood accuracy of 100% can be achieved, in general the neighbor detection is severely affected by a natural overlap of manually annotated categories. In addition, we present results of a novel visualization method that is able to reflect the similarity of metagenomes in a 2D scatter plot. The visualization method shows a similarly high accuracy in the reduced space as compared with the high-dimensional profile space. Our study suggests that for inspection of metagenome neighborhoods the profiling methods and distance measures can be chosen to provide a convenient interpretation of results in terms of the underlying features. Furthermore, supplementary metadata of metagenome samples in the future needs to comply with readily available ontologies for fine-grained and standardized annotation. To make profile-based k-nearest-neighbor search and the 2D-visualization of the metagenome universe available to the research community, we included the proposed methods in our CoMet-Universe server for comparative metagenome analysis. PMID:25026170

  2. An Efficient Model-Based Image Understanding Method for an Autonomous Vehicle.

    DTIC Science & Technology

    1997-09-01

    The problem discussed in this dissertation is the development of an efficient method for visual navigation of autonomous vehicles. The approach is to... autonomous vehicles. Thus the new method is implemented as a component of the image-understanding system in the autonomous mobile robot Yamabico-11 at

  3. IMPROVED METHODS FOR HEPATITIS A VIRUS AND ROTAVIRUS CONCENTRATION AND DETECTION IN RECREATIONAL, RAW POTABLE, AND FINISHED WATERS

    EPA Science Inventory

    The report contains procedures for detecting rotaviruses based upon an immunofluorescence test using a monoclonal antibody and fluorescein-isothiocyanate-conjugated antibody staining method to visualize virus-infected cells. Also contained in the report are test methods for detec...

  4. Visual control of robots using range images.

    PubMed

    Pomares, Jorge; Gil, Pablo; Torres, Fernando

    2010-01-01

    In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a way to obtain 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the appropriate integration time for the range camera so that depth information can be determined precisely.

  5. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  6. Local matrix learning in clustering and applications for manifold visualization.

    PubMed

    Arnonkijpanich, Banchar; Hasenfuss, Alexander; Hammer, Barbara

    2010-05-01

    Electronic data sets are increasing rapidly with respect to both the size of the data sets and the data resolution, i.e. dimensionality, such that adequate data inspection and data visualization have become central issues of data mining. In this article, we present an extension of classical clustering schemes by local matrix adaptation, which allows a better representation of data by means of clusters with an arbitrary ellipsoidal shape. Unlike previous proposals, the method is derived from a global cost function. The focus of this article is to demonstrate the applicability of this matrix clustering scheme to low-dimensional data embedding for data inspection. The proposed method is based on matrix learning for neural gas and manifold charting. This provides an explicit mapping of a given high-dimensional data space to low dimensionality. We demonstrate the usefulness of this method for data inspection and manifold visualization. Copyright © 2009 Elsevier Ltd. All rights reserved.

  7. Scene analysis for effective visual search in rough three-dimensional-modeling scenes

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Hu, Xiaopeng

    2016-11-01

    Visual search is a fundamental technology in the computer vision community. It is difficult to find an object in complex scenes when there exist similar distracters in the background. We propose a target search method in rough three-dimensional-modeling scenes based on a vision salience theory and camera imaging model. We give the definition of salience of objects (or features) and explain the way that salience measurements of objects are calculated. Also, we present one type of search path that guides to the target through salience objects. Along the search path, when the previous objects are localized, the search region of each subsequent object decreases, which is calculated through imaging model and an optimization method. The experimental results indicate that the proposed method is capable of resolving the ambiguities resulting from distracters containing similar visual features with the target, leading to an improvement of search speed by over 50%.

  8. Automated estimation of image quality for coronary computed tomographic angiography using machine learning.

    PubMed

    Nakanishi, Rine; Sankaran, Sethuraman; Grady, Leo; Malpeso, Jenifer; Yousfi, Razik; Osawa, Kazuhiro; Ceponiene, Indre; Nazarat, Negin; Rahmani, Sina; Kissel, Kendall; Jayawardena, Eranthi; Dailing, Christopher; Zarins, Christopher; Koo, Bon-Kwon; Min, James K; Taylor, Charles A; Budoff, Matthew J

    2018-03-23

    Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic for the agreement between automated and visual IQ assessment of 0.67 (p < 0.01). In the group where good to excellent (n = 163), fair (n = 6), and poor visual IQ scores (n = 3) were graded, 155, 5, and 2 of the patients received an automated IQ score > 50 %, respectively. Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided similar results compared with visual analysis within the limits of inter-operator variability. • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardization of clinical trial results across different datasets.
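
    A small sketch of the agreement statistic reported above, computed with scikit-learn on made-up automated versus visual quality grades:

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical per-study IQ grades: visual (Likert scale collapsed to three
        # classes) versus the automated score binned into the same classes.
        visual = ["good", "good", "fair", "good", "poor", "good", "fair", "good"]
        automated = ["good", "good", "good", "good", "poor", "good", "fair", "fair"]

        kappa = cohen_kappa_score(visual, automated)
        print(f"Cohen's kappa = {kappa:.2f}")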

  9. Modified multidimensional scaling approach to analyze financial markets.

    PubMed

    Yin, Yi; Shang, Pengjian

    2014-06-01

    The detrended cross-correlation coefficient (σDCCA) and dynamic time warping (DTW) are introduced as dissimilarity measures, while multidimensional scaling (MDS) is employed to map the dissimilarities between the daily price returns of 24 stock markets. We first propose MDS based on σDCCA dissimilarity and MDS based on DTW dissimilarity, while MDS based on Euclidean dissimilarity is also employed to provide a reference for comparison. We apply these methods in order to further visualize the clustering between stock markets. Moreover, we confront MDS with an alternative visualization method, the "Unweighed Average" clustering method, for comparison; the MDS analysis and the "Unweighed Average" clustering method are applied to the same dissimilarity. Through the results, we find that MDS gives a more intuitive mapping for observing stable or emerging clusters of stock markets with similar behavior, and that the MDS analysis based on σDCCA dissimilarity provides clearer, more detailed, and more accurate information on the classification of the stock markets than the MDS analysis based on Euclidean dissimilarity. The MDS analysis based on DTW dissimilarity is particularly interesting in that it reveals more about the correlations between stock markets; it also reflects richer results on the clustering of stock markets and is much more intensive than the MDS analysis based on Euclidean dissimilarity. In addition, the graphs obtained by applying MDS based on σDCCA dissimilarity and DTW dissimilarity may also guide the construction of multivariate econometric models.
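
    A minimal sketch of embedding markets from a precomputed dissimilarity matrix with metric MDS in scikit-learn; the correlation-based distance here is only a stand-in for the σDCCA and DTW dissimilarities used in the paper, and the returns are synthetic:

        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(9)

        # Hypothetical daily returns for 6 markets over 500 days.
        returns = rng.normal(0, 0.01, size=(6, 500))
        returns[1] += 0.5 * returns[0]            # make two markets co-move

        # Correlation-based dissimilarity (stand-in for sigma_DCCA or DTW).
        corr = np.corrcoef(returns)
        dissimilarity = np.sqrt(2.0 * (1.0 - corr))

        embedding = MDS(n_components=2, dissimilarity="precomputed",
                        random_state=0).fit_transform(dissimilarity)
        print(embedding.round(3))                 # 2D coordinates, one row per market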

  10. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.

  11. Beyond engagement in working with children in eight Nairobi slums to address safety, security, and housing: Digital tools for policy and community dialogue.

    PubMed

    Mitchell, Claudia; Chege, Fatuma; Maina, Lucy; Rothman, Margot

    2016-01-01

    This article studies the ways in which researchers working in the area of health and social research and using participatory visual methods might extend the reach of participant-generated creations such as photos and drawings to engage community leaders and policy-makers. Framed as going 'beyond engagement', the article explores the idea of the production of researcher-led digital dialogue tools, focusing on one example, based on a series of visual arts-based workshops with children from eight slums in Nairobi addressing issues of safety, security, and well-being in relation to housing. The authors conclude that there is a need for researchers to embark upon the use of visual tools to expand the life and use of visual productions, and in particular to ensure meaningful participation of communities in social change.

  12. Visual interface for space and terrestrial analysis

    NASA Technical Reports Server (NTRS)

    Dombrowski, Edmund G.; Williams, Jason R.; George, Arthur A.; Heckathorn, Harry M.; Snyder, William A.

    1995-01-01

    The management of large geophysical and celestial data bases is now, more than ever, the most critical path to timely data analysis. With today's large volume data sets from multiple satellite missions, analysts face the task of defining useful data bases from which data and metadata (information about data) can be extracted readily in a meaningful way. Visualization, following an object-oriented design, is a fundamental method of organizing and handling data. Humans, by nature, easily accept pictorial representations of data. Therefore graphically oriented user interfaces are appealing, as long as they remain simple to produce and use. The Visual Interface for Space and Terrestrial Analysis (VISTA) system, currently under development at the Naval Research Laboratory's Backgrounds Data Center (BDC), has been designed with these goals in mind. Its graphical user interface (GUI) allows the user to perform queries, visualization, and analysis of atmospheric and celestial backgrounds data.

  13. Showing the Unsayable: Participatory Visual Approaches and the Constitution of 'Patient Experience' in Healthcare Quality Improvement.

    PubMed

    Papoulias, Constantina

    2018-06-01

    This article considers the strengths and potential contributions of participatory visual methods for healthcare quality improvement research. It argues that such approaches may enable us to expand our understanding of 'patient experience' and of its potential for generating new knowledge for health systems. In particular, they may open up dimensions of people's engagement with services and treatments which exceed both the declarative nature of responses to questionnaires and the narrative sequencing of self reports gathered through qualitative interviewing. I will suggest that working with such methods may necessitate a more reflexive approach to the constitution of evidence in quality improvement work. To this end, the article will first consider the emerging rationale for the use of visual participatory methods in improvement before outlining the implications of two related approaches-photo-elicitation and PhotoVoice-for the constitution of 'experience'. It will then move to a participatory model for healthcare improvement work, Experience Based Co-Design (EBCD). It will argue that EBCD exemplifies both the strengths and the limitations of adequating visual participatory approaches to quality improvement ends. The article will conclude with a critical reflection on a small photographic study, in which the author participated, and which sought to harness service user perspectives for the design of psychiatric facilities, as a way of considering the potential contribution of visual participatory methods for quality improvement.

  14. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    PubMed

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) are sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprosthesis. We suggest that this method-that is, localization of targets of interest in the scene-may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  15. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2014-04-29

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  16. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA

    2011-02-01

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  17. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA

    2012-03-20

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  18. Simple and powerful visual stimulus generator.

    PubMed

    Kremlácek, J; Kuba, M; Kubová, Z; Vít, F

    1999-02-01

    We describe a cheap, simple, portable and efficient approach to visual stimulation for neurophysiology which does not need any special hardware equipment. The method, based on an animation technique, uses the FLI Autodesk Animator format. The animation is replayed by a special program ('player') that provides synchronisation pulses to the recording system via the parallel port. The 'player' runs on an IBM-compatible personal computer under the MS-DOS operating system, and the stimulus is displayed on a VGA computer monitor. Various stimuli created with this technique for visual evoked potentials (VEPs) are presented.

  19. Using VMD - An Introductory Tutorial

    PubMed Central

    Hsin, Jen; Arkhipov, Anton; Yin, Ying; Stone, John E.; Schulten, Klaus

    2010-01-01

    VMD (Visual Molecular Dynamics) is a molecular visualization and analysis program designed for biological systems such as proteins, nucleic acids, lipid bilayer assemblies, etc. This unit will serve as an introductory VMD tutorial. We will present several step-by-step examples of some of VMD’s most popular features, including visualizing molecules in three dimensions with different drawing and coloring methods, rendering publication-quality figures, animating and analyzing the trajectory of a molecular dynamics simulation, scripting in the text-based Tcl/Tk interface, and analyzing both sequence and structure data for proteins. PMID:19085979

  20. Spectral analysis method and sample generation for real time visualization of speech

    NASA Astrophysics Data System (ADS)

    Hobohm, Klaus

    A method for translating speech signals into optical patterns, characterized by high sound discrimination and learnability and designed to provide deaf persons with feedback for controlling their way of speaking, is presented. Important properties of speech production and perception, and of the organs involved in these processes, are recalled in order to define requirements for speech visualization. It is established that the spectral representation must be faithful to the time, frequency, and amplitude resolution of hearing, and that continuous variations of the acoustic parameters of the speech signal must be depicted by continuous variations of the images. A color table was developed for dynamic illustration, and sonograms were generated with five spectral analysis methods, including Fourier transforms and linear predictive coding. To evaluate sonogram quality, test persons had to recognize consonant/vowel/consonant words, and an optimized analysis method was obtained with a fast Fourier transform and a postprocessor. A hardware concept for a real-time speech visualization system, based on multiprocessor technology in a personal computer, is presented.
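
    A rough sketch of an FFT-based sonogram of the kind evaluated above is shown below using SciPy and Matplotlib; the synthetic two-tone signal and the display parameters are purely illustrative.

```python
# Minimal sketch: spectrogram (sonogram) of a short synthetic signal.
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

f, tt, Sxx = spectrogram(signal, fs=fs, nperseg=512, noverlap=384)
plt.pcolormesh(tt, f, 10 * np.log10(Sxx + 1e-12), shading="auto")  # dB colour map
plt.xlabel("time [s]")
plt.ylabel("frequency [Hz]")
plt.show()
```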

  1. Survey of computer vision technology for UAV navigation

    NASA Astrophysics Data System (ADS)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

    Navigation based on computer vision technology, which has the advantages of strong independence, high precision, and low susceptibility to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision technology were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aerial vehicles, deep space probes, and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision technology. In China, with the development of many types of UAVs and the start of the third phase of the lunar exploration project, there has been significant progress in the study of visual navigation. The paper reviews the development of computer vision technology in the field of UAV navigation and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters: parameters including UAV attitude, position, and velocity can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instantly matched images and reference images, and the relationship between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance: among the many ways to achieve obstacle avoidance in UAV navigation, the methods based on computer vision technology, including feature matching, template matching, and image-frame approaches, are mainly introduced. (3) Target tracking and positioning: using the acquired images, the UAV position is calculated with the optical flow method, the MeanShift algorithm, the CamShift algorithm, Kalman filtering, and particle filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure so that image detection and processing are carried out at high speed; such systems are applied to rapid response systems. (2) Distributed network visual systems, in which several discrete image acquisition sensors at different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which combine image sensors with external observers to compensate for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frame rates, low processing efficiency, and strong noise. Finally, the difficulties of navigation based on computer vision technology in practical applications are briefly discussed: (1) because of the huge workload of image operations, the real-time performance of such systems is poor; (2) because of large environmental influences, their anti-interference ability is poor; and (3) because they work only in particular environments, they have poor adaptability.
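
    One building block mentioned in the survey, dense optical flow between consecutive frames as a cue for carrier motion, can be sketched with OpenCV as below; the frames here are synthetic arrays with an artificial 3-pixel shift, whereas a real system would read camera images.

```python
# Hedged sketch: dense optical flow (Farneback) as an image-plane motion cue.
import numpy as np
import cv2

rng = np.random.default_rng(0)
prev_frame = (rng.random((240, 320)) * 255).astype(np.uint8)
next_frame = np.roll(prev_frame, shift=3, axis=1)   # fake 3-pixel horizontal motion

flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
mean_motion = flow.reshape(-1, 2).mean(axis=0)      # rough mean image-plane velocity
print(mean_motion)
```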

  2. A WebGIS-based system for analyzing and visualizing air quality data for Shanghai Municipality

    NASA Astrophysics Data System (ADS)

    Wang, Manyi; Liu, Chaoshun; Gao, Wei

    2014-10-01

    An online visual analytical system based on Java Web and WebGIS for air quality data for Shanghai Municipality was designed and implemented to quantitatively analyze and qualitatively visualize air quality data. By analyzing the architecture of WebGIS and Java Web, we first designed the overall scheme for the system architecture, then put forward the software and hardware environment and also determined the main function modules for the system. The visual system was ultimately established with the DIV + CSS layout method combined with JSP, JavaScript, and some other computer programming languages based on the Java programming environment. Moreover, the Struts, Spring, and Hibernate frameworks (SSH) were integrated in the system for the purpose of easy maintenance and expansion. To provide mapping services and spatial analysis functions, we selected ArcGIS for Server as the GIS server. We also used an Oracle database and an ESRI file geodatabase to store spatial and non-spatial data in order to ensure data security. In addition, the response data from the Web server are resampled to implement rapid visualization through the browser. The experimental results indicate that this system can quickly respond to users' requests and efficiently return accurate processing results.

  3. Water surface modeling from a single viewpoint video.

    PubMed

    Li, Chuan; Pickup, David; Saunders, Thomas; Cosker, Darren; Marshall, David; Hall, Peter; Willis, Philip

    2013-07-01

    We introduce a video-based approach for producing water surface models. Recent advances in this field output high-quality results but require dedicated capturing devices and only work in limited conditions. In contrast, our method achieves a good tradeoff between visual quality and production cost: It automatically produces a visually plausible animation using a single viewpoint video as the input. Our approach is based on two discoveries: first, shape from shading (SFS) is adequate to capture the appearance and dynamic behavior of the example water; second, a shallow water model can be used to estimate a velocity field that produces complex surface dynamics. We provide a qualitative evaluation of our method and demonstrate its good performance across a wide range of scenes.

  4. Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming

    PubMed Central

    Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy

    2013-01-01

    Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons onto a 2D image creates the illusion of intersecting structural parts and creates challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that utilizes an interesting connection of the optimization problem regarding USIV to the protein structure prediction problem. Adopting the integer linear programming-based formulation for the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison of the results with the other optimization technique previously reported elsewhere suggests that, in most aspects, the quality of the visualization is comparable to that of the previous one, with a significant gain in the computation time of the algorithm. PMID:22291148

  5. Pose estimation of industrial objects towards robot operation

    NASA Astrophysics Data System (ADS)

    Niu, Jie; Zhou, Fuqiang; Tan, Haishu; Cao, Yu

    2017-10-01

    With the advantages of wide range, non-contact operation, and high flexibility, visual estimation of target pose has been widely applied in modern industry, robot guidance, and other engineering practice. However, owing to the influence of complicated industrial environments, outside interference factors, a lack of object characteristics, camera restrictions, and other limitations, visual estimation of target pose still faces many challenges. Focusing on these problems, a pose estimation method for industrial objects is developed based on 3D models of the targets. By matching the extracted shape characteristics of objects with a prior 3D model database of targets, the method recognizes the target. The pose of an object can then be determined from a monocular vision measurement model. The experimental results show that this method can estimate the position of rigid objects from poor image information, and it provides a guiding basis for the operation of industrial robots.
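
    In the spirit of the model-based monocular approach described above, the sketch below estimates an object pose from 2D-3D correspondences with OpenCV's PnP solver; the model points, image points, and camera intrinsics are made up for illustration.

```python
# Minimal sketch: monocular pose estimation from four planar 2D-3D correspondences.
import numpy as np
import cv2

object_points = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]], dtype=np.float64)  # model (m)
image_points = np.array([[320, 240], [420, 242],
                         [418, 338], [318, 336]], dtype=np.float64)             # pixels
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)       # intrinsics

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation of the object frame w.r.t. the camera
    print(R, tvec)
```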

  6. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding over H.264/AVC with quad-tree-based coding unit (CU) structure from size 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of perceptual fast CU size decision algorithm and fast intraprediction mode decision algorithm. First, based on the visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with step halving rough mode decision method and early modes pruning algorithm is presented to selectively check the potential modes and effectively reduce the complexity of computation. Experimental results show that our proposed fast method reduces the computational complexity of the current HM to about 57% in encoding time with only 0.37% increases in BD rate. Meanwhile, the proposed fast algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.

  7. Simultaneous reconstruction of 3D refractive index, temperature, and intensity distribution of combustion flame by double computed tomography technologies based on spatial phase-shifting method

    NASA Astrophysics Data System (ADS)

    Guo, Zhenyan; Song, Yang; Yuan, Qun; Wulan, Tuya; Chen, Lei

    2017-06-01

    In this paper, a transient multi-parameter three-dimensional (3D) reconstruction method is proposed to diagnose and visualize a combustion flow field. Emission and transmission tomography based on spatial phase-shifted technology are combined to reconstruct, simultaneously, the various physical parameter distributions of a propane flame. Two cameras triggered by the internal trigger mode capture the projection information of the emission and moiré tomography, respectively. A two-step spatial phase-shifting method is applied to extract the phase distribution in the moiré fringes. By using the filtered back-projection algorithm, we reconstruct the 3D refractive-index distribution of the combustion flow field. Finally, the 3D temperature distribution of the flame is obtained from the refractive index distribution using the Gladstone-Dale equation. Meanwhile, the 3D intensity distribution is reconstructed based on the radiation projections from the emission tomography. Therefore, the structure and edge information of the propane flame are well visualized.

  8. Stimulation of functional vision in children with perinatal brain damage.

    PubMed

    Alimović, Sonja; Mejaski-Bosnjak, Vlatka

    2011-01-01

    Cerebral visual impairment (CVI) is one of the most common causes of bilateral visual loss, which frequently occurs due to perinatal brain injury. Vision in early life has a great impact on the acquisition of basic comprehension, which is fundamental for further development. Therefore, early detection of visual problems and early intervention are necessary. The aim of the present study is to determine the specific visual functioning of children with perinatal brain damage and the influence of visual stimulation on the development of functional vision at an early age. We initially assessed 30 children with perinatal brain damage up to 3 years of age who were referred to our pediatric low vision cabinet in "Little House" by child neurologists and ophthalmologists. The type and degree of visual impairment was determined according to a functional vision assessment of each child. On the basis of those assessments, different kinds of visual stimulation were carried out with the children identified as having a visual impairment. In the visual stimulation program, some of the children were stimulated with light stimuli, some with different materials under ultraviolet (UV) light, and some with bright color and high contrast materials. The children were also involved in a program of early stimulation of overall sensory-motor development. Goals and methods of therapy were determined individually, based on observation of each child's abilities and needs. After one year of the program, reassessment was done. Results for visual functions and functional vision were compared to evaluate the improvement in vision development. These results have shown that there was a significant improvement in functional vision, especially in visual attention and visual communication.

  9. Visual saliency detection based on in-depth analysis of sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Shen, Siqiu; Ning, Chen

    2018-03-01

    Visual saliency detection has been receiving great attention in recent years since it can facilitate a wide range of applications in computer vision. A variety of saliency models have been proposed based on different assumptions, among which saliency detection via sparse representation is one of the newly arisen approaches. However, most existing sparse representation-based saliency detection methods utilize only partial characteristics of sparse representation and lack in-depth analysis, so they may have limited detection performance. Motivated by this, this paper proposes an algorithm for detecting visual saliency based on in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned from randomly sampled image patches by means of inner product-based dictionary atom classification. Then, the input image is partitioned into many image patches, and these patches are classified into salient and nonsalient ones based on in-depth analysis of the sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating the sparse reconstruction errors, the most salient atoms, which tend to come from the most salient region, are screened out and removed from the discriminative dictionaries. Finally, an effective method is exploited for saliency map generation with the reduced dictionaries. Comprehensive evaluations on publicly available datasets and comparisons with some state-of-the-art approaches demonstrate the effectiveness of the proposed algorithm.
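
    A loose sketch of the sparse-coding ingredients such methods build on is given below: learn a dictionary from random patches, sparse-code new patches, and use the reconstruction error as a crude saliency cue. The patch data are synthetic and this is not the authors' full algorithm.

```python
# Hedged sketch: dictionary learning, sparse coding, and reconstruction-error scores.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
train_patches = rng.normal(size=(500, 64))          # 500 flattened 8x8 patches (synthetic)
test_patches = rng.normal(size=(50, 64))

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   random_state=0).fit(train_patches)
codes = sparse_encode(test_patches, dico.components_,
                      algorithm="lasso_lars", alpha=1.0)
recon_error = np.linalg.norm(test_patches - codes @ dico.components_, axis=1)
saliency_score = recon_error / recon_error.max()     # higher error -> more "salient"
```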

  10. PHYLOViZ: phylogenetic inference and data visualization for sequence based typing methods

    PubMed Central

    2012-01-01

    Background With the decrease of DNA sequencing costs, sequence-based typing methods are rapidly becoming the gold standard for epidemiological surveillance. These methods provide the reproducible and comparable results needed for global-scale bacterial population analysis, while retaining their usefulness for local epidemiological surveys. Online databases that collect the generated allelic profiles and associated epidemiological data are available, but this wealth of data remains underused and frequently poorly annotated, since no user-friendly tool exists to analyze and explore it. Results PHYLOViZ is platform-independent Java software that allows the integrated analysis of sequence-based typing methods, including SNP data generated from whole genome sequencing approaches, and associated epidemiological data. goeBURST and its Minimum Spanning Tree expansion are used for visualizing the possible evolutionary relationships between isolates. The results can be displayed as an annotated graph overlaying the query results of any other epidemiological data available. Conclusions PHYLOViZ is a user-friendly software that allows the combined analysis of multiple data sources for microbial epidemiological and population studies. It is freely available at http://www.phyloviz.net. PMID:22568821
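
    As an illustrative sketch (not PHYLOViZ itself, which is Java software), the snippet below builds a minimum spanning tree over Hamming distances between allelic profiles, the kind of structure that goeBURST/MST visualizations are drawn from; the profiles are toy data.

```python
# Illustrative sketch: minimum spanning tree over allelic-profile Hamming distances.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

profiles = np.array([[1, 2, 3, 4, 1, 1, 2],
                     [1, 2, 3, 4, 1, 1, 3],
                     [1, 2, 5, 4, 1, 1, 3],
                     [2, 2, 5, 4, 2, 1, 3]])   # toy allelic profiles (rows = isolates)

# Pairwise Hamming distance = number of differing loci between two profiles.
dist = (profiles[:, None, :] != profiles[None, :, :]).sum(axis=-1)

mst = minimum_spanning_tree(dist).toarray()
edges = np.transpose(np.nonzero(mst))          # (isolate_i, isolate_j) links to draw
print(edges, mst[mst > 0])
```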

  11. Development of CD3 cell quantitation algorithms for renal allograft biopsy rejection assessment utilizing open source image analysis software.

    PubMed

    Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad

    2018-02-01

    Renal allograft rejection diagnosis depends on assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability regarding interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods along with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (the Aperio nuclear algorithm for CD3+ cells/mm² and the Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+%) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm²). Based on visual inspections of "markup" images, the CD3 quantitation algorithms produced adequate accuracy. Additionally, the CD3 quantitation algorithms correlated with each other and also with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 to < 0.0001). Methods for assessing inflammation suggested a progression through the tubulointerstitial ACR grades, with statistically different results in borderline versus other ACR types, in all but the custom methods. Assessment of CD3-stained slides using various open source image analysis algorithms presents salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.

  12. Simulated color: a diagnostic tool for skin lesions like port-wine stain

    NASA Astrophysics Data System (ADS)

    Randeberg, Lise L.; Svaasand, Lars O.

    2001-05-01

    A device-independent method for skin color visualization has been developed. Colors reconstructed from a reflectance spectrum are presented on a computer screen by sRGB (standard Red Green Blue) color coordinates. The colors are presented as adjacent patches surrounded by a medium grey border. CIELAB color coordinates and the CIE (International Commission on Illumination) color difference ΔE are computed. The change in skin color due to a change in the average blood content or scattering properties of the dermis is investigated. This is done by analytical simulations based on the diffusion approximation. It is found that an 11% change in average blood content and a 15% change in scattering properties give a visible color change. A proposed visibility limit for ΔE is given. This value is based on experimental testing and the known properties of the human visual system. The limit value can be used as a tool to determine when to terminate laser treatment of port-wine stain due to low treatment response, i.e., a low ΔE between treatments. The visualization method presented seems promising for medical applications such as port-wine stain diagnostics. The method offers good possibilities for electronic transfer of data between clinics because it is device independent.
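
    A small worked example of the CIE76 colour difference ΔE used as the visibility criterion above is given below; the two Lab triples are arbitrary illustrative values, not measured skin colours.

```python
# Minimal sketch: CIE76 Delta E*ab as the Euclidean distance between CIELAB colours.
import numpy as np

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colours (CIE76 Delta E*ab)."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

skin_before = (62.0, 18.0, 14.0)   # L*, a*, b* (illustrative values)
skin_after = (64.0, 14.5, 13.0)
print(delta_e_cie76(skin_before, skin_after))   # compare against a visibility threshold
```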

  13. 3D visualization of the scoliotic spine: longitudinal studies, data acquisition, and radiation dosage constraints

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Adler, Roy L.; Margulies, Joseph Y.; Tresser, Charles P.; Wu, Chai W.

    1999-05-01

    Decision making in the treatment of scoliosis is typically based on longitudinal studies that involve imaging and visualizing the progressive degeneration of a patient's spine over a period of years. Some patients will need surgery if their spinal deformation exceeds a certain degree of severity. Currently, surgeons rely on 2D measurements, obtained from x-rays, to quantify spinal deformation. Clearly, working only with 2D measurements seriously limits the surgeon's ability to infer 3D spinal pathology. Standard CT scanning is not a practical solution for obtaining 3D spinal measurements of scoliotic patients, because it would expose the patient to a prohibitively high dose of radiation. We have developed two new CT-based methods of 3D spinal visualization that produce 3D models of the spine by integrating a very small number of axial CT slices with data obtained from CT scout scans. In the first method, the scout data are converted to sinogram data and then processed by a tomographic image reconstruction algorithm. In the second method, the vertebral boundaries are detected in the scout data, and these edges are then used as linear constraints to determine 2D convex hulls of the vertebrae.

  14. An odor identification approach based on event-related pupil dilation and gaze focus.

    PubMed

    Aguillon-Hernandez, Nadia; Naudin, Marine; Roché, Laëtitia; Bonnet-Brilhault, Frédérique; Belzung, Catherine; Martineau, Joëlle; Atanasova, Boriana

    2015-06-01

    Olfactory disorders constitute a potential marker of many diseases and are considered valuable clues to the diagnosis and evaluation of progression for many disorders. The most commonly used test for the evaluation of impairments of olfactory identification requires the active participation of the subject, who must select the correct name of the perceived odor from a list. An alternative method is required because speech may be impaired or not yet learned in many patients. As odor identification is known to be facilitated by searching for visual clues, we aimed to develop an objective, vision-based approach for the evaluation of odor identification. We used an eye tracking method to quantify pupillary and ocular responses during the simultaneous presentation of olfactory and visual stimuli, in 39 healthy participants aged from 19 to 77 years. Odor presentation triggered an increase in pupil dilation and gaze focus on the picture corresponding to the odor presented. These results suggest that odorant stimuli increase recruitment of the sympathetic system (as demonstrated by the reactivity of the pupil) and draw attention to the visual clue. These results validate the objectivity of this method. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Interactive Visualization of DGA Data Based on Multiple Views

    NASA Astrophysics Data System (ADS)

    Geng, Yujie; Lin, Ying; Ma, Yan; Guo, Zhihong; Gu, Chao; Wang, Mingtao

    2017-01-01

    The commissioning and operation of dissolved gas analysis (DGA) online monitoring makes up for the weaknesses of the traditional DGA method. However, the volume and high dimensionality of DGA data bring a huge challenge for monitoring and analysis. In this paper, we present a novel interactive visualization model for DGA data based on multiple views. This model imitates multi-angle analysis by combining parallel coordinates, a scatter plot matrix, and a data table. By offering brushing, collaborative filtering, and focus + context techniques, the model provides a convenient and flexible interactive way to analyze and understand DGA data.
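
    A hedged sketch of one of the linked views mentioned above (parallel coordinates of dissolved-gas concentrations) is shown below with pandas and Matplotlib; the gas values and state labels are fabricated.

```python
# Minimal sketch: parallel-coordinates view of dissolved-gas readings by state label.
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame({
    "H2":   [12, 300, 45, 800],
    "CH4":  [5, 120, 30, 400],
    "C2H2": [0.5, 2.0, 1.0, 60.0],
    "C2H4": [3, 90, 20, 350],
    "state": ["normal", "thermal fault", "normal", "arcing"],   # fabricated labels
})

parallel_coordinates(df, class_column="state", colormap="viridis")
plt.ylabel("gas concentration [ppm]")
plt.show()
```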

  16. Comparison of genetic and visual identification of cisco and lake whitefish larvae from Chaumont Bay, Lake Ontario

    USGS Publications Warehouse

    George, Ellen M.; Hare, Matthew P.; Crabtree, Darran L.; Lantry, Brian F.; Rudstam, Lars G.

    2017-01-01

    Cisco Coregonus artedi are an important component of native food webs in the Great Lakes, and their restoration is instrumental to the recovery of lake trout Salvelinus namaycush and Atlantic salmon Salmo salar. Difficulties with visual identification of larvae can confound early life history surveys, as cisco are often difficult to distinguish from lake whitefish C. clupeaformis. We compared traditional visual species identification methods to genetic identifications based on barcoding of the mitochondrial cytochrome C oxidase I gene for 726 coregonine larvae caught in Chaumont Bay, Lake Ontario. We found little agreement between the visual characteristics of cisco identified by genetic barcoding and the most widely used dichotomous key, and the considerable overlap in ranges of traditionally utilized metrics suggest that visual identification of coregonine larvae from Chaumont Bay is impractical. Coregonines are highly variable and plastic species, and often display wide variations in morphometric characteristics across their broad range. This study highlights the importance of developing accurate, geographically appropriate larval identification methods in order to best inform cisco restoration and management efforts.

  17. Visual preference and ecological assessments for designed alternative brownfield rehabilitations.

    PubMed

    Lafortezza, Raffaele; Corry, Robert C; Sanesi, Giovanni; Brown, Robert D

    2008-11-01

    This paper describes an integrative method for quantifying, analyzing, and comparing the effects of alternative rehabilitation approaches with visual preference. The method was applied to a portion of a major industrial area located in southern Italy. Four alternative approaches to rehabilitation (alternative designs) were developed and analyzed. The scenarios consisted of the cleanup of the brownfields plus: (1) the addition of ground cover species; (2) the addition of ground cover species and a few trees randomly distributed; (3) the addition of ground cover species and a few trees in small groups; and (4) the addition of ground cover species and several trees in large groups. The approaches were analyzed and compared to the baseline condition through the use of cost-surface modeling (CSM) and visual preference assessment (VPA). Statistical results showed that alternatives that were more ecologically functional for forest bird species dispersal were also more visually preferable. Some differences were identified based on user groups and location of residence. The results of the study are used to identify implications for enhancing both ecological attributes and visual preferences of rehabilitating landscapes through planning and design.

  18. A comparison of visual search strategies of elite and non-elite tennis players through cluster analysis.

    PubMed

    Murray, Nicholas P; Hunfalvay, Melissa

    2017-02-01

    Considerable research has documented that successful performance in interceptive tasks (such as return of serve in tennis) is based on the performers' capability to capture appropriate anticipatory information prior to the flight path of the approaching object. Athletes of higher skill tend to fixate on different locations in the playing environment prior to initiation of a skill than their lesser skilled counterparts. The purpose of this study was to examine visual search behaviour strategies of elite (world ranked) tennis players and non-ranked competitive tennis players (n = 43) utilising cluster analysis. The results of hierarchical (Ward's method) and nonhierarchical (k means) cluster analyses revealed three different clusters. The clustering method distinguished the visual behaviour of high-, middle-, and low-ranked players. Specifically, high-ranked players demonstrated longer mean fixation duration and lower variation of visual search than middle- and low-ranked players. In conclusion, the results demonstrated that cluster analysis is a useful tool for detecting and analysing the areas of interest for use in experimental analysis of expertise and for distinguishing visual search variables among participants.

  19. Training for cervical cancer prevention programs in low-resource settings: focus on visual inspection with acetic acid and cryotherapy.

    PubMed

    Blumenthal, P D; Lauterbach, M; Sellors, J W; Sankaranarayanan, R

    2005-05-01

    The modern approach to cervical cancer prevention, characterized by use of cytology and multiple visits for diagnosis and treatment, has frequently proven challenging and unworkable in low-resource settings. Because of this, the Alliance for Cervical Cancer Prevention (ACCP) has made it a priority to investigate and assess alternative approaches, particularly the use of visual screening methods, such as visual inspection with acetic acid (VIA) and visual inspection with Lugol's iodine (VILI), for precancer and cancer detection and the use of cryotherapy as a precancer treatment method. As a result of ACCP experience in providing training to nurses and doctors in these techniques, it is now widely agreed that training should be competency based, combining both didactic and hands-on approaches, and should be done in a clinical setting that resembles the service-delivery conditions at the program site. This article reviews ACCP experiences and perceptions about the essentials of training in visual inspection and cryotherapy and presents some lessons learned with regard to training in these techniques in low-resource settings.

  20. Visualization assisted by parallel processing

    NASA Astrophysics Data System (ADS)

    Lange, B.; Rey, H.; Vasques, X.; Puech, W.; Rodriguez, N.

    2011-01-01

    This paper discusses the experimental results of our visualization model for data extracted from sensors. The objective is to find a computationally efficient method to produce a real-time rendering visualization for a large amount of data. We develop a visualization method to monitor the temperature variance of a data center. Sensors are placed on three layers and do not cover the whole room. We use a particle paradigm to interpolate the sensor data; particles model the "space" of the room. In this work we partition the particle set using two mathematical methods, Delaunay triangulation and Voronoi cells, both presented by Avis and Bhattacharya. Particles provide information on the room temperature at different coordinates over time. To locate and update particle data we define a computational cost function. To solve this function efficiently, we use a client-server paradigm: the server computes the data and the client displays it on different kinds of hardware. This paper is organized as follows. The first part presents related algorithms used to visualize large flows of data. The second part presents the different platforms and methods that were evaluated in order to determine the best solution for the proposed task. The benchmark is based on the computational cost of our algorithm, which locates particles relative to the sensors and updates particle values. The benchmark was run on a personal computer using CPU, multi-core, GPU, and hybrid GPU/CPU programming. GPU programming is growing in the research field; this approach allows real-time rendering instead of precomputed rendering. To improve our results, we also ran our algorithm on a high-performance computing (HPC) platform; this benchmark was used to improve the multi-core method. HPC is commonly used in data visualization (astronomy, physics, etc.) to improve rendering and achieve real-time performance.
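
    The interpolation step can be sketched as below: scatter temperature sensors in a plane and evaluate particle temperatures via Delaunay-based linear interpolation with SciPy. The sensor layout and readings are invented, and the sketch is 2D rather than the three sensor layers described.

```python
# Hedged sketch: Delaunay-based interpolation of sensor temperatures onto particles.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
sensor_xy = rng.random((30, 2)) * 10.0          # sensor positions in a 10 m x 10 m room
sensor_temp = 20.0 + 5.0 * rng.random(30)       # readings in degrees C (fabricated)

interp = LinearNDInterpolator(sensor_xy, sensor_temp)   # builds a Delaunay triangulation

particles = rng.random((1000, 2)) * 10.0        # "space" particles to colour
particle_temp = interp(particles)               # NaN outside the sensors' convex hull
print(np.nanmin(particle_temp), np.nanmax(particle_temp))
```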

  1. Search for Patterns of Functional Specificity in the Brain: A Nonparametric Hierarchical Bayesian Model for Group fMRI Data

    PubMed Central

    Sridharan, Ramesh; Vul, Edward; Hsieh, Po-Jang; Kanwisher, Nancy; Golland, Polina

    2012-01-01

    Functional MRI studies have uncovered a number of brain areas that demonstrate highly specific functional patterns. In the case of visual object recognition, small, focal regions have been characterized with selectivity for visual categories such as human faces. In this paper, we develop an algorithm that automatically learns patterns of functional specificity from fMRI data in a group of subjects. The method does not require spatial alignment of functional images from different subjects. The algorithm is based on a generative model that comprises two main layers. At the lower level, we express the functional brain response to each stimulus as a binary activation variable. At the next level, we define a prior over sets of activation variables in all subjects. We use a Hierarchical Dirichlet Process as the prior in order to learn the patterns of functional specificity shared across the group, which we call functional systems, and estimate the number of these systems. Inference based on our model enables automatic discovery and characterization of dominant and consistent functional systems. We apply the method to data from a visual fMRI study comprised of 69 distinct stimulus images. The discovered system activation profiles correspond to selectivity for a number of image categories such as faces, bodies, and scenes. Among systems found by our method, we identify new areas that are deactivated by face stimuli. In empirical comparisons with previously proposed exploratory methods, our results appear superior in capturing the structure in the space of visual categories of stimuli. PMID:21884803

  2. Using false colors to protect visual privacy of sensitive content

    NASA Astrophysics Data System (ADS)

    Ćiftçi, Serdar; Korshunov, Pavel; Akyüz, Ahmet O.; Ebrahimi, Touradj

    2015-03-01

    Many privacy protection tools have been proposed for preserving privacy. Tools for protection of visual privacy available today lack either all or some of the important properties that are expected from such tools. Therefore, in this paper, we propose a simple yet effective method for privacy protection based on false color visualization, which maps the color palette of an image into a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. This method does not require any prior face detection or other sensitive-region detection and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure as the look-up tables can be encrypted, reversible as table look-ups can be inverted, flexible as it is independent of format or encoding, adjustable as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation, less distracting as it does not create visually unpleasant artifacts, and selective as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, on which the proposed method relies, are evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online-based study) assessments using faces from the FERET public dataset. The evaluations demonstrate that DEF and RBS color scales lead to the strongest privacy protection, while compression functions add little to the strength of privacy protection. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
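
    A minimal sketch of the false-colour idea, a point transformation of the pixel values followed by a colour-palette lookup, is given below with OpenCV; the gamma-style transform and the JET palette are arbitrary stand-ins for the paper's compression functions and colour scales, and the image is random noise rather than a face.

```python
# Hedged sketch: point transform + colour-palette lookup for false-colour protection.
import numpy as np
import cv2

rng = np.random.default_rng(0)
gray = (rng.random((120, 160)) * 255).astype(np.uint8)   # stand-in for a sensitive image

# Compressive point transform (gamma-like), then palette lookup.
compressed = np.clip(255.0 * (gray / 255.0) ** 0.4, 0, 255).astype(np.uint8)
false_colour = cv2.applyColorMap(compressed, cv2.COLORMAP_JET)

# Reversal is possible when the (possibly encrypted) look-up table and transform are known.
```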

  3. Rapid assessment of urban wetlands: Do hydrogeomorphic classification and reference criteria work?

    EPA Science Inventory

    The Hydrogeomorphic (HGM) functional assessment method is predicated on the ability of a wetland classification method based on hydrology (HGM classification) and a visual assessment of disturbance and alteration to provide reference standards against which functions in individua...

  4. Producing Curious Affects: Visual Methodology as an Affecting and Conflictual Wunderkammer

    ERIC Educational Resources Information Center

    Staunaes, Dorthe; Kofoed, Jette

    2015-01-01

    Digital video cameras, smartphones, internet and iPads are increasingly used as visual research methods with the purpose of creating an affective corpus of data. Such visual methods are often combined with interviews or observations. Not only are visual methods part of the used research methods, the visual products are used as requisites in…

  5. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.

  6. Principal axis-based correspondence between multiple cameras for people tracking.

    PubMed

    Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve

    2006-04-01

    Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems which visual surveillance using multiple cameras brings. In this paper, we propose a simple and robust method, based on principal axes of people, to match people across multiple cameras. The correspondence likelihood reflecting the similarity of pairs of principal axes of people is constructed according to the relationship between "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical due to the robustness of the principal axis-based feature to noise. 3) Based on the fused data derived from correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method.

  7. Visual aggregate analysis of eligibility features of clinical trials.

    PubMed

    He, Zhe; Carini, Simona; Sim, Ida; Weng, Chunhua

    2015-04-01

    To develop a method for profiling the collective populations targeted for recruitment by multiple clinical studies addressing the same medical condition using one eligibility feature each time. Using a previously published database COMPACT as the backend, we designed a scalable method for visual aggregate analysis of clinical trial eligibility features. This method consists of four modules for eligibility feature frequency analysis, query builder, distribution analysis, and visualization, respectively. This method is capable of analyzing (1) frequently used qualitative and quantitative features for recruiting subjects for a selected medical condition, (2) distribution of study enrollment on consecutive value points or value intervals of each quantitative feature, and (3) distribution of studies on the boundary values, permissible value ranges, and value range widths of each feature. All analysis results were visualized using Google Charts API. Five recruited potential users assessed the usefulness of this method for identifying common patterns in any selected eligibility feature for clinical trial participant selection. We implemented this method as a Web-based analytical system called VITTA (Visual Analysis Tool of Clinical Study Target Populations). We illustrated the functionality of VITTA using two sample queries involving quantitative features BMI and HbA1c for conditions "hypertension" and "Type 2 diabetes", respectively. The recruited potential users rated the user-perceived usefulness of VITTA with an average score of 86.4/100. We contributed a novel aggregate analysis method to enable the interrogation of common patterns in quantitative eligibility criteria and the collective target populations of multiple related clinical studies. A larger-scale study is warranted to formally assess the usefulness of VITTA among clinical investigators and sponsors in various therapeutic areas. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Unsupervised Structure Detection in Biomedical Data.

    PubMed

    Vogt, Julia E

    2015-01-01

    A major challenge in computational biology is to find simple representations of high-dimensional data that best reveal the underlying structure. In this work, we present an intuitive and easy-to-implement method based on ranked neighborhood comparisons that detects structure in unsupervised data. The method is based on ordering objects in terms of similarity and on the mutual overlap of nearest neighbors. This basic framework was originally introduced in the field of social network analysis to detect actor communities. We demonstrate that the same ideas can successfully be applied to biomedical data sets in order to reveal complex underlying structure. The algorithm is very efficient and works on distance data directly without requiring a vectorial embedding of data. Comprehensive experiments demonstrate the validity of this approach. Comparisons with state-of-the-art clustering methods show that the presented method outperforms hierarchical methods as well as density-based clustering methods and model-based clustering. A further advantage of the method is that it simultaneously provides a visualization of the data. Especially in biomedical applications, the visualization of data can be used as a first pre-processing step when analyzing real-world data sets to get an intuition of the underlying data structure. We apply this model to synthetic data as well as to various biomedical data sets, which demonstrates the high quality and usefulness of the inferred structure.
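
    The core ingredient described above, ranking objects by similarity and scoring pairs by the overlap of their nearest-neighbour lists, can be sketched as below; the toy data and the simple overlap score are illustrative and do not reproduce the authors' full algorithm.

```python
# Hedged sketch: shared-nearest-neighbour overlap between pairs of objects.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(4, 1, (30, 5))])  # two toy clusters

k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)
neighbours = [set(row[1:]) for row in idx]      # drop each point's own index

def overlap(i, j):
    """Fraction of shared k-nearest neighbours between objects i and j."""
    return len(neighbours[i] & neighbours[j]) / k

print(overlap(0, 1), overlap(0, 45))            # within-cluster vs. across-cluster overlap
```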

  9. Quality-Related Monitoring and Grading of Granulated Products by Weibull-Distribution Modeling of Visual Images with Semi-Supervised Learning.

    PubMed

    Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong

    2016-06-29

    The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products and fabric textiles, are composed of a large number of independent particles or stochastically stacking locally homogeneous fragments, whose analysis and understanding remain challenging. A method of image statistical modeling-based OPQI for GP quality grading and monitoring by a Weibull distribution (WD) model with a semi-supervised learning classifier is presented. WD-model parameters (WD-MPs) of GP images' spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF), which were demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, by integrating two independent classifiers of a complementary nature in the face of scarce labeled samples. The effectiveness of the proposed OPQI method was verified in the field of automated rice quality grading and compared with commonly used methods, showing superior performance, which lays a foundation for the quality control of GPs on assembly lines.
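
    A hedged sketch of the kind of statistic involved, fitting a Weibull model to the magnitudes of Gaussian derivative filter responses of an image, is given below with SciPy; the image is synthetic noise and the fitted parameters merely illustrate candidate features, not the paper's OGDF pipeline.

```python
# Hedged sketch: Weibull fit to Gaussian-derivative response magnitudes as image features.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
image = rng.random((128, 128))                       # synthetic stand-in for a GP image

gx = gaussian_filter(image, sigma=2, order=(0, 1))   # first derivative along x
gy = gaussian_filter(image, sigma=2, order=(1, 0))   # first derivative along y
magnitude = np.hypot(gx, gy).ravel()
magnitude = magnitude[magnitude > 0]                 # keep strictly positive responses

shape, loc, scale = weibull_min.fit(magnitude, floc=0)   # Weibull model parameters
print(shape, scale)                                      # candidate WD-MP-style features
```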

  10. Cloud tracing: Visualization of the mixing of fluid elements in convection-diffusion systems

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Smith, Philip J.

    1993-01-01

    This paper describes a highly interactive method for computer visualization of the basic physical process of dispersion and mixing of fluid elements in convection-diffusion systems. It is based on transforming the vector field from a traditional Eulerian reference frame into a Lagrangian reference frame. Fluid elements are traced through the vector field to obtain both the mean path and the statistical dispersion of the fluid elements about the mean position, using added scalar information about the root-mean-square value of the vector field and its Lagrangian time scale. In this way, clouds of fluid elements are traced rather than just mean paths. We have used this method to visualize the simulation of an industrial incinerator to help identify mechanisms for poor mixing.
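
    The idea can be sketched in a few lines (a toy 2-D version, not the authors' implementation; the velocity field, RMS fluctuation, and time step are assumptions): advect an ensemble of fluid elements through a steady velocity field and add a random-walk perturbation scaled by an assumed RMS fluctuation, so the ensemble spreads into a cloud about the mean path.

        # Sketch: Lagrangian cloud tracing with an assumed RMS dispersion term.
        import numpy as np

        def velocity(p):
            x, y = p[:, 0], p[:, 1]
            return np.stack([-y, x], axis=1)              # toy solid-body rotation field

        rng = np.random.default_rng(0)
        n, dt, steps = 200, 0.02, 300
        rms = 0.05                                        # assumed RMS velocity fluctuation

        cloud = np.tile(np.array([1.0, 0.0]), (n, 1))     # all elements released at one point
        for _ in range(steps):
            cloud += velocity(cloud) * dt                 # mean (Eulerian) advection
            cloud += rng.normal(scale=rms * np.sqrt(dt), size=cloud.shape)  # dispersion

        print("mean position:", cloud.mean(axis=0), "spread:", cloud.std(axis=0))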

  11. Diffusion accessibility as a method for visualizing macromolecular surface geometry.

    PubMed

    Tsai, Yingssu; Holton, Thomas; Yeates, Todd O

    2015-10-01

    Important three-dimensional spatial features such as depth and surface concavity can be difficult to convey clearly in the context of two-dimensional images. In the area of macromolecular visualization, the computer graphics technique of ray-tracing can be helpful, but further techniques for emphasizing surface concavity can give clearer perceptions of depth. The notion of diffusion accessibility is well-suited for emphasizing such features of macromolecular surfaces, but a method for calculating diffusion accessibility has not been made widely available. Here we make available a web-based platform that performs the necessary calculation by solving the Laplace equation for steady state diffusion, and produces scripts for visualization that emphasize surface depth by coloring according to diffusion accessibility. The URL is http://services.mbi.ucla.edu/DiffAcc/. © 2015 The Protein Society.
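
    The underlying calculation can be illustrated with a 2-D toy sketch (this is not the DiffAcc service, whose 3-D implementation and boundary handling are not described in the abstract): relax the Laplace equation with the molecule held at zero as an absorbing boundary and the box edge held at one as a far-field source, then read the field just outside the molecular surface as an accessibility value.

        # Sketch: steady-state diffusion field around a toy "molecule" by Jacobi relaxation.
        import numpy as np

        N = 80
        phi = np.ones((N, N))
        yy, xx = np.mgrid[0:N, 0:N]
        molecule = (xx - N // 2) ** 2 + (yy - N // 2) ** 2 < (N // 6) ** 2   # toy disc

        for _ in range(2000):                              # plain Jacobi relaxation
            new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                          np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
            new[molecule] = 0.0                            # absorbing molecular surface
            new[0, :] = new[-1, :] = new[:, 0] = new[:, -1] = 1.0   # far-field source
            phi = new

        # Cells just outside the molecule: higher values = more accessible to diffusion.
        shell = (~molecule) & (np.roll(molecule, 1, 0) | np.roll(molecule, -1, 0) |
                               np.roll(molecule, 1, 1) | np.roll(molecule, -1, 1))
        print(phi[shell][:10])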

  12. A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning.

    PubMed

    Suemitsu, Atsuo; Dang, Jianwu; Ito, Takayuki; Tiede, Mark

    2015-10-01

    Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach using an articulatory target presented in real time to facilitate L2 pronunciation learning. This approach trains learners to adjust articulatory positions to match targets for an L2 vowel, estimated from productions of vowels that overlap in both L1 and L2. Training of Japanese learners on the American English vowel /æ/ that included visual training improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is shown to be an effective method for facilitating L2 pronunciation learning.

  13. State Recognition and Visualization of Hoisting Motor of Quayside Container Crane Based on SOFM

    NASA Astrophysics Data System (ADS)

    Yang, Z. Q.; He, P.; Tang, G.; Hu, X.

    2017-07-01

    The neural network structure and algorithm of the self-organizing feature map (SOFM) are researched and analysed. The method is applied to state recognition and visualization for the quayside container crane hoisting motor. Using SOFM, clustering and visualization of the data after attribute reduction are carried out; three kinds of motor states are identified from the Root Mean Square (RMS), Impulse Index, and Margin Index features; and a simulation visualization interface is realized in MATLAB. Through processing of the sample data, the method achieves accurate identification of the motor state, thus providing better monitoring of the quayside container crane hoisting motor and a new approach to mechanical state recognition.
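
    The three condition indicators named above can be computed as in the sketch below (standard textbook definitions are assumed; this is not the paper's MATLAB code). The resulting feature vectors would then be fed to a SOFM for clustering and visualization.

        # Sketch: RMS, impulse index, and margin index of a vibration segment.
        import numpy as np

        def condition_indicators(x):
            x = np.asarray(x, dtype=float)
            peak = np.max(np.abs(x))
            rms = np.sqrt(np.mean(x ** 2))                      # root mean square
            impulse = peak / np.mean(np.abs(x))                 # impulse index
            margin = peak / np.mean(np.sqrt(np.abs(x))) ** 2    # margin (clearance) index
            return rms, impulse, margin

        rng = np.random.default_rng(0)
        signal = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.1 * rng.normal(size=4096)
        print(condition_indicators(signal))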

  14. Comparison of water production rates from UV spectroscopy and visual magnitudes for some recent comets

    NASA Technical Reports Server (NTRS)

    Roettger, E. E.; Feldman, P. D.; A'Hearn, M. F.; Festou, M. C.

    1990-01-01

    IUE data on the UV and visible coma emissions of the comets Bradfield, P/Tempel 2, Wilson, and P/Halley are presently compared with the visual lightcurves from magnitudes reported in the IAU circulars to consider the temporal evolution of these comets. While the water-production rates obtainable from visual magnitudes on the basis of Newburn's (1984) method are consistent with OH-derived rates to first order, they are sometimes either displaced or unable to exhibit the same pre/postperihelion asymmetry. The best agreement is obtained for the relatively dust-free Comet P/Tempel 2. IUE Fine Error Sensor lightcurves are generally in agreement with curves based on total visual magnitude.

  15. Central corneal thickness and progression of the visual field and optic disc in glaucoma

    PubMed Central

    Chauhan, B C; Hutchison, D M; LeBlanc, R P; Artes, P H; Nicolela, M T

    2005-01-01

    Aims: To determine whether central corneal thickness (CCT) is a significant predictor of visual field and optic disc progression in open angle glaucoma. Methods: Data were obtained from a prospective study of glaucoma patients tested with static automated perimetry and confocal scanning laser tomography every 6 months. Progression was determined using a trend-based approach called evidence of change (EOC) analysis, in which sectoral ordinal scores based on the significance of regression coefficients of visual field pattern deviation and neuroretinal rim area over time are summed. Visual field progression was also determined using the event-based glaucoma change probability (GCP) analysis using both total and pattern deviation. Results: The sample contained 101 eyes of 54 patients (mean (SD) age 56.5 (9.8) years) with a mean follow up of 9.2 (0.7) years and 20.7 (2.3) sets of examinations every 6 months. Lower CCT was associated with worse baseline visual fields and lower mean IOP in the follow up. In the longitudinal analysis CCT was not correlated with the EOC scores for visual field or optic disc change. In the GCP analyses, there was a tendency for groups classified as progressing to have lower CCT compared to non-progressing groups. In a multivariate analysis accounting for IOP, the opposite was found, whereby higher CCT was associated with visual field progression. None of the independent factors were predictive of optic disc progression. Conclusions: In this cohort of patients with established glaucoma, CCT was not a useful index in the risk assessment of visual field and optic disc progression. PMID:16024855
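
    A generic trend-based scoring sketch is given below; the exact EOC scoring rules are not stated in the abstract, so the per-sector regression, significance threshold, and summation here are assumptions used only to illustrate the general approach.

        # Sketch: sum of per-sector "worsening" scores from the significance of regression slopes.
        import numpy as np
        from scipy.stats import linregress

        rng = np.random.default_rng(0)
        years = np.arange(0, 9.5, 0.5)                        # 6-monthly follow-up visits
        sectors = rng.normal(0, 1, size=(10, years.size))     # toy per-sector deviation values
        sectors[0] -= 0.3 * years                             # one truly worsening sector

        score = 0
        for s in sectors:
            fit = linregress(years, s)
            if fit.slope < 0 and fit.pvalue < 0.05:           # significantly negative trend
                score += 1                                    # ordinal worsening score
        print("summed progression score:", score)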

  16. How accurate are interpretations of curriculum-based measurement progress monitoring data? Visual analysis versus decision rules.

    PubMed

    Van Norman, Ethan R; Christ, Theodore J

    2016-10-01

    Curriculum-based measurement of oral reading (CBM-R) is used to monitor the effects of academic interventions for individual students. Decisions to continue, modify, or terminate these interventions are made by interpreting time-series CBM-R data. Such interpretation is founded upon visual analysis or the application of decision rules. The purpose of this study was to compare the accuracy of visual analysis and decision rules. Visual analysts interpreted 108 CBM-R progress monitoring graphs in one of three ways: (a) without graphic aids, (b) with a goal line, or (c) with a goal line and a trend line. Graphs differed along three dimensions, including trend magnitude, variability of observations, and duration of data collection. Automated trend line and data point decision rules were also applied to each graph. Inferential analyses permitted the estimation of the probability of a correct decision (i.e., the student is improving - continue the intervention, or the student is not improving - discontinue the intervention) for each evaluation method as a function of trend magnitude, variability of observations, and duration of data collection. All evaluation methods performed better when students made adequate progress. Visual analysis and decision rules performed similarly when observations were less variable. Results suggest that educators should collect data for more than six weeks, take steps to control measurement error, and visually analyze graphs when data are variable. Implications for practice and research are discussed. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
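
    A hedged sketch of a trend-line decision rule of the kind the study automates is shown below; the goal value, goal week, and toy scores are assumptions, not values taken from the article.

        # Sketch: fit an OLS trend to weekly CBM-R scores and compare the projection to the goal.
        import numpy as np

        weeks = np.arange(10)
        wcpm = np.array([42, 44, 43, 47, 46, 49, 50, 52, 51, 55], dtype=float)  # toy scores
        goal_week, goal_score = 20, 70.0

        slope, intercept = np.polyfit(weeks, wcpm, 1)
        projected = slope * goal_week + intercept
        decision = "continue intervention" if projected >= goal_score else "modify intervention"
        print(f"slope={slope:.2f} wcpm/week, projected={projected:.1f} -> {decision}")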

  17. Reproducibility of The Abdominal and Chest Wall Position by Voluntary Breath-Hold Technique Using a Laser-Based Monitoring and Visual Feedback System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, Katsumasa; Shioyama, Yoshiyuki; Nomoto, Satoru

    2007-05-01

    Purpose: The voluntary breath-hold (BH) technique is a simple method to control the respiration-related motion of a tumor during irradiation. However, the abdominal and chest wall position may not be accurately reproduced using the BH technique. The purpose of this study was to examine whether visual feedback can reduce the fluctuation in wall motion during BH using a new respiratory monitoring device. Methods and Materials: We developed a laser-based BH monitoring and visual feedback system. For this study, five healthy volunteers were enrolled. The volunteers, practicing abdominal breathing, performed shallow end-expiration BH (SEBH), shallow end-inspiration BH (SIBH), and deep end-inspiration BH (DIBH) with or without visual feedback. The abdominal and chest wall positions were measured at 80-ms intervals during BHs. Results: The fluctuation in the chest wall position was smaller than that of the abdominal wall position. The reproducibility of the wall position was improved by visual feedback. With a monitoring device, visual feedback reduced the mean deviation of the abdominal wall from 2.1 ± 1.3 mm to 1.5 ± 0.5 mm, 2.5 ± 1.9 mm to 1.1 ± 0.4 mm, and 6.6 ± 2.4 mm to 2.6 ± 1.4 mm in SEBH, SIBH, and DIBH, respectively. Conclusions: Volunteers can perform the BH maneuver in a highly reproducible fashion when informed about the position of the wall, although in the case of DIBH, the deviation in the wall position remained substantial.

  18. Disruption of functional networks in dyslexia: A whole-brain, data-driven analysis of connectivity

    PubMed Central

    Finn, Emily S.; Shen, Xilin; Holahan, John M.; Scheinost, Dustin; Lacadie, Cheryl; Papademetris, Xenophon; Shaywitz, Sally E.; Shaywitz, Bennett A.; Constable, R. Todd

    2013-01-01

    Background Functional connectivity analyses of fMRI data are a powerful tool for characterizing brain networks and how they are disrupted in neural disorders. However, many such analyses examine only one or a small number of a priori seed regions. Studies that consider the whole brain frequently rely on anatomic atlases to define network nodes, which may result in mixing distinct activation timecourses within a single node. Here, we improve upon previous methods by using a data-driven brain parcellation to compare connectivity profiles of dyslexic (DYS) versus non-impaired (NI) readers in the first whole-brain functional connectivity analysis of dyslexia. Methods Whole-brain connectivity was assessed in children (n = 75; 43 NI, 32 DYS) and adult (n = 104; 64 NI, 40 DYS) readers. Results Compared to NI readers, DYS readers showed divergent connectivity within the visual pathway and between visual association areas and prefrontal attention areas; increased right-hemisphere connectivity; reduced connectivity in the visual word-form area (part of the left fusiform gyrus specialized for printed words); and persistent connectivity to anterior language regions around the inferior frontal gyrus. Conclusions Together, findings suggest that NI readers are better able to integrate visual information and modulate their attention to visual stimuli, allowing them to recognize words based on their visual properties, while DYS readers recruit altered reading circuits and rely on laborious phonology-based “sounding out” strategies into adulthood. These results deepen our understanding of the neural basis of dyslexia and highlight the importance of synchrony between diverse brain regions for successful reading. PMID:24124929
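
    A minimal sketch of the general analysis pattern follows (not the study's pipeline; the parcellation, group sizes, and signals below are synthetic stand-ins): average timecourses within the nodes of a parcellation, build per-subject correlation matrices, and compare group-mean connectivity.

        # Sketch: node-by-node correlation matrices and a group difference map.
        import numpy as np

        rng = np.random.default_rng(0)
        n_subj, n_nodes, n_tp = 20, 50, 150
        # Node timecourses per subject (in practice: mean fMRI signal within each parcel).
        data = rng.normal(size=(n_subj, n_nodes, n_tp))
        group = np.array([0] * 10 + [1] * 10)                  # e.g., 0 = NI, 1 = DYS

        conn = np.stack([np.corrcoef(subj) for subj in data])  # per-subject connectivity
        diff = conn[group == 1].mean(axis=0) - conn[group == 0].mean(axis=0)
        print("largest group difference in edge strength:", np.abs(diff).max())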

  19. Instant, Visual, and Instrument-Free Method for On-Site Screening of GTS 40-3-2 Soybean Based on Body-Heat Triggered Recombinase Polymerase Amplification.

    PubMed

    Wang, Rui; Zhang, Fang; Wang, Liu; Qian, Wenjuan; Qian, Cheng; Wu, Jian; Ying, Yibin

    2017-04-18

    On-site monitoring of the cultivation of genetically modified (GM) crops is of critical importance to the agriculture industry throughout the world. In this paper, a simple, visual, and instrument-free method for instant on-site detection of GTS 40-3-2 soybean has been developed. It is based on body-heat-triggered recombinase polymerase amplification (RPA) followed by naked-eye detection via a fluorescent DNA dye. Combined with extremely simplified sample preparation, the whole detection process can be accomplished within 10 min, and the fluorescent results can be photographed with an accompanying smartphone. Results demonstrated a 100% detection rate for screening of practical GTS 40-3-2 soybean samples by 20 volunteers under different ambient temperatures. This method is not only suitable for on-site detection of GM crops but also shows great potential for application in other fields.

  20. Study on visual detection method for wind turbine blade failure

    NASA Astrophysics Data System (ADS)

    Chen, Jianping; Shen, Zhenteng

    2018-02-01

    At present, non-destructive testing methods for wind turbine blades include fiber Bragg grating sensing, acoustic emission, and vibration detection, but each has drawbacks and is difficult to apply in engineering practice. In this regard, a three-point slope deviation method, a kind of visual inspection method, is proposed for monitoring the running state of wind turbine blades based on image processing technology. A clearer blade image is obtained through calibration, image stitching, preprocessing, and threshold segmentation. An early warning system was designed to monitor blade running condition, and the recognition rate, stability, and influencing factors of the method were statistically analysed. The experimental results showed that the method is highly accurate and provides a good monitoring effect.
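
    Only the image preprocessing and threshold-segmentation steps are sketched below (the three-point slope deviation rule itself is not specified in enough detail in the abstract to reproduce); the "blade.jpg" path is hypothetical.

        # Sketch: grayscale conversion, Gaussian smoothing, and Otsu threshold segmentation.
        import cv2

        img = cv2.imread("blade.jpg")                      # hypothetical blade image path
        if img is None:
            raise SystemExit("blade.jpg not found")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)           # preprocessing
        _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        print(f"{len(contours)} candidate blade regions found")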
