Sample records for visual information obtained

  1. Accurately Decoding Visual Information from fMRI Data Obtained in a Realistic Virtual Environment

    DTIC Science & Technology

    2015-06-09

    Center for Learning and Memory, The University of Texas at Austin, 100 E 24th Street, Stop C7000, Austin, TX 78712, USA. afloren@utexas.edu. Received: 18... information from fMRI data obtained in a realistic virtual environment. Front. Hum. Neurosci. 9:327. doi: 10.3389/fnhum.2015.00327. Accurately decoding... visual information from fMRI data obtained in a realistic virtual environment. Andrew Floren 1*, Bruce Naylor 2, Risto Miikkulainen 3 and David Ress 4

  2. Research on robot mobile obstacle avoidance control based on visual information

    NASA Astrophysics Data System (ADS)

    Jin, Jiang

    2018-03-01

    Detecting obstacles and controlling robots to avoid them has been a key topic in robot control research. In this paper, a scheme for visual information acquisition is proposed. By interpreting the visual information, it is transformed into an information source for path processing. While following the established route, the algorithm adjusts the trajectory in real time when obstacles are encountered, achieving intelligent control of the mobile robot. Simulation results show that, through the fusion of visual sensing information, obstacle information is fully obtained while the real-time performance and accuracy of the robot's movement control are guaranteed.

  3. Halftone visual cryptography.

    PubMed

    Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni

    2006-08-01

    Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. The simulation shows that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
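    The halftone scheme above builds on the classic (2,2) visual cryptography construction. As a hedged illustration of that underlying idea only (not the paper's void-and-cluster halftoning), the following sketch expands each secret pixel into a 2x2 subpixel block on two shares; stacking the transparencies is a pixelwise OR:

```python
import random

# Classic (2,2) visual cryptography: each secret pixel expands into a
# 2x2 block of subpixels on each share. Stacking transparencies is a
# pixelwise OR: white secret pixels stack to half-black blocks, black
# secret pixels stack to fully black blocks.
PATTERNS = [(0, 1, 1, 0), (1, 0, 0, 1)]  # two complementary 2x2 layouts

def make_shares(secret, rng=random):
    """secret: 2D list of 0 (white) / 1 (black). Returns two shares."""
    share1, share2 = [], []
    for row in secret:
        r1, r2 = [], []
        for pixel in row:
            p = rng.choice(PATTERNS)
            r1.append(p)
            # same pattern for white, complemented pattern for black
            r2.append(p if pixel == 0 else tuple(1 - s for s in p))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(share1, share2):
    """Superimpose shares (pixelwise OR of subpixels)."""
    return [[tuple(a | b for a, b in zip(p1, p2))
             for p1, p2 in zip(row1, row2)]
            for row1, row2 in zip(share1, share2)]
```

    Each share pixel in isolation always has exactly two black subpixels, so a single share reveals nothing; only the stack distinguishes black (four black subpixels) from white (two).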

  4. Visual control of robots using range images.

    PubMed

    Pomares, Jorge; Gil, Pablo; Torres, Fernando

    2010-01-01

    In recent years, 3D-vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time to be used by the range camera in order to precisely determine the depth information.
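    The core of range-based servoing is a control loop that drives the measured depth toward a target depth. As a minimal sketch only (a plain proportional law, not the paper's adaptive, self-calibrating method; the function names and gain are illustrative assumptions):

```python
def depth_servo_step(z_measured, z_target, gain=0.5):
    """One proportional control step: return the commanded velocity
    along the camera axis that reduces the depth error."""
    return -gain * (z_measured - z_target)

def simulate(z0, z_target, gain=0.5, dt=0.1, steps=200):
    """Integrate the closed loop with idealized range measurements."""
    z = z0
    for _ in range(steps):
        v = depth_servo_step(z, z_target, gain)
        z += v * dt  # the arm moves along the viewing axis
    return z
```

    With gain * dt < 1 the depth error decays geometrically toward zero; a real ToF-guided arm would replace the idealized measurement with filtered range-camera readings.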

  5. Image gathering and restoration - Information and visual quality

    NASA Technical Reports Server (NTRS)

    Mccormick, Judith A.; Alter-Gartenberg, Rachel; Huck, Friedrich O.

    1989-01-01

    A method is investigated for optimizing the end-to-end performance of image gathering and restoration for visual quality. To achieve this objective, one must inevitably confront the problems that the visual quality of restored images depends on perceptual rather than mathematical considerations and that these considerations vary with the target, the application, and the observer. The method adopted in this paper is to optimize image gathering informationally and to restore images interactively to obtain the visually preferred trade-off among fidelity, resolution, sharpness, and clarity. The results demonstrate that this method leads to significant improvements over the visual quality obtained by traditional digital processing methods. These traditional methods allow a significant loss of visual quality to occur because they treat the design of the image-gathering system and the formulation of the image-restoration algorithm as two separate tasks and fail to account for the transformations between the continuous and the discrete representations in image gathering and reconstruction.

  6. Visualization index for image-enabled medical records

    NASA Astrophysics Data System (ADS)

    Dong, Wenjie; Zheng, Weilin; Sun, Jianyong; Zhang, Jianguo

    2011-03-01

    With the widespread use of healthcare information technology in hospitals, patients' medical records are increasingly complex. To transform text- or image-based medical information into a form that is easily understandable and acceptable to humans, we designed and developed an innovative indexing method that assigns an anatomical 3D structure object to every patient to visually store indexes of the patient's basic information, historical examined image information, and RIS report information. When doctors want to review a patient's historical records, they can first load the anatomical structure object and then view the 3D index of this object using a digital human model tool kit. This prototype system helps doctors easily and visually obtain the complete historical healthcare status of patients, including large amounts of medical data, and quickly locate detailed information, including both reports and images, from medical information systems. In this way, doctors can save time that may be better used to understand information, obtain a more comprehensive understanding of their patients' situations, and provide better healthcare services to patients.

  7. Mobile medical visual information retrieval.

    PubMed

    Depeursinge, Adrien; Duc, Samuel; Eggel, Ivan; Müller, Henning

    2012-01-01

    In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space were developed to query, via web services, a medical information retrieval engine, optimizing the amount of data to be transferred over wireless connections. Visual and textual retrieval engines with state-of-the-art performance were integrated. The results obtained show good usability of the software. Future use in clinical environments has the potential to increase the quality of patient care through bedside access to the medical literature in context.

  8. Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity

    PubMed Central

    Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-jin

    2017-01-01

    Repeated visual context induces higher search efficiency, revealing a contextual cueing effect that depends on the association between the target and its visual context. In this study, participants performed a visual search task in which search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, a contextual cueing effect was obtained (Experiment 1). When depth information varied chaotically over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely disrupted (Experiment 2). However, when the search items were given a tiny random displacement in the 2-dimensional (2D) plane while depth information was held constant, contextual cueing was preserved (Experiment 3). We concluded that the contextual cueing effect was robust in contexts provided by 3D space with stereoscopic information and, more importantly, that the visual system prioritized stereoscopic information in the learning of spatial information when depth information was available. PMID:28912739

  9. Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity.

    PubMed

    Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-Jin

    2017-01-01

    Repeated visual context induces higher search efficiency, revealing a contextual cueing effect that depends on the association between the target and its visual context. In this study, participants performed a visual search task in which search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, a contextual cueing effect was obtained (Experiment 1). When depth information varied chaotically over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely disrupted (Experiment 2). However, when the search items were given a tiny random displacement in the 2-dimensional (2D) plane while depth information was held constant, contextual cueing was preserved (Experiment 3). We concluded that the contextual cueing effect was robust in contexts provided by 3D space with stereoscopic information and, more importantly, that the visual system prioritized stereoscopic information in the learning of spatial information when depth information was available.

  10. Audiovisual integration of emotional signals from music improvisation does not depend on temporal correspondence.

    PubMed

    Petrini, Karin; McAleer, Phil; Pollick, Frank

    2010-04-06

    In the present study we applied a paradigm often used in face-voice affect perception to solo music improvisation to examine how the emotional valence of sound and gesture are integrated when perceiving an emotion. Three brief excerpts expressing emotion produced by a drummer and three by a saxophonist were selected. From these bimodal congruent displays the audio-only, visual-only, and audiovisually incongruent conditions (obtained by combining the two signals both within and between instruments) were derived. In Experiment 1 twenty musical novices judged the perceived emotion and rated the strength of each emotion. The results indicate that sound dominated the visual signal in the perception of affective expression, though this was more evident for the saxophone. In Experiment 2 a further sixteen musical novices were asked to either pay attention to the musicians' movements or to the sound when judging the perceived emotions. The results showed no effect of visual information when judging the sound. On the contrary, when judging the emotional content of the visual information, a worsening in performance was obtained for the incongruent condition that combined different emotional auditory and visual information for the same instrument. The effect of emotionally discordant information thus became evident only when the auditory and visual signals belonged to the same categorical event despite their temporal mismatch. This suggests that the integration of emotional information may be reinforced by its semantic attributes but might be independent from temporal features. Copyright 2010 Elsevier B.V. All rights reserved.

  11. The Location of Sources of Human Computer Processed Cerebral Potentials for the Automated Assessment of Visual Field Impairment

    PubMed Central

    Leisman, Gerald; Ashkenazi, Maureen

    1979-01-01

    Objective psychophysical techniques for investigating visual fields are described. The paper concerns methods for the collection and analysis of evoked potentials using a small laboratory computer and provides efficient methods for obtaining information about the conduction pathways of the visual system.

  12. Relationship Between Optimal Gain and Coherence Zone in Flight Simulation

    NASA Technical Reports Server (NTRS)

    Gracio, Bruno Jorge Correia; Pais, Ana Rita Valente; vanPaassen, M. M.; Mulder, Max; Kely, Lon C.; Houck, Jacob A.

    2011-01-01

    In motion simulation, the inertial information generated by the motion platform usually differs from the visual information in the simulator displays because of the physical limits of the motion platform. However, for small motions that are within those limits, one-to-one motion, i.e., visual information equal to inertial information, is possible. Previous studies have shown that one-to-one motion is often judged as too strong, causing researchers to lower the inertial amplitude. When trying to measure the optimal inertial gain for a visual amplitude, we found a zone of optimal gains instead of a single value. This result seems related to the coherence zones that have been measured in flight simulation studies, but optimal gain results have never been directly related to coherence zones. In this study we investigated whether optimal gain measurements are the same as coherence zone measurements. We also tried to infer whether the results obtained from the two measurements can be used to differentiate between simulators with different configurations. An experiment was conducted at the NASA Langley Research Center using both the Cockpit Motion Facility and the Visual Motion Simulator. The results show that the inertial gains obtained with the optimal gain measurements differ from those obtained with the coherence zone measurements. The optimal gain lies within the coherence zone. The point of mean optimal gain was lower and further from the one-to-one line than the point of mean coherence. The zone width obtained for the coherence zone measurements depended on the visual amplitude and frequency, whereas for the optimal gain the zone width remained constant when the visual amplitude and frequency were varied. We found no effect of simulator configuration in either the coherence zone or the optimal gain measurements.

  13. Simulated visual field loss does not alter turning coordination in healthy young adults.

    PubMed

    Murray, Nicholas G; Ponce de Leon, Marlina; Ambati, V N Pradeep; Saucedo, Fabricio; Kennedy, Evan; Reed-Jones, Rebecca J

    2014-01-01

    Turning, while walking, is an important component of adaptive locomotion. Current hypotheses regarding the motor control of body segment coordination during turning suggest heavy influence of visual information. The authors aimed to examine whether visual field impairment (central loss or peripheral loss) affects body segment coordination during walking turns in healthy young adults. No significant differences in the onset time of segments or intersegment coordination were observed because of visual field occlusion. These results suggest that healthy young adults can use visual information obtained from central and peripheral visual fields interchangeably, pointing to flexibility of visuomotor control in healthy young adults. Further study in populations with chronic visual impairment and those with turning difficulties is warranted.

  14. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    DOEpatents

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA

    2008-05-13

    A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
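    The patented visualization maps query results onto axes (fields), nodes (values), and links (correlations). As a hedged sketch of how such a structure might be assembled from a result set (the function name and co-occurrence-count link weights are illustrative assumptions, not the patent's method):

```python
from collections import defaultdict
from itertools import combinations

def build_visualization(result_set):
    """result_set: list of records (dicts mapping field -> value).
    Returns axes (one per field), nodes (one per distinct field/value
    pair), and links weighted by value co-occurrence across records."""
    axes = sorted({f for rec in result_set for f in rec})
    nodes = sorted({(f, v) for rec in result_set for f, v in rec.items()})
    links = defaultdict(int)
    for rec in result_set:
        for (f1, v1), (f2, v2) in combinations(sorted(rec.items()), 2):
            links[((f1, v1), (f2, v2))] += 1  # link weight = co-occurrences
    return axes, nodes, dict(links)
```

    A renderer would then draw one plane or line per axis, place the value nodes along it, and draw links whose thickness reflects the co-occurrence weight.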

  15. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    DOEpatents

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA

    2012-03-06

    A method of displaying correlations among information objects includes receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.

  16. PROCRU: A model for analyzing crew procedures in approach to landing

    NASA Technical Reports Server (NTRS)

    Baron, S.; Muralidharan, R.; Lancraft, R.; Zacharias, G.

    1980-01-01

    A model for analyzing crew procedures in approach to landing is developed. The model employs the information processing structure used in the optimal control model and in recent models for monitoring and failure detection. Mechanisms are added to this basic structure to model crew decision making in this multi-task environment. Decisions are based on probability assessments and potential mission impact (or gain). Submodels for procedural activities are included. The model distinguishes among external visual, instrument visual, and auditory sources of information. The external visual scene perception models incorporate limitations in obtaining information. The auditory information channel contains a buffer to allow for storage in memory until that information can be processed.

  17. Seeing Cells: Teaching the Visual/Verbal Rhetoric of Biology

    ERIC Educational Resources Information Center

    Dinolfo, John; Heifferon, Barbara; Temesvari, Lesly A.

    2007-01-01

    This pilot study obtained baseline information on verbal and visual rhetorics to teach microscopy techniques to college biology majors. We presented cell images to students in cell biology and biology writing classes and then asked them to identify textual, verbal, and visual cues that support microscopy learning. Survey responses suggest that…

  18. Skylab 4 visual observations project report

    NASA Technical Reports Server (NTRS)

    Kaltenbach, J. L.; Lenoir, W. B.; Mcewen, M. C.; Weitenhagen, R. A.; Wilmarth, V. R.

    1974-01-01

    The Skylab 4 Visual Observations Project was undertaken to determine the ways in which man can contribute to future earth-orbital observational programs. The premission training consisted of 17 hours of lectures by scientists representing 16 disciplines and provided the crewmen information on observational and photographic procedures and the scientific significance of this information. During the Skylab 4 mission, more than 850 observations and 2000 photographs with the 70-millimeter Hasselblad and 35-millimeter Nikon cameras were obtained for many investigative areas. Preliminary results of the project indicate that man can obtain new and unique information to support satellite earth-survey programs because of his inherent capability to make selective observations, to integrate the information, and to record the data by describing and photographing the observational sites.

  19. Visual Attention Model Based on Statistical Properties of Neuron Responses

    PubMed Central

    Duan, Haibin; Wang, Xiaohua

    2015-01-01

    Visual attention is a mechanism of the visual system that can select relevant objects from a specific scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation. However, the characteristics of the encoded features and neuron responses in those attention-related cortices remain unclear. The investigations carried out in this study therefore aim to demonstrate that unusual regions, which arouse more attention, generally cause particular neuron responses. We suppose that visual saliency is obtained on the basis of neuron responses to contexts in natural scenes. A bottom-up visual attention model based on the self-information of neuron responses is proposed to test and verify this hypothesis. Four different color spaces are adopted, and a novel entropy-based combination scheme is designed to make full use of color information. Valuable regions are highlighted while redundant backgrounds are suppressed in the saliency maps obtained by the proposed model. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into neuron-response-based saliency detection and may underlie the neural mechanism of early visual cortices for bottom-up visual attention. PMID:25747859
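    The self-information idea can be made concrete with a toy version: treat each pixel's feature value as a sample from the scene's empirical distribution and assign saliency -log2 p(value), so rare values stand out. This is a minimal sketch of the principle only, not the paper's multi-color-space, entropy-combined model:

```python
import math
from collections import Counter

def saliency_map(image):
    """Bottom-up saliency as self-information: rare feature values get
    high saliency (-log2 p), common values get low saliency."""
    pixels = [v for row in image for v in row]
    counts = Counter(pixels)
    n = len(pixels)
    return [[-math.log2(counts[v] / n) for v in row] for row in image]
```

    In the full model, the "feature value" would be a neuron-response code computed in each of four color spaces, with the resulting maps fused by the entropy-based combination scheme.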

  20. Tested Demonstrations: Visualization of Buffer Action and the Acidifying Effect of Carbon Dioxide.

    ERIC Educational Resources Information Center

    Gilbert, George L., Ed.

    1985-01-01

    Presents a buffer demonstration which features visualization of the effects of carbon dioxide on pH. Background information, list of materials needed, procedures used, and a discussion of results obtained are included. (JN)

  1. Do Chinese Readers Obtain Preview Benefit from Word "n" + 2? Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Yang, Jinmian; Wang, Suiping; Xu, Yimin; Rayner, Keith

    2009-01-01

    The boundary paradigm (K. Rayner, 1975) was used to determine the extent to which Chinese readers obtain information from the right of fixation during reading. As characters are the basic visual unit in written Chinese, they were used as targets in Experiment 1 to examine whether readers obtain preview information from character "n" + 1 and…

  2. A new visual identity for the National Health Service.

    PubMed

    England, P

    2000-03-01

    The following article gives a brief overview of the new visual identity being adopted by the National Health Service in England. It looks at the thinking behind the identity, the identity's component parts and provides sources for obtaining further information on the identity's application. It is compiled from a presentation by Stephanie Hood from the corporate identity team of the NHS Executive communications unit given on 22nd October 1999 at the National Designers in Health Network seminar, Time-out '99, Sheffield. Supporting information was obtained from the NHS Communications website http://nww.doh.nhsweb.uk/commsnet.

  3. Information Processing at 1 Year: Relation to Birth Status and Developmental Outcome during the First 5 Years.

    ERIC Educational Resources Information Center

    Rose, Susan A.; And Others

    1991-01-01

    Measures of visual and tactual recognition memory, tactual-visual transfer, and object permanence were obtained for preterm and full-term infants. Measures of tactual-visual transfer were correlated with later intelligence measures up to the age of five years. These correlations were independent of socioeconomic status, medical risk, and early…

  4. Unethical randomised controlled trial of cervical screening in India: US Freedom of Information Act disclosures.

    PubMed

    Suba, Eric J; Ortega, Robert E; Mutch, David G

    2017-01-01

    A randomised controlled trial conducted in Mumbai, India, compared invasive cervical cancer rates among women offered cervical screening with invasive cervical cancer rates among women offered no-screening. The US Office for Human Research Protections determined the Mumbai trial was unethical because informed consent was not obtained from trial participants. Reportedly, cervical screening in the Mumbai trial reduced invasive cervical cancer mortality rates, but not invasive cervical cancer incidence rates. Documents obtained through the US Freedom of Information Act disclose that the US National Cancer Institute funded the Mumbai trial from 1997 to 2015 to study 'visual inspection/downstaging' tests. However, 'visual inspection/downstaging' tests had been judged unsatisfactory for cancer control before the Mumbai trial began. 'Visual inspection/downstaging' tests failed to reduce invasive cervical cancer incidence rates in Mumbai because 'visual inspection/downstaging' tests, by design, failed to detect preinvasive cervical lesions. None of the 151 538 Mumbai trial participants, in either the intervention or control arms, received cervical screening tests that detected preinvasive cervical lesions. Because of missing/discrepant clinical staging data, it is uncertain whether 'visual inspection/downstaging' tests actually reduced invasive cervical cancer mortality rates in Mumbai. Documents obtained through the US Freedom of Information Act disclose that US National Cancer Institute leaders avoided accountability by making false and misleading statements to Congressional oversight staff. Our findings contradict assurances given to President Barack Obama that regulations pertaining to global health research supported by the US government adequately protect human participants from unethical treatment. US National Cancer Institute leaders should develop policies to compensate victims of unethical global health research. All surviving Mumbai trial participants should finally receive cervical screening tests that detect preinvasive cervical lesions.

  5. Neurolinguistic Programming Examined: Imagery, Sensory Mode, and Communication.

    ERIC Educational Resources Information Center

    Fromme, Donald K.; Daniell, Jennifer

    1984-01-01

    Tested Neurolinguistic Programming (NLP) assumptions by examining intercorrelations among response times of students (N=64) for extracting visual, auditory, and kinesthetic information from alphabetic images. Large positive intercorrelations were obtained, the only outcome not compatible with NLP. Good visualizers were significantly better in…

  6. Implications of Sustained and Transient Channels for Theories of Visual Pattern Masking, Saccadic Suppression, and Information Processing

    ERIC Educational Resources Information Center

    Breitmeyer, Bruno G.; Ganz, Leo

    1976-01-01

    This paper reviewed briefly the major types of masking effects obtained with various methods and the major theories or models that have been proposed to account for these effects, and outlined a three-mechanism model of visual pattern masking based on psychophysical and neurophysiological properties of the visual system. (Author/RK)

  7. Vivaldi: visualization and validation of biomacromolecular NMR structures from the PDB.

    PubMed

    Hendrickx, Pieter M S; Gutmanas, Aleksandras; Kleywegt, Gerard J

    2013-04-01

    We describe Vivaldi (VIsualization and VALidation DIsplay; http://pdbe.org/vivaldi), a web-based service for the analysis, visualization, and validation of NMR structures in the Protein Data Bank (PDB). Vivaldi provides access to model coordinates and several types of experimental NMR data using interactive visualization tools, augmented with structural annotations and model-validation information. The service presents information about the modeled NMR ensemble, validation of experimental chemical shifts, residual dipolar couplings, distance and dihedral angle constraints, as well as validation scores based on empirical knowledge and databases. Vivaldi was designed for both expert NMR spectroscopists and casual non-expert users who wish to obtain a better grasp of the information content and quality of NMR structures in the public archive. Copyright © 2013 Wiley Periodicals, Inc.

  8. Video quality assessment using a statistical model of human visual speed perception.

    PubMed

    Wang, Zhou; Li, Qiang

    2007-12-01

    Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Nat. Neurosci. 9, 578 (2006)] and to model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the Video Quality Experts Group (VQEG) Phase I test data set.
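    The weighting idea can be sketched independently of any particular quality metric: pool local quality scores with weights that grow with estimated information content and shrink with perceptual uncertainty. This toy pooling function is an assumption-laden stand-in for the paper's speed-perception-derived weights, not its actual formulation:

```python
def pooled_quality(local_quality, information, uncertainty, eps=1e-6):
    """Weighted pooling of local quality scores: weights grow with
    motion information content and shrink with perceptual uncertainty
    (a toy stand-in for speed-perception-based weighting)."""
    weights = [i / (u + eps) for i, u in zip(information, uncertainty)]
    total = sum(weights)
    return sum(w * q for w, q in zip(weights, local_quality)) / total
```

    Regions carrying much motion information but perceived reliably thus dominate the pooled score, while uncertain regions contribute little.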

  9. 22 CFR 61.9 - General information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false General information. 61.9 Section 61.9 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.9 General information. General information and application forms may be obtained by writing to the...

  10. 22 CFR 61.9 - General information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false General information. 61.9 Section 61.9 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.9 General information. General information and application forms may be obtained by writing to the...

  11. 22 CFR 61.9 - General information.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false General information. 61.9 Section 61.9 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.9 General information. General information and application forms may be obtained by writing to the...

  12. 22 CFR 61.9 - General information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false General information. 61.9 Section 61.9 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.9 General information. General information and application forms may be obtained by writing to the...

  13. 22 CFR 61.9 - General information.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false General information. 61.9 Section 61.9 Foreign Relations DEPARTMENT OF STATE PUBLIC DIPLOMACY AND EXCHANGES WORLD-WIDE FREE FLOW OF AUDIO-VISUAL MATERIALS § 61.9 General information. General information and application forms may be obtained by writing to the...

  14. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments †

    PubMed Central

    Guerra, Edmundo

    2018-01-01

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially noticeable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimation of the aerial vehicles flying in formation. PMID:29701722

  15. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments.

    PubMed

    Trujillo, Juan-Carlos; Munguia, Rodrigo; Guerra, Edmundo; Grau, Antoni

    2018-04-26

This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimation of the aerial vehicles flying in formation.
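The observability benefit of adding relative-distance measurements can be illustrated with a toy one-dimensional least-squares sketch (this is not the paper's EKF or nonlinear observability analysis; the weights and measurement values below are invented):

```python
# Toy illustration: fuse two noisy absolute position estimates of two
# UAVs with one accurate relative-distance measurement via weighted
# least squares. All numbers are illustrative, NOT the paper's system.

def fuse(z1, z2, d, w_abs=1.0, w_rel=4.0):
    """Minimise w_abs*(x1-z1)^2 + w_abs*(x2-z2)^2 + w_rel*((x2-x1)-d)^2.

    Normal equations:
      (w_abs + w_rel) x1 - w_rel x2 = w_abs z1 - w_rel d
      -w_rel x1 + (w_abs + w_rel) x2 = w_abs z2 + w_rel d
    """
    a = w_abs + w_rel
    b = -w_rel
    det = a * a - b * b
    r1 = w_abs * z1 - w_rel * d
    r2 = w_abs * z2 + w_rel * d
    x1 = (a * r1 - b * r2) / det
    x2 = (a * r2 - b * r1) / det
    return x1, x2

# True positions are 0 and 10; the monocular estimates are biased in
# opposite directions, but the relative distance is measured well.
x1, x2 = fuse(z1=0.8, z2=9.2, d=10.0)
print(x1, x2)  # both estimates pulled back toward the truth
```

Weighting the relative measurement more heavily than the absolute ones mimics the situation where the inter-UAV distance is observed more reliably than each vehicle's absolute pose.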

  16. Navigation system for a mobile robot with a visual sensor using a fish-eye lens

    NASA Astrophysics Data System (ADS)

    Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu

    1998-02-01

Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from the visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on point-to-point (PTP) control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
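The gyroscope/visual-sensor integration described above can be sketched as a simple complementary filter: the gyro is integrated for dead reckoning, and an occasional absolute heading fix from the ceiling target corrects the accumulated drift. The gains, bias value, and fix schedule below are illustrative assumptions, not the paper's algorithm:

```python
# Complementary-filter sketch of gyro + visual-fix heading estimation.
# The gyroscope drifts because of a constant bias; every 5th step a
# visual fix pulls the estimate back toward the true heading.

def update_heading(heading, gyro_rate, dt, visual_heading=None, k=0.2):
    heading += gyro_rate * dt          # dead reckoning from the gyroscope
    if visual_heading is not None:     # absolute fix from the visual sensor
        heading += k * (visual_heading - heading)
    return heading

true_heading = 0.0
estimate = 0.0
for step in range(50):
    true_heading += 1.0 * 0.1               # robot turns at 1.0 rad/s, dt = 0.1 s
    biased_rate = 1.0 + 0.2                 # gyro reads with a constant bias
    fix = true_heading if step % 5 == 4 else None   # visual fix every 5th step
    estimate = update_heading(estimate, biased_rate, 0.1, fix)

print(abs(estimate - true_heading))  # error stays bounded despite the bias
```

Without the visual fixes the bias alone would accumulate 1.0 rad of error over the run; with them the residual error settles well below that.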

  17. Information measures for terrain visualization

    NASA Astrophysics Data System (ADS)

    Bonaventura, Xavier; Sima, Aleksandra A.; Feixas, Miquel; Buckley, Simon J.; Sbert, Mateu; Howell, John A.

    2017-02-01

    Many quantitative and qualitative studies in geoscience research are based on digital elevation models (DEMs) and 3D surfaces to aid understanding of natural and anthropogenically-influenced topography. As well as their quantitative uses, the visual representation of DEMs can add valuable information for identifying and interpreting topographic features. However, choice of viewpoints and rendering styles may not always be intuitive, especially when terrain data are augmented with digital image texture. In this paper, an information-theoretic framework for object understanding is applied to terrain visualization and terrain view selection. From a visibility channel between a set of viewpoints and the component polygons of a 3D terrain model, we obtain three polygonal information measures. These measures are used to visualize the information associated with each polygon of the terrain model. In order to enhance the perception of the terrain's shape, we explore the effect of combining the calculated information measures with the supplementary digital image texture. From polygonal information, we also introduce a method to select a set of representative views of the terrain model. Finally, we evaluate the behaviour of the proposed techniques using example datasets. A publicly available framework for both the visualization and the view selection of a terrain has been created in order to provide the possibility to analyse any terrain model.
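One of the simplest information measures of this kind is viewpoint entropy, computed from the projected visible area of each polygon seen from a candidate viewpoint. A minimal sketch, assuming per-viewpoint visible areas are already known (the areas and view names are invented):

```python
import math

# Viewpoint entropy sketch: normalise the visible polygon areas seen
# from a viewpoint into a distribution and score the view by its
# Shannon entropy. Higher entropy ~ attention spread over more polygons.

def viewpoint_entropy(visible_areas):
    total = sum(visible_areas)
    h = 0.0
    for a in visible_areas:
        if a > 0:
            p = a / total
            h -= p * math.log2(p)
    return h

views = {
    "overhead": [4.0, 4.0, 4.0, 4.0],   # all polygons equally visible
    "oblique":  [9.0, 5.0, 1.5, 0.5],   # a few polygons dominate
    "grazing":  [15.0, 1.0, 0.0, 0.0],  # most polygons hidden
}
best = max(views, key=lambda v: viewpoint_entropy(views[v]))
print(best)  # the view that distributes visibility most evenly wins
```

Selecting a set of representative views then amounts to repeatedly picking high-scoring viewpoints under such a measure.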

  18. Supervised guiding long-short term memory for image caption generation based on object classes

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan

    2018-03-01

Present models of image caption generation suffer from attenuation of the image's visual semantic information and from errors in the guidance information. To solve these problems, we propose a supervised guiding Long Short-Term Memory model based on object classes, named S-gLSTM for short. It uses high-confidence object detection results from R-FCN as supervisory information, and updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the currently relevant information from the image's visual semantic information based on the guidance word set. This information is fed into the S-gLSTM at each iteration as guidance information, to guide the caption generation. To acquire the text-related visual semantic information, the S-gLSTM fine-tunes the weights of the network through back-propagation of the guiding loss. Replenishing the guidance information at each iteration solves the problem of visual semantic information attenuation in the traditional LSTM model. Besides, the supervised guidance information in our model reduces the impact of mismatched words on caption generation. We test our model on the MSCOCO2014 dataset and obtain better performance than the state-of-the-art models.

  19. Different source image fusion based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Piao, Yan

    2016-03-01

Video image fusion combines the video obtained by different image sensors so that the sensors complement each other, yielding video that is rich in information and suited to the human visual system. Infrared cameras have good penetrating power in harsh environments such as smoke, fog, and low light, but their ability to capture image detail is poor and does not suit the human visual system. Visible-light imaging alone can produce detailed, high-resolution images suited to the visual system, but visible images are easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and computational cost, occupying substantial memory and demanding high clock rates; most implementations are in software (C, C++, etc.), with few on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible images, software and hardware are combined: the registration parameters are obtained in MATLAB, and a gray-level weighted-average fusion is implemented on the hardware platform. The resulting fused image effectively increases the amount of information acquired from the scene.
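The gray-level weighted-average rule used above reduces to a per-pixel blend of the two source frames. A minimal software sketch (illustrative weights and pixel values, not the FPGA implementation):

```python
# Pixel-wise weighted-average fusion of an infrared and a visible frame.
# Weights sum to 1; the 0.5/0.5 split and pixel values are invented.

def fuse_gray(ir, vis, w_ir=0.5, w_vis=0.5):
    assert len(ir) == len(vis) and abs(w_ir + w_vis - 1.0) < 1e-9
    return [
        [w_ir * a + w_vis * b for a, b in zip(row_ir, row_vis)]
        for row_ir, row_vis in zip(ir, vis)
    ]

infrared = [[200, 180], [160, 140]]   # bright thermal target
visible  = [[ 40,  70], [ 80,  90]]   # detail-rich visible frame
fused = fuse_gray(infrared, visible)
print(fused)  # [[120.0, 125.0], [120.0, 115.0]]
```

On an FPGA the same rule maps to fixed-point multiplies and adds per pixel, which is why it suits a hardware pipeline.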

  20. Consequences of cognitive impairments following traumatic brain injury: Pilot study on visual exploration while driving.

    PubMed

    Milleville-Pennel, Isabelle; Pothier, Johanna; Hoc, Jean-Michel; Mathé, Jean-François

    2010-01-01

The aim was to assess the visual exploration of a person suffering from traumatic brain injury (TBI). It was hypothesized that visual exploration could be modified as a result of attentional or executive function deficits that are often observed following brain injury. This study compared an analysis of eye movements while driving with data from neuropsychological tests. Five participants suffering from TBI and six control participants took part in this study. All had good driving experience. They were invited to drive on a fixed-base driving simulator. Eye fixations were recorded using an eye tracker. Neuropsychological tests were used to assess attention, working memory, rapidity of information processing and executive functions. Participants with TBI showed a reduction in the variety of the visual zones explored and a reduction of the distance of exploration. Moreover, neuropsychological evaluation indicated difficulties in terms of divided attention, anticipation and planning. The information obtained from the two approaches is complementary. Tests give information about cognitive deficiencies but not about how they translate into a dynamic situation. Conversely, visual exploration provides information about the dynamics with which information is picked up in the environment, but not about the cognitive processes involved.

  1. A Method to Quantify Visual Information Processing in Children Using Eye Tracking

    PubMed Central

    Kooiker, Marlou J.G.; Pel, Johan J.M.; van der Steen-Kant, Sanny P.; van der Steen, Johannes

    2016-01-01

    Visual problems that occur early in life can have major impact on a child's development. Without verbal communication and only based on observational methods, it is difficult to make a quantitative assessment of a child's visual problems. This limits accurate diagnostics in children under the age of 4 years and in children with intellectual disabilities. Here we describe a quantitative method that overcomes these problems. The method uses a remote eye tracker and a four choice preferential looking paradigm to measure eye movement responses to different visual stimuli. The child sits without head support in front of a monitor with integrated infrared cameras. In one of four monitor quadrants a visual stimulus is presented. Each stimulus has a specific visual modality with respect to the background, e.g., form, motion, contrast or color. From the reflexive eye movement responses to these specific visual modalities, output parameters such as reaction times, fixation accuracy and fixation duration are calculated to quantify a child's viewing behavior. With this approach, the quality of visual information processing can be assessed without the use of communication. By comparing results with reference values obtained in typically developing children from 0-12 years, the method provides a characterization of visual information processing in visually impaired children. The quantitative information provided by this method can be advantageous for the field of clinical visual assessment and rehabilitation in multiple ways. The parameter values provide a good basis to: (i) characterize early visual capacities and consequently to enable early interventions; (ii) compare risk groups and follow visual development over time; and (iii), construct an individual visual profile for each child. PMID:27500922

  2. A Method to Quantify Visual Information Processing in Children Using Eye Tracking.

    PubMed

    Kooiker, Marlou J G; Pel, Johan J M; van der Steen-Kant, Sanny P; van der Steen, Johannes

    2016-07-09

    Visual problems that occur early in life can have major impact on a child's development. Without verbal communication and only based on observational methods, it is difficult to make a quantitative assessment of a child's visual problems. This limits accurate diagnostics in children under the age of 4 years and in children with intellectual disabilities. Here we describe a quantitative method that overcomes these problems. The method uses a remote eye tracker and a four choice preferential looking paradigm to measure eye movement responses to different visual stimuli. The child sits without head support in front of a monitor with integrated infrared cameras. In one of four monitor quadrants a visual stimulus is presented. Each stimulus has a specific visual modality with respect to the background, e.g., form, motion, contrast or color. From the reflexive eye movement responses to these specific visual modalities, output parameters such as reaction times, fixation accuracy and fixation duration are calculated to quantify a child's viewing behavior. With this approach, the quality of visual information processing can be assessed without the use of communication. By comparing results with reference values obtained in typically developing children from 0-12 years, the method provides a characterization of visual information processing in visually impaired children. The quantitative information provided by this method can be advantageous for the field of clinical visual assessment and rehabilitation in multiple ways. The parameter values provide a good basis to: (i) characterize early visual capacities and consequently to enable early interventions; (ii) compare risk groups and follow visual development over time; and (iii), construct an individual visual profile for each child.
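The reaction-time and fixation-duration parameters described above can be sketched from a gaze trace, assuming the eye tracker reports timestamped quadrant labels (the trace, quadrant names, and timings below are invented for illustration):

```python
# Sketch of two output parameters from a four-quadrant gaze trace:
# reaction time = delay until gaze first lands in the stimulus quadrant
# after stimulus onset; fixation duration = how long gaze stays there.

def gaze_metrics(samples, stimulus_quadrant, onset_ms):
    rt = None
    fixation = 0
    prev_t = None
    for t, quadrant in samples:
        if t < onset_ms:
            continue                     # ignore pre-stimulus samples
        if quadrant == stimulus_quadrant:
            if rt is None:
                rt = t - onset_ms        # first arrival in the quadrant
            if prev_t is not None:
                fixation += t - prev_t   # accumulate dwell time
            prev_t = t
        else:
            if rt is not None:
                break                    # gaze left the stimulus quadrant
            prev_t = None
    return rt, fixation

trace = [(0, "NW"), (40, "NW"), (80, "SE"), (120, "SE"), (160, "SE"), (200, "NE")]
rt, dur = gaze_metrics(trace, stimulus_quadrant="SE", onset_ms=20)
print(rt, dur)  # 60 80
```

Comparing such per-stimulus values against age-referenced norms is what turns raw gaze data into the viewing-behavior characterization the method describes.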

  3. The use of interactive technology in the classroom.

    PubMed

    Kresic, P

    1999-01-01

    This article discusses the benefits that clinical laboratory science students and instructors experienced through the use of and integration of computer technology, microscopes, and digitizing cameras. Patient specimens were obtained from the participating clinical affiliates, slides stained or wet mounts prepared, images viewed under the microscope, digitized, and after labeling, stored into an appropriate folder. The individual folders were labeled as Hematology, Microbiology, Chemistry, or Urinalysis. Students, after obtaining the necessary specimens and pertinent data, created case study presentations for class discussions. After two semesters of utilizing videomicroscopy/computer technology in the classroom, students and instructors realized the potential associated with the technology, namely, the vast increase in the amount of organized visual and scientific information accessible and the availability of collaborative and interactive learning to complement individualized instruction. The instructors, on the other hand, were able to provide a wider variety of visual information on individual bases. In conclusion, the appropriate use of technology can enhance students' learning and participation. Increased student involvement through the use of videomicroscopy and computer technology heightened their sense of pride and ownership in providing suitable information in case study presentations. Also, visualization provides students and educators with alternative methods of teaching/learning and increased retention of information.

  4. Learning and Prediction of Slip from Visual Information

    NASA Technical Reports Server (NTRS)

    Angelova, Anelia; Matthies, Larry; Helmick, Daniel; Perona, Pietro

    2007-01-01

    This paper presents an approach for slip prediction from a distance for wheeled ground robots using visual information as input. Large amounts of slippage which can occur on certain surfaces, such as sandy slopes, will negatively affect rover mobility. Therefore, obtaining information about slip before entering such terrain can be very useful for better planning and avoiding these areas. To address this problem, terrain appearance and geometry information about map cells are correlated to the slip measured by the rover while traversing each cell. This relationship is learned from previous experience, so slip can be predicted remotely from visual information only. The proposed method consists of terrain type recognition and nonlinear regression modeling. The method has been implemented and tested offline on several off-road terrains including: soil, sand, gravel, and woodchips. The final slip prediction error is about 20%. The system is intended for improved navigation on steep slopes and rough terrain for Mars rovers.
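The two-stage idea (terrain type recognition, then a per-terrain slip model) can be sketched as follows. The centroids, feature vectors, and slip curves are invented stand-ins for the paper's learned texture classifiers and nonlinear regression:

```python
import math

# Stage 1: classify terrain from an appearance feature vector (nearest
# centroid). Stage 2: apply that terrain's slip-vs-slope curve.
# All parameters below are illustrative, not learned from rover data.

CENTROIDS = {"sand": [0.9, 0.2], "gravel": [0.4, 0.7], "soil": [0.2, 0.3]}
SLIP_CURVES = {  # slip fraction as a function of slope angle (degrees)
    "sand":   lambda slope: 1.0 - math.exp(-0.15 * slope),
    "gravel": lambda slope: 1.0 - math.exp(-0.05 * slope),
    "soil":   lambda slope: 1.0 - math.exp(-0.03 * slope),
}

def classify(feature):
    return min(
        CENTROIDS,
        key=lambda t: sum((f - c) ** 2 for f, c in zip(feature, CENTROIDS[t])),
    )

def predict_slip(feature, slope_deg):
    return SLIP_CURVES[classify(feature)](slope_deg)

terrain = classify([0.85, 0.25])   # feature closest to the sand centroid
print(terrain, round(predict_slip([0.85, 0.25], 10.0), 3))
```

The key point the sketch preserves is that slip is predicted remotely, from appearance plus geometry, before the rover ever touches the cell.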

  5. Objective Measures of Visual Function in Papilledema

    PubMed Central

    Moss, Heather E.

    2016-01-01

    Synopsis Visual function is an important parameter to consider when managing patients with papilledema. Though the current standard of care uses standard automated perimetry (SAP) to obtain this information, this test is inherently subjective and prone to patient errors. Objective visual function tests including the visual evoked potential, pattern electroretinogram, photopic negative response of the full field electroretinogram, and pupillary light response have the potential to replace or supplement subjective visual function tests in papilledema management. This article reviews the evidence for use of objective visual function tests to assess visual function in papilledema and discusses future investigations needed to develop them as clinically practical and useful measures for this purpose. PMID:28451649

  6. Force sensor attachable to thin fiberscopes/endoscopes utilizing high elasticity fabric.

    PubMed

    Watanabe, Tetsuyou; Iwai, Takanobu; Fujihira, Yoshinori; Wakako, Lina; Kagawa, Hiroyuki; Yoneyama, Takeshi

    2014-03-12

An endoscope/fiberscope is a minimally invasive tool used for directly observing tissues in areas deep inside the human body where access is limited. However, this tool only yields visual information. If force feedback information were also available, endoscope/fiberscope operators would be able to detect indurated areas that are visually hard to recognize. Furthermore, obtaining such feedback information from tissues in areas where collecting visual information is a challenge would be highly useful. The major obstacle is that such force information is difficult to acquire. This paper presents a novel force sensing system that can be attached to a very thin fiberscope/endoscope. To ensure a small size, high resolution, easy sterilization, and low cost, the proposed force visualization-based system uses a highly elastic material: panty stocking fabric. The paper also presents the methodology for deriving the force value from the captured image. The system has a resolution of less than 0.01 N and sensitivity of greater than 600 pixels/N within the force range of 0-0.2 N.

  7. Development of image processing techniques for applications in flow visualization and analysis

    NASA Technical Reports Server (NTRS)

    Disimile, Peter J.; Shoe, Bridget; Toy, Norman; Savory, Eric; Tahouri, Bahman

    1991-01-01

    A comparison between two flow visualization studies of an axi-symmetric circular jet issuing into still fluid, using two different experimental techniques, is described. In the first case laser induced fluorescence is used to visualize the flow structure, whilst smoke is utilized in the second. Quantitative information was obtained from these visualized flow regimes using two different digital imaging systems. Results are presented of the rate at which the jet expands in the downstream direction and these compare favorably with the more established data.

  8. The four-meter confrontation visual field test.

    PubMed Central

    Kodsi, S R; Younge, B R

    1992-01-01

    The 4-m confrontation visual field test has been successfully used at the Mayo Clinic for many years in addition to the standard 0.5-m confrontation visual field test. The 4-m confrontation visual field test is a test of macular function and can identify small central or paracentral scotomas that the examiner may not find when the patient is tested only at 0.5 m. Also, macular sparing in homonymous hemianopias and quadrantanopias may be identified with the 4-m confrontation visual field test. We recommend use of this confrontation visual field test, in addition to the standard 0.5-m confrontation visual field test, on appropriately selected patients to obtain the most information possible by confrontation visual field tests. PMID:1494829

  9. The four-meter confrontation visual field test.

    PubMed

    Kodsi, S R; Younge, B R

    1992-01-01

    The 4-m confrontation visual field test has been successfully used at the Mayo Clinic for many years in addition to the standard 0.5-m confrontation visual field test. The 4-m confrontation visual field test is a test of macular function and can identify small central or paracentral scotomas that the examiner may not find when the patient is tested only at 0.5 m. Also, macular sparing in homonymous hemianopias and quadrantanopias may be identified with the 4-m confrontation visual field test. We recommend use of this confrontation visual field test, in addition to the standard 0.5-m confrontation visual field test, on appropriately selected patients to obtain the most information possible by confrontation visual field tests.

  10. Problem solving of student with visual impairment related to mathematical literacy problem

    NASA Astrophysics Data System (ADS)

    Pratama, A. R.; Saputro, D. R. S.; Riyadi

    2018-04-01

Students with visual impairment of the totally blind category depend on the senses of touch and hearing to obtain information. In fact, these two senses can receive less than 20% of the available information. Thus, students with visual impairment of the totally blind category must face difficulty in the learning process, including learning mathematics. This study aims to describe the problem-solving process of a student with visual impairment, totally blind category, on mathematical literacy problems based on the Polya phases. The research used a test with problems similar to the mathematical literacy problems in PISA, together with in-depth interviews. The subject of this study was a student with visual impairment, totally blind category. Based on the results of the research, problem-solving related to mathematical literacy based on the Polya phases was quite good. In the understanding-the-problem phase, the student read the problem about twice by brushing the text, assisted three times with information obtained through hearing. In the devising-a-plan phase, the student drew on knowledge and experience gained previously. In the carrying-out-the-plan phase, the student implemented the plan as prepared. In the looking-back phase, the student needed to check the answers three times but was not able to find another way.

  11. Perceptual Span in Oral Reading: The Case of Chinese

    ERIC Educational Resources Information Center

    Pan, Jinger; Yan, Ming; Laubrock, Jochen

    2017-01-01

    The present study explores the perceptual span, that is, the physical extent of the area from which useful visual information is obtained during a single fixation, during oral reading of Chinese sentences. Characters outside a window of legible text were replaced by visually similar characters. Results show that the influence of window size on the…

  12. Functional Vision Observation. Technical Assistance Paper.

    ERIC Educational Resources Information Center

    Florida State Dept. of Education, Tallahassee. Bureau of Education for Exceptional Students.

    Technical assistance is provided concerning documentation of functional vision loss for Florida students with visual impairments. The functional vision observation should obtain enough information for determination of special service eligibility. The observation is designed to supplement information on the medical eye examination, and is conducted…

  13. Acquisition and Visualization Techniques of Human Motion Using Master-Slave System and Haptograph

    NASA Astrophysics Data System (ADS)

    Katsura, Seiichiro; Ohishi, Kiyoshi

Artificial acquisition and reproduction of human sensations are basic technologies of communication engineering. For example, auditory information is obtained by a microphone and reproduced by a speaker. Furthermore, a video camera and a television make it possible to transmit visual sensation by broadcasting. In contrast, since tactile or haptic information is subject to Newton's “law of action and reaction” in the real world, a device that acquires, transmits, and reproduces this information has not been established. From this point of view, real-world haptics is the key technology for future haptic communication engineering. This paper proposes a novel acquisition method for haptic information named the “haptograph”. The haptograph visualizes haptic information much as a photograph visualizes light. Since temporal and spatial analyses are conducted to represent haptic information as a haptograph, it can be recognized and evaluated intuitively. In this paper, the proposed haptograph is applied to the visualization of human motion. It can represent motion characteristics such as an expert's skill or a personal habit. In other words, a personal encyclopedia is attained. Once such a personal encyclopedia is stored in a ubiquitous environment, future human support technology can be developed.

  14. Grammatical Gender and Mental Representation of Object: The Case of Musical Instruments.

    PubMed

    Vuksanović, Jasmina; Bjekić, Jovana; Radivojević, Natalija

    2015-08-01

A body of research shows that grammatical gender, although an arbitrary category, is viewed as a system with its own meaning. However, the question remains to what extent grammatical gender influences the shaping of our notions about objects when both verbal and visual information are available. Two experiments were conducted. The results of Experiment 1 showed that grammatical gender, as a linguistic property of the pseudo-nouns used as names for musical instruments, significantly affects people's representations of these instruments. The purpose of Experiment 2 was to examine how the representation of musical instruments would be shaped in the presence of both language and visual information. The results indicate that when linguistic and visual information co-exist, concepts about the selected instruments are formed from all available information from both sources, suggesting that grammatical gender influences the forming of nonverbal concepts but has no privileged status in the matter.

  15. Automated Identification and Characterization of Secondary & Tertiary gamma’ Precipitates in Nickel-Based Superalloys (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    and intensity information from the EFTEM images. The microstructural statistics obtained from the segmented γ’ precipitates agreed with those of the...is its ability to automate segmentation of precipitates in a reproducible manner for acquiring microstructural statistics that relate to both...were identified using a combination of visual inspection and intensity information from the EFTEM images. The microstructural statistics obtained

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alekseev, I. S.; Ivanov, I. E.; Strelkov, P. S., E-mail: strelkov@fpl.gpi.ru

    A method based on the detection of emission of a dielectric screen with metal microinclusions in open air is applied to visualize the transverse structure of a high-power microwave beam. In contrast to other visualization techniques, the results obtained in this work provide qualitative information not only on the electric field strength, but also on the structure of electric field lines in the microwave beam cross section. The interpretation of the results obtained with this method is confirmed by numerical simulations of the structure of electric field lines in the microwave beam cross section by means of the CARAT code.

  17. Content-based Music Search and Recommendation System

    NASA Astrophysics Data System (ADS)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

Recently, the volume of music data on the Internet has increased rapidly. This has increased the user's cost of finding music data that suits their preferences in such a large data set. We propose a content-based music search and recommendation system. The system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By visualizing the feature space of the music and visualizing the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information that can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system gives the user information obtained from the user profile when searching music data, and information obtained from the feature space of the music when editing the user profile.

  18. Visual classification of medical data using MLP mapping.

    PubMed

    Cağatay Güler, E; Sankur, B; Kahya, Y P; Raudys, S

    1998-05-01

    In this work we discuss the design of a novel non-linear mapping method for visual classification based on multilayer perceptrons (MLP) and assigned class target values. In training the perceptron, one or more target output values for each class in a 2-dimensional space are used. In other words, class membership information is interpreted visually as closeness to target values in a 2D feature space. This mapping is obtained by training the multilayer perceptron (MLP) using class membership information, input data and judiciously chosen target values. Weights are estimated in such a way that each training feature of the corresponding class is forced to be mapped onto the corresponding 2-dimensional target value.
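The assigned-target idea can be sketched with toy data: each class gets a 2-D target point, a mapping is trained so samples land near their class target, and closeness in the plane then doubles as a visual classifier. For brevity this uses a single linear layer trained by gradient descent rather than an MLP, and the samples and targets are invented:

```python
# Map 2-feature samples onto assigned 2-D class targets; classify by
# nearest target in the mapped plane. A linear layer stands in for the
# paper's MLP; data and targets are toy values.

TARGETS = {0: (0.0, 0.0), 1: (1.0, 1.0)}  # assigned 2-D class targets
data = [([0.1, 0.2], 0), ([0.0, 0.3], 0), ([0.9, 1.1], 1), ([1.2, 0.8], 1)]

W = [[0.0, 0.0], [0.0, 0.0]]  # 2x2 weights, trained below

def forward(x):
    return [sum(W[i][j] * x[j] for j in range(2)) for i in range(2)]

for _ in range(500):                      # SGD on squared distance to target
    for x, label in data:
        y = forward(x)
        t = TARGETS[label]
        for i in range(2):
            for j in range(2):
                W[i][j] -= 0.1 * 2 * (y[i] - t[i]) * x[j]

def classify(x):
    y = forward(x)
    return min(TARGETS, key=lambda c: sum((yi - ti) ** 2
                                          for yi, ti in zip(y, TARGETS[c])))

print([classify(x) for x, _ in data])  # each sample lands near its target
```

The mapped 2-D points can be plotted directly, which is exactly what makes class membership visually inspectable in this scheme.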

  19. Recognition of visual stimuli and memory for spatial context in schizophrenic patients and healthy volunteers.

    PubMed

    Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh

    2004-11-01

    Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that recognition deficit in patients was similar for verbal and visual material. In both schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women similarly benefited from color to reduce spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regards to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.

  20. Fusion of infrared and visible images based on saliency scale-space in frequency domain

    NASA Astrophysics Data System (ADS)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2015-12-01

A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. The focus of human attention is directed towards salient targets, which carry the most important information in the image. For the given registered infrared and visible images, firstly, visual features are extracted to obtain the input hypercomplex matrix. Secondly, the Hypercomplex Fourier Transform (HFT) is used to obtain the salient regions of the infrared and visible images respectively: the amplitude spectrum of the input hypercomplex matrix is convolved with a low-pass Gaussian kernel of an appropriate scale, which is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2D signal using the original phase and the amplitude spectrum, filtered at a scale selected by minimizing saliency map entropy. Thirdly, the salient regions are fused with adaptive weighting fusion rules, and the non-salient regions are fused with a rule based on region energy (RE) and region sharpness (RS); the fused image is then obtained. Experimental results show that the presented algorithm preserves the rich spectral information of the visible image and effectively captures the thermal target information at different scales of the infrared image.
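The region-energy (RE) part of the rule for non-salient regions can be sketched as an energy-proportional blend of corresponding blocks (block values are invented; the paper additionally combines RE with region sharpness):

```python
# Region-energy weighting sketch: each source block contributes in
# proportion to its local energy (sum of squared intensities), so the
# more informative block dominates the fused result.

def region_energy(block):
    return sum(v * v for row in block for v in row)

def fuse_blocks(block_ir, block_vis):
    e_ir, e_vis = region_energy(block_ir), region_energy(block_vis)
    w_ir = e_ir / (e_ir + e_vis)
    w_vis = 1.0 - w_ir
    return [
        [w_ir * a + w_vis * b for a, b in zip(ra, rb)]
        for ra, rb in zip(block_ir, block_vis)
    ]

ir  = [[10, 10], [10, 10]]     # low-energy infrared block
vis = [[30, 30], [30, 30]]     # high-energy visible block
fused = fuse_blocks(ir, vis)
print(fused[0][0])  # closer to 30: the higher-energy source dominates
```

Here the visible block carries 90% of the combined energy, so the fused value lands at 28.0, near the visible intensity.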

  1. NEON VISUALIZATION ENVIRONMENT

    DTIC Science & Technology

    2017-07-28

    STINFO COPY AIR FORCE RESEARCH LABORATORY INFORMATION DIRECTORATE AFRL-RI-RS-TR-2017-143  UNITED STATES AIR FORCE  ROME, NY 13441 AIR FORCE...report is available to the general public, including foreign nationals. Copies may be obtained from the Defense Technical Information Center (DTIC) (http...FOR THE CHIEF ENGINEER: / S / / S / PETER A. JEDRYSIK JULIE BRICHACEK Work Unit Manager Chief, Information Systems Division Information

  2. MRIVIEW: An interactive computational tool for investigation of brain structure and function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranken, D.; George, J.

    MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.

  3. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    NASA Astrophysics Data System (ADS)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, and color information is not sufficiently considered. Color is in fact one of the important factors affecting human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are consistent with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. Specifically, in the training phase, a feature detector is created based on NMF with manifold regularization that takes color information into account, which not only allows a parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected according to human visual attention, and feature vectors are extracted using the feature detector. The feature similarity index is then calculated, and the parts-based manifold color feature energy (PMCFE) for each view is defined from the color feature vectors. The final quality score is obtained through a binocular combination based on PMCFE. Experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method achieves much higher consistency with subjective evaluations than state-of-the-art SIQA methods.
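
    The parts-based factorization step can be sketched with plain NMF (without the paper's manifold regularization, which scikit-learn does not provide out of the box). The synthetic patch data, component count, and variable names below are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# toy "patch" data: 200 nonnegative 8x8 patches flattened to 64-dim rows
patches = rng.random((200, 64))

# plain NMF as a simplified stand-in for the paper's
# manifold-regularized NMF feature detector
model = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(patches)   # per-patch activations
H = model.components_              # parts-based basis (the "feature detector")

# encode a new patch with the learned detector
new_patch = rng.random((1, 64))
code = model.transform(new_patch)
print(W.shape, H.shape, code.shape)
```

    In the quality estimation phase, feature vectors like `code` would be compared between reference and distorted views to form the similarity index and PMCFE.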

  4. Tools for Visualizing HIV in Cure Research.

    PubMed

    Niessl, Julia; Baxter, Amy E; Kaufmann, Daniel E

    2018-02-01

    The long-lived HIV reservoir remains a major obstacle for an HIV cure. Current techniques to analyze this reservoir are generally population-based. We highlight recent developments in methods visualizing HIV, which offer a different, complementary view, and provide indispensable information for cure strategy development. Recent advances in fluorescence in situ hybridization techniques enabled key developments in reservoir visualization. Flow cytometric detection of HIV mRNAs, concurrently with proteins, provides a high-throughput approach to study the reservoir on a single-cell level. On a tissue level, key spatial information can be obtained detecting viral RNA and DNA in situ by fluorescence microscopy. At total-body level, advancements in non-invasive immuno-positron emission tomography (PET) detection of HIV proteins may allow an encompassing view of HIV reservoir sites. HIV imaging approaches provide important, complementary information regarding the size, phenotype, and localization of the HIV reservoir. Visualizing the reservoir may contribute to the design, assessment, and monitoring of HIV cure strategies in vitro and in vivo.

  5. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained by fusing infrared and low-light-level images, so that they contain the information of both. Fusion images help observers understand multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, perception of the scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain rich natural information of the scenes.
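
    For context, the kind of simple channel-mapping fusion that the perception-based method improves upon can be sketched as below. The channel assignment is one common false-color convention, not the authors' mapping, and the function name is hypothetical.

```python
import numpy as np

def naive_color_fusion(lowlight, ir):
    """Baseline false-color fusion of a low-light-level image and an
    infrared image: the visible band drives green, IR drives red, and
    the clipped difference fills blue, giving an RGB image in [0, 1].
    Simple fusions like this can lose inconspicuous targets, which is
    the problem the perception-based method addresses."""
    ll = lowlight.astype(float) / 255.0
    irn = ir.astype(float) / 255.0
    return np.stack([irn, ll, np.clip(ll - irn, 0.0, 1.0)], axis=-1)
```
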

  6. The role of vision in auditory distance perception.

    PubMed

    Calcagno, Esteban R; Abregú, Ezequiel L; Eguía, Manuel C; Vergara, Ramiro

    2012-01-01

    In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different natures and reducing the variability of responses. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but few studies have examined the influence of vision on auditory distance perception. In general, the data obtained from these studies are contradictory and do not completely define the way in which visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans are performed, including and excluding visual cues. The results show that the apparent distance of the source is affected by the presence of visual information and that subjects can store in memory a representation of the environment that later improves the perception of distance.

  7. A study of the laminar separation bubble on an airfoil at low Reynolds numbers using flow visualization techniques

    NASA Technical Reports Server (NTRS)

    Schmidt, Gordon S.; Mueller, Thomas J.

    1987-01-01

    The use of flow visualization to study separation bubbles is evaluated. The wind tunnel, two NACA 66(3)-018 airfoil models, and the kerosene vapor, titanium tetrachloride, and surface flow visualization techniques are described. The application of the three visualization techniques to the two airfoil models reveals that the smoke and vapor techniques provide data on the location of laminar separation and the onset of transition, while the surface method produces information about the location of turbulent boundary layer separation. The data obtained with the three flow visualization techniques are compared to pressure distribution data, and good correlation is found. It is noted that flow visualization is an effective technique for examining separation bubbles.

  8. Identifying solutions to medication adherence in the visually impaired elderly.

    PubMed

    Smith, Miranda; Bailey, Trista

    2014-02-01

    Adults older than 65 years of age with vision impairment are more likely to have difficulty managing medications compared with people having normal vision. This patient population has difficulty reading medication information and may take the wrong medication or incorrect doses of medication, resulting in serious consequences, including overdose or inadequate treatment of health problems. Visually impaired patients report increased anxiety related to medication management and must rely on others to obtain necessary drug information. Pharmacists have a unique opportunity to pursue accurate medication adherence in this special population. This article reviews literature illustrating how severe medication mismanagement can occur in the visually impaired elderly and presents resources and solutions for pharmacists to take a larger role in adherence management in this population.

  9. Detecting delay in visual feedback of an action as a monitor of self recognition.

    PubMed

    Hoover, Adria E N; Harris, Laurence R

    2012-10-01

    How do we distinguish "self" from "other"? The correlation between willing an action and seeing it occur is an important cue. We exploited the fact that this correlation needs to occur within a restricted temporal window in order to obtain a quantitative assessment of when a body part is identified as "self". We measured the threshold and sensitivity (d') for detecting a delay between movements of the finger (of both the dominant and non-dominant hands) and visual feedback as seen from four visual perspectives (the natural view, and mirror-reversed and/or inverted views). Each trial consisted of one presentation with minimum delay and another with a delay of between 33 and 150 ms. Participants indicated which presentation contained the delayed view. We varied the amount of efference copy available for this task by comparing performances for discrete movements and continuous movements. Discrete movements are associated with a stronger efference copy. Sensitivity to detect asynchrony between visual and proprioceptive information was significantly higher when movements were viewed from a "plausible" self perspective compared with when the view was reversed or inverted. Further, we found differences in performance between dominant and non-dominant hand finger movements across the continuous and single movements. Performance varied with the viewpoint from which the visual feedback was presented and on the efferent component such that optimal performance was obtained when the presentation was in the normal natural orientation and clear efferent information was available. Variations in sensitivity to visual/non-visual temporal incongruence with the viewpoint in which a movement is seen may help determine the arrangement of the underlying visual representation of the body.
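
    For readers implementing a similar paradigm, the sensitivity measure d′ used above can be computed from hit and false-alarm counts as in the sketch below. This is a standard signal-detection formula, not the authors' analysis code; the log-linear correction and the example counts are illustrative assumptions.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction so rates never reach 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(h) - z(fa)

# hypothetical delay-detection block: 35 hits / 5 misses,
# 8 false alarms / 32 correct rejections
print(round(d_prime(35, 5, 8, 32), 2))
```
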

  10. Visualizing common operating picture of critical infrastructure

    NASA Astrophysics Data System (ADS)

    Rummukainen, Lauri; Oksama, Lauri; Timonen, Jussi; Vankka, Jouko

    2014-05-01

    This paper presents a solution for visualizing the common operating picture (COP) of critical infrastructure (CI). The purpose is to improve the situational awareness (SA) of the strategic-level actor and the source system operator in order to support decision making. The information is obtained through the Situational Awareness of Critical Infrastructure and Networks (SACIN) framework. The system consists of an agent-based solution for gathering, storing, and analyzing the information; this paper presents its user interface (UI). The UI consists of multiple views visualizing information from the CI in different ways. CI actors are categorized into 11 separate sectors, and events are used to present meaningful incidents. Past and current states, together with geographical distribution and logical dependencies, are presented to the user. Current states are visualized as segmented circles representing event categories. The geographical distribution of assets is displayed with a well-known map tool. Logical dependencies are presented in a simple directed graph, and users also have a timeline to review past events. The objective of the UI is to provide an easily understandable overview of CI status. Testing methods such as a walkthrough, an informal walkthrough, and the Situation Awareness Global Assessment Technique (SAGAT) were therefore used in the evaluation of the UI. Results showed that users were able to gain an understanding of the current state of the CI, and the usability of the UI was rated as good. In particular, the designated display for the CI overview and the timeline were found to be efficient.

  11. Multi-scale image segmentation method with visual saliency constraints and its application

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis methods have many advantages over pixel-based methods, making them a current research hotspot. Obtaining image objects by multi-scale image segmentation is essential for object-based image analysis. The currently popular image segmentation methods mostly share a bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, in information extraction, target recognition, and other applications, image targets are not equally important: some specific targets, or target groups with particular features, deserve more attention than others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weights, in which each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, owing to the visual saliency constraint, the balance between local and macroscopic characteristics can be well controlled during segmentation for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the constraint for non-salient background areas.
    Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and that it gives priority control to the salient objects of interest. The method has been applied to image quality evaluation, scattered residential area extraction, and sparse forest extraction to verify its validity, with good results in all applications.
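
    The idea of using saliency as a merging constraint can be sketched with a toy cost function. This is illustrative only: the abstract does not specify the paper's actual merging rule, and the function name, weighting form, and `lam` parameter are assumptions.

```python
import numpy as np

def merge_cost(vals_a, vals_b, sal_a, sal_b, lam=2.0):
    """Toy saliency-weighted merge cost: the heterogeneity of the
    candidate merged region is discounted by its mean saliency.
    Inside salient areas the cost drops, so locally different pixels
    of one salient object still merge, while homogeneity is enforced
    strictly in non-salient background."""
    merged = np.concatenate([vals_a, vals_b])
    heterogeneity = float(merged.std())
    mean_sal = float(np.concatenate([sal_a, sal_b]).mean())
    return heterogeneity / (1.0 + lam * mean_sal)
```

    A bottom-up segmenter would repeatedly merge the neighboring region pair with the lowest cost until the cost exceeds a scale-dependent threshold.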

  12. On the effects of multimodal information integration in multitasking.

    PubMed

    Stock, Ann-Kathrin; Gohil, Krutika; Huster, René J; Beste, Christian

    2017-07-07

    There have recently been considerable advances in our understanding of the neuronal mechanisms underlying multitasking, but the role of multimodal integration for this faculty has remained rather unclear. We examined this issue by comparing different modality combinations in a multitasking (stop-change) paradigm. In-depth neurophysiological analyses of event-related potentials (ERPs) were conducted to complement the obtained behavioral data. Specifically, we applied signal decomposition using second order blind identification (SOBI) to the multi-subject ERP data and source localization. We found that both general multimodal information integration and modality-specific aspects (potentially related to task difficulty) modulate behavioral performance and associated neurophysiological correlates. Simultaneous multimodal input generally increased early attentional processing of visual stimuli (i.e. P1 and N1 amplitudes) as well as measures of cognitive effort and conflict (i.e. central P3 amplitudes). Yet, tactile-visual input caused larger impairments in multitasking than audio-visual input. General aspects of multimodal information integration modulated the activity in the premotor cortex (BA 6) as well as different visual association areas concerned with the integration of visual information with input from other modalities (BA 19, BA 21, BA 37). On top of this, differences in the specific combination of modalities also affected performance and measures of conflict/effort originating in prefrontal regions (BA 6).

  13. Kinetic analysis of downward step posture according to the foothold heights and visual information blockage in cargo truck

    PubMed Central

    Hyun, Seung-Hyun; Ryew, Che-Cheong

    2018-01-01

    This study compared and analyzed kinetic variables during downward foot placement from a 25-t cargo truck according to foothold height and interruption of visual information. Skilled adult male drivers (n=10) with over 1 year of cargo truck driving experience participated in the experiment. The results obtained from cinematographic and ground reaction force data during downward foot placement were as follows. First, leg stiffness, peak vertical force (PVF), and loading rate showed significant differences with increasing foothold height; that is, interrupted visual information produced greater impulse force than uninterrupted visual information. Second, center of pressure (COP) variables did not differ with interrupted visual information, but anterior-posterior COP and COP area showed an increasing tendency with increasing foothold height. Third, the dynamic posture stability indices (overall, medial-lateral, anterior-posterior, and vertical) showed significant differences with increasing foothold height; that is, interrupted visual information yielded a lower index than uninterrupted visual information. Therefore, it should be possible to successfully control leg stiffness, loading rate, and PVF by preparing an estimate of air-phase time and impulse force through habitual cognition and confirmation at landing when stepping down from a cargo truck. Identifying these potential differences may enable clinicians to assess the type of injury and design specific exercise rehabilitation protocols. PMID:29740569
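
    The PVF and loading-rate measures referred to above can be sketched from a vertical ground-reaction-force trace as below. Loading rate is defined here simply as PVF divided by time-to-peak, one common simplified convention; the function name and the synthetic landing trace are hypothetical.

```python
import numpy as np

def grf_metrics(force, dt, body_weight):
    """Peak vertical force (PVF, in units of body weight) and loading
    rate (BW/s) from a sampled vertical ground-reaction-force trace."""
    i_peak = int(np.argmax(force))
    pvf = force[i_peak] / body_weight
    time_to_peak = i_peak * dt
    loading_rate = pvf / time_to_peak if time_to_peak > 0 else float("inf")
    return pvf, loading_rate

# synthetic landing: force peaks at 1800 N at 50 ms; body weight 750 N
t = np.arange(0.0, 0.2, 0.001)
force = 1800.0 * np.exp(-((t - 0.05) ** 2) / (2 * 0.02 ** 2))
pvf, lr = grf_metrics(force, 0.001, 750.0)
print(round(pvf, 2), round(lr, 1))  # PVF in BW, loading rate in BW/s
```
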

  14. Realistic tissue visualization using photoacoustic image

    NASA Astrophysics Data System (ADS)

    Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong

    2018-02-01

    Visualization methods are very important in biomedical imaging. As a technology for understanding life, biomedical imaging has the unique advantage of providing the most intuitive information in the image, and this advantage can be greatly enhanced by choosing an appropriate visualization method. This is more complicated for volumetric data. Volume data have the advantage of containing 3D spatial information, but the data themselves cannot directly convey their potential value: because images are always displayed in 2D space, visualization is the key that creates the real value of volume data. However, visualizing 3D data requires complicated algorithms and a high computational burden, so specialized algorithms and computational optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information of tissue that is close to real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray casting method and processed color while computing shader parameters. In the rendering results, we succeeded in obtaining realistic texture from the photoacoustic data: rays reflected at the surface were visualized in white, and color returned from deep tissue was visualized in red, like skin tissue. We also implemented the algorithm in CUDA within an OpenGL environment for real-time interactive imaging.
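
    The depth-dependent color transfer function can be illustrated with a minimal sketch: shallow samples render whitish, deeper samples shift toward red. The specific color mapping and function name are assumptions for illustration, not the authors' shader.

```python
import numpy as np

def depth_color_transfer(intensity, depth, max_depth=10.0):
    """Illustrative depth-dependent color transfer function: returns
    RGBA per sample, with green/blue fading as depth increases so
    that deep tissue appears red while surface signal stays white."""
    d = np.clip(depth / max_depth, 0.0, 1.0)
    r = intensity              # red carries signal at all depths
    g = intensity * (1.0 - d)  # green fades with depth
    b = intensity * (1.0 - d)  # blue fades with depth
    a = intensity              # opacity follows signal strength
    return np.stack([r, g, b, a], axis=-1)
```

    In a ray caster, each sample's RGBA from such a function would be composited front-to-back along the ray.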

  15. Design and application of pulse information acquisition and analysis system with dynamic recognition in traditional Chinese medicine.

    PubMed

    Zhang, Jian; Niu, Xin; Yang, Xue-zhi; Zhu, Qing-wen; Li, Hai-yan; Wang, Xuan; Zhang, Zhi-guo; Sha, Hong

    2014-09-01

    To design a pulse information acquisition and analysis system, covering the parameters of pulse position, pulse number, pulse shape, and pulse force, with a dynamic recognition function, and to research the digitalization and visualization of common cardiovascular mechanisms of the single pulse. Flexible sensors were used to capture the radial artery pressure pulse wave, and high-frequency B-mode ultrasound scanning was used to synchronously obtain information on radial extension and axial movement in the form of dynamic images; the gathered information was then analyzed and processed together with the ECG. Finally, a pulse information acquisition and analysis system with visualization and dynamic recognition was established and applied to ten healthy adults. The new system overcomes the disadvantages of the one-dimensional pulse information acquisition and processing methods commonly used in current research on pulse diagnosis in traditional Chinese medicine, and initiates a new approach to pulse diagnosis featuring dynamic recognition, two-dimensional information acquisition, multiplex signal combination, and deep data mining. The newly developed system can translate pulse signals into digital, visual, and measurable motion information of the vessel.

  16. The role of shared visual information for joint action coordination.

    PubMed

    Vesper, Cordula; Schmitz, Laura; Safra, Lou; Sebanz, Natalie; Knoblich, Günther

    2016-08-01

    Previous research has identified a number of coordination processes that enable people to perform joint actions. But what determines which coordination processes joint action partners rely on in a given situation? The present study tested whether varying the shared visual information available to co-actors can trigger a shift in coordination processes. Pairs of participants performed a movement task that required them to synchronously arrive at a target from separate starting locations. When participants in a pair received only auditory feedback about the time their partner reached the target they held their movement duration constant to facilitate coordination. When they received additional visual information about each other's movements they switched to a fundamentally different coordination process, exaggerating the curvature of their movements to communicate their arrival time. These findings indicate that the availability of shared perceptual information is a major factor in determining how individuals coordinate their actions to obtain joint outcomes.

  17. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.

    PubMed

    Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos

    2018-03-25

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception, one of the most important being vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained, and the need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in a single computing step is proposed. The goal of this system is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of neuro-fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) at the University Carlos III of Madrid.


  19. Combining spectral material properties in the infrared and the visible spectral range for qualification and nondestructive evaluation of components

    NASA Astrophysics Data System (ADS)

    Eisler, K.; Goldammer, M.; Rothenfusser, M.; Arnold, W.; Homma, C.

    2012-05-01

    Spectrally selective thermography with infrared filters can be used to determine or distinguish materials, such as contamination on a metallic component. With additional visual information, the indications in the IR signal can be selectively accentuated or suppressed for easier evaluation of passive and active thermography measurements. For flash thermography, the detected IR signal between 3.4 and 5.1 μm is analyzed with regard to the spectral material information. The presented hybrid camera uses beam overlapping to obtain combined images in both the infrared and the visible range.

  20. Raman Microscopy: A Noninvasive Method to Visualize the Localizations of Biomolecules in the Cornea.

    PubMed

    Kaji, Yuichi; Akiyama, Toshihiro; Segawa, Hiroki; Oshika, Tetsuro; Kano, Hideaki

    2017-11-01

    In vivo and in situ visualization of biomolecules without pretreatment will be important for diagnosis and treatment of ocular disorders in the future. Recently, multiphoton microscopy, based on the nonlinear interactions between molecules and photons, has been applied to reveal the localizations of various molecules in tissues. We aimed to use multimodal multiphoton microscopy to visualize the localizations of specific biomolecules in rat corneas. Multiphoton images of the corneas were obtained from nonlinear signals of coherent anti-Stokes Raman scattering, third-order sum frequency generation, and second-harmonic generation. The localizations of the adhesion complex-containing basement membrane and Bowman layer were clearly visible in the third-order sum frequency generation images. The fine structure of type I collagen was observed in the corneal stroma in the second-harmonic generation images. The localizations of lipids, proteins, and nucleic acids (DNA/RNA) were obtained in the coherent anti-Stokes Raman scattering images. Imaging technologies have progressed significantly and been applied in medical fields. Optical coherence tomography and confocal microscopy are widely used but do not provide information on the molecular structure of the cornea. By contrast, multiphoton microscopy provides information on the molecular structure of living tissues. Using this technique, we successfully visualized the localizations of various biomolecules, including lipids, proteins, and nucleic acids, in the cornea. We speculate that multiphoton microscopy will provide essential information on the physiological and pathological conditions of the cornea, as well as molecular localizations in tissues without pretreatment.

  1. From Seeing to Saying: Perceiving, Planning, Producing

    ERIC Educational Resources Information Center

    Kuchinsky, Stefanie Ellen

    2009-01-01

    Given the amount of visual information in a scene, how do speakers determine what to talk about first? One hypothesis is that speakers start talking about what has attentional priority, while another is that speakers first extract the scene gist, using the obtained relational information to generate a rudimentary sentence plan before retrieving…

  2. The Prediction, from Infancy, of Adult IQ and Achievement

    ERIC Educational Resources Information Center

    Fagan, Joseph F.; Holland, Cynthia R.; Wheeler, Karyn

    2007-01-01

    Young adults, originally tested as infants for their ability to process information as measured by selective attention to novelty (an operational definition of visual recognition memory), were revisited. A current estimate of IQ was obtained as well as a measure of academic achievement. Information processing ability at 6-12 months was predictive…

  3. Efficient visual object and word recognition relies on high spatial frequency coding in the left posterior fusiform gyrus: evidence from a case-series of patients with ventral occipito-temporal cortex damage.

    PubMed

    Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A

    2013-11-01

    Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on the left pFG: 1) exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and in object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.
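
    The distinction between high and low spatial frequency content can be made concrete with a simple Gaussian decomposition, a standard image-processing sketch rather than the stimulus-manipulation code used in the study; the function name and `sigma` value are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(img, sigma=2.0):
    """Split an image into a low-spatial-frequency component (Gaussian
    blur) and a high-spatial-frequency residual; the residual carries
    the fine-grained detail that high-acuity foveal coding supports."""
    low = gaussian_filter(img.astype(float), sigma)
    high = img.astype(float) - low
    return low, high
```

    By construction the two components sum back to the original image, so sensitivity to each band can be probed independently.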

  4. The Characteristics and Limits of Rapid Visual Categorization

    PubMed Central

    Fabre-Thorpe, Michèle

    2011-01-01

    Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization was relying on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter will review some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time consuming basic categorizations. Finally we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity. 
PMID:22007180

  5. Use of Context in Video Processing

    NASA Astrophysics Data System (ADS)

    Wu, Chen; Aghajan, Hamid

    Interpreting an event or a scene based on visual data often requires additional contextual information. Contextual information may be obtained from different sources. In this chapter, we discuss two broad categories of contextual sources: environmental context and user-centric context. Environmental context refers to information derived from domain knowledge or from concurrently sensed effects in the area of operation. User-centric context refers to information obtained and accumulated from the user. Both types of context can include static or dynamic contextual elements. Examples from a smart home environment are presented to illustrate how different types of contextual data can be applied to aid the decision-making process.

  6. Optical hiding with visual cryptography

    NASA Astrophysics Data System (ADS)

    Shi, Yishi; Yang, Xiubo

    2017-11-01

    We propose an optical hiding method based on visual cryptography. In the hiding process, we convert the secret information into a set of fabricated phase-keys, which are completely independent of each other, proof against intensity detection and covered by images, leading to high security. During the extraction process, the covered phase-keys are illuminated with laser beams and then incoherently superimposed, so that the hidden information is extracted directly by human vision, without complicated optical implementations or any additional computation, making extraction convenient. Moreover, the phase-keys are manufactured as diffractive optical elements that are robust to attacks such as blocking and phase noise. Optical experiments verify that high security, easy extraction and strong robustness are all obtainable in visual-cryptography-based optical hiding.
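
    The stacking-based extraction above follows the general visual-cryptography idea. As a hedged illustration, the sketch below implements the classic (2,2) Naor-Shamir scheme in plain Python, not the authors' optical phase-key realization: each secret pixel is expanded into two subpixels per share, and superimposing transparencies is modelled as a pixelwise OR.

```python
import random

# Minimal (2,2) visual cryptography sketch (classic Naor-Shamir scheme,
# not the optical phase-key construction described in the abstract).
# 1 = opaque/black subpixel; stacking transparencies = pixelwise OR.
PATTERNS = [(0, 1), (1, 0)]

def make_shares(secret, rng=random):
    """secret: list of 0/1 pixels -> two share lists of subpixel pairs."""
    share1, share2 = [], []
    for pixel in secret:
        p = rng.choice(PATTERNS)
        share1.append(p)
        # white pixel: identical patterns; black pixel: complementary ones
        share2.append(p if pixel == 0 else tuple(1 - s for s in p))
    return share1, share2

def stack(share1, share2):
    """Superimpose the transparencies: OR the subpixels."""
    return [tuple(a | b for a, b in zip(p1, p2))
            for p1, p2 in zip(share1, share2)]

secret = [0, 1, 1, 0]
s1, s2 = make_shares(secret)
stacked = stack(s1, s2)
# a black secret pixel stacks to (1, 1); a white one keeps one clear subpixel
```

    Each share on its own is a uniformly random pattern, so it leaks nothing about the secret; only the superposition reveals the contrast difference.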

  7. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    PubMed Central

    2014-01-01

    Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. Motor information is coded in egocentric coordinates obtained from an allocentric representation of space (in terms of disparity), which is in turn generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
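
    The phase-difference idea behind the disparity module can be sketched numerically. The toy example below is an assumption-laden 1D stand-in for the paper's bank of Gabor filters (illustrative frequency and bandwidth values): the shift between the left and right signals is estimated from the phase difference of a single complex Gabor response, d ≈ Δφ/ω.

```python
import math, cmath

# Toy phase-based disparity estimation with one 1D complex Gabor filter.
# Parameter values (freq, sigma) are illustrative assumptions.
def gabor_response(signal, center, freq, sigma=8.0):
    """Complex Gabor response of `signal` at index `center`."""
    resp = 0j
    for i, v in enumerate(signal):
        x = i - center
        resp += v * math.exp(-x * x / (2 * sigma ** 2)) * cmath.exp(-1j * freq * x)
    return resp

def phase_disparity(left, right, center, freq):
    """Disparity estimate from the left/right local phase difference."""
    dphi = cmath.phase(gabor_response(left, center, freq) /
                       gabor_response(right, center, freq))
    return dphi / freq

# A sinusoidal pattern shifted by 3 samples between the two 'eyes'
freq = 2 * math.pi / 16
left = [math.sin(freq * i) for i in range(64)]
right = [math.sin(freq * (i - 3)) for i in range(64)]
d = phase_disparity(left, right, 32, freq)   # close to 3
```

    In a full system this estimate is computed at every pixel and over several filter frequencies, and the per-frequency estimates are combined to resolve phase wrapping.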

  8. Gradient-based multiresolution image fusion.

    PubMed

    Petrović, Vladimir S; Xydeas, Costas S

    2004-02-01

    A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
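
    A single-resolution sketch of fusion in the gradient-map domain may clarify the idea: each input is represented as a per-pixel gradient map, and the fused map keeps, at each pixel, the gradient with the larger magnitude. The 'choose-max' rule and the forward differences are simplifying assumptions, not the paper's full multiresolution fuse-then-decompose architecture.

```python
# Single-level sketch of fusion in a gradient-map domain (simplified
# stand-in for the multiresolution scheme described in the abstract).
def gradients(img):
    """Forward-difference (dx, dy) gradient map of a 2D list of numbers."""
    h, w = len(img), len(img[0])
    return [[(img[y][min(x + 1, w - 1)] - img[y][x],
              img[min(y + 1, h - 1)][x] - img[y][x])
             for x in range(w)] for y in range(h)]

def fuse_gradients(ga, gb):
    """Per-pixel 'choose max magnitude' combination of two gradient maps."""
    def mag2(g):
        return g[0] * g[0] + g[1] * g[1]
    return [[a if mag2(a) >= mag2(b) else b for a, b in zip(ra, rb)]
            for ra, rb in zip(ga, gb)]

a = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # a strong feature in image A
b = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # flat image B
fused = fuse_gradients(gradients(a), gradients(b))  # keeps A's edges
```

    The fused gradient map would then be fed to a reconstruction stage (in the paper, a wavelet-style pyramid reconstruction) to produce the fused image.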

  9. Research for the design of visual fatigue based on the computer visual communication

    NASA Astrophysics Data System (ADS)

    Deng, Hu-Bin; Ding, Bao-min

    2013-03-01

    With the rapid development of computer networks, network communication plays an increasingly important role in social, economic, and political life. Computer network communication reaches the public through modern media and, by way of visual communication, affects their emotions, spirit, careers, and other aspects of life. Its rapid growth, however, has also brought problems: when a design fails to express its information through a sufficiently refined form, it not only conveys the wrong message but also causes physical and psychological fatigue in the audience, the so-called visual fatigue. In order to reduce this fatigue and let the audience obtain the most useful information in the shortest time when using a computer, this article gives a detailed account of its causes, proposes effective solutions, explains them through specific examples, and discusses the development prospects of visual communication in future computer design.

  10. Yet More Visualized JAMSTEC Cruise and Dive Information

    NASA Astrophysics Data System (ADS)

    Tomiyama, T.; Hase, H.; Fukuda, K.; Saito, H.; Kayo, M.; Matsuda, S.; Azuma, S.

    2014-12-01

    Every year, JAMSTEC performs about a hundred research cruises and numerous dive surveys using its research vessels and submersibles. JAMSTEC provides data and samples obtained during these cruises and dives to international users through a series of data sites on the Internet. The "DARWIN (http://www.godac.jamstec.go.jp/darwin/e)" data site disseminates cruise and dive information. On DARWIN, users can search for cruises and dives of interest with a combination search form or an interactive tree menu, and find lists of observation data as well as links to surrounding databases. The document catalog, physical sample databases, and visual archive of dive surveys (e.g. in http://www.godac.jamstec.go.jp/jmedia/portal/e) are directly accessible from the lists. In 2014, DARWIN received an update, arranged mainly to enable on-demand data visualization. Using login users' functions, users can put listed data items into a virtual basket and then trim, plot and download the data. The visualization tools help users to quickly grasp the quality and characteristics of observation data. Meanwhile, JAMSTEC launched a new data site named "JDIVES (http://www.godac.jamstec.go.jp/jdives/e)" to visualize data and sample information obtained by dive surveys. JDIVES shows tracks of dive surveys on the Google Earth Plugin and diagrams of deep-sea environmental data such as temperature, salinity, and depth. Submersible camera images and links to associated databases are placed along the dive tracks. The JDIVES interface enables users to perform virtual dive surveys, which can help users to understand the local geometry of dive spots and the geological settings of associated data and samples. It is not easy for individual researchers to organize the huge amount of information recovered from each cruise and dive. The improved visibility and accessibility of JAMSTEC databases are advantageous not only for secondary users, but also for on-board researchers themselves.

  11. Forecasting and visualization of wildfires in a 3D geographical information system

    NASA Astrophysics Data System (ADS)

    Castrillón, M.; Jorge, P. A.; López, I. J.; Macías, A.; Martín, D.; Nebot, R. J.; Sabbagh, I.; Quintana, F. M.; Sánchez, J.; Sánchez, A. J.; Suárez, J. P.; Trujillo, A.

    2011-03-01

    This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high-performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows simultaneous connections of several users for monitoring a real wildfire event. The system is able to make a realistic composition of what is really happening in the area of the wildfire with dynamic 3D objects and the location of human and material resources in real time, providing a new perspective for analyzing the wildfire information. The user can simulate and visualize the propagation of a fire on the terrain, integrating spatial information on topography and vegetation types with weather and wind data. The application communicates with a remote web service that is in charge of the simulation task. The user may specify several parameters through a friendly interface before the application sends the information to the remote server responsible for carrying out the wildfire forecast using the FARSITE simulation model. During the process, the server connects to different external resources to obtain up-to-date meteorological data. The client application implements a realistic 3D visualization of the fire evolution on the landscape. A Level Of Detail (LOD) strategy contributes to improving the performance of the visualization system.

  12. Direct-location versus verbal report methods for measuring auditory distance perception in the far field.

    PubMed

    Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O

    2018-06-01

    In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL), since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first compared the perceived distances obtained using either the CMDL device or verbal report (VR), which is the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker that were found to be highly accurate. We then asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same responses as Experiment 1 but with the methods interleaved, showing a weak, but complex, mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation for distances over 2 m.

  13. The use of contact lenses in the civil airman population.

    DOT National Transportation Integrated Search

    1990-09-01

    Federal Aviation Regulations permit the routine use of contact lenses by civilian pilots to satisfy the distant visual acuity requirements for obtaining medical certificates. Specific information identifying the prevalence of both defective distant v...

  14. Implementation of dictionary pair learning algorithm for image quality improvement

    NASA Astrophysics Data System (ADS)

    Vimala, C.; Aruna Priya, P.

    2018-04-01

    This paper proposes image denoising based on a dictionary pair learning algorithm. Visual information transmitted in the form of digital images has become a major method of communication in the modern age, but the image obtained after transmission is often corrupted with noise. The received image needs processing before it can be used in applications. Image denoising involves the manipulation of the image data to produce a visually high-quality image.

  15. Visualizing deep neural network by alternately image blurring and deblurring.

    PubMed

    Wang, Feng; Liu, Haijun; Cheng, Jian

    2018-01-01

    Visualization from trained deep neural networks has drawn massive public attention in recent years. One visualization approach is to train images that maximize the activation of specific neurons. However, directly maximizing the activation would lead to unrecognizable images, which cannot provide any meaningful information. In this paper, we introduce a simple but effective technique to constrain the optimization route of the visualization. By adding two totally inverse transformations, image blurring and deblurring, to the optimization procedure, recognizable images can be created. Our algorithm is good at extracting the details in the images, which are usually filtered out by previous methods in the visualizations. Extensive experiments on AlexNet, VGGNet and GoogLeNet illustrate that we can better understand the neural networks utilizing the knowledge obtained by the visualization. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization.

    PubMed

    Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier

    2013-10-28

    The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in the reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in the periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (the ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in the periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results show that the external spatial coordinates of multisensory events relative to an observer's body (e.g., the eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
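
    The "optimal combination according to spatial reliability" appealed to above is usually formalized as maximum-likelihood cue integration, where each cue is weighted by its inverse variance. A minimal sketch with made-up variance values:

```python
# Maximum-likelihood cue combination: each cue is weighted by its
# reliability (inverse variance), so the less reliable cue is
# 'captured' less. Variance values below are illustrative assumptions.
def combine(x_vis, sigma_vis, x_aud, sigma_aud):
    """Optimal fused position estimate and its standard deviation."""
    r_vis, r_aud = 1 / sigma_vis ** 2, 1 / sigma_aud ** 2
    x = (r_vis * x_vis + r_aud * x_aud) / (r_vis + r_aud)
    sigma = (r_vis + r_aud) ** -0.5
    return x, sigma

# central vision: vision much more reliable -> strong visual capture
x_c, s_c = combine(0.0, 1.0, 10.0, 4.0)   # estimate pulled toward 0 deg
# periphery: reliabilities converge -> weaker ventriloquist effect
x_p, s_p = combine(0.0, 4.0, 10.0, 4.0)   # estimate halfway, 5 deg
```

    Note that the fused standard deviation is always below that of either cue alone, which is the model's account of the redundancy gain mentioned in the abstract.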

  17. International Practice in Care Provision for Post-stroke Visual Impairment.

    PubMed

    Rowe, Fiona J

    2017-09-01

    This study sought to explore the practice of orthoptists internationally in care provision for poststroke visual impairment. Survey questions were developed and piloted with clinicians, academics, and users. Questions addressed types of visual problems, how these were identified, treated, and followed up, care pathways in use, links with other professions, and referral options. The survey was approved by the institutional ethical committee. The survey was accessed via a web link that was circulated through the International Orthoptic Association member professional organisations to orthoptists. Completed electronic surveys were obtained from 299 individuals. About one-third (35.5%) of orthoptists saw patients within 2 weeks of stroke onset and over half (55.5%) by 1 month post stroke. Stroke survivors were routinely assessed by 87%; over three-quarters in eye clinics. Screening tools were used by 11%. Validated tests were used for assessment of visual acuity (76.5%), visual field (68.2%), eye movement (80.9%), binocular vision (77.9%), and visual function (55.8%). Visual problems suspected by family or professionals were high (86.6%). Typical overall follow-up period of vision care was less than 3 months. Designated care pathways for stroke survivors with visual problems were used by 56.9% of orthoptists. Information on visual impairment was provided by 85.9% of orthoptists. In international orthoptic practice, there is general agreement on assessment and management of visual impairment in stroke populations. More than half of orthoptists reported seeing stroke survivors within 1 month of the stroke onset, typically in eye clinics. There was a high use of validated tests of visual acuity, visual fields, ocular motility, and binocular vision. Similarly there was high use of established treatment options including prisms, occlusion, compensatory strategies, and oculomotor training, appropriately targeted at specific types of visual conditions/symptoms. 
This information can be used to inform choice of core outcome orthoptic measures in stroke practice.

  18. Rotation elastogram: a novel method to visualize local rigid body rotation under quasi-static compression

    NASA Astrophysics Data System (ADS)

    Sowmiya, C.; Kothawala, Ali Arshad; Thittai, Arun K.

    2016-04-01

    During manual palpation of breast masses, perceived stiffness and slipperiness are the two cues most commonly used by the physician. In order to obtain this information reliably and quantitatively, several non-invasive elastography techniques have been developed that seek to provide an image of the underlying mechanical properties, mostly stiffness-related. Very few approaches have visualized the "slip" at the lesion-background boundary that occurs only for a loosely-bonded benign lesion. It has been shown that the axial-shear strain distribution provides information about underlying slip. One such feature, referred to as "fill-in", was interpreted as a surrogate of the rotation undergone by an asymmetrically-oriented, loosely-bonded benign lesion under quasi-static compression. However, imaging and direct visualization of the rotation itself has not yet been addressed. To accomplish this, the quality of lateral displacement estimation needs to be improved. In this simulation study, we utilize a spatial compounding approach and assess the feasibility of obtaining a good-quality rotation elastogram. Angular axial and lateral displacement estimates were obtained at different insonification angles from a phantom containing an elliptical inclusion oriented at 45°, subjected to 1% compression from the top. A multilevel 2D block-matching algorithm was used for displacement tracking, and 2D least-squares compounding of the angular axial and lateral displacement estimates was employed. By varying the maximum steering angle and the incremental angle, the improvement in lateral motion-tracking accuracy and its effect on the quality of the rotation elastogram were evaluated. Results demonstrate a significantly improved rotation elastogram using this technique.
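
    The least-squares compounding step can be illustrated in a few lines: each steered beam measures the projection of the true displacement onto its beam direction, and a 2x2 normal-equation solve recovers both the axial and lateral components. The angles and displacement values below are illustrative assumptions, not the study's settings.

```python
import math

# Least-squares compounding of angle-steered displacement estimates:
# each beam at steering angle 'a' measures d = ux*sin(a) + uy*cos(a).
def compound(angles_deg, projections):
    """Recover (ux, uy) from axial projections at several steering angles."""
    sxx = sxy = syy = bx = by = 0.0
    for a_deg, d in zip(angles_deg, projections):
        s, c = math.sin(math.radians(a_deg)), math.cos(math.radians(a_deg))
        sxx += s * s; sxy += s * c; syy += c * c
        bx += s * d;  by += c * d
    det = sxx * syy - sxy * sxy
    return ((syy * bx - sxy * by) / det, (sxx * by - sxy * bx) / det)

angles = [-15, -7.5, 0, 7.5, 15]
ux_true, uy_true = 0.2, 1.0          # lateral and axial displacement
meas = [ux_true * math.sin(math.radians(a)) + uy_true * math.cos(math.radians(a))
        for a in angles]
ux, uy = compound(angles, meas)       # recovers (0.2, 1.0)
```

    With noisy per-angle estimates the same solve averages the noise down, which is why a wider maximum steering angle improves lateral tracking accuracy.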

  19. Visualization and dissemination of global crustal models on virtual globes

    NASA Astrophysics Data System (ADS)

    Zhu, Liang-feng; Pan, Xin; Sun, Jian-zhong

    2016-05-01

    Global crustal models, such as CRUST 5.1 and its descendants, are very useful in a broad range of geoscience applications. The current method for representing the existing global crustal models relies heavily on dedicated computer programs to read and work with those models. Therefore, it is not suited to visualize and disseminate global crustal information to non-geological users. This shortcoming is becoming obvious as more and more people from both academic and non-academic institutions are interested in understanding the structure and composition of the crust. There is a pressing need to provide a modern, universal and user-friendly method to represent and visualize the existing global crustal models. In this paper, we present a systematic framework to easily visualize and disseminate the global crustal structure on virtual globes. Based on crustal information exported from the existing global crustal models, we first create a variety of KML-formatted crustal models with different levels of detail (LODs). And then the KML-formatted models can be loaded into a virtual globe for 3D visualization and model dissemination. A Keyhole Markup Language (KML) generator (Crust2KML) is developed to automatically convert crustal information obtained from the CRUST 1.0 model into KML-formatted global crustal models, and a web application (VisualCrust) is designed to disseminate and visualize those models over the Internet. The presented framework and associated implementations can be conveniently exported to other applications to support visualizing and analyzing the Earth's internal structure on both regional and global scales in a 3D virtual-globe environment.
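
    A converter along the lines of Crust2KML can be sketched as follows; the element layout, the Moho-depth field, and the one-placemark-per-grid-cell design are assumptions for illustration, not the tool's actual output format.

```python
# Hypothetical sketch of emitting KML placemarks for crustal-model grid
# cells (illustrative only; not Crust2KML's actual output).
def crust_to_kml(cells):
    """cells: iterable of (lon, lat, moho_depth_km) tuples -> KML string."""
    placemarks = []
    for lon, lat, moho in cells:
        placemarks.append(
            "  <Placemark>\n"
            f"    <name>Moho depth: {moho:.1f} km</name>\n"
            f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
            "  </Placemark>")
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
            + "\n".join(placemarks) + "\n</Document></kml>")

doc = crust_to_kml([(120.5, 31.5, 34.2), (121.5, 31.5, 33.8)])
```

    The resulting file can be loaded directly into a virtual globe; different levels of detail would simply be separate KML documents generated from coarser or finer grids.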

  20. Optoelectronic aid for patients with severely restricted visual fields in daylight conditions

    NASA Astrophysics Data System (ADS)

    Peláez-Coca, María Dolores; Sobrado-Calvo, Paloma; Vargas-Martín, Fernando

    2011-11-01

    In this study we evaluated the immediate effectiveness of an optoelectronic visual field expander in a sample of subjects with retinitis pigmentosa suffering from a severe peripheral visual field restriction. The aid uses the augmented view concept and provides subjects with visual information from outside their visual field. The tests were carried out in daylight conditions. The optoelectronic aid comprises a FPGA (real-time video processor), a wide-angle mini camera and a transparent see-through head-mounted display. This optoelectronic aid is called SERBA (Sistema Electro-óptico Reconfigurable de Ayuda para Baja Visión). We previously showed that, without compromising residual vision, the SERBA system provides information about objects within an area about three times greater on average than the remaining visual field of the subjects [1]. In this paper we address the effects of the device on mobility under daylight conditions with and without SERBA. The participants were six subjects with retinitis pigmentosa. In this mobility test, better results were obtained when subjects were wearing the SERBA system; specifically, both the number of contacts with low-level obstacles and mobility errors decreased significantly. A longer training period with the device might improve its usefulness.

  1. Visual field information in Nap-of-the-Earth flight by teleoperated Helmet-Mounted displays

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Kohn, S.; Merhav, S. J.

    1991-01-01

    The human ability to derive Control-Oriented Visual Field Information from teleoperated Helmet-Mounted Displays in Nap-of-the-Earth flight is investigated. The visual field with these types of displays originates from a Forward-Looking Infrared Radiation Camera, gimbal-mounted at the front of the aircraft and slaved to the pilot's line-of-sight to obtain wide-angle visual coverage. Although these displays have proved effective in Apache and Cobra helicopter night operations, they demand very high pilot proficiency and workload. Experimental work presented in the paper has shown that part of the difficulties encountered in vehicular control by means of these displays can be attributed to the narrow viewing aperture and to head/camera slaving-system phase lags. Both of these shortcomings impair visuo-vestibular coordination when voluntary head rotation is present. This might result in errors in estimating the Control-Oriented Visual Field Information vital for vehicular control, such as the vehicle yaw rate or the anticipated flight path, or might even lead to visuo-vestibular conflicts (motion sickness). Since, under these conditions, the pilot will tend to minimize head rotation, the full wide-angle coverage of the Helmet-Mounted Display provided by the line-of-sight slaving system is not always fully utilized.

  2. Peripheral Processing Facilitates Optic Flow-Based Depth Perception

    PubMed Central

    Li, Jinglin; Lindemann, Jens P.; Egelhaaf, Martin

    2016-01-01

    Flying insects, such as flies or bees, rely on consistent information regarding the depth structure of the environment when performing flight maneuvers in cluttered natural environments. These behaviors include avoiding collisions, approaching targets, and navigating in space. Insects are thought to obtain depth information visually from the retinal image displacements (“optic flow”) during translational ego-motion. Optic flow in the insect visual system is processed by a mechanism that can be modeled by correlation-type elementary motion detectors (EMDs). However, it is still an open question how spatial information can be extracted reliably from the highly contrast- and pattern-dependent EMD responses, especially if the vast range of light intensities encountered in natural environments is taken into account. This question is addressed here by systematically modeling the peripheral visual system of flies, including various adaptive mechanisms. Different model variants of the peripheral visual system were stimulated with image sequences that mimic the panoramic visual input during translational ego-motion in various natural environments, and the resulting peripheral signals were fed into an array of EMDs. We characterized the influence of each peripheral computational unit on the representation of spatial information in the EMD responses. Our model simulations reveal that information about the overall light level needs to be eliminated from the EMD input, as is accomplished under light-adapted conditions in the insect peripheral visual system. The response characteristics of large monopolar cells (LMCs) resemble those of a band-pass filter, which strongly reduces the contrast dependency of EMDs, effectively enhancing the representation of the nearness of objects and, especially, of their contours. 
We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light conditions. PMID:27818631
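
    The correlation-type EMD referred to above can be sketched as the classic Hassenstein-Reichardt detector: two neighboring inputs, a delay on each arm, and an opponent multiplication. The discrete-time toy below is a minimal sketch, not the paper's full model with peripheral preprocessing.

```python
# Minimal Hassenstein-Reichardt elementary motion detector (EMD):
# correlate each input with the delayed signal of its neighbor and
# subtract the mirror-symmetric term (opponent subtraction).
def emd_response(a, b, delay=1):
    """Opponent EMD output over time for two photoreceptor signals a, b."""
    out = []
    for t in range(delay, len(a)):
        out.append(a[t - delay] * b[t] - b[t - delay] * a[t])
    return out

# A bright spot passing input A first, then B (preferred direction)
a = [0, 1, 0, 0, 0]
b = [0, 0, 1, 0, 0]
pref = sum(emd_response(a, b))    # positive: preferred direction
null = sum(emd_response(b, a))    # negative: opposite direction
```

    The output's sign encodes direction, but its magnitude depends on contrast and pattern, which is exactly the dependency the peripheral preprocessing in the study is shown to reduce.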

  3. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    NASA Astrophysics Data System (ADS)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

    Indoor space 3D visual reconstruction has many applications and, once done accurately, it enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster situation by using 3D visual information of a destroyed building. Therefore, an accurate Indoor Space 3D visual reconstruction system which can be operated in any given environment without GPS has been developed using a Human-Operated mobile cart equipped with a laser scanner, CCD camera, omnidirectional camera and a computer. By using the system, accurate indoor 3D Visual Data is reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons and so forth.

  4. A New Definition for Ground Control

    NASA Technical Reports Server (NTRS)

    2002-01-01

    LandForm(R) VisualFlight(R) blends the power of a geographic information system with the speed of a flight simulator to transform a user's desktop computer into a "virtual cockpit." The software product, which is fully compatible with all Microsoft(R) Windows(R) operating systems, provides distributed, real-time three-dimensional flight visualization over a host of networks. From a desktop, a user can immediately obtain a cockpit view, a chase-plane view, or an airborne tracker view. A customizable display also allows the user to overlay various flight parameters, including latitude, longitude, altitude, pitch, roll, and heading information. Rapid Imaging Software sought assistance from NASA, and the VisualFlight technology came to fruition under a Phase II SBIR contract with Johnson Space Center in 1998. Three years later, on December 13, 2001, Ken Ham successfully flew NASA's X-38 spacecraft from a remote, ground-based cockpit using LandForm VisualFlight as part of his primary situation awareness display in a flight test at Edwards Air Force Base, California.

  5. Information-Driven Active Audio-Visual Source Localization

    PubMed Central

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
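
    The core of such a filter can be sketched compactly for the bearing-only case: each measurement reweights position particles by how well their predicted direction matches the measured one, and moving between measurements makes the source position observable. The noise value, robot poses, and particle count below are illustrative assumptions, and the information-gain action selection is not reproduced.

```python
import math, random

# Bearing-only particle filter sketch: reweight (x, y) particles by the
# likelihood of one measured direction, then resample.
def update(particles, robot_xy, measured_bearing, sigma=0.2):
    """Reweight and resample (x, y) particles given one bearing [rad]."""
    rx, ry = robot_xy
    weights = []
    for x, y in particles:
        err = math.atan2(y - ry, x - rx) - measured_bearing
        err = math.atan2(math.sin(err), math.cos(err))   # wrap to [-pi, pi]
        weights.append(math.exp(-err * err / (2 * sigma ** 2)))
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(1)
particles = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(500)]
# source at (3, 0): bearings from two robot poses triangulate the position
particles = update(particles, (0.0, 0.0), 0.0)            # source along +x
particles = update(particles, (3.0, -3.0), math.pi / 2)   # source along +y
mean_x = sum(p[0] for p in particles) / len(particles)
mean_y = sum(p[1] for p in particles) / len(particles)    # near (3, 0)
```

    Information-gain action selection would, on top of this, pick the next robot pose whose expected measurement shrinks the particle cloud the most.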

  6. A new method for text detection and recognition in indoor scene for assisting blind people

    NASA Astrophysics Data System (ADS)

    Jabnoun, Hanen; Benzarti, Faouzi; Amiri, Hamid

    2017-03-01

Developing assistive systems for handicapped persons has become a challenging research task. Recently, a variety of tools have been designed to help visually impaired or blind people as visual substitution systems. The majority of these tools are based on the conversion of input information into auditory or tactile sensory information. Furthermore, object recognition and text retrieval are exploited in visual substitution systems. Text detection and recognition provide a description of the surrounding environment, so that a blind person can readily recognize the scene. In this work, we introduce a method for detecting and recognizing text in indoor scenes. The process consists of detecting the regions of interest that should contain text using connected components. Then, text recognition is performed using image correlation. Such a component of an assistive system for blind persons should be simple, so that users can obtain the most informative feedback within the shortest time.
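The connected-component step for finding candidate text regions can be sketched as follows. This is a generic illustration (4-connected BFS labeling on an already-binarized image with a simple size filter), not the authors' code; the correlation-based recognition stage is omitted.

```python
from collections import deque

def text_region_candidates(binary, min_pixels=2):
    """Label 4-connected foreground components of a binarized image and
    return a bounding box (top, left, bottom, right) for each component
    with at least min_pixels pixels; boxes are candidate text regions."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if not binary[r][c] or seen[r][c]:
                continue
            queue = deque([(r, c)])
            seen[r][c] = True
            top, left, bot, right, size = r, c, r, c, 0
            while queue:
                y, x = queue.popleft()
                size += 1
                top, bot = min(top, y), max(bot, y)
                left, right = min(left, x), max(right, x)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and binary[ny][nx] and not seen[ny][nx]):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if size >= min_pixels:
                boxes.append((top, left, bot, right))
    return boxes
```

In a full pipeline each box would then be matched against character templates (the paper's image-correlation step) or passed to an OCR engine.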

  7. Effects of in-vehicle warning information displays with or without spatial compatibility on driving behaviors and response performance.

    PubMed

    Liu, Yung-Ching; Jhuang, Jing-Wun

    2012-07-01

A driving simulator study was conducted to evaluate the effects of five in-vehicle warning information displays on drivers' emergent response and decision performance. The displays comprised a visual display, auditory displays with and without spatial compatibility, and hybrid visual-plus-auditory displays with and without spatial compatibility. Thirty volunteer drivers were recruited to perform various tasks involving driving, stimulus-response (S-R), divided attention, and stress rating. Results show that for single-modality displays, drivers benefited more from the visual display of warning information than from the auditory display with or without spatial compatibility. However, the auditory display with spatial compatibility significantly improved drivers' performance in reacting to the divided attention task and making accurate S-R task decisions. Drivers' best performance was obtained for the hybrid display with spatial compatibility. Hybrid displays enabled drivers to respond the fastest and achieve the best accuracy in both the S-R and divided attention tasks. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  8. Some Behavioral and Neurobiological Constraints on Theories of Audiovisual Speech Integration: A Review and Suggestions for New Directions

    PubMed Central

    Altieri, Nicholas; Pisoni, David B.; Townsend, James T.

    2012-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield’s feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration. PMID:21968081

  9. Some behavioral and neurobiological constraints on theories of audiovisual speech integration: a review and suggestions for new directions.

    PubMed

    Altieri, Nicholas; Pisoni, David B; Townsend, James T

    2011-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield's feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration.

  10. Visual grouping under isoluminant condition: impact of mental fatigue

    NASA Astrophysics Data System (ADS)

    Pladere, Tatjana; Bete, Diana; Skilters, Jurgis; Krumina, Gunta

    2016-09-01

Instead of selecting arbitrary elements, our visual perception prefers only certain groupings of information. There is ample evidence that visual attention and perception are substantially impaired in the presence of mental fatigue. The question is how visual grouping, which can be considered a bottom-up controlled neuronal gain mechanism, is influenced. The main purpose of our study is to determine the influence of mental fatigue on the visual grouping of specific information, namely the color and configuration of stimuli, in a psychophysical experiment. Individuals provided subjective data by filling in a questionnaire about their health and general feeling. Objective evidence was obtained in a specially designed visual search task in which achromatic and chromatic isoluminant stimuli were used in order to avoid the so-called pop-out effect caused by differences in light intensity. Each individual was instructed to find the symbols with an aperture in the same direction in four tasks. The color component differed across the visual search tasks according to the goals of the study. The results reveal that visual grouping is completed faster when visual stimuli have the same color and aperture direction. The shortest reaction time occurs in the evening. Moreover, the reaction-time results suggest that two grouping processes compete for selective attention in the visual system when similarity in color conflicts with similarity in stimulus configuration. The described effect increases significantly in the presence of mental fatigue, but it does not have a strong influence on the accuracy of task accomplishment.

  11. Dual-Tree Complex Wavelet Transform and Image Block Residual-Based Multi-Focus Image Fusion in Visual Sensor Networks

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of defects in the fusion rules, it is almost impossible to completely avoid losing useful information in the resulting fused images. The proposed fusion scheme is divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, the image block residual technique and consistency verification are used to detect the focused areas, yielding a decision map that guides the construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-reference images, reference images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878

  12. Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks.

    PubMed

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-11-26

This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of defects in the fusion rules, it is almost impossible to completely avoid losing useful information in the resulting fused images. The proposed fusion scheme is divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, the image block residual technique and consistency verification are used to detect the focused areas, yielding a decision map that guides the construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-reference images, reference images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.
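The Sum-Modified-Laplacian focus measure at the heart of the fusion rule can be sketched in the spatial domain. A minimal illustration, assuming a simple per-pixel choose-max rule rather than the paper's full DTCWT pipeline with block residuals and consistency verification:

```python
def modified_laplacian(img):
    """Modified Laplacian: sum of absolute second differences in x and y.
    Borders are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    ml = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ml[y][x] = (abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1])
                        + abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x]))
    return ml

def sml(ml, y, x, r=1):
    """Sum-Modified-Laplacian: ML summed over a (2r+1)x(2r+1) window."""
    return sum(ml[j][i]
               for j in range(max(0, y - r), min(len(ml), y + r + 1))
               for i in range(max(0, x - r), min(len(ml[0]), x + r + 1)))

def fuse(img_a, img_b, r=1):
    """Per-pixel, keep the source with the higher local SML, i.e. the one
    that is in sharper focus at that location."""
    ml_a, ml_b = modified_laplacian(img_a), modified_laplacian(img_b)
    h, w = len(img_a), len(img_a[0])
    return [[img_a[y][x] if sml(ml_a, y, x, r) >= sml(ml_b, y, x, r)
             else img_b[y][x]
             for x in range(w)] for y in range(h)]
```

A high-contrast (in-focus) region produces large second differences, so its pixels win the SML comparison against a defocused, smoothed counterpart.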

  13. Information-Theoretic Metrics for Visualizing Gene-Environment Interactions

    PubMed Central

Chanda, Pritam; Zhang, Aidong; Brazeau, Daniel; Sucheston, Lara; Freudenheim, Jo L.; Ambrosone, Christine; Ramanathan, Murali

    2007-01-01

    The purpose of our work was to develop heuristics for visualizing and interpreting gene-environment interactions (GEIs) and to assess the dependence of candidate visualization metrics on biological and study-design factors. Two information-theoretic metrics, the k-way interaction information (KWII) and the total correlation information (TCI), were investigated. The effectiveness of the KWII and TCI to detect GEIs in a diverse range of simulated data sets and a Crohn disease data set was assessed. The sensitivity of the KWII and TCI spectra to biological and study-design variables was determined. Head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and the pedigree disequilibrium test (PDT) methods were obtained. The KWII and TCI spectra, which are graphical summaries of the KWII and TCI for each subset of environmental and genotype variables, were found to detect each known GEI in the simulated data sets. The patterns in the KWII and TCI spectra were informative for factors such as case-control misassignment, locus heterogeneity, allele frequencies, and linkage disequilibrium. The KWII and TCI spectra were found to have excellent sensitivity for identifying the key disease-associated genetic variations in the Crohn disease data set. In head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and PDT methods, the results from visual interpretation of the KWII and TCI spectra performed satisfactorily. The KWII and TCI are promising metrics for visualizing GEIs. They are capable of detecting interactions among numerous single-nucleotide polymorphisms and environmental variables for a diverse range of GEI models. PMID:17924337
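Both metrics can be computed directly from sample data. A sketch under the standard definitions (KWII as an alternating-sign sum of joint entropies over all variable subsets; TCI as the sum of marginal entropies minus the joint entropy); these are plug-in estimators for illustration, not the authors' implementation:

```python
import math
from collections import Counter
from itertools import combinations

def entropy(samples, idx):
    """Plug-in Shannon entropy (bits) of the joint distribution of the
    columns listed in idx, estimated from the rows of samples."""
    n = len(samples)
    counts = Counter(tuple(row[i] for i in idx) for row in samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def total_correlation(samples, idx):
    """TCI: sum of marginal entropies minus the joint entropy."""
    return sum(entropy(samples, (i,)) for i in idx) - entropy(samples, idx)

def kwii(samples, idx):
    """KWII: alternating-sign sum of joint entropies over all non-empty
    subsets of the variables; for two variables this reduces to their
    mutual information."""
    k = len(idx)
    total = 0.0
    for r in range(1, k + 1):
        for sub in combinations(idx, r):
            total += (-1) ** (k - r) * entropy(samples, sub)
    return -total
```

On the classic XOR example (two independent binary variables and their XOR as "phenotype"), every pair is independent yet KWII and TCI are both 1 bit, which is the kind of purely synergistic interaction a spectrum of these values is meant to reveal.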

  14. CMIS: Crime Map Information System for Safety Environment

    NASA Astrophysics Data System (ADS)

    Kasim, Shahreen; Hafit, Hanayanti; Yee, Ng Peng; Hashim, Rathiah; Ruslai, Husni; Jahidin, Kamaruzzaman; Syafwan Arshad, Mohammad

    2016-11-01

Crime Map is an online, web-based geographical information system that helps the public and other users visualize crime activities geographically. It acts as a platform for public communities to share the crime activities they encounter. Crime and violence plague the communities we live in, and as part of the community, crime prevention is everyone's responsibility. The purpose of Crime Map is to provide insight into the crimes occurring around Malaysia and raise public awareness of crime activities in their neighbourhoods. For that, Crime Map visualizes crime activities on geographical heat maps generated from geospatial data. Crime Map analyses data obtained from crime reports to generate useful information on crime trends. At the end of the development, users should be able to use the system to access details of reported crimes, view crime analyses, and report crime activities. The development of Crime Map also enables the public to obtain insights about crime activities in their area, and thus to work together with law enforcement to prevent and fight crime.
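A heat map of this kind can be derived by binning reported incidents into grid cells before rendering. A minimal sketch with hypothetical inputs; the actual system presumably renders the intensities through a web mapping library rather than returning a raw grid:

```python
def heatmap_grid(incidents, lat_range, lon_range, rows, cols):
    """Count incidents per cell of a rows x cols grid covering the given
    latitude/longitude bounding box; out-of-bounds reports are ignored.
    The resulting counts drive the heat-map color intensity per cell."""
    (lat_min, lat_max), (lon_min, lon_max) = lat_range, lon_range
    grid = [[0] * cols for _ in range(rows)]
    for lat, lon in incidents:
        if lat_min <= lat < lat_max and lon_min <= lon < lon_max:
            r = int((lat - lat_min) / (lat_max - lat_min) * rows)
            c = int((lon - lon_min) / (lon_max - lon_min) * cols)
            grid[r][c] += 1
    return grid
```

Cells with higher counts would be drawn hotter on the map, giving the at-a-glance crime-trend view the abstract describes.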

  15. Analysis of Human Mobility Based on Cellular Data

    NASA Astrophysics Data System (ADS)

    Arifiansyah, F.; Saptawati, G. A. P.

    2017-01-01

Nowadays, not only adults but even teenagers and children have their own mobile phones. This phenomenon indicates that the mobile phone has become an important part of everyday life, and accordingly the amount of cellular data has also increased rapidly. Cellular data is defined as data that records communication among mobile phone users. It is easy to obtain because telecommunications companies already record it for their billing systems. Billing data keeps a log of each user's cellular data usage over time, from which information about communication between users can be obtained. Through data visualization, interesting patterns can be seen in the raw cellular data, so that analysts can obtain prior knowledge before performing data analysis. Cellular data can be processed with data mining to find human mobility patterns in the existing data. In this paper, we use frequent pattern mining and association rule mining to observe the relations between attributes in cellular data and then visualize them. We used the Weka toolkit for finding the rules in the data mining stage. Generally, the utilization of cellular data can provide supporting information for the decision-making process and supply the data needed by decision makers to devise solutions.
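Frequent-itemset and association-rule mining of the kind the paper runs in Weka can be sketched in a few lines. A simplified, brute-force enumeration (itemsets capped at size 3 for brevity), not the Weka implementation itself:

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_size=3):
    """Support counts for all itemsets up to max_size that meet min_support.
    (A real Apriori prunes candidates level by level; exhaustive counting
    is fine for illustration.)"""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, max_size + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    return {s: c for s, c in counts.items() if c >= min_support}

def association_rules(freq, min_confidence):
    """Rules antecedent -> consequent with
    confidence = support(antecedent ∪ consequent) / support(antecedent)."""
    rules = []
    for itemset, support in freq.items():
        if len(itemset) < 2:
            continue
        for k in range(1, len(itemset)):
            for ante in combinations(itemset, k):
                if ante in freq:
                    conf = support / freq[ante]
                    if conf >= min_confidence:
                        cons = tuple(i for i in itemset if i not in ante)
                        rules.append((ante, cons, conf))
    return rules
```

With call-record attributes as "items" (e.g. cell tower, time-of-day band, call type), the resulting high-confidence rules are exactly the relations between attributes that the paper then visualizes.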

  16. Combining textual and visual information for image retrieval in the medical domain.

    PubMed

    Gkoufas, Yiannis; Morou, Anna; Kalamboukis, Theodore

    2011-01-01

In this article we present the experience obtained from our participation in the ImageCLEF evaluation task over the past two years. We explored the use of linear combinations of visual and textual sources for image retrieval. From our experiments we conclude that a mixed retrieval technique that applies textual and visual retrieval in an interchangeably repeated manner improves performance while overcoming the scalability limitations of visual retrieval. In particular, the mean average precision (MAP) increased from 0.01 to 0.15 and 0.087 for the 2009 and 2010 data, respectively, when content-based image retrieval (CBIR) is performed on the top 1000 results from textual retrieval based on natural language processing (NLP).
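The linear-combination idea can be illustrated as score-level fusion of the two retrieval systems. A sketch with an assumed min-max normalization and a weight alpha; note the paper's actual pipeline re-ranks the top textual results with CBIR rather than fusing flat score lists:

```python
def combine_scores(text_scores, visual_scores, alpha=0.7):
    """Linear late fusion of textual and visual retrieval scores.
    Each input maps document id -> raw score; each list is min-max
    normalized to [0, 1] first, and a missing score counts as 0."""
    def norm(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        if hi == lo:
            return {d: 1.0 for d in scores}
        return {d: (s - lo) / (hi - lo) for d, s in scores.items()}
    t, v = norm(text_scores), norm(visual_scores)
    docs = set(t) | set(v)
    return sorted(((alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0), d)
                   for d in docs), reverse=True)
```

Restricting the visual pass to the top-N textual hits, as the paper does, keeps CBIR cost bounded while the combined score decides the final ranking.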

  17. Crawling and walking infants see the world differently

    PubMed Central

    Kretch, Kari S.; Franchak, John M.; Adolph, Karen E.

    2013-01-01

How does visual experience change over development? To investigate changes in visual input over the developmental transition from crawling to walking, thirty 13-month-olds crawled or walked down a straight path wearing a head-mounted eye-tracker that recorded gaze direction and head-centered field of view. Thirteen additional infants wore a motion-tracker that recorded head orientation. Compared with walkers, crawlers’ field of view contained fewer walls and more floor. Walkers directed gaze straight ahead at caregivers, whereas crawlers looked down at the floor. Crawlers obtained visual information about targets at higher elevations (caregivers and toys) by craning their heads upward and sitting up to bring the room into view. Findings indicate that visual experiences are intimately tied to infants’ posture. PMID:24341362

  18. Genetic parameters for type classification of Nelore cattle on central performance tests at pasture in Brazil.

    PubMed

    Lima, Paulo Ricardo Martins; Paiva, Samuel Rezende; Cobuci, Jaime Araujo; Braccini Neto, José; Machado, Carlos Henrique Cavallari; McManus, Concepta

    2013-10-01

The objective of this study was to characterize Nelore cattle on central performance tests at pasture, ranked by the visual classification method EPMURAS (structure, precocity, muscle, navel, breed, posture, and sexual characteristics), and to estimate genetic and phenotypic correlations between these parameters, including visual as well as production traits (initial and final weight on test, weight gain, and weight corrected for 550 days). The information used in the study was obtained on 21,032 Nelore bulls that participated in the central performance test at pasture of the Brazilian Association of Zebu Breeders (ABCZ). Heritabilities ranged from 0.19 to 0.50. Phenotypic correlations were positive, ranging from 0.70 to 0.97 between the weight traits, from 0.65 to 0.74 between visual characteristics, and from 0.29 to 0.47 between visual characteristics and weight traits. The genetic correlations were positive, ranging from 0.80 to 0.98 between the characteristics of structure, precocity, and musculature, from 0.13 to 0.64 between the growth characteristics, and from 0.41 to 0.97 between visual scores and weight gains. The heritabilities and genetic correlations indicate that the use of visual scores, along with selection for growth characteristics, can bring positive results in the selection of beef cattle for rearing on pasture.

  19. Rationale and description of a coordinated cockpit display for aircraft flight management

    NASA Technical Reports Server (NTRS)

    Baty, D. L.

    1976-01-01

The design of aircraft cockpit display systems is discussed in detail. The system consists of a set of three beam-penetration color cathode ray tubes (CRTs). One of three orthogonal projections of the aircraft's state appears on each CRT, so the displays present different views of the same information. The color feature is included to obtain visual separation of information elements: red, green, and yellow are used to differentiate control, performance, and navigation information. The displays are coordinated in information and color.

  20. How Therapists Use Visualizations of Upper Limb Movement Information From Stroke Patients: A Qualitative Study With Simulated Information

    PubMed Central

    Fong, Justin; Klaic, Marlena; Nair, Siddharth; Vetere, Frank; Cofré Lizama, L. Eduardo; Galea, Mary Pauline

    2016-01-01

    Background Stroke is a leading cause of disability worldwide, with upper limb deficits affecting an estimated 30% to 60% of survivors. The effectiveness of upper limb rehabilitation relies on numerous factors, particularly patient compliance to home programs and exercises set by therapists. However, therapists lack objective information about their patients’ adherence to rehabilitation exercises as well as other uses of the affected arm and hand in everyday life outside the clinic. We developed a system that consists of wearable sensor technology to monitor a patient’s arm movement and a Web-based dashboard to visualize this information for therapists. Objective The aim of our study was to evaluate how therapists use upper limb movement information visualized on a dashboard to support the rehabilitation process. Methods An interactive dashboard prototype with simulated movement information was created and evaluated through a user-centered design process with therapists (N=8) at a rehabilitation clinic. Data were collected through observations of therapists interacting with an interactive dashboard prototype, think-aloud data, and interviews. Data were analyzed qualitatively through thematic analysis. Results Therapists use visualizations of upper limb information in the following ways: (1) to obtain objective data of patients’ activity levels, exercise, and neglect outside the clinic, (2) to engage patients in the rehabilitation process through education, motivation, and discussion of experiences with activities of daily living, and (3) to engage with other clinicians and researchers based on objective data. A major limitation is the lack of contextual data, which is needed by therapists to discern how movement data visualized on the dashboard relate to activities of daily living. Conclusions Upper limb information captured through wearable devices provides novel insights for therapists and helps to engage patients and other clinicians in therapy. 
Consideration needs to be given to the collection and visualization of contextual information to provide meaningful insights into patient engagement in activities of daily living. These findings open the door for further work to develop a fully functioning system and to trial it with patients and clinicians during therapy. PMID:28582257

  1. Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.

    PubMed

    Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B

    2003-04-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
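The visual enhancement measure R(a) described above can be expressed as the audiovisual gain relative to the headroom left above auditory-only performance. A sketch assuming the conventional relative-gain formula (AV - A) / (1 - A) on proportion-correct scores; the exact normalization in the study is an assumption here:

```python
def visual_enhancement(auditory, audiovisual):
    """Gain from adding vision, scaled by the room for improvement above
    auditory-only performance. Inputs are proportions correct in [0, 1];
    returns 0 when auditory-only performance is already at ceiling."""
    if auditory >= 1.0:
        return 0.0
    return (audiovisual - auditory) / (1.0 - auditory)
```

For example, going from 50% correct auditory-only to 75% correct audiovisual recovers half of the available headroom, so the enhancement score is 0.5 regardless of the auditory baseline.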

  2. Talker and Lexical Effects on Audiovisual Word Recognition by Adults With Cochlear Implants

    PubMed Central

    Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

    2012-01-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech. PMID:14700380

  3. Quantifying and visualizing variations in sets of images using continuous linear optimal transport

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil; Rohde, Gustavo K.

    2014-03-01

Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined from the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and to visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.
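The linearization idea is easiest to see in one dimension, where (taking a uniform reference measure) the optimal transport embedding reduces to the quantile function, and Euclidean distance between embeddings recovers the 2-Wasserstein distance. A toy sketch; the paper itself works with 2-D images and a continuous OT formulation:

```python
def quantile(sorted_samples, q):
    """Empirical quantile of a sorted sample, with linear interpolation."""
    n = len(sorted_samples)
    pos = q * (n - 1)
    lo = int(pos)
    hi = min(lo + 1, n - 1)
    frac = pos - lo
    return sorted_samples[lo] * (1 - frac) + sorted_samples[hi] * frac

def lot_embedding(samples, m=100):
    """Linear OT embedding of a 1-D distribution: the transport map from a
    uniform reference, i.e. the quantile function sampled at m points."""
    s = sorted(samples)
    return [quantile(s, i / (m - 1)) for i in range(m)]

def w2(emb_a, emb_b):
    """Euclidean distance between embeddings; for 1-D distributions this
    equals the 2-Wasserstein (optimal transport) distance between them."""
    m = len(emb_a)
    return (sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)) / m) ** 0.5
```

Crucially, averaging two embeddings yields the embedding of a meaningful intermediate distribution (displacement interpolation), which is what makes PCA and LDA in the embedded space visually interpretable.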

  4. Measurement Tools for the Immersive Visualization Environment: Steps Toward the Virtual Laboratory.

    PubMed

    Hagedorn, John G; Dunkers, Joy P; Satterfield, Steven G; Peskin, Adele P; Kelso, John T; Terrill, Judith E

    2007-01-01

This paper describes a set of tools for performing measurements of objects in a virtual reality based immersive visualization environment. These tools enable the use of the immersive environment as an instrument for extracting quantitative information from data representations that hitherto had been used solely for qualitative examination. We provide, within the virtual environment, ways for the user to analyze and interact with the quantitative data generated. We describe results generated by these methods to obtain dimensional descriptors of tissue-engineered medical products. We regard this toolbox as our first step in the implementation of a virtual measurement laboratory within an immersive visualization environment.

  5. Matching multiple rigid domain decompositions of proteins

    PubMed Central

    Flynn, Emily; Streinu, Ileana

    2017-01-01

    We describe efficient methods for consistently coloring and visualizing collections of rigid cluster decompositions obtained from variations of a protein structure, and lay the foundation for more complex setups that may involve different computational and experimental methods. The focus here is on three biological applications: the conceptually simpler problems of visualizing results of dilution and mutation analyses, and the more complex task of matching decompositions of multiple NMR models of the same protein. Implemented into the KINARI web server application, the improved visualization techniques give useful information about protein folding cores, help examining the effect of mutations on protein flexibility and function, and provide insights into the structural motions of PDB proteins solved with solution NMR. These tools have been developed with the goal of improving and validating rigidity analysis as a credible coarse-grained model capturing essential information about a protein’s slow motions near the native state. PMID:28141528

  6. 3D Online Visualization and Synergy of NASA A-Train Data Using Google Earth

    NASA Technical Reports Server (NTRS)

    Chen, Aijun; Kempler, Steven; Leptoukh, Gregory; Smith, Peter

    2010-01-01

This poster presentation reviews the use of Google Earth to assist in three-dimensional online visualization of NASA Earth science and geospatial data. The NASA A-Train satellite constellation is a succession of seven sun-synchronous-orbit satellites: (1) OCO-2 (Orbiting Carbon Observatory, scheduled to launch in Feb. 2013), (2) GCOM-W1 (Global Change Observation Mission), (3) Aqua, (4) CloudSat, (5) CALIPSO (Cloud-Aerosol Lidar & Infrared Pathfinder Satellite Observations), (6) Glory, and (7) Aura. The A-Train makes possible a synergy of information from multiple sources, so more information about Earth's condition is obtained from the combined observations than would be possible from the sum of the observations taken independently.

  7. How social media meet patients’ questions: YouTube™ review for children oral thrush.

    PubMed

    Di Stasio, D; Romano, A N; Paparella, R S; Gentile, C; Minervini, G; Serpico, R; Candotto, V; Laino, L

    2018-01-01

    YouTube™ is increasingly being used by patients to obtain health-related information. No studies have evaluated the content of YouTube™ videos on children's oral thrush. The aim of this work is to examine the quality of information offered by this platform about oral thrush in children. The search term "oral thrush in children" (OTC) returned a total of 2,790 results. Of the top 60 videos analyzed, 27 were excluded. The main source of upload was generalist information YouTube™ channels (GC), followed by healthcare professionals (HP), individual users (IU), and healthcare information channels (HC). The usefulness of the videos correlated with the number of views, the number of likes, and the viewing rate (VR). However, the videos on oral thrush do not offer information of satisfactory quality: HP themselves, along with HC, do not seem to provide more appropriate information on OTC than GC or IU.

  8. Live weight, carcass ultrasound images, and visual scores in Angus cattle under feeding regimes in Brazil.

    PubMed

    Pinto, Luís Fernando Batista; Tarouco, Jaime Urdapilleta; Pedrosa, Victor Breno; de Farias Jucá, Adriana; Leão, André Gustavo; Moita, Antonia Kécya França

    2013-08-01

    This study aimed to evaluate visual precocity, muscling, conformation, skeletal, and breed scores; live weights at birth, at 205, and at 550 days of age; and rib eye area and fat thickness between the 12th and 13th ribs obtained by ultrasound. These traits were evaluated in 1,645 Angus cattle kept under five feeding conditions: supplemented or non-supplemented, grazing native pasture or cultivated pasture, and feedlot. Descriptive statistics, Pearson's correlations, and principal component analysis were carried out. Gender and feeding condition were fixed effects, while the animal's age and the mother's weight at weaning were the covariates analyzed. Gender and feeding condition had significant effects on the studied traits, but visual scores were not influenced by gender. The animal's age and the mother's weight at weaning influenced many traits and must be appropriately adjusted for in the statistical models. An important correlation between visual scores, live weights, and carcass traits obtained by ultrasound was found, which can be analyzed by a univariate procedure. However, the multivariate approach revealed information that cannot be neglected in order to ensure a more detailed assessment.

  9. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    NASA Astrophysics Data System (ADS)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain enormous amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To work around this, users often crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level: the selected ROI strongly depends on the user, and information from the original image is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides various efficient automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images; users can select the algorithm to be applied. Further, the tool provides visualization of the segmented volume data and supports scaling, translation, and other adjustments using a keyboard and mouse. The rapidly visualized 3D objects still need to be analyzed to yield information for biologists, which requires quantitative data from the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object, which can be used as classification features. A user can select an object to be analyzed; our tool displays the selected object in a new window so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
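    A common choice for the automatic intensity-based thresholding the record describes is Otsu's method, which picks the threshold that maximizes between-class variance of the intensity histogram. The sketch below is a minimal CPU version under that assumption; the tool's actual GPU implementation and method menu are not reproduced here.

    ```python
    import numpy as np

    def otsu_threshold(volume, bins=256):
        """Return the intensity that maximizes between-class variance."""
        hist, edges = np.histogram(volume, bins=bins)
        p = hist.astype(float) / hist.sum()            # bin probabilities
        centers = (edges[:-1] + edges[1:]) / 2.0
        w0 = np.cumsum(p)                              # background class weight
        w1 = 1.0 - w0                                  # foreground class weight
        mu0 = np.cumsum(p * centers)                   # unnormalized background mean
        mu_total = mu0[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            # sigma_b^2 = w0 * w1 * (mean0 - mean1)^2
            between = w0 * w1 * (mu0 / w0 - (mu_total - mu0) / w1) ** 2
        return centers[np.nanargmax(between)]

    # Segment a synthetic two-class "volume" (stand-in for microscopy voxels)
    rng = np.random.default_rng(0)
    vol = np.concatenate([rng.normal(50, 5, 10000), rng.normal(180, 5, 10000)])
    t = otsu_threshold(vol)
    mask = vol > t      # foreground voxels
    ```

    Labeling the connected components of `mask` would then yield the per-object quantitative measurements mentioned in the abstract.
    
    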

  10. Intermittently-visual Tracking Experiments Reveal the Roles of Error-correction and Predictive Mechanisms in the Human Visual-motor Control System

    NASA Astrophysics Data System (ADS)

    Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji

    A prediction mechanism is necessary in human visual-motor control to compensate for the delay of the sensory-motor system. In a previous study, "proactive control" was discussed as one example of a human predictive function, in which hand motion preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit is segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli was shortened by more than 10%. This shortening accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

  11. Clinically Meaningful Rehabilitation Outcomes of Low Vision Patients Served by Outpatient Clinical Centers.

    PubMed

    Goldstein, Judith E; Jackson, Mary Lou; Fox, Sandra M; Deremeik, James T; Massof, Robert W

    2015-07-01

    To facilitate comparative clinical outcome research in low vision rehabilitation, we must use patient-centered measurements that reflect clinically meaningful changes in visual ability. To quantify the effects of currently provided low vision rehabilitation (LVR) on patients who present for outpatient LVR services in the United States. Prospective, observational study of new patients seeking outpatient LVR services. From April 2008 through May 2011, 779 patients from 28 clinical centers in the United States were enrolled in the Low Vision Rehabilitation Outcomes Study. The Activity Inventory, a visual function questionnaire, was administered to measure overall visual ability and visual ability in 4 functional domains (reading, mobility, visual motor function, and visual information processing) at baseline and 6 to 9 months after usual LVR care. The Geriatric Depression Scale, Telephone Interview for Cognitive Status, and Medical Outcomes Study 36-Item Short-Form Health Survey physical functioning questionnaires were also administered to measure patients' psychological, cognitive, and physical health states, respectively, and clinical findings of patients were provided by study centers. Mean changes in the study population and minimum clinically important differences in the individual in overall visual ability and in visual ability in 4 functional domains as measured by the Activity Inventory. Baseline and post-rehabilitation measures were obtained for 468 patients. Minimum clinically important differences (95% CIs) were observed in nearly half (47% [95% CI, 44%-50%]) of patients in overall visual ability. The prevalence rates of patients with minimum clinically important differences in visual ability in functional domains were reading (44% [95% CI, 42%-48%]), visual motor function (38% [95% CI, 36%-42%]), visual information processing (33% [95% CI, 31%-37%]), and mobility (27% [95% CI, 25%-31%]). 
The largest average effect size (Cohen d = 0.87) for the population was observed in overall visual ability. Age (P = .006) was an independent predictor of changes in overall visual ability, and logMAR visual acuity (P = .002) was predictive of changes in visual information processing. Forty-four to fifty percent of patients presenting for outpatient LVR show clinically meaningful differences in overall visual ability after LVR, and the average effect sizes in overall visual ability are large, close to 1 SD.

  12. Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots.

    PubMed

    Chen, Jian; Jia, Bingxi; Zhang, Kaixiang

    2017-11-01

    In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track the desired trajectory using visual feedback. The trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of the trifocal tensor. In previous works, the start, current, and final images are required to share enough visual information to estimate the trifocal tensor. However, this requirement can easily be violated for perspective cameras with a limited field of view. In this paper, a key frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (installing position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works for almost all practical circumstances, including both trajectory tracking and pose regulation tasks. Simulations were conducted on the virtual experimentation platform (V-REP) to evaluate the effectiveness of the proposed approach.

  13. A comparison of different category scales for estimating disease severity

    USDA-ARS?s Scientific Manuscript database

    Plant pathologists most often obtain quantitative information on disease severity using visual assessments. Category scales are widely used for assessing disease severity, including for screening germplasm. The most widely used category scale is the Horsfall-Barratt (H-B) scale, but reports show tha...
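    A category scale of this kind maps a continuous percent-severity estimate to a discrete grade. The sketch below encodes the commonly cited 12-grade Horsfall-Barratt divisions; the exact class boundaries are an assumption based on the standard published scale, not taken from this record.

    ```python
    # Upper bounds (percent severity) of the 12 Horsfall-Barratt grades:
    # grade 1 = 0%, 2 = 0-3%, 3 = 3-6%, 4 = 6-12%, 5 = 12-25%, 6 = 25-50%,
    # 7 = 50-75%, 8 = 75-88%, 9 = 88-94%, 10 = 94-97%, 11 = 97-<100%, 12 = 100%
    HB_UPPER = [0, 3, 6, 12, 25, 50, 75, 88, 94, 97, 100 - 1e-9, 100]

    def hb_grade(severity_pct):
        """Map a percent disease severity to its Horsfall-Barratt grade."""
        for grade, upper in enumerate(HB_UPPER, start=1):
            if severity_pct <= upper:
                return grade
        return 12
    ```

    Note the logarithmic spacing around the midpoint, which is the feature of the H-B scale that comparison studies typically examine.
    
    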

  14. Visualization of the 3-D topography of the optic nerve head through a passive stereo vision model

    NASA Astrophysics Data System (ADS)

    Ramirez, Juan M.; Mitra, Sunanda; Morales, Jose

    1999-01-01

    This paper describes a system for surface recovery and visualization of the 3D topography of the optic nerve head, to support early diagnosis and follow-up of glaucoma. In stereo vision, depth information is obtained from triangulation of corresponding points in a pair of stereo images. In this paper, the use of the cepstrum transformation as a disparity measurement technique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse-to-fine depth-from-stereo algorithm, providing an initial range map with the depth information encoded as gray levels. These sparse depth data are processed through a cubic B-spline interpolation technique in order to obtain a smoother representation. This methodology is being refined especially for use with medical images in the clinical evaluation of eye diseases such as open-angle glaucoma, and is currently under testing for clinical evaluation and analysis of reproducibility and accuracy.
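    Once a disparity has been measured between corresponding windows, the triangulation step the abstract mentions reduces to the standard relation Z = f * B / d for a rectified stereo pair. The numbers below are illustrative, not taken from the paper.

    ```python
    # Minimal depth-from-disparity sketch for a rectified stereo pair:
    #   Z = f * B / d
    # with focal length f (pixels), baseline B, and disparity d (pixels).
    def depth_from_disparity(disparity_px, focal_px, baseline_mm):
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_mm / disparity_px

    # Illustrative values (assumed, not from the paper)
    z = depth_from_disparity(disparity_px=8.0, focal_px=1200.0, baseline_mm=15.0)
    # z is in millimetres because the baseline is given in millimetres
    ```

    Applying this per window yields the sparse range map that the cubic B-spline interpolation then smooths.
    
    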

  15. Laser-induced fluorescence imaging of subsurface tissue structures with a volume holographic spatial-spectral imaging system.

    PubMed

    Luo, Yuan; Gelsinger-Austin, Paul J; Watson, Jonathan M; Barbastathis, George; Barton, Jennifer K; Kostuk, Raymond K

    2008-09-15

    A three-dimensional imaging system incorporating multiplexed holographic gratings to visualize fluorescence tissue structures is presented. Holographic gratings formed in volume recording materials such as a phenanthrenequinone poly(methyl methacrylate) photopolymer have narrowband angular and spectral transmittance filtering properties that enable obtaining spatial-spectral information within an object. We demonstrate this imaging system's ability to obtain multiple depth-resolved fluorescence images simultaneously.

  16. Analysis of Total Visual and CCD V-Broadband Observations of Comet C/1995 O1 (Hale-Bopp): 1995-2003

    NASA Astrophysics Data System (ADS)

    de Almeida, A. A.; Boczko, R.; Lopes, A. R.; Sanzovo, G. C.

    The wealth of available information on total visual magnitudes and broadband-V CCD observations of the exceptionally bright Comet C/1995 O1 (Hale-Bopp) provides an excellent opportunity to test the Semi-Empirical Method of Visual Magnitudes (de Almeida, Singh & Huebner, 1997) for very bright comets. The main objective is to extend the method to include total visual magnitude observations obtained with a CCD detector and V filter in our analysis of total visual magnitudes and to obtain a single light curve. We compare the careful CCD V-broadband observations of Liller (1997) by plotting them together with the total visual magnitude observations from experienced visual observers found in the International Comet Quarterly (ICQ) archive. We find good agreement despite the fact that CCDs with V-filter passbands systematically detect more coma than visual observers, since they have different responses to C2, the main emission from the coma, and consequently should be used with larger aperture diameters. A data set of ~400 selected CCD observations covering about the same 5-year time span as the ~12,000 ICQ total visual magnitude observations was used in the analysis. A least-squares fit to the values yielded a relation of water production rate vs. heliocentric distance for the pre- and post-perihelion phases, which is converted into gas production rates (in g/s) released by the nucleus. The dimension of the nucleus as well as its effective active area is determined and compared to other works.
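    The least-squares fit mentioned above is typically a power law Q(r) = Q0 * r^(-n) fitted in log-log space. The sketch below shows that generic procedure on synthetic data; the actual Hale-Bopp production rates and fitted index are not reproduced here.

    ```python
    import numpy as np

    # Synthetic (heliocentric distance, water production rate) pairs,
    # generated from an assumed power law with small perturbations.
    r = np.array([0.95, 1.2, 1.8, 2.5, 3.5])                       # AU
    Q = 1e31 * r ** -3.2 * (1 + 0.01 * np.array([1, -1, 2, 0, -2]))  # molecules/s

    # Fit log Q = log Q0 - n * log r by linear least squares
    slope, intercept = np.polyfit(np.log(r), np.log(Q), 1)
    n = -slope            # recovered power-law index
    Q0 = np.exp(intercept)  # production rate at r = 1 AU
    ```

    Separate fits for the pre- and post-perihelion arcs give the two branches of the relation the abstract refers to.
    
    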

  17. A Rules-Based Service for Suggesting Visualizations to Analyze Earth Science Phenomena.

    NASA Astrophysics Data System (ADS)

    Prabhu, A.; Zednik, S.; Fox, P. A.; Ramachandran, R.; Maskey, M.; Shie, C. L.; Shen, S.

    2016-12-01

    Current Earth Science Information Systems lack support for new or interdisciplinary researchers, who may be unfamiliar with the domain vocabulary or the breadth of relevant data available. We need to evolve the current information systems to reduce the time required for data preparation, processing, and analysis. This can be done by effectively salvaging the "dark" resources in Earth Science. We assert that Earth science metadata assets are dark resources, information resources that organizations collect, process, and store for regular business or operational activities but fail to utilize for other purposes. In order to effectively use these dark resources, especially for data processing and visualization, we need a combination of domain, data product, and processing knowledge, i.e., a knowledge base from which specific data operations can be performed. In this presentation, we describe a semantic, rules-based approach to provide a service that suggests visualizations of Earth Science phenomena, based on the data variables extracted using the "dark" metadata resources. We use Jena rules to make assertions about compatibility between a phenomenon and various visualizations based on multiple factors. We created separate orthogonal rulesets to map each of these factors to the various phenomena. Some of the factors we have considered include measurements, spatial resolution, and time intervals. This approach enables easy additions and deletions based on newly obtained domain knowledge or phenomenon-related information, thus improving the accuracy of the rules service overall.

  18. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods rely only on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. At the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. At the next level, interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to previous methods.

  19. Evaluation of tactual displays for flight control

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Tanner, R. B.; Triggs, T. J.

    1973-01-01

    Manual tracking experiments were conducted to determine the suitability of tactual displays for presenting flight-control information in multitask situations. Although tracking error scores are considerably greater than scores obtained with a continuous visual display, preliminary results indicate that inter-task interference effects are substantially less with the tactual display in situations that impose high visual scanning workloads. The single-task performance degradation found with the tactual display appears to be a result of the coding scheme rather than the use of the tactual sensory mode per se. Analysis with the state-variable pilot/vehicle model shows that reliable predictions of tracking errors can be obtained for wide-band tracking systems once the pilot-related model parameters have been adjusted to reflect the pilot-display interaction.

  20. The simplest chronoscope II: reaction time measured by meterstick versus machine.

    PubMed

    Montare, Alberto

    2010-12-01

    Visual simple reaction time (SRT) scores measured in 31 college students of both sexes by use of the simplest chronoscope methodology (meterstick SRT) were compared to scores obtained by use of an electromechanical multi-choice reaction timer (machine SRT). Four hypotheses were tested. Results indicated that the previous mean value of meterstick SRT was replicated; meterstick SRT was significantly faster than long-standing population estimates of mean SRT; and machine SRT was significantly slower than the same long-standing mean SRT estimates for the population. Also, the mean meterstick SRT of 181 msec. was significantly faster than the mean machine SRT of 294 msec. It was theorized that differential visual information processing occurred such that the dorsal visual stream subserved meterstick SRT; whereas the ventral visual stream subserved machine SRT.
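    The "simplest chronoscope" works because a released meterstick falls freely, so the distance at which it is caught converts to a reaction time via t = sqrt(2d/g). The sketch below shows that standard conversion; the specific catch distance is illustrative.

    ```python
    import math

    def catch_distance_to_rt_ms(d_cm, g=9.81):
        """Convert a meterstick catch distance (cm) to reaction time (ms),
        assuming free fall from rest: d = (1/2) g t^2  =>  t = sqrt(2d/g)."""
        d_m = d_cm / 100.0
        return 1000.0 * math.sqrt(2.0 * d_m / g)

    rt = catch_distance_to_rt_ms(16.0)   # catching at the 16 cm mark
    ```

    A catch near 16 cm corresponds to roughly 180 ms, which is consistent with the mean meterstick SRT of 181 msec reported in the record.
    
    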

  1. Dynamic visual attention: motion direction versus motion magnitude

    NASA Astrophysics Data System (ADS)

    Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.

    2008-02-01

    Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity, and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing the advantages and drawbacks of each method as well as its preferred domain of application.
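    Quantitative comparisons of this kind commonly score a model saliency map against a human fixation map with a Pearson correlation coefficient (the "CC" metric). A minimal sketch, with random arrays standing in for real model output and fixation data:

    ```python
    import numpy as np

    def saliency_cc(model_map, human_map):
        """Pearson correlation between a model saliency map and a
        human fixation density map of the same shape."""
        m = (model_map - model_map.mean()) / model_map.std()
        h = (human_map - human_map.mean()) / human_map.std()
        return float((m * h).mean())

    rng = np.random.default_rng(1)
    human = rng.random((48, 64))      # stand-in fixation density map
    cc = saliency_cc(human, human)    # identical maps give CC = 1
    ```

    Averaging CC over frames and observers gives a per-model score that can be compared across the magnitude-based and vector-based motion variants.
    
    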

  2. Build a Robust Learning Feature Descriptor by Using a New Image Visualization Method for Indoor Scenario Recognition

    PubMed Central

    Wang, Xin; Deng, Zhongliang

    2017-01-01

    In order to recognize indoor scenarios, we extract image features for detecting objects; however, computers can make unexpected mistakes. After visualizing the histogram of oriented gradients (HOG) features, we find that the world through the eyes of a computer is indeed different from that seen by human eyes, which helps researchers see the reasons that cause a computer to make errors. Additionally, the visualization shows that HOG features capture rich texture information but also introduce a large amount of background interference. In order to enhance the robustness of the HOG feature, we propose an improved method for suppressing background interference. On the basis of the original HOG feature, we introduce principal component analysis (PCA) to extract the principal components of the image colour information, and then build a new hybrid feature descriptor, named HOG-PCA (HOGP), by deeply fusing these two features. Finally, the HOGP is compared to the state-of-the-art HOG feature descriptor in four scenes under different illumination. In the simulation and experimental tests, the qualitative and quantitative assessments indicate that the visualized images of the HOGP feature are close to the observations made by human eyes, which is better than the original HOG feature for object detection. Furthermore, the runtime of our proposed algorithm is hardly increased in comparison to the classic HOG feature. PMID:28677635
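    The general fusion idea, reducing colour information with PCA and concatenating the leading components onto a gradient-based descriptor, can be sketched as follows. The PCA is a plain eigendecomposition, and a simple orientation-histogram stand-in replaces a full HOG computation; the paper's actual descriptor layout and fusion weights are not reproduced here.

    ```python
    import numpy as np

    def pca_components(X, k):
        """Project samples onto the top-k principal components."""
        Xc = X - X.mean(axis=0)
        cov = Xc.T @ Xc / (len(X) - 1)
        vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
        order = np.argsort(vals)[::-1][:k]         # top-k directions
        return Xc @ vecs[:, order]                 # n x k projections

    def gradient_hist(gray, bins=9):
        """Magnitude-weighted histogram of unsigned gradient orientations
        (a stand-in for a full HOG block, which also uses cells and
        block normalization)."""
        gy, gx = np.gradient(gray.astype(float))
        ang = np.arctan2(gy, gx) % np.pi
        hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi),
                               weights=np.hypot(gx, gy))
        return hist / (np.linalg.norm(hist) + 1e-9)

    rng = np.random.default_rng(0)
    img = rng.random((32, 32, 3))                         # stand-in RGB patch
    color_pcs = pca_components(img.reshape(-1, 3), k=2)   # per-pixel colour PCs
    hogp = np.concatenate([gradient_hist(img.mean(axis=2)),
                           color_pcs.std(axis=0)])        # fused descriptor
    ```

    The fused vector keeps the texture content of the gradient histogram while summarizing the dominant colour structure, which is the mechanism the record credits for suppressing background interference.
    
    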

  3. Access to Awareness for Faces during Continuous Flash Suppression Is Not Modulated by Affective Knowledge

    PubMed Central

    Rabovsky, Milena; Stein, Timo; Abdel Rahman, Rasha

    2016-01-01

    It is a controversially debated topic whether stimuli can be analyzed up to the semantic level when they are suppressed from visual awareness during continuous flash suppression (CFS). Here, we investigated whether affective knowledge, i.e., affective biographical information about faces, influences the time it takes for initially invisible faces with neutral expressions to overcome suppression and break into consciousness. To test this, we used negative, positive, and neutral famous faces as well as initially unfamiliar faces, which were associated with negative, positive or neutral biographical information. Affective knowledge influenced ratings of facial expressions, corroborating recent evidence and indicating the success of our affective learning paradigm. Furthermore, we replicated shorter suppression durations for upright than for inverted faces, demonstrating the suitability of our CFS paradigm. However, affective biographical information did not modulate suppression durations for newly learned faces, and even though suppression durations for famous faces were influenced by affective knowledge, these effects did not differ between upright and inverted faces, indicating that they might have been due to low-level visual differences. Thus, we did not obtain unequivocal evidence for genuine influences of affective biographical information on access to visual awareness for faces during CFS. PMID:27119743

  4. Neural mechanisms of human perceptual choice under focused and divided attention.

    PubMed

    Wyart, Valentin; Myers, Nicholas E; Summerfield, Christopher

    2015-02-25

    Perceptual decisions occur after the evaluation and integration of momentary sensory inputs, and dividing attention between spatially disparate sources of information impairs decision performance. However, it remains unknown whether dividing attention degrades the precision of sensory signals, precludes their conversion into decision signals, or dampens the integration of decision information toward an appropriate response. Here we recorded human electroencephalographic (EEG) activity while participants categorized one of two simultaneous and independent streams of visual gratings according to their average tilt. By analyzing trial-by-trial correlations between EEG activity and the information offered by each sample, we obtained converging behavioral and neural evidence that dividing attention between left and right visual fields does not dampen the encoding of sensory or decision information. Under divided attention, momentary decision information from both visual streams was encoded in slow parietal signals without interference but was lost downstream during their integration as reflected in motor mu- and beta-band (10-30 Hz) signals, resulting in a "leaky" accumulation process that conferred greater behavioral influence to more recent samples. By contrast, sensory inputs that were explicitly cued as irrelevant were not converted into decision signals. These findings reveal that a late cognitive bottleneck on information integration limits decision performance under divided attention, and places new capacity constraints on decision-theoretic models of information integration under cognitive load. Copyright © 2015 the authors.
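    The "leaky" accumulation process described above can be written as a decision variable that decays by a leak factor before each new sample is added, so recent samples carry more weight. A minimal sketch with an illustrative leak value:

    ```python
    def leaky_accumulate(samples, leak=0.3):
        """Integrate momentary evidence with leak: v <- (1-leak)*v + x.
        With leak > 0 the last sample contributes with weight 1 and the
        first with (1-leak)**(n-1); leak = 0 is a perfect integrator."""
        v = 0.0
        for x in samples:
            v = (1.0 - leak) * v + x
        return v
    ```

    Comparing `leaky_accumulate([1, 0, 0, 0])` with `leaky_accumulate([0, 0, 0, 1])` makes the recency weighting concrete: the same unit of evidence counts for less the earlier it arrives.
    
    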

  5. Neural mechanisms of human perceptual choice under focused and divided attention

    PubMed Central

    Wyart, Valentin; Myers, Nicholas E.; Summerfield, Christopher

    2015-01-01

    Perceptual decisions occur after evaluation and integration of momentary sensory inputs, and dividing attention between spatially disparate sources of information impairs decision performance. However, it remains unknown whether dividing attention degrades the precision of sensory signals, precludes their conversion into decision signals, or dampens the integration of decision information towards an appropriate response. Here we recorded human electroencephalographic (EEG) activity whilst participants categorised one of two simultaneous and independent streams of visual gratings according to their average tilt. By analyzing trial-by-trial correlations between EEG activity and the information offered by each sample, we obtained converging behavioural and neural evidence that dividing attention between left and right visual fields does not dampen the encoding of sensory or decision information. Under divided attention, momentary decision information from both visual streams was encoded in slow parietal signals without interference but was lost downstream during their integration as reflected in motor mu- and beta-band (10–30 Hz) signals, resulting in a ‘leaky’ accumulation process which conferred greater behavioural influence to more recent samples. By contrast, sensory inputs that were explicitly cued as irrelevant were not converted into decision signals. These findings reveal that a late cognitive bottleneck on information integration limits decision performance under divided attention, and place new capacity constraints on decision-theoretic models of information integration under cognitive load. PMID:25716848

  6. Task-technology fit of video telehealth for nurses in an outpatient clinic setting.

    PubMed

    Cady, Rhonda G; Finkelstein, Stanley M

    2014-07-01

    Incorporating telehealth into outpatient care delivery supports management of consumer health between clinic visits. Task-technology fit is a framework for understanding how technology helps and/or hinders a person during work processes. Evaluating the task-technology fit of video telehealth for personnel working in a pediatric outpatient clinic and providing care between clinic visits ensures the information provided matches the information needed to support work processes. The workflow of advanced practice registered nurse (APRN) care coordination provided via telephone and video telehealth was described and measured using a mixed-methods workflow analysis protocol that incorporated cognitive ethnography and time-motion study. Qualitative and quantitative results were merged and analyzed within the task-technology fit framework to determine the workflow fit of video telehealth for APRN care coordination. Incorporating video telehealth into APRN care coordination workflow provided visual information unavailable during telephone interactions. Despite additional tasks and interactions needed to obtain the visual information, APRN workflow efficiency, as measured by time, was not significantly changed. Analyzed within the task-technology fit framework, the increased visual information afforded by video telehealth supported the assessment and diagnostic information needs of the APRN. Telehealth must provide the right information to the right clinician at the right time. Evaluating task-technology fit using a mixed-methods protocol ensured rigorous analysis of fit within work processes and identified workflows that benefit most from the technology.

  7. Study of the cerrado vegetation in the Federal District area from orbital data. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Aoki, H.; Dossantos, J. R.

    1980-01-01

    The physiognomic units of cerrado in the area of the Distrito Federal (DF) were studied through visual and automatic analysis of products provided by the Multispectral Scanning System (MSS) of LANDSAT. The visual analysis of the multispectral images in black and white, at the 1:250,000 scale, was based on texture and tonal patterns. The automatic analysis of the computer-compatible tapes (CCT) was carried out by means of the IMAGE-100 system. The following conclusions were obtained: (1) the delimitation of cerrado vegetation forms can be made by both visual and automatic analysis; (2) in the visual analysis, the principal parameter used to discriminate the cerrado forms was the tonal pattern, independently of the season, and channel 5 gave the best information; (3) in the automatic analysis, the data of the four MSS channels can be used in the discrimination of the cerrado forms; and (4) in the automatic analysis, combinations of the four channels gave more information in separating cerrado units when soil types were considered.

  8. Faceted Visualization of Three Dimensional Neuroanatomy By Combining Ontology with Faceted Search

    PubMed Central

    Veeraraghavan, Harini; Miller, James V.

    2013-01-01

    In this work, we present a faceted-search based approach for visualization of anatomy by combining a three dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating the user queries into the controlled terminology our approach eliminates the need for the user to use controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the opensource 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset. PMID:24006207

  9. Faceted visualization of three dimensional neuroanatomy by combining ontology with faceted search.

    PubMed

    Veeraraghavan, Harini; Miller, James V

    2014-04-01

    In this work, we present a faceted-search based approach for visualization of anatomy by combining a three dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating the user queries into the controlled terminology, our approach eliminates the need for the user to use controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the open-source 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset.

  10. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space

    PubMed Central

    Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.

    2017-01-01

    Binocular stereopsis is the ability of a visual system, belonging to a living being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, ground-truth information about three-dimensional visual space, which is rarely available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382
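
    Ground-truth disparity follows from camera geometry. As a minimal sketch, for a simplified parallel-camera rig (not the convergent, cyclotorted eye poses the database actually renders, which require a full reprojection), horizontal disparity is baseline × focal length / depth. The baseline and focal-length values below are illustrative assumptions, not parameters of this dataset:

```python
import numpy as np

def disparity_from_depth(depth_m, baseline_m=0.065, focal_px=800.0):
    """Ground-truth horizontal disparity (pixels) for a parallel stereo rig.

    disparity = baseline * focal / depth. The convergent geometry with
    cyclotorsion used in the actual dataset needs a full camera-pose
    reprojection instead of this closed form.
    """
    depth = np.asarray(depth_m, dtype=float)
    return baseline_m * focal_px / depth

# A point 1 m away, 6.5 cm baseline, 800 px focal length:
d = disparity_from_depth(1.0)   # 52.0 pixels
```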

  11. Evaluation of afferent pain pathways in adrenomyeloneuropathic patients.

    PubMed

    Yagüe, Sara; Veciana, Misericordia; Casasnovas, Carlos; Ruiz, Montserrat; Pedro, Jordi; Valls-Solé, Josep; Pujol, Aurora

    2018-03-01

    Patients with adrenomyeloneuropathy may have dysfunctions of the visual, auditory, motor and somatosensory pathways. We aimed to examine the nociceptive pathways by means of laser evoked potentials (LEPs) to obtain additional information on the pathophysiology of this condition. In 13 adrenomyeloneuropathic patients we examined LEPs to leg, arm and face stimulation. Normative data were obtained from 10 healthy subjects examined under the same experimental conditions. We also examined brainstem auditory evoked potentials (BAEPs), pattern reversal full-field visual evoked potentials (VEPs), motor evoked potentials (MEPs) and somatosensory evoked potentials (SEPs). Upper and lower limb MEPs and SEPs, as well as BAEPs, were abnormal in all patients, while VEPs were abnormal in 3 of them (23.1%). LEPs revealed abnormalities to stimulation of the face in 4 patients (30.7%), the forearm in 4 patients (30.7%) and the leg in 10 patients (76.9%). The pathologic process of adrenomyeloneuropathy is characterized by a preferential involvement of the auditory, motor and somatosensory tracts and, less severely, of the visual and nociceptive pathways. This non-inflammatory distal axonopathy preferentially damages large myelinated spinal tracts, but there is also partial involvement of small myelinated fibres. LEP studies can provide relevant information about afferent pain pathway involvement in adrenomyeloneuropathic patients. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  12. Qualitative Examination of Children's Naming Skills through Test Adaptations.

    ERIC Educational Resources Information Center

    Fried-Oken, Melanie

    1987-01-01

    The Double Administration Naming Technique assists clinicians in obtaining qualitative information about a client's visual confrontation naming skills through administration of a standard naming test; readministration of the same test; identification of single and double errors; cuing for double naming errors; and qualitative analysis of naming…

  13. Task-dependent engagements of the primary visual cortex during kinesthetic and visual motor imagery.

    PubMed

    Mizuguchi, Nobuaki; Nakamura, Maiko; Kanosue, Kazuyuki

    2017-01-01

    Motor imagery can be divided into kinesthetic and visual aspects. In the present study, we investigated excitability in the corticospinal tract and primary visual cortex (V1) during kinesthetic and visual motor imagery. To accomplish this, we measured motor evoked potentials (MEPs) and probability of phosphene occurrence during the two types of motor imageries of finger tapping. The MEPs and phosphenes were induced by transcranial magnetic stimulation to the primary motor cortex and V1, respectively. The amplitudes of MEPs and probability of phosphene occurrence during motor imagery were normalized based on the values obtained at rest. Corticospinal excitability increased during both kinesthetic and visual motor imagery, while excitability in V1 was increased only during visual motor imagery. These results imply that modulation of cortical excitability during kinesthetic and visual motor imagery is task dependent. The present finding aids in the understanding of the neural mechanisms underlying motor imagery and provides useful information for the use of motor imagery in rehabilitation or motor imagery training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Using open-source programs to create a web-based portal for hydrologic information

    NASA Astrophysics Data System (ADS)

    Kim, H.

    2013-12-01

    Some hydrologic data sets, such as basin climatology, precipitation, and terrestrial water storage, are not easily obtainable and distributable due to their size and complexity. We present a Hydrologic Information Portal (HIP) that has been implemented at the University of California Center for Hydrologic Modeling (UCCHM) and that has been organized around the large river basins of North America. This portal can be accessed through a modern web browser, enabling easy access to and visualization of such hydrologic data sets. The main features of our HIP include a set of data visualization tools with which users can search, retrieve, analyze, integrate, organize, and map data within large river basins. Recent information technologies such as Google Maps, Tornado (a Python asynchronous web server), NumPy/SciPy (scientific libraries for Python) and d3.js (a visualization library for JavaScript) were incorporated into the HIP to ease navigation of large data sets. With such open-source libraries, the HIP gives public users a way to combine and explore various data sets by generating multiple chart types (line, bar, pie, scatter plot) directly from the Google Maps viewport. Every rendered object on the viewport, such as a basin shape, is clickable, and this is the first step in accessing the visualization of the data sets.
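
    Chart front ends like the ones described typically consume aggregated JSON from the server. A minimal sketch of that server-side aggregation step is below; the record layout and function name are hypothetical, not taken from the UCCHM implementation:

```python
import json
from statistics import mean

def basin_summary(records):
    """Aggregate (basin, month, value) records into per-basin monthly
    means, shaped as a JSON-ready payload for line/bar chart views."""
    by_basin = {}
    for basin, month, value in records:
        by_basin.setdefault(basin, {}).setdefault(month, []).append(value)
    return {
        basin: [{"month": m, "mean": mean(vals)}
                for m, vals in sorted(months.items())]
        for basin, months in by_basin.items()
    }

# Hypothetical precipitation records for one basin:
records = [("Mississippi", "2013-01", 42.0),
           ("Mississippi", "2013-01", 38.0),
           ("Mississippi", "2013-02", 55.0)]
payload = json.dumps(basin_summary(records))  # served to the browser
```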

  15. Receptive Fields and the Reconstruction of Visual Information.

    DTIC Science & Technology

    1985-09-01

    depending on the noise. Thus our model would suggest that the interpolation filters for deblurring are playing a role in hyperacuity. This is novel...of additional precision in the information can be obtained by a process of deblurring, which could be relevant to hyperacuity. It also provides an... impulse of heat diffuses into increasingly larger Gaussian distributions as time proceeds. Mathematically, let f(x) denote the initial temperature

  16. Image processing for hazard recognition in on-board weather radar

    NASA Technical Reports Server (NTRS)

    Kelly, Wallace E. (Inventor); Rand, Timothy W. (Inventor); Uckun, Serdar (Inventor); Ruokangas, Corinne C. (Inventor)

    2003-01-01

    A method of providing weather radar images to a user includes obtaining radar image data corresponding to a weather radar image to be displayed. The radar image data is image processed to identify a feature of the weather radar image which is potentially indicative of a hazardous weather condition. The weather radar image is displayed to the user along with a notification of the existence of the feature which is potentially indicative of the hazardous weather condition. Notification can take the form of textual information regarding the feature, including feature type and proximity information. Notification can also take the form of visually highlighting the feature, for example by forming a visual border around the feature. Other forms of notification can also be used.
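
    A toy sketch of the kind of image processing the patent describes: flagging cells of high reflectivity, returning a bounding box for drawing the visual border, and producing a textual notification. The threshold, array layout, and function name are illustrative assumptions, not details from the patent:

```python
import numpy as np

def highlight_hazard(radar, threshold=50.0):
    """Flag radar cells above a reflectivity threshold (a stand-in for
    the hazard-feature detection) and return the bounding box used to
    draw a visual border, plus a textual notification for the user."""
    mask = radar >= threshold
    if not mask.any():
        return None, "no hazardous feature detected"
    rows, cols = np.where(mask)
    box = (rows.min(), cols.min(), rows.max(), cols.max())
    note = f"possible hazard: {mask.sum()} cells >= {threshold} dBZ"
    return box, note

radar = np.zeros((8, 8))
radar[2:4, 5:7] = 60.0            # a small high-reflectivity region
box, note = highlight_hazard(radar)   # box == (2, 5, 3, 6)
```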

  17. Genetic parameter estimates for carcass traits and visual scores including or not genomic information.

    PubMed

    Gordo, D G M; Espigolan, R; Tonussi, R L; Júnior, G A F; Bresolin, T; Magalhães, A F Braga; Feitosa, F L; Baldi, F; Carvalheiro, R; Tonhati, H; de Oliveira, H N; Chardulo, L A L; de Albuquerque, L G

    2016-05-01

    The objective of this study was to determine whether visual scores used as selection criteria in Nellore breeding programs are effective indicators of carcass traits measured after slaughter. Additionally, this study evaluated the effect of different structures of the relationship matrix (A and H) on the estimation of genetic parameters and on the prediction accuracy of breeding values. There were 13,524 animals for visual scores of conformation (CS), finishing precocity (FP), and muscling (MS) and 1,753, 1,747, and 1,564 for LM area (LMA), backfat thickness (BF), and HCW, respectively. Of these, 1,566 animals were genotyped using a high-density panel containing 777,962 SNP. Six analyses were performed using multitrait animal models, each including the 3 visual scores and 1 carcass trait. For the visual scores, the model included direct additive genetic and residual random effects and the fixed effects of contemporary group (defined by year of birth, management group at yearling, and farm) and the linear effect of age of animal at yearling. The same model was used for the carcass traits, replacing the effect of age of animal at yearling with the linear effect of age of animal at slaughter. The variance and covariance components were estimated by the REML method in analyses using the numerator relationship matrix (A) or combining the genomic and the numerator relationship matrices (H). The heritability estimates for the visual scores obtained with the 2 methods were similar and of moderate magnitude (0.23-0.34), indicating that these traits should respond to direct selection. The heritabilities for LMA, BF, and HCW were 0.13, 0.07, and 0.17, respectively, using matrix A and 0.29, 0.16, and 0.23, respectively, using matrix H. The genetic correlations between the visual scores and carcass traits were positive, and higher correlations were generally obtained when matrix H was used. 
Considering the difficulties and cost of measuring carcass traits postmortem, visual scores of CS, FP, and MS could be used as selection criteria to improve HCW, BF, and LMA. The use of genomic information permitted the detection of greater additive genetic variability for LMA and BF. For HCW, the high magnitude of the genetic correlations with visual scores was probably sufficient to recover genetic variability. The two methods provided similar breeding value accuracies, especially for the visual scores.
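
    Combining the genomic and numerator relationship matrices is commonly done through the single-step H matrix, whose inverse has a standard closed form (Aguilar et al., 2010). A minimal numpy sketch of that construction, not code from this study:

```python
import numpy as np

def h_inverse(a_inv, a22, g, genotyped):
    """Inverse of the combined relationship matrix H used in single-step
    genomic evaluations:

        H^-1 = A^-1 + [[0, 0], [0, G^-1 - A22^-1]]

    where the correction applies only to the block of genotyped animals
    (indices `genotyped`), A22 is A restricted to that block, and G is
    the genomic relationship matrix."""
    h_inv = a_inv.copy()
    idx = np.ix_(genotyped, genotyped)
    h_inv[idx] += np.linalg.inv(g) - np.linalg.inv(a22)
    return h_inv

# Sanity check: if G equals A22, H reduces to A.
A = np.array([[1.0, 0.5, 0.25],
              [0.5, 1.0, 0.5],
              [0.25, 0.5, 1.0]])
a22 = A[1:3, 1:3]
h_inv = h_inverse(np.linalg.inv(A), a22, a22, [1, 2])
```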

  18. Instruments of scientific visual representation in atomic databases

    NASA Astrophysics Data System (ADS)

    Kazakov, V. V.; Kazakov, V. G.; Meshkov, O. I.

    2017-10-01

    Graphic tools for the representation of spectral data provided by operational information systems on atomic spectroscopy—ASD NIST, VAMDC, SPECTR-W3, and Electronic Structure of Atoms—in support of scientific research and training are presented. Tools for the visual representation of scientific data, such as spectrogram and Grotrian diagram plotting, are considered. The possibility of comparative analysis between experimentally obtained spectra and reference spectra of atomic systems generated from a resource's database is described. Techniques for accessing the mentioned graphic tools are also presented.

  19. Visual observation of fishes and aquatic habitat [Chapter 17

    Treesearch

    Russell F. Thurow; C. Andrew Dolloff; J. Ellen Marsden

    2012-01-01

    Whether accomplished above the water surface or performed underwater by snorkel, scuba, or hookah divers or remotely operated vehicles (ROVs); direct observation techniques are among the most effective means for obtaining accurate and often unique information on aquatic organisms in their natural surroundings. Many types of studies incorporate direct observation...

  20. Improvement of design of a surgical interface using an eye tracking device

    PubMed Central

    2014-01-01

    Background Surgical interfaces are used to help surgeons interpret and quantify patient information, and to present an integrated workflow where all available data are combined to enable optimal treatments. Human factors research provides a systematic approach to designing user interfaces with safety, accuracy, satisfaction and comfort. One human factors research method, the user-centered design approach, is used to develop a surgical interface for kidney tumor cryoablation. An eye tracking device is used to obtain the best configuration of the developed surgical interface. Methods The surgical interface for kidney tumor cryoablation has been developed considering the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, which comprise various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., have been developed. Experiments on a simulated cryoablation of a tumor task have been performed with surgeons to evaluate the proposed surgical interface. Fixation durations and numbers of fixations at informative regions of the surgical interface have been analyzed, and these data are used to modify the surgical interface. Results Eye movement data have shown that participants concentrated their attention on informative regions more when the number of displayed Computed Tomography (CT) images was reduced. Additionally, the time required for the participants to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. 
Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to those observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results have shown that the overall mental workload of surgeons related to the surgical interface has been low, as was aimed for, and the overall situational awareness scores of surgeons have been considerably high. Conclusions This preliminary study highlights the improvement of a developed surgical interface using eye tracking technology to obtain the best surgical interface configuration. The results presented here reveal that visual surgical interface design prepared according to eye movement characteristics may lead to improved usability. PMID:25080176

  1. Improvement of design of a surgical interface using an eye tracking device.

    PubMed

    Erol Barkana, Duygun; Açık, Alper; Duru, Dilek Goksel; Duru, Adil Deniz

    2014-05-07

    Surgical interfaces are used to help surgeons interpret and quantify patient information, and to present an integrated workflow where all available data are combined to enable optimal treatments. Human factors research provides a systematic approach to designing user interfaces with safety, accuracy, satisfaction and comfort. One human factors research method, the user-centered design approach, is used to develop a surgical interface for kidney tumor cryoablation. An eye tracking device is used to obtain the best configuration of the developed surgical interface. The surgical interface for kidney tumor cryoablation has been developed considering the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, which comprise various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., have been developed. Experiments on a simulated cryoablation of a tumor task have been performed with surgeons to evaluate the proposed surgical interface. Fixation durations and numbers of fixations at informative regions of the surgical interface have been analyzed, and these data are used to modify the surgical interface. Eye movement data have shown that participants concentrated their attention on informative regions more when the number of displayed Computed Tomography (CT) images was reduced. Additionally, the time required for the participants to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to those observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. 
The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results have shown that the overall mental workload of surgeons related to the surgical interface has been low, as was aimed for, and the overall situational awareness scores of surgeons have been considerably high. This preliminary study highlights the improvement of a developed surgical interface using eye tracking technology to obtain the best surgical interface configuration. The results presented here reveal that visual surgical interface design prepared according to eye movement characteristics may lead to improved usability.

  2. Intelligent visual localization of wireless capsule endoscopes enhanced by color information.

    PubMed

    Dimas, George; Spyrou, Evaggelos; Iakovidis, Dimitris K; Koulaouzidis, Anastasios

    2017-10-01

    Wireless capsule endoscopy (WCE) is performed with a miniature swallowable endoscope enabling the visualization of the whole gastrointestinal (GI) tract. One of the most challenging problems in WCE is the localization of the capsule endoscope (CE) within the GI lumen. Contemporary, radiation-free localization approaches are mainly based on the use of external sensors and transit time estimation techniques, with practically low localization accuracy. Latest advances for the solution of this problem include localization approaches based solely on visual information from the CE camera. In this paper we present a novel visual localization approach based on an intelligent, artificial neural network, architecture which implements a generic visual odometry (VO) framework capable of estimating the motion of the CE in physical units. Unlike the conventional, geometric, VO approaches, the proposed one is adaptive to the geometric model of the CE used; therefore, it does not require any prior knowledge about that model and its intrinsic parameters. Furthermore, it exploits color as a cue to increase localization accuracy and robustness. Experiments were performed using a robotic-assisted setup providing ground truth information about the actual location of the CE. The lowest average localization error achieved is 2.70 ± 1.62 cm, which is significantly lower than the error obtained with the geometric approach. This result constitutes a promising step towards the in-vivo application of VO, which will open new horizons for accurate local treatment, including drug infusion and surgical interventions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Infrared and visible image fusion based on visual saliency map and weighted least square optimization

    NASA Astrophysics Data System (ADS)

    Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua

    2017-05-01

    The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
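
    As a rough illustration of the two-layer scheme described above, and emphatically not the authors' implementation: the rolling-guidance/Gaussian decomposition is replaced by a plain box filter, the visual saliency map by simple local contrast, and the WLS detail optimization by max-absolute selection:

```python
import numpy as np

def box_blur(img, k=5):
    """Box filter, a crude stand-in for the paper's multi-scale
    decomposition (rolling guidance + Gaussian filters)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def fuse(ir, vis, k=5):
    """Two-scale IR/visible fusion: saliency-weighted base layers
    (|image - blur| as a toy saliency map) plus max-abs detail
    selection in place of the WLS optimization."""
    base_ir, base_vis = box_blur(ir, k), box_blur(vis, k)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    sal_ir = np.abs(ir - box_blur(ir, 2 * k + 1)) + 1e-6
    sal_vis = np.abs(vis - box_blur(vis, 2 * k + 1)) + 1e-6
    w = sal_ir / (sal_ir + sal_vis)          # per-pixel base weights
    base = w * base_ir + (1 - w) * base_vis
    detail = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return base + detail
```

Fusing an image with itself returns the image unchanged, a quick consistency check on the decomposition and recombination.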

  4. Using electroretinograms and multi-model inference to identify spectral classes of photoreceptors and relative opsin expression levels

    PubMed Central

    2017-01-01

    Understanding how individual photoreceptor cells factor in the spectral sensitivity of a visual system is essential to explain how they contribute to the visual ecology of the animal in question. Existing methods that model the absorption of visual pigments use templates which correspond closely to data from thin cross-sections of photoreceptor cells. However, few modeling approaches use a single framework to incorporate physical parameters of real photoreceptors, which can be fused, and can form vertical tiers. Akaike’s information criterion (AICc) was used here to select absorptance models of multiple classes of photoreceptor cells that maximize information, given visual system spectral sensitivity data obtained using extracellular electroretinograms and structural parameters obtained by histological methods. This framework was first used to select among alternative hypotheses of photoreceptor number. It identified spectral classes from a range of dark-adapted visual systems which have between one and four spectral photoreceptor classes. These were the velvet worm, Principapillatus hitoyensis, the branchiopod water flea, Daphnia magna, normal humans, and humans with enhanced S-cone syndrome, a condition in which S-cone frequency is increased due to mutations in a transcription factor that controls photoreceptor expression. Data from the Asian swallowtail, Papilio xuthus, which has at least five main spectral photoreceptor classes in its compound eyes, were included to illustrate potential effects of model over-simplification on multi-model inference. The multi-model framework was then used with parameters of spectral photoreceptor classes and the structural photoreceptor array kept constant. The goal was to map relative opsin expression to visual pigment concentration. It identified relative opsin expression differences for two populations of the bluefin killifish, Lucania goodei. 
The modeling approach presented here will be useful in selecting the most likely alternative hypotheses of opsin-based spectral photoreceptor classes, using relative opsin expression and extracellular electroretinography. PMID:28740757
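
    The AICc score used for this model selection has a simple closed form; a small sketch, together with Akaike weights for expressing relative support among candidate models:

```python
import math

def aicc(log_likelihood, k, n):
    """Corrected Akaike information criterion:
    AICc = 2k - 2 ln L + 2k(k+1)/(n - k - 1), valid for n > k + 1."""
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weights(scores):
    """Relative support for each candidate model from its AICc score;
    lower scores get exponentially more weight."""
    best = min(scores)
    rel = [math.exp(-0.5 * (s - best)) for s in scores]
    total = sum(rel)
    return [r / total for r in rel]
```

For example, a model with log-likelihood -10, k = 2 parameters, and n = 20 data points scores 24 + 12/17 ≈ 24.71.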

  5. Using electroretinograms and multi-model inference to identify spectral classes of photoreceptors and relative opsin expression levels.

    PubMed

    Lessios, Nicolas

    2017-01-01

    Understanding how individual photoreceptor cells factor in the spectral sensitivity of a visual system is essential to explain how they contribute to the visual ecology of the animal in question. Existing methods that model the absorption of visual pigments use templates which correspond closely to data from thin cross-sections of photoreceptor cells. However, few modeling approaches use a single framework to incorporate physical parameters of real photoreceptors, which can be fused, and can form vertical tiers. Akaike's information criterion (AICc) was used here to select absorptance models of multiple classes of photoreceptor cells that maximize information, given visual system spectral sensitivity data obtained using extracellular electroretinograms and structural parameters obtained by histological methods. This framework was first used to select among alternative hypotheses of photoreceptor number. It identified spectral classes from a range of dark-adapted visual systems which have between one and four spectral photoreceptor classes. These were the velvet worm, Principapillatus hitoyensis, the branchiopod water flea, Daphnia magna, normal humans, and humans with enhanced S-cone syndrome, a condition in which S-cone frequency is increased due to mutations in a transcription factor that controls photoreceptor expression. Data from the Asian swallowtail, Papilio xuthus, which has at least five main spectral photoreceptor classes in its compound eyes, were included to illustrate potential effects of model over-simplification on multi-model inference. The multi-model framework was then used with parameters of spectral photoreceptor classes and the structural photoreceptor array kept constant. The goal was to map relative opsin expression to visual pigment concentration. It identified relative opsin expression differences for two populations of the bluefin killifish, Lucania goodei. 
The modeling approach presented here will be useful in selecting the most likely alternative hypotheses of opsin-based spectral photoreceptor classes, using relative opsin expression and extracellular electroretinography.

  6. Visual words for lip-reading

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad B. A.; Jassim, Sabah

    2010-04-01

    In this paper, the automatic lip reading problem is investigated, and an innovative approach to its solution is proposed. This new VSR approach depends on the signature of the word itself, which is obtained from a hybrid feature extraction method based on geometric, appearance, and image transform features. The proposed VSR approach is termed "visual words". It consists of two main parts: 1) feature extraction/selection, and 2) visual speech feature recognition. After localizing the face and lips, several visual features of the lips were extracted, such as the height and width of the mouth; the mutual information and a quality measure between the DWT of the current ROI and the DWT of the previous ROI; the ratio of vertical to horizontal features taken from the DWT of the ROI; the ratio of vertical edges to horizontal edges of the ROI; the appearance of the tongue; and the appearance of teeth. Each spoken word is represented by 8 signals, one for each feature. These signals preserve the dynamics of the spoken word, which carry a good portion of the information. The system is then trained on these features using KNN and DTW. This approach has been evaluated using a large database of different speakers and large experiment sets. The evaluation has demonstrated the efficiency of the visual words approach, and shown that VSR is a speaker-dependent problem.
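
    The DTW matching step named above is a standard algorithm. A minimal sketch of DTW between two 1-D feature signals (the per-word template storage and KNN voting are omitted):

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D signals: the
    minimum cumulative |a_i - b_j| cost over monotone alignments,
    computed by classic dynamic programming."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

dtw([1, 2, 3], [1, 2, 2, 3])   # 0.0: warping absorbs the repeated 2
```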

  7. Understanding of how older adults with low vision obtain, process, and understand health information and services.

    PubMed

    Kim, Hyung Nam

    2017-10-16

    Twenty-five years after the Americans with Disabilities Act, accessibility in healthcare has still advanced little for people with visual impairments, particularly older adults with low vision. This study aims to advance understanding of how older adults with low vision obtain, process, and use health information and services, and to identify opportunities for information technology to support them. A convenience sample of 10 older adults with low vision participated in semi-structured phone interviews, which were audio-recorded and transcribed verbatim for analysis. Participants shared various concerns about accessing, understanding, and using health information, care services, and multimedia technologies. Two main themes and nine subthemes emerged from the analysis. Because of these concerns, older adults with low vision tended not to obtain the full range of health information and services needed to meet their specific needs. Those with low vision still rely on residual vision, so multimedia-based information can be useful, but it must be designed to ensure its accessibility, usability, and understandability.

  8. Study of the human postural control system during quiet standing using detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Teresa Blázquez, M.; Anguiano, Marta; de Saavedra, Fernando Arias; Lallena, Antonio M.; Carpena, Pedro

    2009-05-01

    The detrended fluctuation analysis is used to study the behavior of different time series obtained from the trajectory of the center of pressure, the output of the human postural control system. The results suggest that these trajectories present two different regimes in their scaling properties: persistent (for high frequencies, short-range time scales) to antipersistent (for low frequencies, long-range time scales) behavior. The similarity between the results obtained with eyes open and with eyes closed indicates either that the visual system may be disregarded by the postural control system while maintaining quiet standing, or that the control mechanisms associated with each type of information (visual, vestibular and somatosensory) cannot be disentangled with the type of analysis performed here.
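
    A compact sketch of first-order DFA as used in such studies (the window sizes are illustrative): the scaling exponent alpha is the slope of log F(n) versus log n, with alpha > 0.5 indicating persistent and alpha < 0.5 antipersistent fluctuations:

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis.

    Integrate the mean-removed series, split it into windows of size n,
    remove a linear trend per window, and measure the residual rms
    fluctuation F(n); alpha is the slope of log F(n) vs log n."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # profile
    fs = []
    for n in scales:
        segs = len(y) // n
        t = np.arange(n)
        f = 0.0
        for s in range(segs):
            seg = y[s * n:(s + 1) * n]
            coef = np.polyfit(t, seg, 1)          # local linear trend
            f += np.mean((seg - np.polyval(coef, t)) ** 2)
        fs.append(np.sqrt(f / segs))
    return np.polyfit(np.log(scales), np.log(fs), 1)[0]
```

For uncorrelated noise alpha is near 0.5, while its running sum (a random walk) yields a markedly larger exponent, mirroring the persistent/antipersistent distinction above.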

  9. Flow Visualization in Evaporating Liquid Drops and Measurement of Dynamic Contact Angles and Spreading Rate

    NASA Technical Reports Server (NTRS)

    Zhang, Neng-Li; Chao, David F.

    2001-01-01

    A new hybrid optical system, consisting of reflection-refracted shadowgraphy and top-view photography, is used to visualize flow phenomena and simultaneously measure the spreading and instantaneous dynamic contact angle in a volatile-liquid drop on a nontransparent substrate. Thermocapillary convection in the drop, induced by evaporation, and the drop's real-time profile data are synchronously recorded by video recording systems. Experimental results obtained with this unique technique clearly reveal that thermocapillary convection strongly affects the spreading process and the characteristics of the dynamic contact angle of the drop. Comprehensive information about a sessile drop, including the local contact angle along the periphery, the instability of the three-phase contact line, and the deformation of the drop shape, is obtained and analyzed.

  10. Interactive and coordinated visualization approaches for biological data analysis.

    PubMed

    Cruz, António; Arrais, Joel P; Machado, Penousal

    2018-03-26

    The field of computational biology has become largely dependent on data visualization tools to analyze the increasing quantities of data gathered through the use of new and growing technologies. Aside from the volume, which often results in large amounts of noise and complex relationships with no clear structure, the visualization of biological data sets is hindered by their heterogeneity, as data are obtained from different sources and contain a wide variety of attributes, including spatial and temporal information. This requires visualization approaches that are able to not only represent various data structures simultaneously but also provide exploratory methods that allow the identification of meaningful relationships that would not be perceptible through data analysis algorithms alone. In this article, we present a survey of visualization approaches applied to the analysis of biological data. We focus on graph-based visualizations and tools that use coordinated multiple views to represent high-dimensional multivariate data, in particular time series gene expression, protein-protein interaction networks and biological pathways. We then discuss how these methods can be used to help solve the current challenges surrounding the visualization of complex biological data sets.

  11. U.S. Geological Survey: A Synopsis of Three-Dimensional Modeling

    USGS Publications Warehouse

    Jacobsen, Linda J.; Glynn, Pierre D.; Phelps, Geoff A.; Orndorff, Randall C.; Bawden, Gerald W.; Grauch, V.J.S.

    2011-01-01

    The U.S. Geological Survey (USGS) is a multidisciplinary agency that provides assessments of natural resources (geological, hydrological, biological), the disturbances that affect those resources, and the disturbances that affect the built environment, natural landscapes, and human society. Until now, USGS map products have been generated and distributed primarily as 2-D maps, occasionally providing cross sections or overlays, but rarely allowing users to characterize and understand 3-D systems, how they change over time (4-D), and how they interact. And yet, technological advances in monitoring natural resources and the environment, the ever-increasing diversity of information needed for holistic assessments, and the intrinsic 3-D/4-D nature of the information obtained increase the need for the USGS to generate, verify, analyze, interpret, confirm, store, and distribute its scientific information and products using 3-D/4-D visualization, analysis, modeling tools, and information frameworks. Today, USGS scientists use 3-D/4-D tools to (1) visualize and interpret geological information, (2) verify the data, and (3) verify their interpretations and models. 3-D/4-D visualization can be a powerful quality control tool in the analysis of large, multidimensional data sets. USGS scientists use 3-D/4-D technology for 3-D surface (i.e., 2.5-D) visualization as well as for 3-D volumetric analyses. Examples of geological mapping in 3-D include characterization of the subsurface for resource assessments, such as aquifer characterization in the central United States, and for input into process models, such as seismic hazards in the western United States.

  12. Evaluation of intraaxial enhancing brain tumors on magnetic resonance imaging: intraindividual crossover comparison of gadobenate dimeglumine and gadopentetate dimeglumine for visualization and assessment, and implications for surgical intervention.

    PubMed

    Kuhn, Matthew J; Picozzi, Piero; Maldjian, Joseph A; Schmalfuss, Ilona M; Maravilla, Kenneth R; Bowen, Brian C; Wippold, Franz J; Runge, Val M; Knopp, Michael V; Wolansky, Leo J; Gustafsson, Lars; Essig, Marco; Anzalone, Nicoletta

    2007-04-01

    The goal in this article was to compare 0.1 mmol/kg doses of gadobenate dimeglumine (Gd-BOPTA) and gadopentetate dimeglumine, also known as gadolinium diethylenetriamine pentaacetic acid (Gd-DTPA), for enhanced magnetic resonance (MR) imaging of intraaxial brain tumors. Eighty-four patients with either intraaxial glioma (47 patients) or metastasis (37 patients) underwent two MR imaging examinations at 1.5 tesla, one with Gd-BOPTA as the contrast agent and the other with Gd-DTPA. The interval between fully randomized contrast medium administrations was 2 to 7 days. The T1-weighted spin echo and T2-weighted fast spin echo images were acquired before administration of contrast agents and T1-weighted spin echo images were obtained after the agents were administered. Acquisition parameters and postinjection acquisition times were identical for the two examinations in each patient. Three experienced readers working in a fully blinded fashion independently evaluated all images for degree and quality of available information (lesion contrast enhancement, lesion border delineation, definition of disease extent, visualization of the lesion's internal structures, global diagnostic preference) and quantitative enhancement (that is, the extent of lesion enhancement after contrast agent administration compared with that seen before its administration [hereafter referred to as percent enhancement], lesion/brain ratio, and contrast/noise ratio). Differences were tested with the Wilcoxon signed-rank test. Reader agreement was assessed using kappa statistics. Significantly better diagnostic information/imaging performance (p < 0.0001, all readers) was obtained with Gd-BOPTA for all visualization end points. Global preference for images obtained with Gd-BOPTA was expressed for 42 (50%), 52 (61.9%), and 56 (66.7%) of 84 patients (readers 1, 2, and 3, respectively) compared with images obtained with Gd-DTPA contrast in four (4.8%), six (7.1%), and three (3.6%) of 84 patients. 
Similar differences were noted for all other visualization end points. Significantly greater quantitative contrast enhancement (p < 0.04) was noted after administration of Gd-BOPTA. Reader agreement was good (kappa > 0.4). Lesion visualization, delineation, definition, and contrast enhancement are significantly better after administration of 0.1 mmol/kg Gd-BOPTA, potentially allowing better surgical planning and follow up and improved disease management.
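
    The inter-reader agreement statistic reported above (kappa > 0.4 counted as good) is straightforward to compute. A minimal sketch of Cohen's kappa for two raters, with invented per-patient image-preference labels purely for illustration:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is the agreement
    expected by chance from each rater's marginal frequencies."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    labels = set(count_a) | set(count_b)
    p_e = sum(count_a[l] / n * count_b[l] / n for l in labels)
    return (p_o - p_e) / (1 - p_e)

# hypothetical per-patient global preferences from two blinded readers
reader1 = ["BOPTA", "BOPTA", "DTPA", "neither", "BOPTA", "neither"]
reader2 = ["BOPTA", "BOPTA", "DTPA", "BOPTA", "BOPTA", "neither"]
kappa = cohens_kappa(reader1, reader2)   # 5/7, i.e. about 0.71
```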

  13. Demons registration for in vivo and deformable laser scanning confocal endomicroscopy.

    PubMed

    Chiew, Wei-Ming; Lin, Feng; Seah, Hock Soon

    2017-09-01

    A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  14. Demons registration for in vivo and deformable laser scanning confocal endomicroscopy

    NASA Astrophysics Data System (ADS)

    Chiew, Wei Ming; Lin, Feng; Seah, Hock Soon

    2017-09-01

    A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry.
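
    The Demons registration underlying this work is driven by a simple per-voxel force. A minimal sketch of one Thirion demons update step on a 2-D image pair; the multi-resolution scheme, field smoothing, and embedded-hardware details of the paper are omitted here:

```python
import numpy as np

def demons_step(fixed, moving):
    """One Thirion demons update: a displacement field that pushes
    the moving image toward the fixed image,
        u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2),
    with a zero update wherever the denominator vanishes."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)
    denom = gx ** 2 + gy ** 2 + diff ** 2
    with np.errstate(invalid="ignore", divide="ignore"):
        ux = np.where(denom > 0, diff * gx / denom, 0.0)
        uy = np.where(denom > 0, diff * gy / denom, 0.0)
    return ux, uy

# synthetic example: the same square, shifted by (2, 2)
fixed = np.zeros((32, 32))
fixed[8:24, 8:24] = 1.0
moving = np.zeros((32, 32))
moving[10:26, 10:26] = 1.0
ux, uy = demons_step(fixed, moving)
# forces are nonzero only near edges, where the images disagree
```

    In full Demons registration this step is iterated, with Gaussian smoothing of the accumulated field between iterations, which is where the coupling to visualization discussed above becomes relevant.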

  15. Evaluation of the 3D Urban Modelling Capabilities in Geographical Information Systems

    NASA Astrophysics Data System (ADS)

    Dogru, A. O.; Seker, D. Z.

    2010-12-01

    Geographical Information System (GIS) technology, which provides successful solutions to basic spatial problems, is now widely used, with its developing visualization tools, for three-dimensional (3D) modeling of physical reality. Modeling large and complicated phenomena is a challenging problem for the computer graphics currently in use; nevertheless, such phenomena can be visualized in 3D with computer systems. 3D models are used in computer games, military training, urban planning, tourism, and so on. The use of 3D models for planning and management of urban areas is a very popular issue for city administrations. In this context, 3D city models are produced and used for various purposes. However, the requirements of the models vary depending on the type and scope of the application. While high-level visualization, with widespread use of photorealistic techniques, is required for tourism and recreational purposes, an abstract visualization of the physical reality is generally sufficient for communicating thematic information. The visual variables that are the principal components of cartographic visualization, such as color, shape, pattern, orientation, size, position, and saturation, are used for communicating the thematic information. These kinds of 3D city models are called abstract models. Standardization of the technologies used for 3D modeling is now available through CityGML. CityGML implements several novel concepts to support interoperability, consistency, and functionality. For example, it supports different levels of detail (LoD), which may arise from independent data collection processes and are used for efficient visualization and efficient data analysis. In one CityGML data set, the same object may be represented in different LoDs simultaneously, enabling the analysis and visualization of the same object at different degrees of resolution. Furthermore, two CityGML data sets containing the same object in different LoDs may be combined and integrated. In this study, GIS tools used for 3D modeling were examined; in particular, the availability of GIS tools for obtaining the different LoDs of the CityGML standard was evaluated. Additionally, a 3D GIS application covering a small part of the city of Istanbul was implemented to communicate thematic information, rather than photorealistic visualization, using a 3D model. An abstract model was created with the modeling tools of a commercial GIS software package, and the results of the implementation are also presented in the study.
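
    The LoD concept described above can be illustrated with a small inventory script. The fragment below is a hypothetical, simplified CityGML-like document (real CityGML uses the official OGC namespaces and far richer geometry types); it only demonstrates that one object may carry geometries at several LoDs simultaneously:

```python
import xml.etree.ElementTree as ET

# hypothetical, namespace-free stand-in for a CityGML data set
SAMPLE = """<CityModel>
  <Building id="b1">
    <lod1Solid/>
    <lod2Solid/>
  </Building>
  <Building id="b2">
    <lod1Solid/>
  </Building>
</CityModel>"""

def lod_inventory(xml_text):
    """Count, per level of detail, how many buildings carry a
    geometry at that LoD; one object may appear in several LoDs."""
    root = ET.fromstring(xml_text)
    counts = {}
    for building in root.iter("Building"):
        for child in building:
            if child.tag.startswith("lod"):
                lod = child.tag[:4]          # e.g. 'lod1'
                counts[lod] = counts.get(lod, 0) + 1
    return counts

inventory = lod_inventory(SAMPLE)    # {'lod1': 2, 'lod2': 1}
```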

  16. Luminance- and Texture-Defined Information Processing in School-Aged Children with Autism

    PubMed Central

    Rivest, Jessica B.; Jemel, Boutheina; Bertone, Armando; McKerral, Michelle; Mottron, Laurent

    2013-01-01

    According to the complexity-specific hypothesis, the efficacy with which individuals with autism spectrum disorder (ASD) process visual information varies according to the extensiveness of the neural network required to process stimuli. Specifically, adults with ASD are less sensitive to texture-defined (or second-order) information, which necessitates the implication of several cortical visual areas. Conversely, the sensitivity to simple, luminance-defined (or first-order) information, which mainly relies on primary visual cortex (V1) activity, has been found to be either superior (static material) or intact (dynamic material) in ASD. It is currently unknown if these autistic perceptual alterations are present in childhood. In the present study, behavioural (threshold) and electrophysiological measures were obtained for static luminance- and texture-defined gratings presented to school-aged children with ASD and compared to those of typically developing children. Our behavioural and electrophysiological (P140) results indicate that luminance processing is likely unremarkable in autistic children. With respect to texture processing, there was no significant threshold difference between groups. However, unlike typical children, autistic children did not show reliable enhancements of brain activity (N230 and P340) in response to texture-defined gratings relative to luminance-defined gratings. This suggests reduced efficiency of neuro-integrative mechanisms operating at a perceptual level in autism. These results are in line with the idea that visual atypicalities mediated by intermediate-scale neural networks emerge before or during the school-age period in autism. PMID:24205355

  17. Luminance- and texture-defined information processing in school-aged children with autism.

    PubMed

    Rivest, Jessica B; Jemel, Boutheina; Bertone, Armando; McKerral, Michelle; Mottron, Laurent

    2013-01-01

    According to the complexity-specific hypothesis, the efficacy with which individuals with autism spectrum disorder (ASD) process visual information varies according to the extensiveness of the neural network required to process stimuli. Specifically, adults with ASD are less sensitive to texture-defined (or second-order) information, which necessitates the implication of several cortical visual areas. Conversely, the sensitivity to simple, luminance-defined (or first-order) information, which mainly relies on primary visual cortex (V1) activity, has been found to be either superior (static material) or intact (dynamic material) in ASD. It is currently unknown if these autistic perceptual alterations are present in childhood. In the present study, behavioural (threshold) and electrophysiological measures were obtained for static luminance- and texture-defined gratings presented to school-aged children with ASD and compared to those of typically developing children. Our behavioural and electrophysiological (P140) results indicate that luminance processing is likely unremarkable in autistic children. With respect to texture processing, there was no significant threshold difference between groups. However, unlike typical children, autistic children did not show reliable enhancements of brain activity (N230 and P340) in response to texture-defined gratings relative to luminance-defined gratings. This suggests reduced efficiency of neuro-integrative mechanisms operating at a perceptual level in autism. These results are in line with the idea that visual atypicalities mediated by intermediate-scale neural networks emerge before or during the school-age period in autism.

  18. VISIBIOweb: visualization and layout services for BioPAX pathway models

    PubMed Central

    Dilek, Alptug; Belviranli, Mehmet E.; Dogrusoz, Ugur

    2010-01-01

    With recent advancements in techniques for cellular data acquisition, information on cellular processes has been increasing at a dramatic rate. Visualization is critical to analyzing and interpreting complex information; representing cellular processes or pathways is no exception. VISIBIOweb is a free, open-source, web-based pathway visualization and layout service for pathway models in BioPAX format. With VISIBIOweb, one can obtain well-laid-out views of pathway models using the standard notation of the Systems Biology Graphical Notation (SBGN), and can embed such views within one's web pages as desired. Pathway views may be navigated using zoom and scroll tools; pathway object properties, including any external database references available in the data, may be inspected interactively. The automatic layout component of VISIBIOweb may also be accessed programmatically from other tools using Hypertext Transfer Protocol (HTTP). The web site is free and open to all users and there is no login requirement. It is available at: http://visibioweb.patika.org. PMID:20460470

  19. Audio-visual presentation of information for informed consent for participation in clinical trials.

    PubMed

    Ryan, R E; Prictor, M J; McLaughlin, K J; Hill, S J

    2008-01-23

    Informed consent is a critical component of clinical research. Different methods of presenting information to potential participants of clinical trials may improve the informed consent process. Audio-visual interventions (presented for example on the Internet, DVD, or video cassette) are one such method. To assess the effects of providing audio-visual information alone, or in conjunction with standard forms of information provision, to potential clinical trial participants in the informed consent process, in terms of their satisfaction, understanding and recall of information about the study, level of anxiety and their decision whether or not to participate. We searched: the Cochrane Consumers and Communication Review Group Specialised Register (searched 20 June 2006); the Cochrane Central Register of Controlled Trials (CENTRAL), The Cochrane Library, issue 2, 2006; MEDLINE (Ovid) (1966 to June week 1 2006); EMBASE (Ovid) (1988 to 2006 week 24); and other databases. We also searched reference lists of included studies and relevant review articles, and contacted study authors and experts. There were no language restrictions. Randomised and quasi-randomised controlled trials comparing audio-visual information alone, or in conjunction with standard forms of information provision (such as written or oral information as usually employed in the particular service setting), with standard forms of information provision alone, in the informed consent process for clinical trials. Trials involved individuals or their guardians asked to participate in a real (not hypothetical) clinical study. Two authors independently assessed studies for inclusion and extracted data. Due to heterogeneity no meta-analysis was possible; we present the findings in a narrative review. We included 4 trials involving data from 511 people. Studies were set in the USA and Canada. Three were randomised controlled trials (RCTs) and the fourth a quasi-randomised trial. 
Their quality was mixed and results should be interpreted with caution. Considerable uncertainty remains about the effects of audio-visual interventions, compared with standard forms of information provision (such as written or oral information normally used in the particular setting), for use in the process of obtaining informed consent for clinical trials. Audio-visual interventions did not consistently increase participants' levels of knowledge/understanding (assessed in four studies), although one study showed better retention of knowledge amongst intervention recipients. An audio-visual intervention may transiently increase people's willingness to participate in trials (one study), but this was not sustained at two to four weeks post-intervention. Perceived worth of the trial did not appear to be influenced by an audio-visual intervention (one study), but another study suggested that the quality of information disclosed may be enhanced by an audio-visual intervention. Many relevant outcomes including harms were not measured. The heterogeneity in results may reflect the differences in intervention design, content and delivery, the populations studied and the diverse methods of outcome assessment in included studies. The value of audio-visual interventions for people considering participating in clinical trials remains unclear. Evidence is mixed as to whether audio-visual interventions enhance people's knowledge of the trial they are considering entering, and/or the health condition the trial is designed to address; one study showed improved retention of knowledge amongst intervention recipients. The intervention may also have small positive effects on the quality of information disclosed, and may increase willingness to participate in the short-term; however the evidence is weak. There were no data for several primary outcomes, including harms. 
In the absence of clear results, triallists should continue to explore innovative methods of providing information to potential trial participants. Further research should take the form of high-quality randomised controlled trials, with clear reporting of methods. Studies should conduct content assessment of audio-visual and other innovative interventions for people of differing levels of understanding and education; also for different age and cultural groups. Researchers should assess systematically the effects of different intervention components and delivery characteristics, and should involve consumers in intervention development. Studies should assess additional outcomes relevant to individuals' decisional capacity, using validated tools, including satisfaction; anxiety; and adherence to the subsequent trial protocol.

  20. Task–Technology Fit of Video Telehealth for Nurses in an Outpatient Clinic Setting

    PubMed Central

    Finkelstein, Stanley M.

    2014-01-01

    Abstract Background: Incorporating telehealth into outpatient care delivery supports management of consumer health between clinic visits. Task–technology fit is a framework for understanding how technology helps and/or hinders a person during work processes. Evaluating the task–technology fit of video telehealth for personnel working in a pediatric outpatient clinic and providing care between clinic visits ensures the information provided matches the information needed to support work processes. Materials and Methods: The workflow of advanced practice registered nurse (APRN) care coordination provided via telephone and video telehealth was described and measured using a mixed-methods workflow analysis protocol that incorporated cognitive ethnography and time–motion study. Qualitative and quantitative results were merged and analyzed within the task–technology fit framework to determine the workflow fit of video telehealth for APRN care coordination. Results: Incorporating video telehealth into APRN care coordination workflow provided visual information unavailable during telephone interactions. Despite additional tasks and interactions needed to obtain the visual information, APRN workflow efficiency, as measured by time, was not significantly changed. Analyzed within the task–technology fit framework, the increased visual information afforded by video telehealth supported the assessment and diagnostic information needs of the APRN. Conclusions: Telehealth must provide the right information to the right clinician at the right time. Evaluating task–technology fit using a mixed-methods protocol ensured rigorous analysis of fit within work processes and identified workflows that benefit most from the technology. PMID:24841219

  1. Bayesian networks and information theory for audio-visual perception modeling.

    PubMed

    Besson, Patricia; Richiardi, Jonas; Bourdin, Christophe; Bringoux, Lionel; Mestre, Daniel R; Vercher, Jean-Louis

    2010-09-01

    Thanks to their different senses, human observers acquire multiple streams of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous and systematic data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method using data collected in an audio-visual localization task with human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
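
    The mutual-information analysis used to guide model elicitation can be sketched for discrete variables. Below is a minimal plug-in estimator in bits (the study's actual estimator and experimental variables are not specified in this record): identical variables give I(X;X) = H(X), and independent variables give zero, which is the signal used to prune edges from a candidate Bayesian network.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples of two
    discrete variables; zero if and only if the empirical joint
    distribution factorizes (independence)."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

xs = [0, 1, 0, 1, 0, 1, 0, 1]
ys = [0, 0, 1, 1, 0, 0, 1, 1]
mi_self = mutual_information(xs, xs)    # H(X) = 1 bit for a fair coin
mi_indep = mutual_information(xs, ys)   # 0 bits: independent pattern
```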

  2. Advanced Multidimensional Separations in Mass Spectrometry: Navigating the Big Data Deluge

    PubMed Central

    May, Jody C.; McLean, John A.

    2017-01-01

    Hybrid analytical instruments constructed around mass spectrometry (MS) are becoming preferred techniques for addressing many grand challenges in science and medicine. From the omics sciences to drug discovery and synthetic biology, multidimensional separations based on MS provide the high peak capacity and high measurement throughput necessary to obtain large-scale measurements from which systems-level information can be inferred. In this review, we describe multidimensional MS configurations as technologies that are big data drivers and discuss some new and emerging strategies for mining information from large-scale datasets. A discussion is included on the information content that can be obtained from individual dimensions, as well as the unique information that can be derived by comparing different levels of data. Finally, we discuss some emerging data visualization strategies that seek to make highly dimensional datasets both accessible and comprehensible. PMID:27306312

  3. Spatial vision in older adults: perceptual changes and neural bases.

    PubMed

    McKendrick, Allison M; Chan, Yu Man; Nguyen, Bao N

    2018-05-17

    The number of older adults is rapidly increasing internationally, leading to a significant increase in research on how healthy ageing impacts vision. Most clinical assessments of spatial vision involve simple detection (letter acuity, grating contrast sensitivity, perimetry). However, most natural visual environments are more spatially complicated, requiring contrast discrimination and the delineation of object boundaries and contours, which are typically present on non-uniform backgrounds. In this review we discuss recent research that reports on the effects of normal ageing on these more complex visual functions, specifically in the context of recent neurophysiological studies. Recent research has concentrated on understanding the effects of healthy ageing on neural responses within the visual pathway in animal models. Such neurophysiological research has led to numerous, subsequently tested, hypotheses regarding the likely impact of healthy human ageing on specific aspects of spatial vision. Healthy normal ageing impacts significantly on spatial visual information processing from the retina through to visual cortex. Some human data validate those obtained from studies of animal physiology; however, some findings indicate that the presumed neural substrates require rethinking. Notably, not all spatial visual processes are altered by age. Healthy normal ageing impacts significantly on some spatial visual processes (in particular centre-surround tasks), but leaves contrast discrimination, contrast adaptation, and orientation discrimination relatively intact. The study of older adult vision contributes to knowledge of the brain mechanisms altered by the ageing process, can provide practical information regarding visual environments that older adults may find challenging, and may lead to new methods of assessing visual performance in clinical environments. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.

  4. Current food chain information provides insufficient information for modern meat inspection of pigs.

    PubMed

    Felin, Elina; Jukola, Elias; Raulo, Saara; Heinonen, Jaakko; Fredriksson-Ahomaa, Maria

    2016-05-01

    Meat inspection now incorporates a more risk-based approach for protecting human health against meat-borne biological hazards. Official post-mortem meat inspection of pigs has shifted to visual meat inspection. The official veterinarian decides on additional post-mortem inspection procedures, such as incisions and palpations. The decision is based on declarations in the food chain information (FCI), ante-mortem inspection and post-mortem inspection. However, a smooth slaughter and inspection process is essential. Therefore, one should be able to assess prior to slaughter which pigs are suitable for visual meat inspection only, and which need more profound inspection procedures. This study evaluates the usability of the FCI provided by pig producers and considers the possibility of risk ranking incoming slaughter batches according to the previous meat inspection data and the current FCI. Eighty-five slaughter batches comprising 8954 fattening pigs were randomly selected at a slaughterhouse that receives animals from across Finland. The mortality rate, the FCI and the meat inspection results for each batch were obtained. The current FCI alone provided insufficient and inaccurate information for risk ranking purposes for meat inspection. The partial condemnation rate for a batch was best predicted by the partial condemnation rate calculated for all the pigs sent for slaughter from the same holding in the previous year (p<0.001) and by prior information on cough declared in the current FCI statement (p=0.02). Training and information for producers are needed to make the FCI reporting procedures more accurate. Historical meat inspection data on pigs slaughtered from the same holdings and well-chosen symptoms/signs for reporting, should be included in the FCI to facilitate the allocation of pigs for visual inspection. The introduced simple scoring system can be easily used for additional information for directing batches to appropriate meat inspection procedures. 
To control the main biological public health hazards related to pork, serological surveillance should be done and the information obtained from analyses should be used as part of the FCI. Copyright © 2016 Elsevier B.V. All rights reserved.
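
    The abstract does not publish the scoring formula, so the sketch below is purely illustrative: a hypothetical batch score combining the two predictors the study found informative (the holding's previous partial condemnation rate and a cough declaration in the current FCI), with invented weights and cutoffs.

```python
def batch_risk_score(prev_partial_condemnation_rate, cough_declared):
    """Hypothetical risk score for an incoming slaughter batch.
    Both the cutoffs and the point weights are invented for
    illustration; the paper does not publish its formula."""
    score = 0
    if prev_partial_condemnation_rate > 0.05:    # illustrative cutoff
        score += 2
    elif prev_partial_condemnation_rate > 0.02:
        score += 1
    if cough_declared:
        score += 1
    return score

def inspection_route(score, threshold=2):
    """Batches at or above the threshold are directed to additional
    post-mortem procedures; the rest stay on visual-only inspection."""
    return "additional" if score >= threshold else "visual-only"

route = inspection_route(batch_risk_score(0.08, cough_declared=True))
```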

  5. Evaluating and communicating options for harvesting young-growth douglas-fir forests

    Treesearch

    Dean S. DeBell; Jeffery D. DeBell; Robert O. Curtis; Nancy K. Allison

    1997-01-01

    A cooperative project, developed by Washington State Department of Natural Resources (DNR) and the Pacific Northwest Research Station (PNW), provides a framework for managers and scientists to (1) obtain experience with a range of silvicultural options; (2) develop information about public response to visual appearance, economic performance, and biological aspects...

  6. Near-Infrared Spectroscopy of Himalia An Irregular Jovian Satellite

    NASA Technical Reports Server (NTRS)

    Brown, R. H.; Baines, K.; Bellucci, G.; Bibring, J.-P.; Buratti, B.; Capaccioni, F.; Cerroni, P.; Clark, R.; Coradini, A.; Cruikshank, D.

    2002-01-01

    Spectra of the irregular Jovian satellite Himalia were obtained with the Visual and Infrared Mapping Spectrometer (VIMS) onboard Cassini during the Jupiter Flyby on December 18-19, 2000. These are the first spectral data of an irregular satellite beyond 2.5 microns. Additional information is contained in the original extended abstract.

  7. Windows to the soul: vision science as a tool for studying biological mechanisms of information processing deficits in schizophrenia.

    PubMed

    Yoon, Jong H; Sheremata, Summer L; Rokem, Ariel; Silver, Michael A

    2013-10-31

    Cognitive and information processing deficits are core features and important sources of disability in schizophrenia. Our understanding of the neural substrates of these deficits remains incomplete, in large part because the complexity of impairments in schizophrenia makes the identification of specific deficits very challenging. Vision science presents unique opportunities in this regard: many years of basic research have led to detailed characterization of relationships between structure and function in the early visual system and have produced sophisticated methods to quantify visual perception and characterize its neural substrates. We present a selective review of research that illustrates the opportunities for discovery provided by visual studies in schizophrenia. We highlight work that has been particularly effective in applying vision science methods to identify specific neural abnormalities underlying information processing deficits in schizophrenia. In addition, we describe studies that have utilized psychophysical experimental designs that mitigate generalized deficit confounds, thereby revealing specific visual impairments in schizophrenia. These studies contribute to accumulating evidence that early visual cortex is a useful experimental system for the study of local cortical circuit abnormalities in schizophrenia. The high degree of similarity across neocortical areas of neuronal subtypes and their patterns of connectivity suggests that insights obtained from the study of early visual cortex may be applicable to other brain regions. We conclude with a discussion of future studies that combine vision science and neuroimaging methods. These studies have the potential to address pressing questions in schizophrenia, including the dissociation of local circuit deficits vs. impairments in feedback modulation by cognitive processes such as spatial attention and working memory, and the relative contributions of glutamatergic and GABAergic deficits.

  8. Playing the electric light orchestra—how electrical stimulation of visual cortex elucidates the neural basis of perception

    PubMed Central

    Cicmil, Nela; Krug, Kristine

    2015-01-01

    Vision research has the potential to reveal fundamental mechanisms underlying sensory experience. Causal experimental approaches, such as electrical microstimulation, provide a unique opportunity to test the direct contributions of visual cortical neurons to perception and behaviour. But in spite of their importance, causal methods constitute a minority of the experiments used to investigate the visual cortex to date. We reconsider the function and organization of visual cortex according to results obtained from stimulation techniques, with a special emphasis on electrical stimulation of small groups of cells in awake subjects who can report their visual experience. We compare findings from humans and monkeys, striate and extrastriate cortex, and superficial versus deep cortical layers, and identify a number of revealing gaps in the 'causal map' of visual cortex. Integrating results from different methods and species, we provide a critical overview of the ways in which causal approaches have been used to further our understanding of circuitry, plasticity and information integration in visual cortex. Electrical stimulation not only elucidates the contributions of different visual areas to perception, but also contributes to our understanding of neuronal mechanisms underlying memory, attention and decision-making. PMID:26240421

  9. Visual cueing considerations in Nap-of-the-Earth helicopter flight head-slaved helmet-mounted displays

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Kohn, Silvia

    1993-01-01

    The pilot's ability to derive Control-Oriented Visual Field Information from teleoperated Helmet-Mounted Displays in Nap-of-the-Earth flight is investigated. The visual field with these types of displays, commonly used in Apache and Cobra helicopter night operations, originates from a relatively narrow field-of-view Forward-Looking Infrared camera, gimbal-mounted at the nose of the aircraft and slaved to the pilot's line-of-sight in order to obtain a wide-angle field-of-regard. Pilots have encountered considerable difficulties in controlling the aircraft with these devices. Experimental simulator results presented here indicate that part of these difficulties can be attributed to head/camera slaving system phase lags and errors. In the presence of voluntary head rotation, these slaving system imperfections are shown to impair the Control-Oriented Visual Field Information vital for vehicular control, such as the perception of the anticipated flight path or the vehicle yaw rate. Since, in the presence of slaving system imperfections, the pilot will tend to minimize head rotation, the full wide-angle field-of-regard of the line-of-sight-slaved Helmet-Mounted Display is not always fully utilized.

  10. Prestimulus neural oscillations inhibit visual perception via modulation of response gain.

    PubMed

    Chaumon, Maximilien; Busch, Niko A

    2014-11-01

    The ongoing state of the brain radically affects how it processes sensory information. How does this ongoing brain activity interact with the processing of external stimuli? Spontaneous oscillations in the alpha range are thought to inhibit sensory processing, but little is known about the psychophysical mechanisms of this inhibition. We recorded ongoing brain activity with EEG while human observers performed a visual detection task with stimuli of different contrast intensities. To move beyond qualitative description, we formally compared psychometric functions obtained under different levels of ongoing alpha power and evaluated the inhibitory effect of ongoing alpha oscillations in terms of contrast or response gain models. This procedure opens the way to understanding the actual functional mechanisms by which ongoing brain activity affects visual performance. We found that strong prestimulus occipital alpha oscillations (but not more anterior mu oscillations) reduce performance most strongly for stimuli of the highest intensities tested. This inhibitory effect is best explained by a divisive reduction of response gain. Ongoing occipital alpha oscillations thus reflect changes in the visual system's input/output transformation that are independent of the sensory input to the system. They selectively scale the system's response, rather than change its sensitivity to sensory information.
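The contrast-gain versus response-gain distinction tested above can be sketched with a standard Naka-Rushton contrast-response function (an illustrative model, not the authors' fitting code; parameter values are arbitrary):

```python
def naka_rushton(c, r_max=1.0, c50=0.3, n=2.0):
    """Contrast-response function: R(c) = r_max * c^n / (c^n + c50^n)."""
    return r_max * c**n / (c**n + c50**n)

def contrast_gain_change(c, factor=2.0):
    """Contrast gain change: the curve shifts horizontally (c50 grows),
    affecting mid-range contrasts most."""
    return naka_rushton(c, c50=0.3 * factor)

def response_gain_change(c, factor=2.0):
    """Response gain change: the whole curve is divisively scaled (r_max
    shrinks), affecting the highest contrasts most -- the pattern the
    authors report for high prestimulus alpha power."""
    return naka_rushton(c, r_max=1.0 / factor)
```

Under a divisive response gain reduction, the absolute performance loss grows with stimulus intensity, matching the finding that the highest-contrast stimuli suffered most.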

  11. MPEG-4 AVC saliency map computation

    NASA Astrophysics Data System (ADS)

    Ammar, M.; Mitrea, M.; Hasnaoui, M.

    2014-02-01

    A saliency map provides information about the regions inside some visual content (image, video, ...) at which a human observer will spontaneously look. For saliency map computation, current research studies consider the uncompressed (pixel) representation of the visual content and extract various types of information (intensity, color, orientation, motion energy) which are then fused. This paper goes one step further and computes the saliency map directly from the MPEG-4 AVC stream syntax elements with minimal decoding operations. In this respect, an a priori in-depth study of the MPEG-4 AVC syntax elements is first carried out so as to identify the entities appealing to visual attention. Secondly, the MPEG-4 AVC reference software is completed with software tools allowing the parsing of these elements and their subsequent usage in objective benchmarking experiments. This way, it is demonstrated that an MPEG-4 saliency map can be given by a combination of static saliency and motion maps. This saliency map is experimentally validated under a robust watermarking framework. When included in an m-QIM (multiple-symbol Quantization Index Modulation) insertion method, PSNR average gains of 2.43 dB, 2.15 dB, and 2.37 dB are obtained for data payloads of 10, 20 and 30 watermarked blocks per I frame, i.e. about 30, 60, and 90 bits/second, respectively. These quantitative results are obtained from processing 2 hours of heterogeneous video content.
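The m-QIM insertion method mentioned above builds on quantization index modulation, which hides a symbol by quantizing a host value onto one of several interleaved lattices. A minimal binary QIM sketch (the paper uses multiple symbols per block; the scalar host value and step size `delta` here are illustrative assumptions):

```python
import numpy as np

def qim_embed(x, bit, delta=2.0):
    """Embed one bit by quantizing x onto a lattice shifted by bit * delta/2."""
    shift = bit * delta / 2.0
    return delta * np.round((x - shift) / delta) + shift

def qim_extract(y, delta=2.0):
    """Recover the bit by finding which shifted lattice is nearest to y."""
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return 0 if d0 <= d1 else 1
```

The saliency map then steers where such insertions are made, so that the distortion budget is spent in regions where it is least visible.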

  12. 3D Virtual Environment Used to Support Lighting System Management in a Building

    NASA Astrophysics Data System (ADS)

    Sampaio, A. Z.; Ferreira, M. M.; Rosário, D. P.

    The main aim of the research project, which is in progress at the UTL, is to develop a virtual interactive model as a tool to support decision-making in the planning of construction maintenance and facilities management. The virtual model allows the user to transmit, visually and interactively, information related to the components of a building, defined as a function of the time variable. In addition, solutions for repair work/substitution and the inherent costs are predicted, with the results obtained interactively and visualized in the virtual environment itself. The first component of the virtual prototype concerns the management of lamps in a lighting system, and it was applied in a case study. The interactive application allows the examination of the physical model, visualizing, for each element modeled in 3D and linked to a database, the corresponding technical information concerning the use of the material, calculated for different points in time during its life. The control of a lamp stock, the constant updating of lifetime information and the planning of periodic local inspections are handled by the prototype. This is an important means of cooperation between the collaborators involved in building management.

  13. People-oriented Information Visualization Design

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyong; Zhang, Bolun

    2018-04-01

    With the rapid development of science and technology in the 21st century, human society has entered the era of information and big data, and lifestyles and aesthetic systems have changed accordingly, so the emerging field of information visualization is increasingly popular. Information visualization design is the process of visualizing all kinds of tedious information data so that information can be absorbed quickly, saving time and cost. Along with the development of information visualization, information design has also attracted growing attention, and emotional, people-oriented design is an indispensable part of information design. This paper probes information visualization design through an emotional analysis of information design, based on the social context of people-oriented experience and from the perspective of art and design. The discussion is structured around the three levels of emotional information design: the instinct level, the behavior level and the reflective level.

  14. A Weld Position Recognition Method Based on Directional and Structured Light Information Fusion in Multi-Layer/Multi-Pass Welding.

    PubMed

    Zeng, Jinle; Chang, Baohua; Du, Dong; Wang, Li; Chang, Shuhe; Peng, Guodong; Wang, Wenzhu

    2018-01-05

    Multi-layer/multi-pass welding (MLMPW) technology is widely used in the energy industry to join thick components. During automatic welding using robots or other actuators, it is very important to recognize the actual weld pass position using visual methods, which can then be used not only to perform reasonable path planning for actuators, but also to correct any deviations between the welding torch and the weld pass position in real time. However, due to the small geometrical differences between adjacent weld passes, existing weld position recognition technologies such as structured light methods are not suitable for weld position detection in MLMPW. This paper proposes a novel method for weld position detection, which fuses various kinds of information in MLMPW. First, a synchronous acquisition method is developed to obtain various kinds of visual information while the directional light and structured light sources are on, respectively. Then, interferences are eliminated by fusing adjacent images. Finally, the information from the directional and structured light images is fused to obtain the 3D positions of the weld passes. Experimental results show that each process can be done within 30 ms and the deviation is less than 0.6 mm. The proposed method can be used for automatic path planning and seam tracking in the robotic MLMPW process as well as the electron beam freeform fabrication process.
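The abstract does not specify the operator used when fusing adjacent images to eliminate interferences. One common heuristic for suppressing transient bright interference, such as arc glare or spatter that appears in only one of several synchronized frames, is a pixelwise minimum; the sketch below is purely illustrative, not the paper's actual fusion rule:

```python
import numpy as np

def fuse_adjacent(frames):
    """Suppress transient bright interference by taking the pixelwise
    minimum across a list of adjacent, spatially aligned frames.
    Illustrative heuristic only; assumes interference is bright and
    appears in a minority of the frames at any given pixel."""
    return np.min(np.stack(frames, axis=0), axis=0)
```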

  15. The contribution of visual and vestibular information to spatial orientation by 6- to 14-month-old infants and adults.

    PubMed

    Bremner, J Gavin; Hatton, Fran; Foster, Kirsty A; Mason, Uschi

    2011-09-01

    Although there is much research on infants' ability to orient in space, little is known regarding the information they use to do so. This research uses a rotating room to evaluate the relative contribution of visual and vestibular information to location of a target following bodily rotation. Adults responded precisely on the basis of visual flow information. Seven-month-olds responded mostly on the basis of visual flow, whereas 9-month-olds responded mostly on the basis of vestibular information, and 12-month-olds responded mostly on the basis of visual information. Unlike adults, infants of all ages showed partial influence by both modalities. Additionally, 7-month-olds were capable of using vestibular information when there was no visual information for movement or stability, and 9-month-olds still relied on vestibular information when visual information was enhanced. These results are discussed in the context of neuroscientific evidence regarding visual-vestibular interaction, and in relation to possible changes in reliance on visual and vestibular information following acquisition of locomotion. © 2011 Blackwell Publishing Ltd.

  16. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
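The combination of the three properties (Feature-Prior, Position-Prior, and Feature-Distribution) by a "simple intersection operation" can be sketched as follows, where intersection is modeled as an elementwise product of the maps followed by normalization (an assumption about the exact operator, for illustration only):

```python
import numpy as np

def saliency_map(feature_prior, position_prior, feature_dist):
    """Combine three per-pixel property maps by intersection, modeled here
    as an elementwise product, then normalize the result to [0, 1].
    A pixel is salient only where all three properties agree."""
    s = feature_prior * position_prior * feature_dist
    m = s.max()
    return s / m if m > 0 else s
```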

  17. Cognitive workload modulation through degraded visual stimuli: a single-trial EEG study

    NASA Astrophysics Data System (ADS)

    Yu, K.; Prasad, I.; Mir, H.; Thakor, N.; Al-Nashash, H.

    2015-08-01

    Objective. Our experiments explored the effect of visual stimuli degradation on cognitive workload. Approach. We investigated the subjective assessment, event-related potentials (ERPs) as well as electroencephalogram (EEG) as measures of cognitive workload. Main results. These experiments confirm that degradation of visual stimuli increases cognitive workload as assessed by subjective NASA task load index and confirmed by the observed P300 amplitude attenuation. Furthermore, the single-trial multi-level classification using features extracted from ERPs and EEG is found to be promising. Specifically, the adopted single-trial oscillatory EEG/ERP detection method achieved an average accuracy of 85% for discriminating 4 workload levels. Additionally, we found from the spatial patterns obtained from EEG signals that the frontal parts carry information that can be used for differentiating workload levels. Significance. Our results show that visual stimuli can modulate cognitive workload, and the modulation can be measured by the single trial EEG/ERP detection method.

  18. Immunological multimetal deposition for rapid visualization of sweat fingerprints.

    PubMed

    He, Yayun; Xu, Linru; Zhu, Yu; Wei, Qianhui; Zhang, Meiqin; Su, Bin

    2014-11-10

    A simple method termed immunological multimetal deposition (iMMD) was developed for rapid visualization of sweat fingerprints with the naked eye, by combining conventional MMD with the immunoassay technique. In this approach, antibody-conjugated gold nanoparticles (AuNPs) were used to specifically interact with the corresponding antigens in the fingerprint residue. The AuNPs serve as the nucleation sites for autometallographic deposition of silver particles from the silver staining solution, generating a dark ridge pattern for visual detection. Using fingerprints inked with human immunoglobulin G (hIgG), we obtained the optimal formulation of iMMD, which was then successfully applied to visualize sweat fingerprints through the detection of two secreted polypeptides, epidermal growth factor and lysozyme. In comparison with conventional MMD, iMMD is faster and provides more information than identification alone. Moreover, iMMD is facile and does not need expensive instruments. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Uncertainties have a meaning: Information entropy as a quality measure for 3-D geological models

    NASA Astrophysics Data System (ADS)

    Wellmann, J. Florian; Regenauer-Lieb, Klaus

    2012-03-01

    Analyzing, visualizing and communicating uncertainties are important issues, as geological models can never be fully determined. To date, there exists no general approach to quantify uncertainties in geological modeling. We propose here to use information entropy as an objective measure to compare and evaluate model and observational results. Information entropy was introduced in the 1950s and assigns a scalar value to every location in the model, quantifying its predictability. We show that this method not only provides quantitative insight into model uncertainties but, due to the underlying concept of information entropy, can be related to questions of data integration (i.e. how is the model quality interconnected with the input data used) and model evolution (i.e. does new data - or a changed geological hypothesis - optimize the model). In other words, information entropy is a powerful measure to be used for data assimilation and inversion. As a first test of feasibility, we present the application of the new method to the visualization of uncertainties in geological models, here understood as structural representations of the subsurface. Applying the concept of information entropy to a suite of simulated models, we can clearly identify (a) uncertain regions within the model, even for complex geometries; (b) the overall uncertainty of a geological unit, which is, for example, of great relevance in any type of resource estimation; (c) a mean entropy for the whole model, important to track model changes with one overall measure. These results cannot easily be obtained with existing standard methods. The results suggest that information entropy is a powerful method to visualize uncertainties in geological models, and to classify the indefiniteness of single units and the mean entropy of a model quantitatively.
Due to the relationship of this measure to the missing information, we expect the method to have a great potential in many types of geoscientific data assimilation problems — beyond pure visualization.
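The per-cell information entropy described above can be computed directly from a suite of simulated model realizations; a minimal sketch (variable names are illustrative):

```python
import numpy as np

def cell_entropy(unit_labels):
    """Shannon entropy (in bits) of the geological-unit outcomes predicted
    for one model cell across a suite of simulated realizations:
    H = -sum(p_i * log2(p_i)), where p_i is the relative frequency of
    unit i at this cell. H = 0 means the cell is fully determined."""
    _, counts = np.unique(unit_labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))
```

Averaging this value over all cells gives the single whole-model entropy mentioned in the abstract; averaging over the cells of one unit scores that unit's indefiniteness.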

  20. McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech Stimuli (OLAVS).

    PubMed

    Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan

    2017-06-01

    The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.

  1. The influence of selective attention to auditory and visual speech on the integration of audiovisual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2011-01-01

    Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

  2. Solid shape discrimination from vision and haptics: natural objects (Capsicum annuum) and Gibson's "feelies".

    PubMed

    Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia

    2012-10-01

    A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.

  3. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1.

    PubMed

    Songnian, Zhao; Qi, Zou; Chang, Liu; Xuemin, Liu; Shousi, Sun; Jun, Qiu

    2014-04-23

    How it is possible to "faithfully" represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway's optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. 
We propose that size constancy, the vanishing point, and vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line.

  4. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1

    PubMed Central

    2014-01-01

    Background How it is possible to “faithfully” represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. Results The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway’s optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. Conclusions 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. 
We propose that size constancy, the vanishing point, and vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line. PMID:24755246

  5. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system, which was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high-speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both the direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing for robotic attention processing.
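Local-area window processing of the kind described can be sketched as a motion-centroid tracker that examines only the pixels inside the current window and re-centers the window on the motion it finds there (an illustrative reconstruction, not the multi-window system's actual code; `frame_diff` is assumed to be an absolute inter-frame difference image):

```python
import numpy as np

def track_in_window(frame_diff, center, half=8):
    """Re-center a local attention window on the centroid of motion energy
    found inside the current window. Only pixels inside the window are
    examined, mimicking the local-area processing of the multi-window
    vision system."""
    r, c = center
    r0, c0 = max(r - half, 0), max(c - half, 0)
    win = frame_diff[r0:r0 + 2 * half + 1, c0:c0 + 2 * half + 1]
    total = win.sum()
    if total == 0:
        return center  # no motion detected: keep the window in place
    rows, cols = np.indices(win.shape)
    cr = (rows * win).sum() / total  # centroid row, in window coordinates
    cc = (cols * win).sum() / total  # centroid column, in window coordinates
    return (int(round(r0 + cr)), int(round(c0 + cc)))
```

Calling this once per frame keeps the window locked onto a moving target, such as the 'ball' in the pong example, without ever processing the full image.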

  6. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

    We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model and visual reliability measures of its attributes. These reliability measures make it possible to properly weight the contribution of noisy, erroneous or false data in order to better maintain the integrity of the object dynamics model. Then, a new multi-target tracking algorithm uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated on publicly accessible video surveillance benchmarks. It runs in real time and its results are competitive with those of other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.
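
    The reliability-weighted combination of attribute measurements can be illustrated with a minimal sketch; the attribute and the weighting rule are illustrative assumptions, not the paper's exact dynamics model:

```python
def fuse(measurements):
    """Reliability-weighted fusion of noisy attribute measurements.

    Each measurement is (value, reliability) with reliability in [0, 1];
    unreliable observations contribute less to the fused estimate."""
    den = sum(r for _, r in measurements)
    if den == 0:
        return None
    return sum(v * r for v, r in measurements) / den

# Height (m) of a tracked person from three frames; the occluded frame
# (low reliability) barely shifts the fused estimate.
obs = [(1.75, 0.9), (1.10, 0.1), (1.80, 0.8)]
print(round(fuse(obs), 3))  # -> 1.736
```

    The same weighting idea lets erroneous segmentations contribute to the model in proportion to how much they can be trusted, instead of being accepted or rejected outright.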

  7. Design and evaluation of a kitchen for persons with visual impairments.

    PubMed

    Kutintara, Benjamas; Somboon, Pornpun; Buasri, Virajada; Srettananurak, Metinee; Jedeeyod, Piyanooch; Pornpratoom, Kittikan; Iam-cham, Veraya

    2013-03-01

    People with visual impairments need daily living skills, such as cooking, and Ratchasuda College offers independent living training for them. In order to fulfill their needs, a suitable kitchen should be designed with consideration of their limitations. The objective of this study was to design and evaluate a kitchen for persons with visual impairments. Before designing the kitchen, interviews and an observation were carried out to obtain information on the needs of blind and low-vision persons. Consequently, a kitchen model was developed and evaluated by 10 persons with visual impairments. After the design improvement, the kitchen was built and has been routinely used for training persons with visual impairments to prepare meals. Finally, a post-occupancy evaluation of the kitchen was conducted by observing and interviewing both trainers and those with visual impairments during the food preparation training. The results of the study indicated that kitchens for persons with visual impairments should have safety and usability features. The results of the post-occupancy evaluation showed that those who attended the cooking courses were able to cook safely in the kitchen. However, the kitchen still had limitations in some features.

  8. Time-dependent transition density matrix for visualizing charge-transfer excitations in photoexcited organic donor-acceptor systems

    NASA Astrophysics Data System (ADS)

    Li, Yonghui; Ullrich, Carsten

    2013-03-01

    The time-dependent transition density matrix (TDM) is a useful tool to visualize and interpret the induced charges and electron-hole coherences of excitonic processes in large molecules. Combined with time-dependent density functional theory on a real-space grid (as implemented in the octopus code), the TDM is a computationally viable visualization tool for optical excitation processes in molecules. It provides real-time maps of particles and holes which give information on excitations, in particular those that have charge-transfer character, that cannot be obtained from the density alone. Illustrations of the TDM and comparisons with standard density-difference plots will be shown for photoexcited organic donor-acceptor molecules. This work is supported by NSF Grant DMR-1005651.
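
    One common definition of the TDM, relative to the ground-state one-particle density matrix, can be sketched as follows (conventions vary between authors; the symbols below are standard textbook notation, not taken from this abstract):

```latex
% Time-dependent transition density matrix:
\Gamma(\mathbf{r},\mathbf{r}',t) =
  \langle \Psi(t) \vert \hat{\psi}^{\dagger}(\mathbf{r}')\,
  \hat{\psi}(\mathbf{r}) \vert \Psi(t) \rangle
  - \rho_{0}(\mathbf{r},\mathbf{r}')
% Its diagonal recovers the induced density,
%   \delta n(\mathbf{r},t) = \Gamma(\mathbf{r},\mathbf{r},t),
% while the off-diagonal elements carry the electron-hole coherences
% that make charge-transfer character visible.
```

    This is why the TDM reveals more than the density alone: the density corresponds only to the diagonal of this object.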

  9. Feature extraction inspired by V1 in visual cortex

    NASA Astrophysics Data System (ADS)

    Lv, Chao; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Xin, Peng; Zhu, Mingning; Ma, Hongqiang

    2018-04-01

    Target feature extraction plays an important role in pattern recognition. It is the most complicated activity in the brain mechanism of biological vision. Inspired by the strong ability of the primary visual cortex (V1) to extract dynamic and static features, a visual perception model is proposed. Firstly, 28 spatial-temporal filters with different orientations, a half-squaring operation and divisive normalization were adopted to obtain the responses of V1 simple cells; then, an adjustable parameter was added to the output weight so that the response of complex cells was obtained. Experimental results indicate that the proposed V1 model perceives motion information well. Besides, it has a good edge detection capability. The model inspired by V1 performs well in feature extraction and effectively combines brain-inspired intelligence with computer vision.
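
    The simple-cell stage described above (rectification followed by gain control) can be sketched as follows; the filter outputs and the normalization constant are toy values, not the paper's 28-filter bank:

```python
import numpy as np

def half_square(x):
    """Half-squaring nonlinearity: rectify, then square."""
    return np.maximum(x, 0.0) ** 2

def divisive_normalization(responses, sigma=0.1):
    """Divide each filter response by the pooled population activity,
    a standard model of V1 gain control."""
    pooled = responses.sum(axis=0) + sigma ** 2
    return responses / pooled

# Toy linear responses of 4 oriented filters at 3 spatial locations
# (stand-ins for the paper's 28 spatial-temporal filters).
linear = np.array([[0.9, -0.2, 0.1],
                   [0.3,  0.8, 0.0],
                   [-0.5, 0.1, 0.4],
                   [0.2,  0.0, 0.3]])
simple = divisive_normalization(half_square(linear))
print(simple.sum(axis=0))  # each location's responses sum to just under 1
```

    The normalization makes responses at each location compete, so a strongly driven orientation suppresses its neighbours, which is part of what gives such models their edge-detection behaviour.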

  10. Wetlands delineation by spectral signature analysis and legal implications

    NASA Technical Reports Server (NTRS)

    Anderson, R. R.; Carter, V.

    1972-01-01

    High altitude analysis of wetland resources and the use of such information in an operational mode to address specific problems of wetland preservation at a state level are discussed. Work efforts were directed toward: (1) developing techniques for using large scale color IR photography in state wetlands mapping program, (2) developing methods for obtaining wetlands ecology information from high altitude photography, (3) developing means by which spectral data can be more accurately analyzed visually, and (4) developing spectral data for automatic mapping of wetlands.

  11. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    PubMed Central

    Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.

    2017-01-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797

  12. Spectral discrimination in color blind animals via chromatic aberration and pupil shape.

    PubMed

    Stubbs, Alexander L; Stubbs, Christopher W

    2016-07-19

    We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide "color-blind" animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. We quantitatively show, through numerical simulations, how chromatic aberration can be exploited to obtain spectral information, especially through nonaxial pupils that are characteristic of coleoid cephalopods. We have also assessed the inherent ambiguity between range and color that is a consequence of the chromatic variation of best focus with wavelength. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.
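
    A toy thin-lens model makes the mechanism concrete: if focal length varies with wavelength, only one wavelength is sharp for any given accommodation state, so scanning focus yields spectral information. All optical constants below are illustrative, not measured cephalopod values:

```python
def blur_diameter(wavelength_nm, aperture_mm=8.0,
                  f0_mm=10.0, focus_nm=500.0, dispersion=0.0005):
    """Toy thin-lens model of chromatic defocus blur.

    Focal length is assumed to grow linearly with wavelength (longer
    wavelengths focus farther back). With the retina at best focus for
    `focus_nm`, other wavelengths form a blur circle whose diameter
    scales with the focus error and the aperture size."""
    f = f0_mm * (1.0 + dispersion * (wavelength_nm - focus_nm))
    return aperture_mm * abs(f - f0_mm) / f

# Sweep the spectrum: by noting which accommodation state sharpens the
# image, a single-photoreceptor eye could infer the dominant wavelength.
for wl in (450, 500, 550, 600):
    print(wl, round(blur_diameter(wl), 4))
```

    Note how a larger (or off-axis) aperture increases the blur for out-of-focus wavelengths, which is exactly what makes the wavelength-dependent focus signal detectable.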

  13. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can `move` through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  14. Correlation between multispectral photography and near-surface turbidities

    NASA Technical Reports Server (NTRS)

    Wertz, D. L.; Mealor, W. T.; Steele, M. L.; Pinson, J. W.

    1976-01-01

    Four-band multispectral photography obtained from an aerial platform at an altitude of about 10,000 feet has been utilized to measure near-surface turbidity at numerous sampling sites in the Ross Barnett Reservoir, Mississippi. Correlation of the photographs with turbidity measurements has been accomplished via an empirical mathematical model which depends upon visual color recognition when the composited photographs are examined on either an I squared S model 600 or a Spectral Data model 65 color-additive viewer. The mathematical model was developed utilizing least-squares, iterative, and standard statistical methods and includes a time-dependent term related to sun angle. This model is consistent with information obtained from two overflights of the target area - July 30, 1973 and October 30, 1973 - and now is being evaluated with regard to information obtained from a third overflight on November 8, 1974.

  15. Profiling Oman education data using data visualization technique

    NASA Astrophysics Data System (ADS)

    Alalawi, Sultan Juma Sultan; Shaharanee, Izwan Nizal Mohd; Jamil, Jastini Mohd

    2016-10-01

    This research presents an innovative data visualization technique to understand and visualize Oman's education data, generated from the Ministry of Education Oman "Educational Portal". The Ministry of Education in the Sultanate of Oman maintains huge databases containing massive amounts of information. The volume of data in the database increases yearly as many students, teachers and employees are entered into the database. The task of discovering and analyzing these vast volumes of data becomes increasingly difficult. Information visualization and data mining offer better ways of dealing with large volumes of information. In this paper, an innovative information visualization technique is developed to visualize the complex multidimensional educational data. Microsoft Excel Dashboard, Visual Basic for Applications (VBA) and Pivot Tables are utilized to visualize the data. Findings from the summarization of the data are presented, and it is argued that information visualization can help related stakeholders become aware of hidden and interesting information in the large amounts of data in their educational portal.

  16. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    PubMed

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

    We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and classification of benign/malignant clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using classical BoVW (with p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (with p-value = 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (with p-value 0.001). We demonstrated that classification based on an informatively selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
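
    The mutual-information criterion for ranking visual words can be sketched as follows; the counting estimator and toy data are illustrative, and the paper's task-driven dictionary learning is more elaborate than this filter-style selection:

```python
import numpy as np

def mutual_information(word_present, labels):
    """MI (in bits) between a binary word-occurrence variable and the
    image class label, estimated from co-occurrence counts."""
    n = len(labels)
    mi = 0.0
    for w in (0, 1):
        for c in set(labels):
            p_wc = sum(1 for x, y in zip(word_present, labels)
                       if x == w and y == c) / n
            if p_wc == 0:
                continue  # zero-probability cell contributes nothing
            p_w = sum(1 for x in word_present if x == w) / n
            p_c = sum(1 for y in labels if y == c) / n
            mi += p_wc * np.log2(p_wc / (p_w * p_c))
    return mi

# Rows = images, columns = visual words (1 if the word occurs).
occurrence = np.array([[1, 0, 1],
                       [1, 0, 0],
                       [0, 1, 1],
                       [0, 1, 0]])
labels = [0, 0, 1, 1]
scores = [mutual_information(occurrence[:, j], labels) for j in range(3)]
print([round(s, 3) for s in scores])  # -> [1.0, 1.0, 0.0]
```

    Words 0 and 1 perfectly predict the class (1 bit of MI each), while word 2 is independent of it (0 bits), so a dictionary built from the top-scoring words keeps only class-relevant vocabulary.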

  17. Images of climate change in the news: Visual framing of a global environmental issue

    NASA Astrophysics Data System (ADS)

    Rebich Hespanha, S.; Rice, R. E.; Montello, D. R.; Retzloff, S.; Tien, S.

    2012-12-01

    News media play a powerful role in disseminating and framing information and shaping public opinion on environmental issues. Choices of text and images that are made by the creators and distributors of news media not only influence public perception about which issues are important, but also surreptitiously lead consumers of these media to perceive certain aspects or perspectives on an issue while neglecting to consider others. Our research was motivated by a desire to obtain comprehensive quantitative and qualitative understanding of the types of information - both textual and visual -- that have been provided to the U.S. public over the past several decades through news reports about climate change. As part of this project, we documented and examined 118 themes in 19 categories presented in 350 randomly-selected visual images from U.S. news coverage of global climate change between 1969 and late 2009. This study examines how the use of imagery in print news positions climate change within public and private arenas and how it emphasizes particular geographic, political, scientific, technological, sociological, and ideological aspects of the issue.

  18. Exploiting visual search theory to infer social interactions

    NASA Astrophysics Data System (ADS)

    Rota, Paolo; Dang-Nguyen, Duc-Tien; Conci, Nicola; Sebe, Nicu

    2013-03-01

    In this paper we propose a new method to infer human social interactions using typical techniques adopted in the literature for visual search and information retrieval. The main information used to discriminate among different types of interactions is provided by proxemic cues acquired by a tracker, which serve to distinguish between intentional and casual interactions. The proxemic information is acquired through the analysis of two different metrics: on the one hand we observe the current distance between subjects, and on the other hand we measure the O-space synergy between subjects. The obtained values are taken at every time step over a temporal sliding window and processed in the Discrete Fourier Transform (DFT) domain. The features are eventually merged into a unique array and clustered using the K-means algorithm. The clusters are then reorganized, using a second, larger temporal window, into a Bag-of-Words framework, so as to build the feature vector that feeds the SVM classifier.
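
    The described pipeline (sliding-window DFT features, a K-means vocabulary, then bag-of-words histograms) can be sketched end to end; the toy distance series and the minimal Lloyd's-algorithm K-means below stand in for the tracker output and a library clusterer:

```python
import numpy as np

rng = np.random.default_rng(0)

def dft_features(series, win=8):
    """Magnitude of the DFT over a sliding window of inter-person
    distance samples; one feature vector per window position."""
    feats = []
    for start in range(0, len(series) - win + 1, win // 2):
        feats.append(np.abs(np.fft.rfft(series[start:start + win])))
    return np.array(feats)

def kmeans(X, k=2, iters=20):
    """Minimal K-means (Lloyd's algorithm) used to quantize the DFT
    feature vectors into a small visual-word vocabulary."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1),
                           axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two toy 'interactions': a steady approach vs. an oscillating distance.
approach = np.linspace(3.0, 0.5, 32)
oscillate = 2.0 + np.sin(np.linspace(0, 8 * np.pi, 32))
X = np.vstack([dft_features(approach), dft_features(oscillate)])
words = kmeans(X)
half = len(words) // 2
# Per-sequence bag-of-words histograms feed the classifier (SVM in the paper).
print(np.bincount(words[:half], minlength=2),
      np.bincount(words[half:], minlength=2))
```

    The two histograms summarize how often each motion "word" occurs in a sequence, which is the representation the classifier actually sees.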

  19. Multisource data fusion for documenting archaeological sites

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir; Chibunichev, Alexander; Zhuravlev, Denis

    2017-10-01

    The quality of archaeological site documentation is of great importance for preserving and investigating cultural heritage. The progress in developing new techniques and systems for data acquisition and processing creates an excellent basis for achieving a new quality of archaeological site documentation and visualization. Archaeological data have some specific features which have to be taken into account during acquisition, processing and management. First of all, it is necessary to gather as complete information about findings as possible, with no loss of information and no damage to the artifacts. Remote sensing technologies are the most adequate and powerful means of satisfying this requirement. An approach to archaeological data acquisition and fusion based on remote sensing is proposed. It combines a set of photogrammetric techniques for obtaining geometrical and visual information at different scales and levels of detail with a pipeline for archaeological data documenting, structuring, fusion, and analysis. The proposed approach is applied to the documentation of the Bosporus archaeological expedition of the Russian State Historical Museum.

  20. Getting more from visual working memory: Retro-cues enhance retrieval and protect from visual interference.

    PubMed

    Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus

    2016-06-01

    Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) Focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is in part because of protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the processes of testing memory because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: Focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
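
    For orientation, the minimum-norm approach mentioned above can be sketched as a Tikhonov-regularized inverse of the lead-field matrix (standard textbook form; the symbols are not taken from the paper):

```latex
% Forward model: sensor data b(t) = L s(t) + noise, where L is the
% lead-field matrix and s(t) the source amplitudes. The minimum-norm
% (L2-regularized) source estimate is
\hat{\mathbf{s}}(t) = \mathbf{L}^{\top}\left(\mathbf{L}\mathbf{L}^{\top}
  + \lambda\,\mathbf{I}\right)^{-1}\mathbf{b}(t)
% where \lambda trades off data fit against total source power.
```

    The regularization term is what resolves the ill-posedness noted above: among all source patterns consistent with the sensors, the estimate with the smallest total power is chosen.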

  2. Real-time visualization and analysis of airflow field by use of digital holography

    NASA Astrophysics Data System (ADS)

    Di, Jianglei; Wu, Bingjing; Chen, Xin; Liu, Junjiang; Wang, Jun; Zhao, Jianlin

    2013-04-01

    The measurement and analysis of airflow fields is very important in fluid dynamics. For airflow, smoke particles can be added so that turbulence phenomena can be visually observed with particle tracking technology, but the imperfect ability of smoke particles to follow high-speed airflow reduces the measurement accuracy. In recent years, with the advantages of non-contact, nondestructive, fast and full-field measurement, digital holography has been widely applied in many fields, such as deformation and vibration analysis, particle characterization, refractive index measurement, and so on. In this paper, we present a method to measure the airflow field by use of digital holography. A small wind tunnel model made of acrylic glass is built to control the velocity and direction of the airflow. Samples of different shapes, such as an aircraft wing and a cylinder, are placed in the wind tunnel model to produce different forms of flow field. With a Mach-Zehnder interferometer setup, a series of digital holograms carrying the information of the airflow field distributions in different states are recorded by a CCD camera, and the corresponding holographic images are numerically reconstructed from the holograms by computer. We can then conveniently obtain the velocity or pressure information of the airflow, deduced from the quantitative phase information of the holographic images, and visually display the airflow field and its evolution in the form of a movie. The theory and experimental results show that digital holography is a robust and feasible approach for real-time visualization and analysis of airflow fields.
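
    The numerical reconstruction step can be sketched with the angular spectrum method, a standard way to propagate a recorded hologram field to the image plane; the parameters below are illustrative, not the experiment's:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z (metres) using
    the angular spectrum method: FFT, multiply by the free-space
    transfer function, inverse FFT."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)  # transfer function (evanescent waves cut)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy hologram field: a uniform plane wave on a 64x64 grid with 5 um
# pixels at the He-Ne wavelength, propagated 10 mm.
n = 64
field = np.ones((n, n), dtype=complex)
out = angular_spectrum(field, 632.8e-9, 5e-6, 0.01)
phase = np.angle(out)  # the phase map is what encodes refractive-index
print(phase.shape)     # (and hence density/pressure) changes in airflow
```

    A uniform plane wave only picks up a global phase under propagation, which makes this a convenient sanity check before reconstructing real holograms.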

  3. 32 CFR 811.8 - Forms prescribed and availability of publications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... FORCE SALES AND SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.8 Forms prescribed and availability of publications. (a) AF Form 833, Visual Information Request, AF Form 1340, Visual Information Support Center Workload Report, DD Form 1995, Visual Information (VI) Production...

  4. Exceptional preservation of eye structure in arthropod visual predators from the Middle Jurassic

    PubMed Central

    Vannier, Jean; Schoenemann, Brigitte; Gillot, Thomas; Charbonnier, Sylvain; Clarkson, Euan

    2016-01-01

    Vision has revolutionized the way animals explore their environment and interact with each other, and it rapidly became a major driving force in animal evolution. However, direct evidence of how ancient animals could perceive their environment is extremely difficult to obtain because internal eye structures are almost never fossilized. Here, we reconstruct with unprecedented resolution the three-dimensional structure of the huge compound eye of a 160-million-year-old thylacocephalan arthropod from the La Voulte exceptional fossil biota in SE France. This arthropod had about 18,000 lenses on each eye, which is a record among extinct and extant arthropods and is surpassed only by modern dragonflies. Combined information about its eyes, internal organs and gut contents obtained by X-ray microtomography leads to the conclusion that this thylacocephalan arthropod was a visual hunter, probably adapted to illuminated environments, thus contradicting the hypothesis that La Voulte was a deep-water environment. PMID:26785293

  5. Disease pattern and social needs of street people in the race course area of Kano, Nigeria.

    PubMed

    Abdu, Lawan; Withers, James; Habib, Abdulrazaq G; Mijinyawa, Muhammad S; Yusef, Shehu M

    2013-02-01

    The study aimed to examine street people on Race Course Street in Kano, Nigeria, for the prevalence of common diseases. Descriptive report. Institutional ethical approval was obtained. Information was obtained on age, sex, place of residence, drug habits, source of drinking water, toilet facility used, visual acuity, blood pressure, random blood sugar level, presence of skin diseases and physical disability. Sixty-five subjects were examined and 7 declined. There were 16 males and 49 females (M:F = 1:3). The mean age was 48 ± 9.2 years. The subjects were mainly widows; some live in the street and have no access to basic amenities, and six use non-narcotic medicinal substances. The diseases observed were hypertension, visual problems, and trauma. Religious factors, socio-cultural factors, and the lack of a government policy lead to poor access to health care for street people.

  6. Skylab-4 visual observations project: Geological features of southwestern North America

    NASA Technical Reports Server (NTRS)

    Silver, L. T.

    1975-01-01

    Visual observations conducted by Skylab-4 crewmen on seven designated geological target areas and other targets of opportunity in parts of the southwestern United States and northwestern Mexico are described. The experiments were designed to learn how effectively geologic features could be observed from orbit and what research information could be obtained from the observations when supported by ground studies. For the limited preparation they received, the crewmen demonstrated exceptional observational ability and produced outstanding photographic studies. They also formulated cogent opinions on how to improve future observational and photo-documentation techniques. From the photographs and other observations, it was possible to obtain significant research contributions to on-going field investigations. These contributions were integrated with other aspects of the ground investigations on the following topics: major faults, regional stratigraphy, occurrence of Precambrian crystalline rocks, mapping of Mesozoic volcanic rocks, and regional geology.

  7. Exceptional preservation of eye structure in arthropod visual predators from the Middle Jurassic.

    PubMed

    Vannier, Jean; Schoenemann, Brigitte; Gillot, Thomas; Charbonnier, Sylvain; Clarkson, Euan

    2016-01-19

    Vision has revolutionized the way animals explore their environment and interact with each other, and it rapidly became a major driving force in animal evolution. However, direct evidence of how ancient animals could perceive their environment is extremely difficult to obtain because internal eye structures are almost never fossilized. Here, we reconstruct with unprecedented resolution the three-dimensional structure of the huge compound eye of a 160-million-year-old thylacocephalan arthropod from the La Voulte exceptional fossil biota in SE France. This arthropod had about 18,000 lenses on each eye, which is a record among extinct and extant arthropods and is surpassed only by modern dragonflies. Combined information about its eyes, internal organs and gut contents obtained by X-ray microtomography leads to the conclusion that this thylacocephalan arthropod was a visual hunter, probably adapted to illuminated environments, thus contradicting the hypothesis that La Voulte was a deep-water environment.

  8. The influence of visual feedback and register changes on sign language production: A kinematic study with deaf signers

    PubMed Central

    EMMOREY, KAREN; GERTSBERG, NELLY; KORPICS, FRANCO; WRIGHT, CHARLES E.

    2009-01-01

    Speakers monitor their speech output by listening to their own voice. However, signers do not look directly at their hands and cannot see their own face. We investigated the importance of a visual perceptual loop for sign language monitoring by examining whether changes in visual input alter sign production. Deaf signers produced American Sign Language (ASL) signs within a carrier phrase under five conditions: blindfolded, wearing tunnel-vision goggles, normal (citation) signing, shouting, and informal signing. Three-dimensional movement trajectories were obtained using an Optotrak Certus system. Informally produced signs were shorter with less vertical movement. Shouted signs were displaced forward and to the right and were produced within a larger volume of signing space, with greater velocity, greater distance traveled, and a longer duration. Tunnel vision caused signers to produce less movement within the vertical dimension of signing space, but blind and citation signing did not differ significantly on any measure, except duration. Thus, signers do not “sign louder” when they cannot see themselves, but they do alter their sign production when vision is restricted. We hypothesize that visual feedback serves primarily to fine-tune the size of signing space rather than as input to a comprehension-based monitor. PMID:20046943

  9. The influence of visual feedback and register changes on sign language production: A kinematic study with deaf signers.

    PubMed

    Emmorey, Karen; Gertsberg, Nelly; Korpics, Franco; Wright, Charles E

    2009-01-01

    Speakers monitor their speech output by listening to their own voice. However, signers do not look directly at their hands and cannot see their own face. We investigated the importance of a visual perceptual loop for sign language monitoring by examining whether changes in visual input alter sign production. Deaf signers produced American Sign Language (ASL) signs within a carrier phrase under five conditions: blindfolded, wearing tunnel-vision goggles, normal (citation) signing, shouting, and informal signing. Three-dimensional movement trajectories were obtained using an Optotrak Certus system. Informally produced signs were shorter with less vertical movement. Shouted signs were displaced forward and to the right and were produced within a larger volume of signing space, with greater velocity, greater distance traveled, and a longer duration. Tunnel vision caused signers to produce less movement within the vertical dimension of signing space, but blind and citation signing did not differ significantly on any measure, except duration. Thus, signers do not "sign louder" when they cannot see themselves, but they do alter their sign production when vision is restricted. We hypothesize that visual feedback serves primarily to fine-tune the size of signing space rather than as input to a comprehension-based monitor.

  10. All-in-one visual and computer decoding of multiple secrets: translated-flip VC with polynomial-style sharing

    NASA Astrophysics Data System (ADS)

    Wu, Chia-Hua; Lee, Suiang-Shyan; Lin, Ja-Chen

    2017-06-01

    This all-in-one hiding method creates two transparencies that offer several decoding options: visual decoding with or without translation and flipping, and computer decoding. In visual decoding, two less-important (or fake) binary secret images S1 and S2 can be revealed. S1 is viewed by directly stacking the two transparencies. S2 is viewed by flipping one transparency and translating the other to a specified coordinate before stacking. Finally, the important/true secret files can be decrypted by a computer using information extracted from the transparencies. The encoding process that hides this information combines translated-flip visual cryptography, block types, polynomial-style sharing, and a linear congruential generator. If a thief obtained both transparencies, which are stored in distinct places, he could view S1 and/or S2 by stacking but would still need the key values used in computer decoding to recover the true secrets. The thief might simply try other kinds of stacking and eventually give up searching for more secrets, since computer decoding is entirely different from stacking-based decoding. Unlike traditional image hiding, which uses images as host media, our method hides fine gray-level images in binary transparencies; thus, our host media are the transparencies themselves. Comparisons and analysis are provided.
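
    The stacking operation above follows the basic principle of visual cryptography. As a hedged illustration, here is a minimal sketch of the classical (2,2) scheme only (the paper's translated-flip construction, block types, and polynomial-style sharing are not reproduced): each secret pixel expands into a pair of subpixels per transparency, and superimposing the transparencies corresponds to a pixel-wise OR.

```python
import random

def make_shares(secret):
    """Encode a binary secret image (1 = black) into two random shares.

    Classic (2,2) visual cryptography: each secret pixel becomes two
    subpixels per share. Each share alone is a uniformly random pattern
    and reveals nothing about the secret.
    """
    s1, s2 = [], []
    for row in secret:
        r1, r2 = [], []
        for px in row:
            pat = random.choice([(0, 1), (1, 0)])  # random subpixel pair
            r1.extend(pat)
            # identical pattern for a white pixel, complement for black
            r2.extend(pat if px == 0 else (1 - pat[0], 1 - pat[1]))
        s1.append(r1)
        s2.append(r2)
    return s1, s2

def stack(s1, s2):
    """Simulate superimposing transparencies: printed ink ORs together."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

secret = [[0, 1], [1, 0]]
a, b = make_shares(secret)
out = stack(a, b)
# black secret pixels come out fully black (both subpixels inked),
# white pixels stay half-black, so the eye perceives the contrast
```

    When the shares are stacked, a black secret pixel yields two inked subpixels and a white pixel exactly one, which is the contrast the visual decoder relies on.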

  11. Information efficiency in visual communication

    NASA Astrophysics Data System (ADS)

    Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1993-08-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  12. Information efficiency in visual communication

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1993-01-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.
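
    The entropy argument above can be made concrete with a toy sketch (the coefficient values and step sizes below are assumptions for illustration, not the paper's quantizers): quantizing high-frequency coefficients with coarser steps lowers the entropy of the encoded signal while leaving the information-rich low-frequency terms untouched.

```python
import math
from collections import Counter

def quantize(signal, steps):
    """Quantize each coefficient with its own step size."""
    return [round(x / s) for x, s in zip(signal, steps)]

def entropy(symbols):
    """Shannon entropy in bits/symbol: the rate a lossless coder needs."""
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

# Toy spectrum: low-frequency coefficients carry most of the energy.
coeffs = [12.3, 7.1, 4.4, 2.2, 1.1, 0.6, 0.3, 0.1]

uniform = quantize(coeffs, [0.5] * len(coeffs))  # same step everywhere
freq_dep = quantize(coeffs, [0.5, 0.5, 1.0, 1.0, 2.0, 2.0, 4.0, 4.0])
# freq_dep keeps the dominant low-frequency symbols identical to the
# uniform case but maps the small high-frequency terms to fewer symbols,
# so its entropy (encoded bit rate) is lower.
```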

  13. 32 CFR 811.3 - Official requests for visual information productions or materials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... THE AIR FORCE SALES AND SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.3 Official requests for visual information productions or materials. (a) Send official Air Force... 32 National Defense 6 2010-07-01 2010-07-01 false Official requests for visual information...

  14. 32 CFR 811.4 - Selling visual information materials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.4 Selling visual information materials. (a) Air Force VI activities cannot sell materials. (b) HQ AFCIC/ITSM may approve the... 32 National Defense 6 2010-07-01 2010-07-01 false Selling visual information materials. 811.4...

  15. Fuzzy-based simulation of real color blindness.

    PubMed

    Lee, Jinmi; dos Santos, Wellington P

    2010-01-01

    About 8% of men are affected by color blindness. That population is at a disadvantage because they cannot perceive a substantial amount of visual information. This work presents two computational tools developed to assist color blind people. The first one tests for color blindness and assesses its severity. The second tool is based on fuzzy logic and implements a proposed method to simulate real red and green color blindness in order to generate synthetic cases of color vision disturbance in statistically significant amounts. Our purpose is to develop correction tools and obtain a deeper understanding of the accessibility problems faced by people with chromatic visual impairment.
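
    To see why red-green color blindness removes visual information, consider this deliberately crude sketch (a toy stand-in, not the paper's fuzzy-logic simulator, which models real, graded color vision disturbances): collapsing the red and green channels onto their mean makes any two colors that differ only along the red-green axis indistinguishable.

```python
def simulate_red_green_loss(rgb):
    """Toy red-green blindness: replace R and G with their mean.

    Colors that differ only in the red/green direction map to the same
    output, so that component of the visual information is lost. Real
    simulators work in LMS cone space rather than directly on RGB.
    """
    r, g, b = rgb
    m = (r + g) / 2.0
    return (m, m, b)
```

    For example, pure red and pure green map to the same output color, while a blue-dominated color is left essentially unchanged.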

  16. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention that is involuntarily motivated by affective mechanisms can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as black-and-white oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it offers high information transfer rates, users need only a few minutes to learn to control the BCI system, and few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.

  17. Probabilistic visual and electromagnetic data fusion for robust drift-free sequential mosaicking: application to fetoscopy

    PubMed Central

    Tella-Amo, Marcel; Peter, Loic; Shakir, Dzhoshkun I.; Deprest, Jan; Iglesias, Juan Eugenio; Ourselin, Sebastien

    2018-01-01

    Abstract. The most effective treatment for twin-to-twin transfusion syndrome is laser photocoagulation of the shared vascular anastomoses in the placenta. Vascular connections are extremely challenging to locate due to their caliber and the reduced field-of-view of the fetoscope. Therefore, mosaicking techniques are beneficial to expand the scene, facilitate navigation, and allow vessel photocoagulation decision-making. Local vision-based mosaicking algorithms inherently drift over time due to the use of pairwise transformations. We propose the use of an electromagnetic tracker (EMT) sensor mounted at the tip of the fetoscope to obtain camera pose measurements, which we incorporate into a probabilistic framework with frame-to-frame visual information to achieve globally consistent sequential mosaics. We parametrize the problem in terms of plane and camera poses constrained by EMT measurements to enforce global consistency while leveraging pairwise image relationships in a sequential fashion through the use of local bundle adjustment. We show that our approach is drift-free and performs similarly to state-of-the-art global alignment techniques like bundle adjustment albeit with much less computational burden. Additionally, we propose a version of bundle adjustment that uses EMT information. We demonstrate the robustness to EMT noise and loss of visual information and evaluate mosaics for synthetic, phantom-based and ex vivo datasets. PMID:29487889

  18. Australian DefenceScience. Volume 15, Number 3, Spring

    DTIC Science & Technology

    2007-01-01

    ignition, high pressure sealing, ignitor and propellent design, and ballistics instrumentation. Validation of simulation models of internal ballistics...supplementing visual information obtained by sources such as radiography and scanning electron microscopy, revealing details about features that are not...otherwise visible. Hence, it can assist with the inspection of vital component parts that are subject to high stresses, like aircraft engine turbine

  19. Cognitive aspects of haptic form recognition by blind and sighted subjects.

    PubMed

    Bailes, S M; Lambert, R M

    1986-11-01

    Studies using haptic form recognition tasks have generally concluded that the adventitiously blind perform better than the congenitally blind, implicating the importance of early visual experience in improved spatial functioning. The hypothesis was tested that the adventitiously blind have retained some ability to encode successive information obtained haptically in terms of a global visual representation, while the congenitally blind use a coding system based on successive inputs. Eighteen blind (adventitiously and congenitally) and 18 sighted (blindfolded and performing with vision) subjects were tested on their recognition of raised line patterns when the standard was presented in segments: in immediate succession, or with unfilled intersegmental delays of 5, 10, or 15 seconds. The results did not support the above hypothesis. Three main findings were obtained: normally sighted subjects were both faster and more accurate than the other groups; all groups improved in accuracy of recognition as a function of length of interstimulus interval; sighted subjects tended to report using strategies with a strong verbal component while the blind tended to rely on imagery coding. These results are explained in terms of information-processing theory consistent with dual encoding systems in working memory.

  20. Temporal and spatio-temporal vibrotactile displays for voice fundamental frequency: an initial evaluation of a new vibrotactile speech perception aid with normal-hearing and hearing-impaired individuals.

    PubMed

    Auer, E T; Bernstein, L E; Coulter, D C

    1998-10-01

    Four experiments were performed to evaluate a new wearable vibrotactile speech perception aid that extracts fundamental frequency (F0) and displays the extracted F0 as a single-channel temporal or an eight-channel spatio-temporal stimulus. Specifically, we investigated the perception of intonation (i.e., question versus statement) and emphatic stress (i.e., stress on the first, second, or third word) under Visual-Alone (VA), Visual-Tactile (VT), and Tactile-Alone (TA) conditions and compared performance using the temporal and spatio-temporal vibrotactile display. Subjects were adults with normal hearing in experiments I-III and adults with severe to profound hearing impairments in experiment IV. Both versions of the vibrotactile speech perception aid successfully conveyed intonation. Vibrotactile stress information was successfully conveyed, but vibrotactile stress information did not enhance performance in VT conditions beyond performance in VA conditions. In experiment III, which involved only intonation identification, a reliable advantage for the spatio-temporal display was obtained. Differences between subject groups were obtained for intonation identification, with more accurate VT performance by those with normal hearing. Possible effects of long-term hearing status are discussed.

  1. Indoor Photogrammetry Aided with Uwb Navigation

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Fissore, F.; Guarnieri, A.; Vettore, A.

    2018-05-01

    The subject of photogrammetric surveying with mobile devices, in particular smartphones, is becoming of significant interest in the research community. Nowadays, the process of providing 3D point clouds with photogrammetric procedures is well known. However, external information is still typically needed in order to move from the point cloud obtained from images to a 3D metric reconstruction. This paper investigates the integration of information provided by an UWB positioning system with visual based reconstruction to produce a metric reconstruction. Furthermore, the orientation (with respect to North-East directions) of the obtained model is assessed thanks to the use of inertial sensors included in the considered UWB devices. Results of this integration are shown on two case studies in indoor environments.

  2. Application of Integrated Photogrammetric and Terrestrial Laser Scanning Data to Cultural Heritage Surveying

    NASA Astrophysics Data System (ADS)

    Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz

    2017-12-01

    The terrestrial laser scanning technology has a wide spectrum of applications, from land surveying, civil engineering and architecture to archaeology. The technology is capable of obtaining, in a short time, accurate coordinates of points which represent the surface of objects. Scanning of buildings is therefore a process which ensures obtaining information on all structural elements of a building. The result is a point cloud consisting of millions of elements which are a perfect source of information on the object and its surrounding. Photogrammetric techniques allow documenting an object in high resolution in the form of orthophoto plans, and are a basis for developing 2D documentation or obtaining point clouds for objects and 3D modelling. Integration of photogrammetric data and TLS brings a new quality to surveying historic monuments. Historic monuments play an important cultural and historical role. Centuries-old buildings require constant renovation and preservation of their structural and visual invariability while maintaining the safety of the people who use them. The full surveying process allows evaluating the actual condition of monuments and planning repairs and renovations. The huge size and specific character of historic monuments make it difficult to obtain reliable and complete information about them. The TLS technology allows obtaining such information in a short time and is non-invasive. A point cloud is not only a basis for developing architectural and construction documentation or evaluating the actual condition of a building; it is also a real visualization of monuments and their entire environment. The saved image of an object's surface can be presented at any time and place. A cyclical TLS survey of historic monuments allows detecting structural changes and evaluating damage and changes that cause deformation of a monument's components.
The paper presents application of integrated photogrammetric data and TLS illustrated on an example of historic monuments from southern Poland. The cartographic materials are a basis for determining the actual condition of monuments and performing repair works. The materials also supplement the archive of monuments by means of recording the actual image of a monument in a virtual space.

  3. VisGets: coordinated visualizations for web-based information exploration and discovery.

    PubMed

    Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey

    2008-01-01

    In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets--interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.

  4. Age-equivalent top-down modulation during cross-modal selective attention.

    PubMed

    Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam

    2014-12-01

    Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.

  5. TMS effects on subjective and objective measures of vision: stimulation intensity and pre- versus post-stimulus masking.

    PubMed

    de Graaf, Tom A; Cornelsen, Sonja; Jacobs, Christianne; Sack, Alexander T

    2011-12-01

    Transcranial magnetic stimulation (TMS) can be used to mask visual stimuli, disrupting visual task performance or preventing visual awareness. While TMS masking studies generally fix stimulation intensity, we hypothesized that varying the intensity of TMS pulses in a masking paradigm might inform several ongoing debates concerning TMS disruption of vision as measured subjectively versus objectively, and pre-stimulus (forward) versus post-stimulus (backward) TMS masking. We here show that both pre-stimulus TMS pulses and post-stimulus TMS pulses could strongly mask visual stimuli. We found no dissociations between TMS effects on the subjective and objective measures of vision for any masking window or intensity, ruling out the option that TMS intensity levels determine whether dissociations between subjective and objective vision are obtained. For the post-stimulus time window particularly, we suggest that these data provide new constraints for (e.g. recurrent) models of vision and visual awareness. Finally, our data are in line with the idea that pre-stimulus masking operates differently from conventional post-stimulus masking. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Developing a Data Visualization System for the Bank of America Chicago Marathon (Chicago, Illinois USA).

    PubMed

    Hanken, Taylor; Young, Sam; Smilowitz, Karen; Chiampas, George; Waskowski, David

    2016-10-01

    As one of the largest marathons worldwide, the Bank of America Chicago Marathon (BACCM; Chicago, Illinois USA) accumulates high volumes of data. Race organizers and engaged agencies need the ability to access specific data in real-time. This report details a data visualization system designed for the Chicago Marathon and establishes key principles for event management data visualization. The data visualization system allows for efficient data communication among the organizing agencies of Chicago endurance events. Agencies can observe the progress of the race throughout the day and obtain needed information, such as the number and location of runners on the course and current weather conditions. Implementation of the system can reduce time-consuming, face-to-face interactions between involved agencies by having key data streams in one location, streamlining communications with the purpose of improving race logistics, as well as medical preparedness and response. Hanken T , Young S , Smilowitz K , Chiampas G , Waskowski D . Developing a data visualization system for the Bank of America Chicago Marathon (Chicago, Illinois USA). Prehosp Disaster Med. 2016;31(5):572-577.

  7. AWE: Aviation Weather Data Visualization Environment

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Lodha, Suresh K.; Norvig, Peter (Technical Monitor)

    2000-01-01

    Weather is one of the major causes of aviation accidents. General aviation (GA) flights account for 92% of all aviation accidents. In spite of all the official and unofficial sources of weather visualization tools available to pilots, there is an urgent need for visualizing several kinds of weather-related data tailored to general aviation pilots. Our system, the Aviation Weather Data Visualization Environment (AWE), presents graphical displays of meteorological observations, terminal area forecasts, and winds aloft forecasts on a cartographic grid specific to the pilot's area of interest. Decisions regarding the graphical display and design are made based on careful consideration of user needs. An integral visual display of these elements of weather reports is designed for use by GA pilots as a weather briefing and route selection tool. AWE links the weather information to the flight's path and schedule. The pilot can interact with the system to obtain aviation-specific weather for the entire area or for his specific route, to explore what-if scenarios, and to make "go/no-go" decisions. The system, as evaluated by pilots at NASA Ames Research Center, was found to be useful.

  8. Constructing and Reading Visual Information: Visual Literacy for Library and Information Science Education

    ERIC Educational Resources Information Center

    Ma, Yan

    2015-01-01

    This article examines visual literacy education and research for library and information science profession to educate the information professionals who will be able to execute and implement the ACRL (Association of College and Research Libraries) Visual Literacy Competency Standards successfully. It is a continuing call for inclusion of visual…

  9. Haptograph Representation of Real-World Haptic Information by Wideband Force Control

    NASA Astrophysics Data System (ADS)

    Katsura, Seiichiro; Irie, Kouhei; Ohishi, Kiyoshi

    Artificial acquisition and reproduction of human sensations are basic technologies of communication engineering. For example, auditory information is obtained by a microphone, and a speaker reproduces it by artificial means. Furthermore, a video camera and a television make it possible to transmit visual sensation by broadcasting. By contrast, since tactile or haptic information is subject to Newton's “law of action and reaction” in the real world, a device that acquires, transmits, and reproduces this information has not been established. From this point of view, real-world haptics is the key technology for future haptic communication engineering. This paper proposes a novel acquisition method for haptic information named the “haptograph”. The haptograph visualizes haptic information much as a photograph visualizes a scene. The proposed haptograph is applied to haptic recognition of the contact environment. A linear motor contacts the surface of the environment, and its reaction force is used to make a haptograph. Robust contact motion and sensor-less sensing of the reaction force are attained by using a disturbance observer. As a result, an encyclopedia of contact environments is attained. Since temporal and spatial analyses are conducted to represent haptic information as a haptograph, the information can be recognized and evaluated intuitively.

  10. The use of a tactile interface to convey position and motion perceptions

    NASA Technical Reports Server (NTRS)

    Rupert, A. H.; Guedry, F. E.; Reschke, M. F.

    1994-01-01

    Under normal terrestrial conditions, perception of position and motion is determined by central nervous system integration of concordant and redundant information from multiple sensory channels (somatosensory, vestibular, visual), which collectively yield vertical perceptions. In the acceleration environment experienced by the pilots, the somatosensory and vestibular sensors frequently present false information concerning the direction of gravity. When presented with conflicting sensory information, it is normal for pilots to experience episodes of disorientation. We have developed a tactile interface that obtains vertical roll and pitch information from a gyro-stabilized attitude indicator and maps this information in a one-to-one correspondence onto the torso of the body using a matrix of vibrotactors. This enables the pilot to continuously maintain an awareness of aircraft attitude without reference to visual cues, utilizing a sensory channel that normally operates at the subconscious level. Although initially developed to improve pilot spatial awareness, this device has obvious applications to 1) simulation and training, 2) nonvisual tracking of targets, which can reduce the need for pilots to make head movements in the high-G environment of aerial combat, and 3) orientation in environments with minimal somatosensory cues (e.g., underwater) or gravitational cues (e.g., space).

  11. Learning to rank using user clicks and visual features for image retrieval.

    PubMed

    Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong

    2015-04-01

    The inconsistency between textual features and visual content can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in judging the relevance between a query and clicked images, are adopted in the image ranking model. However, the existing ranking model cannot integrate visual features, which are effective in refining click-based search results. In this paper, we propose a novel ranking model based on the learning-to-rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large-margin structured output learning, and visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning-to-rank model based on visual features and user clicks outperforms state-of-the-art algorithms.
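
    The pairwise intuition behind such models can be sketched as follows (a minimal linear stand-in under assumed toy features; the paper's structured-output learning, hypergraph regularizer, and alternating linearization are not reproduced): click data yields preference pairs, and a linear score is trained so that the preferred image of each pair ranks higher.

```python
def rank_train(pairs, dim, epochs=50, lr=0.1):
    """Pairwise learning to rank with a hinge-style update.

    Each pair is (features of the clicked/preferred image,
    features of the less relevant image); we learn w so that
    w . preferred exceeds w . other by a margin.
    """
    w = [0.0] * dim
    for _ in range(epochs):
        for pos, neg in pairs:
            margin = sum(wi * (p - q) for wi, p, q in zip(w, pos, neg))
            if margin < 1.0:  # ordering violated or too close: update
                w = [wi + lr * (p - q) for wi, p, q in zip(w, pos, neg)]
    return w

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Toy visual features; the first image of each pair received the clicks.
pairs = [((1.0, 0.0), (0.0, 1.0)),
         ((0.9, 0.1), (0.2, 0.8))]
w = rank_train(pairs, dim=2)
```

    After training, the learned weights score every clicked image above its paired alternative, which is the ordering a click-consistent ranker must reproduce.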

  12. Health literacy issues among women with visual impairments.

    PubMed

    Harrison, Tracie C; Mackert, Michael; Watkins, Casey

    2010-01-01

    The purpose of this secondary analysis using qualitative description was to explore health literacy using the health care experiences of women with permanent visual impairments (VIs). Interviews were analyzed from a sample of 15 community-dwelling women ages 44 to 79 with permanent VIs who had participated in a larger grounded theory study. The 15 women were interviewed twice; the audio-recorded interviews were then transcribed verbatim and analyzed using qualitative content analysis. Using the Institute of Medicine's definition of health literacy, the women's experiences were categorized into their ability to obtain, process, and understand health information. Their perceptions of the factors that influenced their health literacy were also explored. The women voiced that barriers to their ability to gain information in a format amenable to their processing skills, combined with barriers arising from health care providers' attitudes, undermined their ability to build health literacy capacity. Copyright 2010, SLACK Incorporated.

  13. Visual space under free viewing conditions.

    PubMed

    Doumen, Michelle J A; Kappers, Astrid M L; Koenderink, Jan J

    2005-10-01

    Most research on visual space has been done under restricted viewing conditions and in reduced environments. In our experiments, observers performed an exocentric pointing task, a collinearity task, and a parallelity task in an entirely visible room. We varied the relative distances between the objects and the observer and the separation angle between the two objects. We were able to compare our data directly with data from experiments in an environment with less monocular depth information present. We expected that in a richer environment and under less restrictive viewing conditions, the settings would deviate less from the veridical settings. However, large systematic deviations from veridical settings were found for all three tasks. The structure of these deviations was task dependent, and the structure and the deviations themselves were comparable to those obtained under more restricted circumstances. Thus, the additional information was not used effectively by the observers.

  14. FlowSOM: Using self-organizing maps for visualization and interpretation of cytometry data.

    PubMed

    Van Gassen, Sofie; Callebaut, Britt; Van Helden, Mary J; Lambrecht, Bart N; Demeester, Piet; Dhaene, Tom; Saeys, Yvan

    2015-07-01

    The number of markers measured in both flow and mass cytometry keeps increasing steadily. Although this provides a wealth of information, it becomes infeasible to analyze these datasets manually. When using 2D scatter plots, the number of possible plots increases exponentially with the number of markers and therefore, relevant information that is present in the data might be missed. In this article, we introduce a new visualization technique, called FlowSOM, which analyzes flow or mass cytometry data using a Self-Organizing Map. Using two-level clustering and star charts, our algorithm helps to obtain a clear overview of how all markers are behaving on all cells, and to detect subsets that might be missed otherwise. R code is available at https://github.com/SofieVG/FlowSOM and will be made available at Bioconductor. © 2015 International Society for Advancement of Cytometry.
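    The first level of the two-level approach described in the abstract, a self-organizing map whose nodes are later meta-clustered, can be sketched in NumPy. The grid size, learning schedule, and toy two-marker data below are illustrative assumptions, not FlowSOM's actual defaults:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=5, lr0=0.5, seed=0):
    """Train a tiny Self-Organizing Map (first level of a FlowSOM-style pipeline)."""
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    # node coordinates on the grid, used by the neighborhood function
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    # initialize codebook vectors from random data points
    weights = data[rng.choice(len(data), n_nodes, replace=False)].astype(float)
    sigma0 = max(grid) / 2.0
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            t = step / n_steps
            lr = lr0 * (1 - t)                  # decaying learning rate
            sigma = sigma0 * (1 - t) + 1e-9     # shrinking neighborhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))  # grid-neighborhood weights
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

# toy 2-marker "cytometry" data: two well-separated cell populations
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(3, 0.3, (200, 2))])
codebook = train_som(data)
# assign each cell to its nearest SOM node (these nodes would then be meta-clustered)
labels = np.argmin(((data[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
```

    In FlowSOM proper, the node codebook is then clustered again (consensus hierarchical clustering) and visualized as star charts; this sketch stops at the node assignment.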

  15. Unsupervised real-time speaker identification for daily movies

    NASA Astrophysics Data System (ADS)

    Li, Ying; Kuo, C.-C. Jay

    2002-07-01

    The problem of identifying speakers for movie content analysis is addressed in this paper. While most previous work on speaker identification was carried out in a supervised mode using pure audio data, more robust results can be obtained in real time by integrating knowledge from multiple media sources in an unsupervised mode. In this work, both audio and visual cues are employed and subsequently combined in a probabilistic framework to identify speakers. In particular, audio information is used to identify speakers with a maximum likelihood (ML)-based approach, while visual information is adopted to distinguish speakers by detecting and recognizing their talking faces based on face detection/recognition and mouth tracking techniques. Moreover, to accommodate speakers' acoustic variations over time, we update their models on the fly by adapting to their newly contributed speech data. Encouraging results have been achieved through extensive experiments, which suggest a promising future for the proposed audiovisual unsupervised speaker identification system.
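    The ML-based audio step can be illustrated with a deliberately simplified sketch: each speaker is modeled by a single diagonal Gaussian over acoustic feature frames, and a segment is assigned to the model with the highest total log-likelihood. Real systems use Gaussian mixture models over MFCC features; the feature dimensionality and speaker names below are made up for illustration:

```python
import numpy as np

def fit_gaussian(frames):
    # diagonal-covariance Gaussian per speaker (a stand-in for a GMM)
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6
    return mu, var

def log_likelihood(frames, model):
    mu, var = model
    return -0.5 * (np.log(2 * np.pi * var) + (frames - mu) ** 2 / var).sum()

def identify(frames, models):
    # maximum-likelihood decision rule over enrolled speaker models
    scores = {spk: log_likelihood(frames, m) for spk, m in models.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
# synthetic "feature frames" for two speakers (13-dim, MFCC-like)
train = {"alice": rng.normal(0, 1, (500, 13)), "bob": rng.normal(2, 1, (500, 13))}
models = {spk: fit_gaussian(x) for spk, x in train.items()}
test_segment = rng.normal(2, 1, (50, 13))   # unseen speech drawn from "bob"
pred = identify(test_segment, models)
```

    The on-the-fly adaptation described in the abstract would correspond to re-estimating a speaker's model parameters from each newly attributed segment.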

  16. Online characterization of planetary surfaces: PlanetServer, an open-source analysis and visualization tool

    NASA Astrophysics Data System (ADS)

    Marco Figuera, R.; Pham Huu, B.; Rossi, A. P.; Minin, M.; Flahaut, J.; Halder, A.

    2018-01-01

    The lack of open-source tools for hyperspectral data visualization and analysis creates a demand for new tools. In this paper we present the new PlanetServer, a set of tools comprising a web Geographic Information System (GIS) and a recently developed Python Application Programming Interface (API) capable of visualizing and analyzing a wide variety of hyperspectral data from different planetary bodies. Current open-source WebGIS tools are evaluated in order to give an overview and to contextualize how PlanetServer can help in these matters. The web client is thoroughly described, as are the datasets available in PlanetServer. The Python API is also described, along with the reasons for its development. Two examples of mineral characterization of different hydrosilicates, such as chlorites, prehnites, and kaolinites, in the Nili Fossae area on Mars are presented. As the results obtained show positive outcomes for hyperspectral analysis and visualization compared to previous literature, we suggest using the PlanetServer approach for such investigations.

  17. A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays

    NASA Technical Reports Server (NTRS)

    Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.

    2000-01-01

    Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
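    The model's decision rule, where the stimulus with the maximum combined noisy response determines the response, can be simulated directly by Monte Carlo. The sketch below assumes unit-variance Gaussian internal responses and an illustrative target strength of d' = 2; it reproduces the qualitative set-size effect (accuracy falls as the number of distractors grows) without any capacity-limited serial mechanism:

```python
import numpy as np

def search_accuracy(set_size, d_prime=2.0, trials=200_000, seed=0):
    """P(correct) under the max rule: target response must exceed all distractors'."""
    rng = np.random.default_rng(seed)
    target = rng.normal(d_prime, 1.0, trials)                 # target's noisy response
    distractors = rng.normal(0.0, 1.0, (trials, set_size - 1))  # distractors' responses
    return (target > distractors.max(axis=1)).mean()

# accuracy declines monotonically with set size under pure noise limits
accs = {n: search_accuracy(n) for n in (2, 4, 8, 16)}
```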

  18. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    ERIC Educational Resources Information Center

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search.…

  19. 32 CFR 813.1 - Purpose of the visual information documentation (VIDOC) program.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 32 (National Defense), Volume 6 (2010-07-01): Department of Defense (Continued), Department of the Air Force, Sales and Services, Visual Information Documentation Program, § 813.1 Purpose of the visual information documentation (VIDOC) program.

  20. Emotional Effects in Visual Information Processing

    DTIC Science & Technology

    2009-10-24

    Emotional Effects in Visual Information Processing (AOARD 074018; contract FA4869-08-0004), report dated October 24, 2009. The objective of this research project was to investigate how emotion influences visual information processing and the neural correlates of these effects.

  1. Improving the color fidelity of cameras for advanced television systems

    NASA Astrophysics Data System (ADS)

    Kollarits, Richard V.; Gibbon, David C.

    1992-08-01

    In this paper we compare the accuracy of the color information obtained from television cameras using three and five wavelength bands. This comparison is based on real digital camera data. The cameras are treated as colorimeters whose characteristics are not linked to those of the display. The color matrices for both cameras were obtained by identical optimization procedures that minimized the color error. The color error for the five-band camera is 2.5 times smaller than that obtained from the three-band camera. Visual comparison of color matches on a characterized color monitor indicates that the five-band camera is capable of color measurements that produce no significant visual error on the display. Because the outputs from the five-band camera are reduced to the normal three channels conventionally used for display, there need be no increase in signal-handling complexity outside the camera. Likewise, it is possible to construct a five-band camera using only three sensors, as in conventional cameras. The principal drawback of the five-band camera is the reduction in effective camera sensitivity by about 3/4 of an f-stop.
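    The color-matrix optimization mentioned above can be illustrated as an ordinary least-squares fit from band responses to target tristimulus values. The synthetic 24-patch data below stands in for real camera measurements, and the paper's actual error metric and constraints may well differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches, n_bands = 24, 5                       # e.g. a 24-patch test chart, 5 bands
true_M = rng.normal(size=(3, n_bands))           # unknown "ideal" camera-to-XYZ map
bands = rng.uniform(0, 1, (n_bands, n_patches))  # camera band responses per patch
xyz = true_M @ bands + rng.normal(0, 0.01, (3, n_patches))  # noisy XYZ targets

# least-squares color matrix: minimize || M @ bands - xyz ||^2 over M
M, *_ = np.linalg.lstsq(bands.T, xyz.T, rcond=None)
M = M.T                                          # shape (3, n_bands): XYZ = M @ bands
residual = np.abs(M @ bands - xyz).mean()        # mean absolute fitting error
```

    In this formulation the five-band camera's advantage comes from the extra columns of M: more bands give the fit more degrees of freedom to approximate the colorimetric targets.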

  2. Web-GIS-based SARS epidemic situation visualization

    NASA Astrophysics Data System (ADS)

    Lu, Xiaolin

    2004-03-01

    In order to research, perform statistical analysis on, and broadcast information about the SARS epidemic situation according to relevant spatial positions, this paper proposes a unified global visualization information platform for the SARS epidemic situation based on Web-GIS and scientific visualization technology. To set up the platform, the architecture of a Web-GIS-based interoperable information system is adopted, enabling the public to report SARS virus information to health care centers visually using web visualization technology. A GIS Java applet is used to visualize the relationship between spatial graphical data and virus distribution, and other web-based graphics such as curves, bars, maps, and multi-dimensional figures are used to visualize the relationship of SARS virus tendency with time, patient numbers, or locations. The platform is designed to display SARS information in real time, visually simulate the real epidemic situation, and offer analysis tools for health departments and policy-making government departments to support decision-making in preventing the SARS epidemic. It could be used to analyze the virus situation through a visualized graphics interface, isolate areas containing virus sources, and control the virus situation within the shortest time. It could be applied in the visualization field of SARS prevention systems for SARS information broadcasting, data management, statistical analysis, and decision support.

  3. Feature-Based Memory-Driven Attentional Capture: Visual Working Memory Content Affects Visual Attention

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…

  4. Visual field progression in glaucoma: total versus pattern deviation analyses.

    PubMed

    Artes, Paul H; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C

    2005-12-01

    To compare visual field progression with total and pattern deviation analyses in a prospective longitudinal study of patients with glaucoma and healthy control subjects. A group of 101 patients with glaucoma (168 eyes) with early to moderately advanced visual field loss at baseline (average mean deviation [MD], -3.9 dB) and no clinical evidence of media opacity were selected from a prospective longitudinal study on visual field progression in glaucoma. Patients were examined with static automated perimetry at 6-month intervals for a median follow-up of 9 years. At each test location, change was established with event and trend analyses of total and pattern deviation. The event analyses compared each follow-up test to a baseline obtained from averaging the first two tests, and visual field progression was defined as deterioration beyond the 5th percentile of test-retest variability at three test locations, observed on three consecutive tests. The trend analyses were based on point-wise linear regression, and visual field progression was defined as statistically significant deterioration (P < 5%) worse than -1 dB/year at three locations, confirmed by independently omitting the last and the penultimate observation. The incidence and the time-to-progression were compared between total and pattern deviation analyses. To estimate the specificity of the progression analyses, identical criteria were applied to visual fields obtained in 102 healthy control subjects, and the rate of visual field improvement was established in the patients with glaucoma and the healthy control subjects. With both event and trend methods, pattern deviation analyses classified approximately 15% fewer eyes as having progressed than did the total deviation analyses. In eyes classified as progressing by both the total and pattern deviation methods, total deviation analyses tended to detect progression earlier than the pattern deviation analyses. 
A comparison of the changes observed in MD and the visual fields' general height (estimated by the 85th percentile of the total deviation values) confirmed that change in the glaucomatous eyes almost always comprised a diffuse component. Pattern deviation analyses of progression may therefore underestimate the true amount of glaucomatous visual field progression. Pattern deviation analyses of visual field progression may underestimate visual field progression in glaucoma, particularly when there is no clinical evidence of increasing media opacity. Clinicians should have access to both total and pattern deviation analyses to make informed decisions on visual field progression in glaucoma.
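    The point-wise trend criterion described in the abstract can be sketched as follows. This is a minimal illustration on noiseless synthetic series that omits the statistical-significance (p < 5%) test: fit a slope per test location and require it to stay worse than -1 dB/year after independently omitting the last and the penultimate observation:

```python
import numpy as np

def slope(years, y):
    # least-squares slope in dB/year
    A = np.vstack([years, np.ones_like(years)]).T
    (m, _b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return m

def is_progressing(years, series, criterion=-1.0):
    # slope must stay worse than the criterion after independently
    # omitting the last and the penultimate visit (the confirmation step)
    candidates = (
        slope(years, series),
        slope(years[:-1], series[:-1]),
        slope(np.delete(years, -2), np.delete(series, -2)),
    )
    return all(s < criterion for s in candidates)

years = np.arange(0, 9.5, 0.5)        # 6-month visits over 9 years of follow-up
stable = np.full_like(years, 28.0)    # flat sensitivity series at one location
worsening = 28.0 - 1.5 * years        # location losing 1.5 dB/year
```

    In the study, a field is flagged as progressing only when this per-location criterion is met at three locations; here a single location is shown.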

  5. Spectral discrimination in color blind animals via chromatic aberration and pupil shape

    PubMed Central

    Stubbs, Alexander L.; Stubbs, Christopher W.

    2016-01-01

    We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide “color-blind” animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. We quantitatively show, through numerical simulations, how chromatic aberration can be exploited to obtain spectral information, especially through nonaxial pupils that are characteristic of coleoid cephalopods. We have also assessed the inherent ambiguity between range and color that is a consequence of the chromatic variation of best focus with wavelength. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins. PMID:27382180

  6. A hierarchical word-merging algorithm with class separability measure.

    PubMed

    Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan

    2014-03-01

    In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
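    The level-by-level merging can be sketched as a greedy search over word pairs. The separability measure below (L1 distance between two per-class word distributions) is an illustrative stand-in for the paper's class separability measure, and no indexing structure is used, so this toy version is quadratic per level rather than the efficient algorithm the paper describes:

```python
import numpy as np

def separability(P):
    # toy class-separability proxy: L1 distance between the two
    # per-class word-probability distributions (rows of P)
    return np.abs(P[0] - P[1]).sum()

def merge_words(P, target_words):
    """Greedily merge the pair of words that best preserves separability."""
    P = P.copy()
    merges = []
    while P.shape[1] > target_words:
        best = None
        for i in range(P.shape[1]):
            for j in range(i + 1, P.shape[1]):
                Q = np.delete(P, j, axis=1)
                Q[:, i] = P[:, i] + P[:, j]   # fuse words i and j
                s = separability(Q)
                if best is None or s > best[0]:
                    best = (s, i, j, Q)
        _, i, j, P = best
        merges.append((i, j))
    return P, merges

counts = np.array([[30, 25, 1, 2, 40, 2],    # class A word counts
                   [2, 3, 28, 30, 35, 2]], float)
P = counts / counts.sum(axis=1, keepdims=True)
P2, merges = merge_words(P, 2)   # 6-word codebook merged down to 2 words
```

    On this toy data the greedy hierarchy can merge six words down to two with no loss of separability, because words that discriminate in the same direction are fused first.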

  7. A way toward analyzing high-content bioimage data by means of semantic annotation and visual data mining

    NASA Astrophysics Data System (ADS)

    Herold, Julia; Abouna, Sylvie; Zhou, Luxian; Pelengaris, Stella; Epstein, David B. A.; Khan, Michael; Nattkemper, Tim W.

    2009-02-01

    In the last years, bioimaging has turned from qualitative measurements towards a high-throughput and highcontent modality, providing multiple variables for each biological sample analyzed. We present a system which combines machine learning based semantic image annotation and visual data mining to analyze such new multivariate bioimage data. Machine learning is employed for automatic semantic annotation of regions of interest. The annotation is the prerequisite for a biological object-oriented exploration of the feature space derived from the image variables. With the aid of visual data mining, the obtained data can be explored simultaneously in the image as well as in the feature domain. Especially when little is known of the underlying data, for example in the case of exploring the effects of a drug treatment, visual data mining can greatly aid the process of data evaluation. We demonstrate how our system is used for image evaluation to obtain information relevant to diabetes study and screening of new anti-diabetes treatments. Cells of the Islet of Langerhans and whole pancreas in pancreas tissue samples are annotated and object specific molecular features are extracted from aligned multichannel fluorescence images. These are interactively evaluated for cell type classification in order to determine the cell number and mass. Only few parameters need to be specified which makes it usable also for non computer experts and allows for high-throughput analysis.

  8. An interdisciplinary visual team in an acute and sub-acute stroke unit: Providing assessment and early rehabilitation.

    PubMed

    Norup, Anne; Guldberg, Anne-Mette; Friis, Claus Radmer; Deurell, Eva Maria; Forchhammer, Hysse Birgitte

    2016-07-15

    To describe the work of an interdisciplinary visual team in a stroke unit providing early identification and assessment of patients with visual symptoms, and secondly to investigate the frequency and type of visual deficits after stroke and their self-evaluated impact on everyday life. For a period of three months, all stroke patients with visual or visuo-attentional deficits were registered, and data concerning etiology, severity, and localization of the stroke and initial visual symptoms were recorded. One month after discharge, patients were contacted for follow-up. Of 349 acute stroke admissions, 84 (24.1%) initially had visual or visuo-attentional deficits. Of these 84 patients, informed consent was obtained from 22 patients with a mean age of 67.7 years (SD 10.1), the majority of whom were female (59.1%). Based on the initial neurological examination, 45.4% had some kind of visual field defect, 27.2% had some kind of oculomotor nerve palsy, and 31.8% had some kind of inattention or visual neglect. At the phone-based follow-up one month after discharge, 85.7% reported changes in their vision since their stroke. In this consecutive sample, a quarter of all stroke patients initially had visual or visuo-attentional deficits. This emphasizes that professionals should have increased awareness of the existence of such deficits after stroke in order to provide the necessary interdisciplinary assessment and rehabilitation.

  9. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983

  10. Behaviorally Relevant Abstract Object Identity Representation in the Human Parietal Cortex

    PubMed Central

    Jeong, Su Keun

    2016-01-01

    The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, here we report the representation of highly abstract object identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age; and for well known cars embedded in different scenes, and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object identity representations. Unlike previous studies, we also failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely due to the greater amount of identity abstraction demanded by our stimulus manipulation here. Our MRI slice coverage precluded us from examining identity representation in anterior temporal lobe, a likely region for the computing of identity information in the ventral region. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a “content-poor” view of the role of parietal cortex in attention. Instead, the human parietal cortex seems to be “content rich” and capable of directly participating in goal-driven visual information representation in the brain. SIGNIFICANCE STATEMENT The representation of object identity (including faces) is fundamental to human vision and shapes how we interact with the world. 
Although object representation has traditionally been associated with human occipital and temporal cortices, here we show, by measuring fMRI response patterns, that a region in the human parietal cortex can robustly represent task-relevant object identities. These representations are invariant to changes in a host of visual features, such as viewpoint, and reflect an abstract level of representation that has not previously been reported in the human parietal cortex. Critically, these neural representations are behaviorally relevant as they closely track the perceived object identities. Human parietal cortex thus participates in the moment-to-moment goal-directed visual information representation in the brain. PMID:26843642

  11. Perception and control of rotorcraft flight

    NASA Technical Reports Server (NTRS)

    Owen, Dean H.

    1991-01-01

    Three topics which can be applied to rotorcraft flight are examined: (1) the nature of visual information; (2) what visual information is informative about; and (3) the control of visual information. The anchorage of visual perception is defined as the distribution of structure in the surrounding optical array or the distribution of optical structure over the retinal surface. A debate was provoked about whether the referent of visual event perception, and in turn control, is optical motion, kinetics, or dynamics. The interface of control theory and visual perception is also considered. The relationships among these problems is the basis of this article.

  12. The Employment Effects of High-Technology: A Case Study of Machine Vision. Research Report No. 86-19.

    ERIC Educational Resources Information Center

    Chen, Kan; Stafford, Frank P.

    A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…

  13. An infrared/video fusion system for military robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.W.; Roberts, R.S.

    1997-08-05

    Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information for tasks including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images; they are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
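    One simple way to realize "the infrared image must enhance the visual image, not obfuscate it" is a local-contrast-weighted blend, in which the infrared image contributes only where it carries detail the visual image lacks. The window-energy measure and weighting rule below are illustrative assumptions, not the fusion method of the system described in this record:

```python
import numpy as np

def local_energy(img, k=5):
    # mean absolute deviation in a k x k window (a cheap local-contrast measure)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return np.abs(win - win.mean(axis=(-1, -2), keepdims=True)).mean(axis=(-1, -2))

def fuse(visual, ir):
    # IR weight grows only where the IR image has local detail,
    # so flat IR regions leave the visual image untouched
    ev, ei = local_energy(visual), local_energy(ir)
    w = ei / (ev + ei + 1e-9)
    return (1 - w) * visual + w * ir

# toy frames: featureless visible image, IR image with a warm object
visual = np.zeros((32, 32))
ir = np.zeros((32, 32))
ir[8:24, 8:24] = 1.0
fused = fuse(visual, ir)
```

    Practical systems typically fuse per-scale (e.g. Laplacian pyramids) rather than per-pixel, but the design principle of weighting by local information content is the same.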

  14. A Notation for Rapid Specification of Information Visualization

    ERIC Educational Resources Information Center

    Lee, Sang Yun

    2013-01-01

    This thesis describes a notation for rapid specification of information visualization, which can be used as a theoretical framework of integrating various types of information visualization, and its applications at a conceptual level. The notation is devised to codify the major characteristics of data/visual structures in conventionally-used…

  15. Mesoporous silica nanoparticles functionalized with fluorescent and MRI reporters for the visualization of murine tumors overexpressing αvβ3 receptors

    NASA Astrophysics Data System (ADS)

    Hu, He; Arena, Francesca; Gianolio, Eliana; Boffa, Cinzia; di Gregorio, Enza; Stefania, Rachele; Orio, Laura; Baroni, Simona; Aime, Silvio

    2016-03-01

    A novel fluorescein/Gd-DOTAGA containing nanoprobe for the visualization of tumors by optical and Magnetic Resonance Imaging (MRI) is reported herein. It is based on the functionalization of the surface of small mesoporous silica nanoparticles (MSNs) (~30 nm) with the arginine-glycine-aspartic (RGD) moieties, which are known to target αvβ3 integrin receptors overexpressed in several tumor cells. The obtained nanoprobe (Gd-MSNs-RGD) displays good stability, tolerability and high relaxivity (37.6 mM-1 s-1 at 21.5 MHz). After a preliminary evaluation of their cytotoxicity and targeting capability toward U87MG cells by in vitro fluorescence and MR imaging, the nanoprobes were tested in vivo by T1-weighted MR imaging of xenografted murine tumor models. The obtained results demonstrated that the Gd-MSNs-RGD nanoprobes are good reporters both in vitro and in vivo for the MR-visualization of tumor cells overexpressing αvβ3 integrin receptors. 
Electronic supplementary information (ESI) available: Absorption and emission spectra, energy dispersive X-ray analysis (EDXA) and XPS spectra, TGA, zeta-potential and the molecular structures of the Gd-complexes. See DOI: 10.1039/c5nr08878j

  16. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. © 2011 IEEE

  17. Visual Working Memory Load-Related Changes in Neural Activity and Functional Connectivity

    PubMed Central

    Li, Ling; Zhang, Jin-Xiang; Jiang, Tao

    2011-01-01

Background Visual working memory (VWM) helps us store visual information to prepare for subsequent behavior. The neuronal mechanisms for sustaining coherent visual information and the mechanisms for limited VWM capacity have remained uncharacterized. Although numerous studies have utilized behavioral accuracy, neural activity, and connectivity to explore the mechanism of VWM retention, little is known about the load-related changes in functional connectivity for hemi-field VWM retention. Methodology/Principal Findings In this study, we recorded electroencephalography (EEG) from 14 normal young adults while they performed a bilateral visual field memory task. Subjects responded more rapidly and accurately in the left visual field (LVF) memory condition. The difference in mean amplitude between the ipsilateral and contralateral event-related potential (ERP) at parietal-occipital electrodes in the retention interval was obtained for six different memory loads. Functional connectivity between 128 scalp regions was measured by EEG phase synchronization in the theta- (4–8 Hz), alpha- (8–12 Hz), beta- (12–32 Hz), and gamma- (32–40 Hz) frequency bands. The resulting matrices were converted to graphs, and mean degree, clustering coefficient and shortest path length were computed as a function of memory load. The results showed that the brain networks of the theta-, alpha-, beta-, and gamma-frequency bands were both load-dependent and visual-field-dependent. Theta- and alpha-band phase-synchrony networks were more pronounced during the retention period for right visual field (RVF) WM than for LVF WM. Furthermore, only for the RVF memory condition, theta-band network density during the retention interval was linked to delayed behavioral reaction times, and the topological properties of the alpha-band network were negatively correlated with behavioral accuracy.
Conclusions/Significance We suggest that the differences in theta- and alpha-band functional connectivity and topological properties between LVF and RVF conditions during the retention period may underlie the decline of behavioral performance in the RVF task. PMID:21789253
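    The graph measures named above (mean degree, clustering coefficient, mean shortest path length) can be computed from a thresholded phase-synchronization matrix. The sketch below is illustrative only: the four-channel toy signals, the phase-locking-value estimator and the 0.9 threshold are assumptions, not the study's actual parameters.

```python
import numpy as np
from collections import deque

def plv_matrix(phases):
    """Pairwise phase-locking value; phases is (n_channels, n_samples)."""
    n = phases.shape[0]
    plv = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            plv[i, j] = abs(np.mean(np.exp(1j * (phases[i] - phases[j]))))
    return plv

def graph_metrics(adj):
    """Mean degree, mean clustering coefficient and mean shortest path
    length of an undirected unweighted graph given as a 0/1 matrix."""
    n = adj.shape[0]
    mean_degree = adj.sum(axis=1).mean()
    cc = np.zeros(n)
    for v in range(n):
        nbrs = np.flatnonzero(adj[v])
        k = len(nbrs)
        if k >= 2:
            links = adj[np.ix_(nbrs, nbrs)].sum() / 2
            cc[v] = 2 * links / (k * (k - 1))
    dists = []
    for s in range(n):                       # BFS from every node
        d, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for w in np.flatnonzero(adj[u]):
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        dists += [d[t] for t in d if t != s]  # reachable pairs only
    return mean_degree, cc.mean(), np.mean(dists)

# Toy data: channels 0/1 share one phase signal, channels 2/3 another
rng = np.random.default_rng(0)
base1, base2 = (rng.uniform(0, 2 * np.pi, 1000) for _ in range(2))
jitter = lambda: rng.normal(0, 0.05, 1000)
phases = np.vstack([base1 + jitter(), base1 + jitter(),
                    base2 + jitter(), base2 + jitter()])
adj = (plv_matrix(phases) > 0.9).astype(int)
np.fill_diagonal(adj, 0)
print(graph_metrics(adj))
```

    In the study these metrics would be recomputed per frequency band and per memory load, giving the load-dependence curves the abstract describes.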

  18. Infrared Microtransmission And Microreflectance Of Biological Systems

    NASA Astrophysics Data System (ADS)

    Hill, Steve L.; Krishnan, K.; Powell, Jay R.

    1989-12-01

The infrared microsampling technique has been successfully applied to a variety of biological systems. A microtomed tissue section may be prepared to permit both visual and infrared discrimination. Infrared structural information may be obtained for a single cell, and computer-enhanced images of tissue specimens may be calculated from spectral map data sets. An analysis of a tissue section anomaly may suggest either protein compositional differences or a localized concentration of foreign matter. Opaque biological materials such as teeth, gallstones, and kidney stones may be analyzed by microreflectance spectroscopy. Absorption anomalies due to specular dispersion are corrected with the Kramers-Kronig transformation. Corrected microreflectance spectra may contribute to compositional analysis and correlate disease-related spectral differences to visual specimen anomalies.
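    The Kramers-Kronig step can be sketched numerically: from a measured reflectance R(ν) one estimates the reflection phase via a principal-value integral over ln|r|. The sketch below is a minimal illustration under stated assumptions (even wavenumber grid, crude singular-point skipping, an invented 4 % baseline with one Gaussian band); production corrections use more careful quadrature schemes.

```python
import numpy as np

def kk_phase(nu, R):
    """Kramers-Kronig estimate of the reflection phase theta(nu) from a
    reflectance spectrum R(nu) on an evenly spaced grid.  The principal
    value is handled crudely by skipping the singular point; the constant
    part of ln|r| is subtracted first, since a constant has zero
    principal-value integral over (0, inf) and otherwise only contributes
    finite-band truncation error."""
    ln_r = 0.5 * np.log(R)            # R = |r|^2, so ln|r| = ln(R)/2
    ln_r = ln_r - ln_r[-1]            # anchor the baseline
    dnu = nu[1] - nu[0]
    idx = np.arange(len(nu))
    theta = np.empty_like(nu)
    for i, ni in enumerate(nu):
        m = idx != i                  # skip the singular point
        theta[i] = (-2 * ni / np.pi) * np.sum(
            ln_r[m] / (nu[m] ** 2 - ni ** 2)) * dnu
    return theta

nu = np.linspace(400.0, 4000.0, 1801)        # wavenumber grid, cm^-1
R_flat = np.full_like(nu, 0.04)              # featureless 4 % reflectance
R_band = R_flat * (1 - 0.5 * np.exp(-((nu - 1700) / 30) ** 2))  # one band
theta_flat = kk_phase(nu, R_flat)            # ~0: no dispersion feature
theta_band = kk_phase(nu, R_band)            # nonzero phase near the band
```

    With amplitude and phase in hand, the complex refractive index (and hence an absorption-like spectrum) follows from the Fresnel relation, which is the quantity actually compared to transmission data.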

  19. Neutron and positron techniques for fluid transfer system analysis and remote temperature and stress measurement

    NASA Astrophysics Data System (ADS)

    Stewart, P. A. E.

    1987-05-01

    Present and projected applications of penetrating radiation techniques to gas turbine research and development are considered. Approaches discussed include the visualization and measurement of metal component movement using high energy X-rays, the measurement of metal temperatures using epithermal neutrons, the measurement of metal stresses using thermal neutron diffraction, and the visualization and measurement of oil and fuel systems using either cold neutron radiography or emitting isotope tomography. By selecting the radiation appropriate to the problem, the desired data can be probed for and obtained through imaging or signal acquisition, and the necessary information can then be extracted with digital image processing or knowledge based image manipulation and pattern recognition.

  20. Determination of atomic site susceptibility tensors from neutron diffraction data on polycrystalline samples.

    PubMed

    Gukasov, A; Brown, P J

    2010-12-22

Polarized neutron diffraction can provide information about the atomic site susceptibility tensor χ(ij) characterizing the magnetic response of individual atoms to an external magnetic field (Gukasov and Brown 2002 J. Phys.: Condens. Matter 14 8831). The six independent atomic susceptibility parameters (ASPs) can be determined from polarized neutron flipping ratio measurements on single crystals and visualized as magnetic ellipsoids, analogous to the thermal ellipsoids obtained from atomic displacement parameters (ADPs). We now demonstrate that information about the local magnetic susceptibility at different magnetic sites in a crystal can also be obtained from polarized and unpolarized neutron diffraction measurements on magnetized powder samples. The validity of the method is illustrated by the results of such measurements on a polycrystalline sample of Tb(2)Sn(2)O(7).
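    The ellipsoid picture follows from diagonalizing the symmetric site tensor: its eigenvalues are the principal susceptibilities (the ellipsoid semi-axes) and its eigenvectors the principal directions. A small sketch with a hypothetical tensor; the numerical values are invented for illustration and have no connection to Tb(2)Sn(2)O(7):

```python
import numpy as np

# Hypothetical symmetric site-susceptibility tensor built from six
# illustrative ASPs (arbitrary units).
chi = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.5, 0.1],
                [0.0, 0.1, 0.8]])

# Eigen-decomposition of the symmetric tensor: eigenvalues are the
# principal susceptibilities, eigenvectors the ellipsoid axes, in
# direct analogy with ADP thermal ellipsoids.
vals, vecs = np.linalg.eigh(chi)

# The induced site moment for a field along z is simply chi @ B.
m_z = chi @ np.array([0.0, 0.0, 1.0])
print(vals, m_z)
```

    Anisotropy shows up directly as unequal eigenvalues; for an isotropic site the ellipsoid degenerates to a sphere and a single scalar susceptibility suffices.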

  1. Toward statistical modeling of saccadic eye-movement and visual saliency.

    PubMed

    Sun, Xiaoshuai; Yao, Hongxun; Ji, Rongrong; Liu, Xian-Ming

    2014-11-01

In this paper, we present a unified statistical framework for modeling both saccadic eye movements and visual saliency. By analyzing the statistical properties of human eye fixations on natural images, we found that human attention is sparsely distributed and usually deployed to locations with abundant structural information. This observation inspired us to model saccadic behavior and visual saliency based on super-Gaussian component (SGC) analysis. Our model sequentially obtains SGCs using projection pursuit and generates eye movements by selecting the location with the maximum SGC response. Besides simulating human saccadic behavior, we also demonstrate the superior effectiveness and robustness of the model over state-of-the-art methods through extensive experiments on synthetic patterns and human eye-fixation benchmarks. Multiple key issues in saliency modeling research, such as individual differences and the effects of scale and blur, are explored in this paper. Based on extensive qualitative and quantitative experimental results, we show the promising potential of statistical approaches for human behavior research.
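    The idea of finding a super-Gaussian component by projection pursuit can be illustrated with a kurtosis-seeking fixed-point iteration (FastICA-style) on whitened data. This is a generic sketch of the principle, not the authors' implementation; the Laplace source, the mixing angle and the iteration count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
# One super-Gaussian (Laplace) source hidden among Gaussian noise;
# dividing by sqrt(2) makes the Laplace source unit-variance.
s = np.vstack([rng.laplace(size=n) / np.sqrt(2),
               rng.normal(size=n)])
ang = np.deg2rad(30)
A = np.array([[np.cos(ang), -np.sin(ang)],
              [np.sin(ang),  np.cos(ang)]])   # orthogonal mixing
x = A @ s                                     # data stays white

# Projection pursuit: kurtosis-based fixed-point iteration.  For white
# data the update w <- E[x (w.x)^3] - 3w converges to the direction of
# maximum excess kurtosis, i.e. the super-Gaussian component.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(30):
    u = w @ x
    w_new = (x * u ** 3).mean(axis=1) - 3 * w
    w = w_new / np.linalg.norm(w_new)

print("alignment with Laplace source:", abs(w @ A[:, 0]))
```

    In the saliency model the analogous step operates on whitened image patches, and the next fixation is chosen at the spatial location with the maximum SGC response.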

  2. Using NASA's Giovanni Web Portal to Access and Visualize Satellite-based Earth Science Data in the Classroom

    NASA Technical Reports Server (NTRS)

    Lloyd, Steven; Acker, James G.; Prados, Ana I.; Leptoukh, Gregory G.

    2008-01-01

One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing data sets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES-DISC) alone, on the order of hundreds of terabytes of data are available for distribution to scientists, students and the general public. The biggest and most time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly subsetted and manageable data set to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface.

  3. Research on flight stability performance of rotor aircraft based on visual servo control method

    NASA Astrophysics Data System (ADS)

    Yu, Yanan; Chen, Jing

    2016-11-01

A control method based on visual servo feedback is proposed to improve the attitude of a quad-rotor aircraft and to enhance its flight stability. Ground target images are obtained by a visual platform fixed on the aircraft. The scale-invariant feature transform (SIFT) algorithm is used to extract image feature information. Based on the image feature analysis, fast motion estimation is completed and used as an input signal to a PID flight control system to realize real-time attitude adjustment in flight. Imaging tests and simulation results show that the proposed method performs well in terms of flight stability compensation and attitude adjustment. The response speed and control precision meet the requirements of actual use, and the method is able to reduce or even eliminate the influence of environmental disturbance. The proposed method is therefore of value for addressing the aircraft disturbance-rejection problem.
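    The loop described, an image-based displacement estimate fed into a PID controller, can be sketched as follows. The gains, time step and scalar "plant" below are toy assumptions standing in for the quad-rotor dynamics and the SIFT-based motion estimate; they are not taken from the paper.

```python
class PID:
    """Textbook PID controller; gains here are illustrative only."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.02)
drift = 12.0                            # initial image displacement (pixels)
for _ in range(600):
    command = pid.update(drift)         # attitude correction command
    drift -= command * pid.dt * 10.0    # toy plant: drift responds to command
print(round(drift, 4))                  # driven close to zero
```

    In the actual system the error signal would be the motion estimate from SIFT feature matching between consecutive frames, and the command would actuate the rotors rather than a scalar drift.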

  4. Pathogen metadata platform: software for accessing and analyzing pathogen strain information.

    PubMed

    Chang, Wenling E; Peterson, Matthew W; Garay, Christopher D; Korves, Tonia

    2016-09-15

    Pathogen metadata includes information about where and when a pathogen was collected and the type of environment it came from. Along with genomic nucleotide sequence data, this metadata is growing rapidly and becoming a valuable resource not only for research but for biosurveillance and public health. However, current freely available tools for analyzing this data are geared towards bioinformaticians and/or do not provide summaries and visualizations needed to readily interpret results. We designed a platform to easily access and summarize data about pathogen samples. The software includes a PostgreSQL database that captures metadata useful for disease outbreak investigations, and scripts for downloading and parsing data from NCBI BioSample and BioProject into the database. The software provides a user interface to query metadata and obtain standardized results in an exportable, tab-delimited format. To visually summarize results, the user interface provides a 2D histogram for user-selected metadata types and mapping of geolocated entries. The software is built on the LabKey data platform, an open-source data management platform, which enables developers to add functionalities. We demonstrate the use of the software in querying for a pathogen serovar and for genome sequence identifiers. This software enables users to create a local database for pathogen metadata, populate it with data from NCBI, easily query the data, and obtain visual summaries. Some of the components, such as the database, are modular and can be incorporated into other data platforms. The source code is freely available for download at https://github.com/wchangmitre/bioattribution .
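    A query against such a metadata database might look like the sketch below. The schema and records are invented for illustration, and SQLite stands in for the platform's actual PostgreSQL backend; the real tables populated from NCBI BioSample/BioProject will differ.

```python
import sqlite3

# Illustrative, hypothetical schema for pathogen sample metadata.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE biosample (
    accession        TEXT PRIMARY KEY,
    organism         TEXT,
    serovar          TEXT,
    collection_date  TEXT,
    geo_loc_name     TEXT)""")
rows = [
    ("SAMN001", "Salmonella enterica", "Typhimurium", "2014-03-01", "USA"),
    ("SAMN002", "Salmonella enterica", "Enteritidis", "2015-07-12", "UK"),
    ("SAMN003", "Salmonella enterica", "Typhimurium", "2016-01-30", "USA"),
]
con.executemany("INSERT INTO biosample VALUES (?,?,?,?,?)", rows)

# Query by serovar and emit tab-delimited results, mirroring the
# exportable format the platform provides.
hits = con.execute(
    "SELECT accession, collection_date, geo_loc_name "
    "FROM biosample WHERE serovar = ? ORDER BY accession",
    ("Typhimurium",)).fetchall()
print("\n".join("\t".join(r) for r in hits))
```

    The platform's 2D histograms and geolocation maps would be built from result sets like this one, grouped by user-selected metadata columns.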

  5. Silk wrapping of nuptial gifts as visual signal for female attraction in a crepuscular spider

    NASA Astrophysics Data System (ADS)

    Trillo, Mariana C.; Melo-González, Valentina; Albo, Maria J.

    2014-02-01

An extensive diversity of nuptial gifts is known in invertebrates, but prey wrapped in silk is a unique type of gift present in a few insects and spiders. Females of gift-giving spider species prefer males offering a gift, accepting more and longer matings than when males offer no gift. Silk wrapping of the gift is not essential to obtain a mating, but appears to increase the chance of mating, evidencing a particularly intriguing function of this trait. Consequently, like other secondary sexual traits, silk wrapping may be an important trait under sexual selection if it is used by females as a signal providing information on male quality. We aimed to understand whether the white color of wrapped gifts is used as a visual signal during courtship in the spider Paratrechalea ornata. We studied whether a patch of white paint on the males' chelicerae is attractive to females by exposing females to males: with their chelicerae painted white; without paint; and with the sternum painted white (paint control). Females contacted males with white chelicerae more often, and those males obtained higher mating success than other males. Thereafter, we explored whether silk wrapping is a condition-dependent trait and drives female visual attraction. We exposed good- and poor-condition males, each carrying a prey, to female silk. Males in poor condition added less silk to the prey than males in good condition, indicating that gift wrapping is an indicator of male quality and may be used by females to acquire information about a potential mate.

  6. Physiological and morphological characterization of ganglion cells in the salamander retina

    PubMed Central

    Wang, Jing; Jacoby, Roy; Wu, Samuel M.

    2016-01-01

Retinal ganglion cells (RGCs) integrate visual information from the retina and transmit collective signals to the brain. A systematic investigation of functional and morphological characteristics of various types of RGCs is important to comprehensively understand how the visual system encodes and transmits information via various RGC pathways. This study evaluated both physiological and morphological properties of 67 RGCs in dark-adapted flat-mounted salamander retina by examining light-evoked cation and chloride current responses via voltage-clamp recordings and visualizing morphology by Lucifer yellow fluorescence with a confocal microscope. Six groups of RGCs were described: asymmetrical ON–OFF RGCs, symmetrical ON RGCs, OFF RGCs, and narrow-, medium- and wide-field ON–OFF RGCs. Dendritic field diameters of RGCs ranged 102–490 µm: narrow field (<200 µm, 31% of RGCs), medium field (200–300 µm, 45%) and wide field (>300 µm, 24%). Dendritic ramification patterns of RGCs agree with the sub-lamina A/B rule. 34% of RGCs were monostratified, 24% bistratified and 42% diffusely stratified. 70% of ON RGCs and OFF RGCs were monostratified. Wide-field RGCs were diffusely stratified. 82% of RGCs generated light-evoked ON–OFF responses, while 11% generated ON responses and 7% OFF responses. Response sensitivity analysis suggested that some RGCs obtained separate rod/cone bipolar cell inputs whereas others obtained mixed bipolar cell inputs. 25% of neurons in the RGC layer were displaced amacrine cells. Although more types may be defined by more refined classification criteria, this report incorporates additional physiological properties into RGC classification. PMID:26731645

  7. The shaping of information by visual metaphors.

    PubMed

    Ziemkiewicz, Caroline; Kosara, Robert

    2008-01-01

    The nature of an information visualization can be considered to lie in the visual metaphors it uses to structure information. The process of understanding a visualization therefore involves an interaction between these external visual metaphors and the user's internal knowledge representations. To investigate this claim, we conducted an experiment to test the effects of visual metaphor and verbal metaphor on the understanding of tree visualizations. Participants answered simple data comprehension questions while viewing either a treemap or a node-link diagram. Questions were worded to reflect a verbal metaphor that was either compatible or incompatible with the visualization a participant was using. The results (based on correctness and response time) suggest that the visual metaphor indeed affects how a user derives information from a visualization. Additionally, we found that the degree to which a user is affected by the metaphor is strongly correlated with the user's ability to answer task questions correctly. These findings are a first step towards illuminating how visual metaphors shape user understanding, and have significant implications for the evaluation, application, and theory of visualization.

  8. Obtaining information by dynamic (effortful) touching

    PubMed Central

    Turvey, M. T.; Carello, Claudia

    2011-01-01

    Dynamic touching is effortful touching. It entails deformation of muscles and fascia and activation of the embedded mechanoreceptors, as when an object is supported and moved by the body. It is realized as exploratory activities that can vary widely in spatial and temporal extents (a momentary heft, an extended walk). Research has revealed the potential of dynamic touching for obtaining non-visual information about the body (e.g. limb orientation), attachments to the body (e.g. an object's height and width) and the relation of the body both to attachments (e.g. hand's location on a grasped object) and surrounding surfaces (e.g. places and their distances). Invariants over the exploratory activity (e.g. moments of a wielded object's mass distribution) seem to ground this ‘information about’. The conception of a haptic medium as a nested tensegrity structure has been proposed to express the obtained information realized by myofascia deformation, by its invariants and transformations. The tensegrity proposal rationalizes the relative indifference of dynamic touch to the site of mechanical contact (hand, foot, torso or probe) and the overtness of exploratory activity. It also provides a framework for dynamic touching's fractal nature, and the finding that its degree of fractality may matter to its accomplishments. PMID:21969694

  9. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    PubMed Central

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants—younger (< 50 years) normally sighted, older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  10. Semi-Supervised Tensor-Based Graph Embedding Learning and Its Application to Visual Discriminant Tracking.

    PubMed

    Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen

    2017-01-01

An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a two-order tensor which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.
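    The particle-filter state-estimation step can be illustrated in isolation. This sketch tracks a 1-D constant-velocity target from noisy position observations; the dynamics, noise levels and particle count are assumptions, and the real tracker weights particles by the tensor-embedding appearance model rather than by a Gaussian position likelihood.

```python
import numpy as np

rng = np.random.default_rng(7)

T, n_particles = 60, 500
true_pos, true_vel = 0.0, 1.0
obs_std, proc_std = 0.5, 0.1

# Particle state: columns are (position, velocity).
particles = np.zeros((n_particles, 2))
particles[:, 1] = rng.normal(1.0, 0.5, n_particles)   # velocity prior
weights = np.full(n_particles, 1.0 / n_particles)

est = []
for t in range(T):
    true_pos += true_vel
    z = true_pos + rng.normal(0, obs_std)             # noisy observation
    # Predict: propagate particles through the motion model.
    particles[:, 0] += particles[:, 1] + rng.normal(0, proc_std, n_particles)
    # Update: weight each particle by the observation likelihood.
    w = np.exp(-0.5 * ((z - particles[:, 0]) / obs_std) ** 2)
    weights = w / w.sum()
    est.append(weights @ particles[:, 0])             # posterior mean
    # Multinomial resampling to fight weight degeneracy.
    idx = rng.choice(n_particles, n_particles, p=weights)
    particles = particles[idx]
    weights = np.full(n_particles, 1.0 / n_particles)

print("final error:", abs(est[-1] - true_pos))
```

    In the tracker, each particle is a candidate object state (position, scale), and the likelihood comes from projecting the corresponding patch into the discriminant embedding space.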

  11. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

    Originally the only way to look inside the human body without opening it up was by means of two dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, they are still being viewed on 2D screens. However, this way valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics as well as for (bio)medical research.

  12. [Assessment of the macula function by static perimetry, microperimetry and rarebit perimetry in patients suffering from dry age related macular degeneration].

    PubMed

    Nowomiejska, Katarzyna; Oleszczuk, Agnieszka; Zubilewicz, Anna; Krukowski, Jacek; Mańkowska, Anna; Rejdak, Robert; Zagórski, Zbigniew

    2007-01-01

To compare the visual field results obtained by static perimetry, microperimetry and rarebit perimetry in patients suffering from dry age related macular degeneration (AMD). Fifteen eyes with dry AMD (hard or soft macular drusen and RPE disorders) were enrolled into the study. Static perimetry was performed using the M2 macula program of the Octopus 101 instrument. Microperimetry was performed using the macula program (14-2 threshold, 10 dB) within 10 degrees of the central visual field. The fovea program within 4 degrees was used for rarebit perimetry. The mean sensitivity was significantly lower (p<0.001) in microperimetry (13.5 dB) compared to static perimetry (26.7 dB). The mean deviation was significantly higher (p<0.001) in microperimetry (-6.32 dB) compared to static perimetry (-3.11 dB). Fixation was unstable in 47% and eccentric in 40% of eyes during microperimetry. The median "mean hit rate" in rarebit perimetry was 90% (range 40-100%). The mean examination duration was 6.5 min in static perimetry, 10.6 min in microperimetry and 5.5 min in rarebit perimetry (p<0.001). Sensitivity was 30%, 53% and 93%, respectively. The visual field defects obtained by microperimetry were more pronounced than those obtained by static perimetry. Microperimetry was the most sensitive procedure, although the most time-consuming. Microperimetry enables control of fixation position and stability, which is not possible with the other methods. Rarebit perimetry revealed a slight reduction in the integrity of the neural architecture of the retina. Microperimetry and rarebit perimetry provide more information on visual function than static perimetry and are thus valuable methods in the diagnosis of dry AMD.

  13. Color vision: "OH-site" rule for seeing red and green.

    PubMed

    Sekharan, Sivakumar; Katayama, Kota; Kandori, Hideki; Morokuma, Keiji

    2012-06-27

    Eyes gather information, and color forms an extremely important component of the information, more so in the case of animals to forage and navigate within their immediate environment. By using the ONIOM (QM/MM) (ONIOM = our own N-layer integrated molecular orbital plus molecular mechanics) method, we report a comprehensive theoretical analysis of the structure and molecular mechanism of spectral tuning of monkey red- and green-sensitive visual pigments. We show that interaction of retinal with three hydroxyl-bearing amino acids near the β-ionone ring part of the retinal in opsin, A164S, F261Y, and A269T, increases the electron delocalization, decreases the bond length alternation, and leads to variation in the wavelength of maximal absorbance of the retinal in the red- and green-sensitive visual pigments. On the basis of the analysis, we propose the "OH-site" rule for seeing red and green. This rule is also shown to account for the spectral shifts obtained from hydroxyl-bearing amino acids near the Schiff base in different visual pigments: at site 292 (A292S, A292Y, and A292T) in bovine and at site 111 (Y111) in squid opsins. Therefore, the OH-site rule is shown to be site-specific and not pigment-specific and thus can be used for tracking spectral shifts in any visual pigment.

  14. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    PubMed

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Situational analysis of communication of HIV and AIDS information to persons with visual impairment: a case of Kang'onga Production Centre in Ndola, Zambia.

    PubMed

    Chintende, Grace Nsangwe; Sitali, Doreen; Michelo, Charles; Mweemba, Oliver

    2017-04-04

Despite increases in health promotion and educational programs on HIV and AIDS, the lack of information and communication on HIV and AIDS for visually impaired persons continues. The underlying factors that create these information and communication gaps have not been fully explored in Zambia. This situational analysis of HIV and AIDS information dissemination to persons with visual impairment at Kang'onga Production Centre in Ndola was therefore conducted. The study ran from December 2014 to May 2015. A qualitative case study design was employed. The study used two focus group discussions, one with males and one with females, each comprising twelve participants. Eight in-depth interviews with visually impaired persons and five interviews with key informants working with the visually impaired were conducted. Data were analysed thematically using NVIVO 8 software. Ethical clearance was obtained from Excellence in Research Ethics and Science (reference number 2014-May-030). It was established that most visually impaired people lacked knowledge of the cause, transmission and treatment of HIV and AIDS, resulting in misconceptions. It was revealed that health promoters and people working with the visually impaired did not have specific HIV and AIDS information programs in Zambia. Further, the media, information education communication and health education were the channels through which the visually impaired accessed HIV and AIDS information. Discrimination, stigma, lack of employment opportunities, funding and poverty were among the many challenges the visually impaired persons faced in accessing HIV and AIDS information. Integration of the visually impaired in HIV and AIDS programs would increase funding for economic empowerment and health promotion in order to improve communication of HIV and AIDS information.
The study showed that visually impaired persons in Zambia are not catered for in the dissemination of HIV and AIDS information. Available information is not user-friendly because it is in unreadable formats, increasing the potential for misinformation and limiting their access. This calls for innovations in the communication of HIV and AIDS information and health promotion to the target groups.

  16. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content.

  17. Information visualization: Beyond traditional engineering

    NASA Technical Reports Server (NTRS)

    Thomas, James J.

    1995-01-01

This presentation addresses a different aspect of the human-computer interface: specifically, the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond traditional computer graphics and CAD, enabling new approaches to engineering. Specifically, IV must visualize text, documents, sound, images, and video in such a way that a human can rapidly interact with and understand the content structure of information entities. IV is the interactive visual interface between humans and their information resources.

  18. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  19. Enhancing astronaut performance using sensorimotor adaptability training

    PubMed Central

    Bloomberg, Jacob J.; Peters, Brian T.; Cohen, Helen S.; Mulavara, Ajitkumar P.

    2015-01-01

    Astronauts experience disturbances in balance and gait function when they return to Earth. The highly plastic human brain enables individuals to modify their behavior to match the prevailing environment. Subjects participating in specially designed variable sensory challenge training programs can enhance their ability to rapidly adapt to novel sensory situations. This is useful in our application because we aim to train astronauts to rapidly formulate effective strategies to cope with the balance and locomotor challenges associated with new gravitational environments—enhancing their ability to “learn to learn.” We do this by coupling various combinations of sensorimotor challenges with treadmill walking. A unique training system has been developed that is comprised of a treadmill mounted on a motion base to produce movement of the support surface during walking. This system provides challenges to gait stability. Additional sensory variation and challenge are imposed with a virtual visual scene that presents subjects with various combinations of discordant visual information during treadmill walking. This experience allows them to practice resolving challenging and conflicting novel sensory information to improve their ability to adapt rapidly. Information obtained from this work will inform the design of the next generation of sensorimotor countermeasures for astronauts. PMID:26441561

  1. Egocentric Direction and Position Perceptions are Dissociable Based on Only Static Lane Edge Information

    PubMed Central

    Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune

    2015-01-01

When observers perceive several objects in a space, they should, at the same time, effectively perceive their own position as a viewpoint. However, little is known about observers’ percepts of their own spatial location based on the visual scene information viewed from that position. Previous studies indicate that two distinct visual spatial processes exist in locomotion: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments including only static lane edge information (i.e., limited information). We investigated the visual factors associated with static lane edge information that may affect these perceptions, examining the effects of two factors on egocentric direction and position perceptions. One is the “uprightness factor”: “far” visual information appears higher in the visual field than “near” visual information. The other is the “central vision factor”: observers usually view “far” visual information with central (i.e., foveal) vision and “near” visual information with peripheral vision. Experiment 1 examined the effect of the uprightness factor using normal and inverted road images. Experiment 2 examined the effect of the central vision factor using normal and transposed road images, in which the upper half of the normal image was presented below the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is disrupted by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both the uprightness and central vision factors are important for egocentric direction perception, but not for egocentric position perception. 
Therefore, the two visual spatial perceptions about observers’ own viewpoints are fundamentally dissociable. PMID:26648895

  2. Photoacoustic tomography guided diffuse optical tomography for small-animal model

    NASA Astrophysics Data System (ADS)

    Wang, Yihan; Gao, Feng; Wan, Wenbo; Zhang, Yan; Li, Jiao

    2015-03-01

Diffuse optical tomography (DOT) is a biomedical imaging technology for noninvasive visualization of spatial variation in the optical properties of tissue, which can be applied to in vivo small-animal disease models. However, traditional DOT suffers from low spatial resolution due to tissue scattering. To overcome this intrinsic shortcoming, multi-modal approaches that combine DOT with other imaging techniques have been intensively investigated, where a priori information provided by the other modalities is used to regularize the inverse problem of DOT. Nevertheless, these approaches usually consider the anatomical structure, which differs from the optical structure. Photoacoustic tomography (PAT) is an emerging imaging modality that is particularly useful for visualizing light-absorbing structures embedded in soft tissue, with higher spatial resolution than pure optical imaging. Thus, we present a PAT-guided DOT approach, which first obtains a priori information about the location of the optical structure from PAT and then guides DOT to reconstruct the optical parameters quantitatively. Reconstruction results from phantom experiments demonstrate that both the quantification and the spatial resolution of DOT can be greatly improved by regularization with the feasible-region information provided by PAT.
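The core idea, regularizing the DOT inverse problem with feasible-region information from PAT, can be illustrated on a toy linearized problem. This is a hedged sketch, not the authors' algorithm: the Jacobian, the region, and the regularization weights below are all invented, and the prior is implemented as a simple spatially varying Tikhonov penalty.

```python
import numpy as np

# Toy linearized DOT problem: measurements y = J @ x, with J the sensitivity
# (Jacobian) matrix and x the unknown absorption perturbation per voxel.
rng = np.random.default_rng(0)
n_meas, n_vox = 40, 100
J = rng.normal(size=(n_meas, n_vox))

# Ground truth: an absorber occupying voxels 40-49 (the "optical structure").
x_true = np.zeros(n_vox)
x_true[40:50] = 1.0
y = J @ x_true

# PAT supplies the feasible region: penalize voxels outside it heavily and
# voxels inside it only lightly (spatially varying Tikhonov weights).
in_region = np.zeros(n_vox, dtype=bool)
in_region[40:50] = True
L = np.diag(np.where(in_region, 0.01, 10.0))

# Regularized least squares: x_hat = argmin ||J x - y||^2 + ||L x||^2
x_hat = np.linalg.solve(J.T @ J + L.T @ L, J.T @ y)
err_guided = np.linalg.norm(x_hat - x_true)

# Baseline: uniform Tikhonov regularization of the outside-region strength.
L0 = 10.0 * np.eye(n_vox)
x_flat = np.linalg.solve(J.T @ J + L0.T @ L0, J.T @ y)
err_flat = np.linalg.norm(x_flat - x_true)
print(err_guided, err_flat)
```

Even though the system is underdetermined (40 measurements, 100 voxels), the region prior concentrates the solution where PAT says the absorber is, so the guided reconstruction error should come out well below the uniformly regularized one.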

  3. Information Extraction of Tourist Geological Resources Based on 3d Visualization Remote Sensing Image

    NASA Astrophysics Data System (ADS)

    Wang, X.

    2018-04-01

Tourism geological resources are of high value for admiration, scientific research and public education, and need to be protected and rationally utilized. In the past, most remote sensing investigations of tourism geological resources used two-dimensional remote sensing interpretation, which made some geological heritages difficult to interpret and led to the omission of some information. The aim of this paper is to assess the value of a method that uses three-dimensional visual remote sensing images to extract information about geological heritages. The Skyline software system is applied to fuse 0.36 m aerial images with a 5 m interval DEM to establish a digital earth model. Based on three-dimensional shape, color tone, shadow, texture and other image features, the distribution of tourism geological resources in Shandong Province and the locations of geological heritage sites were obtained, including geological structures, DaiGu landforms, granite landforms, volcanic landforms, sandy landforms, waterscapes, etc. The results show that this method yields highly recognizable features for remote sensing interpretation, making the interpretation more accurate and comprehensive.

  4. A fast fusion scheme for infrared and visible light images in NSCT domain

    NASA Astrophysics Data System (ADS)

    Zhao, Chunhui; Guo, Yunting; Wang, Yulei

    2015-09-01

Fusion of infrared and visible light images is an effective way to obtain a single visualization combining the background detail provided by the visible light image with the hidden-target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial goals for infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the fusion weights by evaluating the information carried by each pixel and is well suited to visible light and infrared image fusion, yielding better fusion quality with lower time consumption. In addition, a fast realization of the non-subsampled contourlet transform is proposed to improve computational efficiency. To verify the advantages of the proposed method, this paper compares it with several popular methods on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm produces more effective results in much less time and performs well in both subjective evaluation and objective indicators.
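The abstract does not specify how the "pixel information" is estimated, so the following minimal sketch only illustrates the general idea of information-driven per-pixel weighting: local variance stands in for the pixel information measure, and the images, window size, and weighting rule are all assumptions.

```python
import numpy as np

def local_energy(img, k=3):
    # Proxy for per-pixel "information": local variance in a k x k window.
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].var()
    return out

def fuse(visible, infrared, eps=1e-6):
    # Per-pixel weights proportional to each image's local information,
    # so whichever source carries more structure at a pixel dominates there.
    ev, ei = local_energy(visible), local_energy(infrared)
    wv = (ev + eps) / (ev + ei + 2 * eps)
    return wv * visible + (1.0 - wv) * infrared

vis = np.zeros((8, 8)); vis[:, :4] = 200.0   # background detail on the left
ir = np.zeros((8, 8)); ir[4:6, 5:7] = 255.0  # bright target on the right
fused = fuse(vis, ir)
print(fused[5, 6])  # target pixel is dominated by the infrared image
```

A practical implementation would compute the local statistics with a vectorized filter rather than Python loops, and would apply the weighting in a multi-scale (e.g., contourlet) domain as the paper does.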

  5. Fusion of infrared and visible images based on BEMD and NSDFB

    NASA Astrophysics Data System (ADS)

    Zhu, Pan; Huang, Zhanhua; Lei, Hai

    2016-07-01

This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it relatively more suitable for decomposing and fusing non-linear signals. NSDFB provides directional filtering at each decomposition level to capture more of the geometrical structure of the source images. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly correlated with the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two fusion rules are applied to the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, salient target information and rich detail, making them more suitable for human visual perception or machine perception.

  6. Visual judgements of steadiness in one-legged stance: reliability and validity.

    PubMed

    Haupstein, T; Goldie, P

    2000-01-01

There is a paucity of information about the validity and reliability of clinicians' visual judgements of steadiness in one-legged stance. Such judgements are frequently used in clinical practice to support treatment decisions in neurology, sports medicine, paediatrics and orthopaedics. The aim of the present study was to assess the validity and reliability of visual judgements of steadiness in one-legged stance in a group of physiotherapists. A videotape of 20 five-second performances was shown to 14 physiotherapists with a median clinical experience of 6.75 years. Validity of visual judgement was established by correlating scores obtained from an 11-point rating scale with criterion scores obtained from a force platform. In addition, partial correlations were used to control for the potential influence of body weight on the relationship between the visual judgements and criterion scores. Inter-observer reliability was quantified between the physiotherapists; intra-observer reliability was quantified between two tests four weeks apart. Mean criterion-related validity was high, regardless of whether body weight was controlled for statistically (Pearson's r = 0.84 and 0.83, respectively). The standard error of estimating the criterion score was 3.3 newtons. Inter-observer reliability was high (ICC (2,1) = 0.81 at Test 1 and 0.82 at Test 2). Intra-observer reliability was high (on average, ICC (2,1) = 0.88; Pearson's r = 0.90). The standard error of measurement for the 11-point scale was one unit. The higher accuracy of visual judgements found here, relative to previous reports, may be due to several aspects of the design: use of a criterion score derived from the variability of the force signal, which is more discriminating than variability of the centre of pressure; use of a discriminating visual rating scale; and specificity and clear definition of the phenomenon to be rated.
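The two statistics at the heart of this design, Pearson's r for criterion-related validity and the standard error of estimating the criterion score from the rating, can be computed as follows. The data below are invented for illustration and are not the study's scores.

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two paired samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def see(xs, ys):
    # Standard error of estimating y (criterion) from x (rating):
    # residual spread around the least-squares regression line.
    n = len(xs)
    r = pearson_r(xs, ys)
    my = sum(ys) / n
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return sy * math.sqrt((1 - r ** 2) * (n - 1) / (n - 2))

ratings = [0, 2, 3, 5, 6, 8, 9, 10]                   # 11-point rating scale
criteria = [1.0, 2.2, 3.1, 4.8, 6.1, 7.9, 9.2, 10.1]  # force-platform scores
print(pearson_r(ratings, criteria), see(ratings, criteria))
```

With nearly linear paired data like this, r is close to 1 and the standard error of estimate is small; the study's reported r of 0.84 and SEE of 3.3 N reflect the noisier real judgements.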

  7. The contribution of foveal and peripheral visual information to ensemble representation of face race.

    PubMed

    Jung, Wonmo; Bülthoff, Isabelle; Armann, Regine G M

    2017-11-01

    The brain can only attend to a fraction of all the information that is entering the visual system at any given moment. One way of overcoming the so-called bottleneck of selective attention (e.g., J. M. Wolfe, Võ, Evans, & Greene, 2011) is to make use of redundant visual information and extract summarized statistical information of the whole visual scene. Such ensemble representation occurs for low-level features of textures or simple objects, but it has also been reported for complex high-level properties. While the visual system has, for example, been shown to compute summary representations of facial expression, gender, or identity, it is less clear whether perceptual input from all parts of the visual field contributes equally to the ensemble percept. Here we extend the line of ensemble-representation research into the realm of race and look at the possibility that ensemble perception relies on weighting visual information differently depending on its origin from either the fovea or the visual periphery. We find that observers can judge the mean race of a set of faces, similar to judgments of mean emotion from faces and ensemble representations in low-level domains of visual processing. We also find that while peripheral faces seem to be taken into account for the ensemble percept, far more weight is given to stimuli presented foveally than peripherally. Whether this precision weighting of information stems from differences in the accuracy with which the visual system processes information across the visual field or from statistical inferences about the world needs to be determined by further research.

  8. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  9. Early auditory evoked potential is modulated by selective attention and related to individual differences in visual working memory capacity.

    PubMed

    Giuliano, Ryan J; Karns, Christina M; Neville, Helen J; Hillyard, Steven A

    2014-12-01

    A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual's capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70-90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals.

  10. The quality of visual information about the lower extremities influences visuomotor coordination during virtual obstacle negotiation.

    PubMed

    Kim, Aram; Kretch, Kari S; Zhou, Zixuan; Finley, James M

    2018-05-09

    Successful negotiation of obstacles during walking relies on the integration of visual information about the environment with ongoing locomotor commands. When information about the body and environment are removed through occlusion of the lower visual field, individuals increase downward head pitch angle, reduce foot placement precision, and increase safety margins during crossing. However, whether these effects are mediated by loss of visual information about the lower extremities, the obstacle, or both remains to be seen. Here, we used a fully immersive, virtual obstacle negotiation task to investigate how visual information about the lower extremities is integrated with information about the environment to facilitate skillful obstacle negotiation. Participants stepped over virtual obstacles while walking on a treadmill with one of three types of visual feedback about the lower extremities: no feedback, end-point feedback, or a link-segment model. We found that absence of visual information about the lower extremities led to an increase in the variability of leading foot placement after crossing. The presence of a visual representation of the lower extremities promoted greater downward head pitch angle during the approach to and subsequent crossing of an obstacle. In addition, having greater downward head pitch was associated with closer placement of the trailing foot to the obstacle, further placement of the leading foot after the obstacle, and higher trailing foot clearance. These results demonstrate that the fidelity of visual information about the lower extremities influences both feed-forward and feedback aspects of visuomotor coordination during obstacle negotiation.

  11. Behavior Selection of Mobile Robot Based on Integration of Multimodal Information

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kaneko, Masahide

Recently, biologically inspired robots have been developed that can direct visual attention to salient stimuli in the audiovisual environment. To realize this behavior, a common method is to compute saliency maps representing how strongly external information attracts the robot's visual attention, where both the audiovisual information and the robot's motion status should be involved. In this paper, we present a visual attention model that considers three modalities (audio information, visual information, and the robot's motor status), whereas previous research has not considered all of them. Firstly, we introduce a 2-D density map whose values denote how much attention the robot pays to each spatial location. We then model the attention density using a Bayesian network in which the robot's motion statuses are involved. Secondly, the information from the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons; the robot directs its attention to the locations where these neurons fire. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that robots can acquire the visual information related to their behaviors by using the attention model that considers motion statuses. The robot can select behaviors that adapt to a dynamic environment and can switch to another task according to the results of visual attention.
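As a rough illustration of the integration step, the sketch below implements a minimal integrate-and-fire selection over a handful of spatial locations. The saliency values, the attention-density prior, and the threshold are all hypothetical and far simpler than the paper's Bayesian-network model.

```python
def select_attention(audio, visual, prior, threshold=1.0, steps=100):
    # One integrate-and-fire unit per spatial location: each unit accumulates
    # the prior-weighted sum of audio and visual saliency; attention goes to
    # the first location whose unit reaches the firing threshold.
    n = len(audio)
    v = [0.0] * n  # membrane potentials
    for _ in range(steps):
        for i in range(n):
            v[i] += prior[i] * (audio[i] + visual[i])
            if v[i] >= threshold:
                return i  # first unit to fire wins attention
    return None  # nothing salient enough within the time budget

audio = [0.00, 0.05, 0.01]   # per-location audio saliency
visual = [0.02, 0.05, 0.01]  # per-location visual saliency
prior = [1.0, 1.0, 0.5]      # attention density biased by motion status
print(select_attention(audio, visual, prior))  # location 1 fires first
```

The prior plays the role of the attention density map: scaling down a location's prior (as for location 2 here) delays or prevents its unit from firing even when its raw saliency is nonzero.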

  12. Psychological distress and visual functioning in relation to vision-related disability in older individuals with cataracts.

    PubMed

    Walker, J G; Anstey, K J; Lord, S R

    2006-05-01

To determine whether demographic, health status and psychological functioning measures, in addition to impaired visual acuity, are related to vision-related disability. Participants were 105 individuals (mean age = 73.7 years) with cataracts requiring surgery and corrected visual acuity in the better eye of 6/24 to 6/36, recruited from waiting lists at three public out-patient ophthalmology clinics. Visual disability was measured with the Visual Functioning-14 survey. Visual acuity was assessed using better- and worse-eye logMAR scores and the Melbourne Edge Test (MET) for edge contrast sensitivity. Data on demographic information, depression, anxiety and stress, health care and medication use, and number of co-morbid conditions were obtained. Principal component analysis revealed four meaningful factors that accounted for 75% of the variance in visual disability: recreational activities, reading and fine work, activities of daily living, and driving behaviour. Multiple regression analyses determined that visual acuity variables were the only significant predictors of overall vision-related functioning and of difficulties with reading and fine work. For the remaining visual disability domains, non-visual factors were also significant predictors. Difficulties with recreational activities were predicted by stress as well as worse-eye visual acuity, and difficulties with activities of daily living were associated with self-reported health status, age and depression as well as MET contrast scores. Driving behaviour was associated with sex (fewer women drove), depression, anxiety and stress scores, and MET contrast scores. Vision-related disability is common in older individuals with cataracts. In addition to visual acuity, demographic, psychological and health status factors influence the severity of vision-related disability, affecting recreational activities, activities of daily living and driving.

  13. Visual search strategies of experienced and nonexperienced swimming coaches.

    PubMed

    Moreno, Francisco J; Saavedra, José M; Sabido, Rafael; Luis, Vicente; Reina, Raúl

    2006-12-01

The aim of this study was to apply an experimental protocol for obtaining information about the visual search strategies used by swimming coaches. Sixteen swimming coaches participated. The Experienced group (n = 8) had 16.1 yr. (SD = 8.2) of coaching experience and at least five years of experience with underwater vision. The Nonexperienced group in underwater vision (n = 8) had 4.2 yr. (SD = 4.0) of coaching experience. Participants were tested in a laboratory environment using a video-projected sample of the crawl stroke of an elite swimmer. This work discusses the main areas of the swimmer's body used by coaches to identify and analyse errors in technique from overhead and underwater perspectives. In front-underwater videos, body roll and mid-water were the locations of the display with the highest percentages of fixation time. In the side-underwater slow videos, the upper body was the location with the highest percentage of visual fixation time and was used to detect the low-elbow fault. Side-overhead views were not the best perspectives for picking up information directly about arm performance; coaches attended to the head as a reference for their visual search. Observation and technical analysis of the hands and arms were facilitated by an underwater perspective. Visual fixation on the elbow served as a reference for identifying errors in the upper body. The side-underwater perspective may be an adequate way to identify correct knee angles in leg kicking and the alignment of the swimmer's body and leg actions.

  14. Visualizing Simulated Electrical Fields from Electroencephalography and Transcranial Electric Brain Stimulation: A Comparative Evaluation

    PubMed Central

    Eichelbaum, Sebastian; Dannhauer, Moritz; Hlawitschka, Mario; Brooks, Dana; Knösche, Thomas R.; Scheuermann, Gerik

    2014-01-01

    Electrical activity of neuronal populations is a crucial aspect of brain activity. This activity is not measured directly but recorded as electrical potential changes using head surface electrodes (electroencephalogram - EEG). Head surface electrodes can also be deployed to inject electrical currents in order to modulate brain activity (transcranial electric stimulation techniques) for therapeutic and neuroscientific purposes. In electroencephalography and noninvasive electric brain stimulation, electrical fields mediate between electrical signal sources and regions of interest (ROI). These fields can be very complicated in structure, and are influenced in a complex way by the conductivity profile of the human head. Visualization techniques play a central role to grasp the nature of those fields because such techniques allow for an effective conveyance of complex data and enable quick qualitative and quantitative assessments. The examination of volume conduction effects of particular head model parameterizations (e.g., skull thickness and layering), of brain anomalies (e.g., holes in the skull, tumors), location and extent of active brain areas (e.g., high concentrations of current densities) and around current injecting electrodes can be investigated using visualization. Here, we evaluate a number of widely used visualization techniques, based on either the potential distribution or on the current-flow. In particular, we focus on the extractability of quantitative and qualitative information from the obtained images, their effective integration of anatomical context information, and their interaction. We present illustrative examples from clinically and neuroscientifically relevant cases and discuss the pros and cons of the various visualization techniques. PMID:24821532

  15. When kinesthetic information is neglected in learning a Novel bimanual rhythmic coordination.

    PubMed

    Zhu, Qin; Mirich, Todd; Huang, Shaochen; Snapp-Childs, Winona; Bingham, Geoffrey P

    2017-08-01

    Many studies have shown that rhythmic interlimb coordination involves perception of the coupled limb movements, and different sensory modalities can be used. Using visual displays to inform the coupled bimanual movement, novel bimanual coordination patterns can be learned with practice. A recent study showed that similar learning occurred without vision when a coach provided manual guidance during practice. The information provided via the two different modalities may be same (amodal) or different (modality specific). If it is different, then learning with both is a dual task, and one source of information might be used in preference to the other in performing the task when both are available. In the current study, participants learned a novel 90° bimanual coordination pattern without or with visual information in addition to kinesthesis. In posttest, all participants were tested without and with visual information in addition to kinesthesis. When tested with visual information, all participants exhibited performance that was significantly improved by practice. When tested without visual information, participants who practiced using only kinesthetic information showed improvement, but those who practiced with visual information in addition showed remarkably less improvement. The results indicate that (1) the information is not amodal, (2) use of a single type of information was preferred, and (3) the preferred information was visual. We also hypothesized that older participants might be more likely to acquire dual task performance given their greater experience of the two sensory modes in combination, but results were replicated with both 20- and 50-year-olds.

  16. Thinking graphically: Connecting vision and cognition during graph comprehension.

    PubMed

    Ratwani, Raj M; Trafton, J Gregory; Boehm-Davis, Deborah A

    2008-03-01

    Task analytic theories of graph comprehension account for the perceptual and conceptual processes required to extract specific information from graphs. Comparatively, the processes underlying information integration have received less attention. We propose a new framework for information integration that highlights visual integration and cognitive integration. During visual integration, pattern recognition processes are used to form visual clusters of information; these visual clusters are then used to reason about the graph during cognitive integration. In 3 experiments, the processes required to extract specific information and to integrate information were examined by collecting verbal protocol and eye movement data. Results supported the task analytic theories for specific information extraction and the processes of visual and cognitive integration for integrative questions. Further, the integrative processes scaled up as graph complexity increased, highlighting the importance of these processes for integration in more complex graphs. Finally, based on this framework, design principles to improve both visual and cognitive integration are described.

  17. Visualization of risk of radiogenic second cancer in the organs and tissues of the human body.

    PubMed

    Zhang, Rui; Mirkovic, Dragan; Newhauser, Wayne D

    2015-04-28

    Radiogenic second cancer is a common late effect in long-term cancer survivors. Currently there are few methods or tools available to visually evaluate the spatial distribution of risks of radiogenic late effects in the human body. We developed a risk visualization method and demonstrated it for radiogenic second cancers in tissues and organs of one patient treated with photon volumetric modulated arc therapy and one patient treated with proton craniospinal irradiation. Treatment plans were generated using radiotherapy treatment planning systems (TPS), and dose information was obtained from the TPS. Linear non-threshold risk coefficients for organs at risk of second cancer incidence were taken from the Biological Effects of Ionizing Radiation (BEIR) VII report. Alternative risk models, including a linear-exponential model and a linear-plateau model, were also examined. The predicted absolute lifetime risk distributions were visualized together with images of the patient anatomy. The risk distributions of second cancer for the two patients were visually presented. The risk distributions varied with tissue, dose, and the dose-risk model used, and the risk distribution could be similar to or very different from the dose distribution. Our method provides a convenient way to directly visualize and evaluate the risks of radiogenic second cancer in organs and tissues of the human body. In the future, visual assessment of risk distribution could be an influential determinant for treatment plan scoring.
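The dose-risk models named in the abstract have standard functional forms; a minimal sketch of mapping a dose grid to a risk map under each is given below. The coefficients `beta`, `alpha`, and `delta` are purely illustrative placeholders, not values from the BEIR VII report or from this study.

```python
import numpy as np

def risk_lnt(dose, beta):
    """Linear non-threshold: risk grows proportionally with dose."""
    return beta * dose

def risk_linear_exponential(dose, beta, alpha):
    """Linear-exponential: linear rise damped at high dose (cell sterilization)."""
    return beta * dose * np.exp(-alpha * dose)

def risk_linear_plateau(dose, beta, delta):
    """Linear-plateau: risk saturates toward beta/delta at high dose."""
    return (beta / delta) * (1.0 - np.exp(-delta * dose))

# Map a toy dose distribution (Gy) to a risk map that could be overlaid on anatomy.
dose = np.linspace(0.0, 40.0, 5)
risk_map = risk_linear_exponential(dose, beta=0.01, alpha=0.05)
```

With the same dose grid, the three models give visibly different spatial patterns, which is the point the abstract makes about risk maps diverging from dose maps.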

  18. Imprinting modulates processing of visual information in the visual wulst of chicks.

    PubMed

    Maekawa, Fumihiko; Komine, Okiru; Sato, Katsushige; Kanamatsu, Tomoyuki; Uchimura, Motoaki; Tanaka, Kohichi; Ohki-Hamazaki, Hiroko

    2006-11-14

    Imprinting behavior is one form of learning and memory in precocial birds. With the aim of elucidating the neural basis for visual imprinting, we focused on visual information processing. A lesion in the visual wulst, which is similar functionally to the mammalian visual cortex, caused anterograde amnesia in visual imprinting behavior. Since the color of an object was one of the important cues for imprinting, we investigated color information processing in the visual wulst. Intrinsic optical signals from the visual wulst were detected in the early posthatch period and the peak regions of responses to red, green, and blue were spatially organized from the caudal to the nasal regions in dark-reared chicks. This spatial representation of color recognition showed plastic changes, and the response pattern along the antero-posterior axis of the visual wulst altered according to the color the chick was imprinted to. These results indicate that the thalamofugal pathway is critical for learning the imprinting stimulus and that the visual wulst shows learning-related plasticity and may relay processed visual information to indicate the color of the imprint stimulus to the memory storage region, e.g., the intermediate medial mesopallium.

  19. Imprinting modulates processing of visual information in the visual wulst of chicks

    PubMed Central

    Maekawa, Fumihiko; Komine, Okiru; Sato, Katsushige; Kanamatsu, Tomoyuki; Uchimura, Motoaki; Tanaka, Kohichi; Ohki-Hamazaki, Hiroko

    2006-01-01

    Background Imprinting behavior is one form of learning and memory in precocial birds. With the aim of elucidating the neural basis for visual imprinting, we focused on visual information processing. Results A lesion in the visual wulst, which is similar functionally to the mammalian visual cortex, caused anterograde amnesia in visual imprinting behavior. Since the color of an object was one of the important cues for imprinting, we investigated color information processing in the visual wulst. Intrinsic optical signals from the visual wulst were detected in the early posthatch period and the peak regions of responses to red, green, and blue were spatially organized from the caudal to the nasal regions in dark-reared chicks. This spatial representation of color recognition showed plastic changes, and the response pattern along the antero-posterior axis of the visual wulst altered according to the color the chick was imprinted to. Conclusion These results indicate that the thalamofugal pathway is critical for learning the imprinting stimulus and that the visual wulst shows learning-related plasticity and may relay processed visual information to indicate the color of the imprint stimulus to the memory storage region, e.g., the intermediate medial mesopallium. PMID:17101060

  20. Visual adaptation dominates bimodal visual-motor action adaptation

    PubMed Central

    de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.

    2016-01-01

    A long-standing debate revolves around the question whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet, the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically, akin to the simultaneous execution and observation of actions in social interactions, adaptation effects were modulated only by visual, not motor, adaptation. Action recognition, therefore, relies primarily on vision-based action recognition mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor-based information. PMID:27029781

  1. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study

    PubMed Central

    2018-01-01

    Background Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed on one system cannot typically be combined with data on another system. Objective The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Methods Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and securely accessible through a Web interface, and allows (1) visualization of results and (2) downloading of tabulated data. Results All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance images. 
The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. Conclusions To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with keen interest on multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. PMID:29699962

  2. Development of an inexpensive optical method for studies of dental erosion process in vitro

    NASA Astrophysics Data System (ADS)

    Nasution, A. M. T.; Noerjanto, B.; Triwanto, L.

    2008-09-01

    Teeth have important roles in the digestion of food, in supporting the facial structure, and in the articulation of speech. Abnormality in tooth structure can be initiated by an erosion process, due to diet or beverage consumption, that leads to destruction affecting their functionality. Research into the erosion processes that lead to tooth abnormality is important for care and prevention. Accurate measurement methods are necessary as a research tool capable of quantifying the degree of dental destruction. In this work an inexpensive optical method is developed as a tool to study the dental erosion process. It is based on extracting parameters from 3D dental visual information. The 3D visual image is obtained by reconstruction from multiple lateral 2D projections captured from many angles. Using a simple stepper motor and a pocket digital camera, a sequence of multi-projection 2D images of a premolar tooth is obtained. These images are then reconstructed to produce a 3D image, which is used to quantify the relevant dental erosion parameters. Quantification is based on the shrinkage of dental volume as well as on surface properties altered by the erosion process. The quantification results are correlated with the amount of dissolved calcium released from the tooth, measured using atomic absorption spectrometry. The proposed method would be useful as a visualization tool in engineering, dentistry, and medical research, as well as for educational purposes.
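As a rough illustration of the volume-shrinkage quantification described, erosion can be expressed as the relative loss of occupied voxels between a baseline and a post-erosion reconstruction. The voxel size and the toy cube geometry below are assumptions for the sketch, not the authors' setup.

```python
import numpy as np

def voxel_volume(binary_volume, voxel_size_mm=0.05):
    """Volume in mm^3 of a binary voxel occupancy grid (assumed voxel pitch)."""
    return binary_volume.sum() * voxel_size_mm ** 3

def erosion_percent(before, after, voxel_size_mm=0.05):
    """Percentage of volume lost between two reconstructions of the same tooth."""
    v0 = voxel_volume(before, voxel_size_mm)
    v1 = voxel_volume(after, voxel_size_mm)
    return 100.0 * (v0 - v1) / v0

# Toy example: a solid cube "tooth" loses its one-voxel outer shell to erosion.
before = np.ones((20, 20, 20), dtype=bool)
after = np.zeros_like(before)
after[1:-1, 1:-1, 1:-1] = True
shrinkage = erosion_percent(before, after)   # (20^3 - 18^3) / 20^3 = 27.1%
```

In practice the binary grids would come from voxelizing the reconstructed 3D surfaces, and the shrinkage figure is what would be correlated against the dissolved-calcium measurements.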

  3. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  4. A content analysis of visual cancer information: prevalence and use of photographs and illustrations in printed health materials.

    PubMed

    King, Andy J

    2015-01-01

    Researchers and practitioners have an increasing interest in visual components of health information and health communication messages. This study contributes to this evolving body of research by providing an account of the visual images and information featured in printed cancer communication materials. Using content analysis, 147 pamphlets and 858 images were examined to determine how frequently images are used in printed materials, what types of images are used, what information is conveyed visually, and whether or not current recommendations for the inclusion of visual content were being followed. Although visual messages were found to be common in printed health materials, existing recommendations about the inclusion of visual content were only partially followed. Results are discussed in terms of how relevant theoretical frameworks in the areas of behavior change and visual persuasion seem to be used in these materials, as well as how more theory-oriented research is necessary in visual messaging efforts.

  5. Information Visualization and Proposing New Interface for Movie Retrieval System (IMDB)

    ERIC Educational Resources Information Center

    Etemadpour, Ronak; Masood, Mona; Belaton, Bahari

    2010-01-01

    This research studies the development of a new prototype of visualization in support of movie retrieval. The goal of information visualization is unveiling of large amounts of data or abstract data set using visual presentation. With this knowledge the main goal is to develop a 2D presentation of information on movies from the IMDB (Internet Movie…

  6. Use of Visual Cues by Adults With Traumatic Brain Injuries to Interpret Explicit and Inferential Information.

    PubMed

    Brown, Jessica A; Hux, Karen; Knollman-Porter, Kelly; Wallace, Sarah E

    2016-01-01

    Concomitant visual and cognitive impairments following traumatic brain injuries (TBIs) may be problematic when the visual modality serves as a primary source for receiving information. Further difficulties comprehending visual information may occur when interpretation requires processing inferential rather than explicit content. The purpose of this study was to compare the accuracy with which people with and without severe TBI interpreted information in contextually rich drawings. Fifteen adults with and 15 adults without severe TBI. Repeated-measures between-groups design. Participants were asked to match images to sentences that conveyed either explicit (i.e., main action or background) or inferential (i.e., physical or mental inference) information. The researchers compared accuracy between participant groups and among stimulus conditions. Participants with TBI demonstrated significantly poorer accuracy than participants without TBI extracting information from images. In addition, participants with TBI demonstrated significantly higher response accuracy when interpreting explicit rather than inferential information; however, no significant difference emerged between sentences referencing main action versus background information or sentences providing physical versus mental inference information for this participant group. Difficulties gaining information from visual environmental cues may arise for people with TBI given their difficulties interpreting inferential content presented through the visual modality.

  7. Diversification of visual media retrieval results using saliency detection

    NASA Astrophysics Data System (ADS)

    Muratov, Oleg; Boato, Giulia; De Natale, Francesco G. B.

    2013-03-01

    Diversification of retrieval results allows for better and faster search. Recently, different methods have been proposed for diversifying image retrieval results, mainly utilizing text information and techniques imported from the natural language processing domain. However, images contain visual information that is impossible to describe in text, so the use of visual features is inevitable. Visual saliency is information about the main object of an image that humans implicitly include while creating visual content. For this reason it is natural to exploit this information for the task of diversification. In this work we study whether visual saliency can be used for diversification and propose a method for re-ranking image retrieval results using saliency. The evaluation has shown that the use of saliency information results in higher diversity of retrieval results.
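One plausible way to realize such saliency-based re-ranking is a greedy maximal-marginal-relevance loop in which the diversity term compares saliency-derived descriptors. The descriptor, the cosine similarity, and the trade-off weight `lam` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rerank_for_diversity(relevance, saliency_desc, lam=0.7, k=None):
    """Greedily reorder items, trading relevance against similarity
    (in saliency-descriptor space) to items already selected."""
    n = len(relevance)
    k = n if k is None else k
    remaining = set(range(n))
    order = []
    while remaining and len(order) < k:
        best, best_score = None, -np.inf
        for i in remaining:
            sim = max((cosine(saliency_desc[i], saliency_desc[j]) for j in order),
                      default=0.0)
            score = lam * relevance[i] - (1 - lam) * sim
            if score > best_score:
                best, best_score = i, score
        order.append(best)
        remaining.remove(best)
    return order

# Toy run: two near-duplicate salient objects (items 0 and 1) and one distinct one.
rel = np.array([0.9, 0.85, 0.5])
desc = np.array([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]])
new_order = rerank_for_diversity(rel, desc)   # the distinct item 2 jumps ahead of 1
```

The descriptors here would in practice be features pooled over each image's detected salient region, so near-duplicate "main objects" are demoted even when their relevance scores are high.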

  8. Information Technology and Transcription of Reading Materials for the Visually Impaired Persons in Nigeria

    ERIC Educational Resources Information Center

    Nkiko, Christopher; Atinmo, Morayo I.; Michael-Onuoha, Happiness Chijioke; Ilogho, Julie E.; Fagbohun, Michael O.; Ifeakachuku, Osinulu; Adetomiwa, Basiru; Usman, Kazeem Omeiza

    2018-01-01

    Studies have shown inadequate reading materials for the visually impaired in Nigeria. Information technology has greatly advanced the provision of information to the visually impaired in other industrialized climes. This study investigated the extent of application of information technology to the transcription of reading materials for the…

  9. Some Issues Concerning Access to Information by Blind and Partially Sighted Pupils.

    ERIC Educational Resources Information Center

    Green, Christopher F.

    This paper examines problems faced by visually-impaired secondary pupils in gaining access to information in print. The ever-increasing volume of information available inundates the sighted and is largely inaccessible in print format to the visually impaired. Important issues of availability for the visually impaired include whether information is…

  10. Domestic pigs' (Sus scrofa domestica) use of direct and indirect visual and auditory cues in an object choice task.

    PubMed

    Nawroth, Christian; von Borell, Eberhard

    2015-05-01

    Recently, foraging strategies have been linked to the ability to use indirect visual information. More selective feeders should express a higher aversion to losses than non-selective feeders and should therefore be more prone to avoid empty food locations. To extend these findings, we present a series of experiments investigating the use of direct and indirect visual and auditory information by an omnivorous but selective feeder: the domestic pig. Subjects had to choose between two buckets, only one of which contained a reward. Before making a choice, the subjects in Experiment 1 (N = 8) received full information regarding both the baited and non-baited locations, in either the visual or the auditory domain. In this experiment, the subjects were able to spontaneously use visual but not auditory cues to infer the location of the reward. Additionally, four individuals learned to use auditory cues after a period of training. In Experiment 2 (N = 8), the pigs were given different amounts of visual information about the content of the buckets: lifting either both buckets (full information), the baited bucket (direct information), the empty bucket (indirect information), or no bucket at all (no information). The subjects as a group were able to use direct and indirect visual cues. However, over the course of the experiment, performance dropped to chance level when indirect information was provided. A final experiment (N = 3) provided preliminary results for pigs' use of indirect auditory information to infer the location of a reward. We conclude that pigs at a very young age are able to make decisions based on indirect information in the visual domain, whereas their performance in the use of indirect auditory information warrants further investigation.

  11. Beyond Control Panels: Direct Manipulation for Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Bradel, Lauren; North, Chris

    2013-07-19

    Information Visualization strives to provide visual representations through which users can think about and gain insight into information. By leveraging the visual and cognitive systems of humans, complex relationships and phenomena occurring within datasets can be uncovered by exploring information visually. Interaction metaphors for such visualizations are designed to give users direct control over the filters, queries, and other parameters controlling how the data is visually represented. Through the evolution of information visualization, more complex mathematical and data-analytic models are being used to visualize relationships and patterns in data, creating the field of Visual Analytics. However, the expectations for how users interact with these visualizations have remained largely unchanged, focused primarily on the direct manipulation of parameters of the underlying mathematical models. In this article we present an opportunity to evolve the methodology for user interaction from the direct manipulation of parameters through visual control panels to interactions designed specifically for visual analytic systems. Instead of focusing on traditional direct manipulation of mathematical parameters, the evolution of the field can be realized through direct manipulation within the visual representation itself, where users can not only gain insight but also interact. This article describes future directions and research challenges that fundamentally change the meaning of direct manipulation with regard to visual analytics, advancing the Science of Interaction.

  12. The Perceptual Root of Object-Based Storage: An Interactive Model of Perception and Visual Working Memory

    ERIC Educational Resources Information Center

    Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei

    2011-01-01

    Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…

  13. Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Hack, Zarita Caplan; Erber, Norman P.

    1982-01-01

    Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…

  14. A computer graphics system for visualizing spacecraft in orbit

    NASA Technical Reports Server (NTRS)

    Eyles, Don E.

    1989-01-01

    To carry out unanticipated operations with resources already in space is part of the rationale for a permanently manned space station in Earth orbit. The astronauts aboard a space station will require an on-board, spatial display tool to assist the planning and rehearsal of upcoming operations. Such a tool can also help astronauts to monitor and control such operations as they occur, especially in cases where first-hand visibility is not possible. A computer graphics visualization system designed for such an application and currently implemented as part of a ground-based simulation is described. The visualization system presents to the user the spatial information available in the spacecraft's computers by drawing a dynamic picture containing the planet Earth, the Sun, a star field, and up to two spacecraft. The point of view within the picture can be controlled by the user to obtain a number of specific visualization functions. The elements of the display, the methods used to control the display's point of view, and some of the ways in which the system can be used are described.
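The user-controlled point of view such a display requires is conventionally expressed as a look-at view matrix; the sketch below is generic graphics practice (right-handed convention, hypothetical names), not code from the described system.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def look_at(eye, target, up):
    """Build a 4x4 world-to-camera (view) matrix.
    Assumes `up` is not parallel to the viewing direction."""
    f = normalize(target - eye)          # forward
    r = normalize(np.cross(f, up))       # right
    u = np.cross(r, f)                   # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye    # translate world so the eye is at origin
    return view

# Viewpoint placed 10 units in front of a spacecraft at the world origin.
eye = np.array([0.0, 0.0, 10.0])
view = look_at(eye, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0]))
```

Moving `eye` along an orbit around the target is then enough to implement the kind of user-steered viewpoint the system description mentions.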

  15. Visual Phonetic Processing Localized Using Speech and Non-Speech Face Gestures in Video and Point-Light Displays

    PubMed Central

    Bernstein, Lynne E.; Jiang, Jintao; Pantazis, Dimitrios; Lu, Zhong-Lin; Joshi, Anand

    2011-01-01

    The talking face affords multiple types of information. To isolate cortical sites with responsibility for integrating linguistically relevant visual speech cues, speech and non-speech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to non-speech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and non-speech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study. PMID:20853377

  16. Temporary Blinding Limits versus Maximum Permissible Exposure - A Paradigm Change in Risk Assessment for Visible Optical Radiation

    NASA Astrophysics Data System (ADS)

    Reidenbach, Hans-Dieter

    Safety considerations in the field of laser radiation have traditionally been restricted to maximum permissible exposure levels defined as a function of wavelength and exposure duration. In Europe, however, according to European Directive 2006/25/EC on artificial optical radiation, the employer has to include indirect effects from temporary blinding in the risk assessment. Whereas sufficient knowledge exists on various deterministic risks, only sparse quantitative data is available on the impairment of visual functions due to temporary blinding from visible optical radiation. The consideration of indirect effects corresponds to a paradigm change in risk assessment for situations where intrabeam viewing of low-power laser radiation is likely or where other non-coherent visible radiation might influence certain visual tasks. In order to obtain a sufficient basis for the assessment of such situations, the functional relationships between wavelength, exposure time, and optical power and the resulting interference with visual functions have been investigated, and the results are reported. The duration of a visual disturbance is thus predictable. In addition, preliminary information on protective measures is given.

  17. An information theory of image gathering

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.

    1991-01-01

    Shannon's mathematical theory of communication is extended to image gathering. Expressions are obtained for the total information that is received with a single image-gathering channel and with parallel channels. It is concluded that the aliased signal components carry information even though these components interfere with the within-passband components in conventional image gathering and restoration, thereby degrading the fidelity and visual quality of the restored image. An examination of the expression for minimum mean-square-error, or Wiener-matrix, restoration from parallel image-gathering channels reveals a method for unscrambling the within-passband and aliased signal components to restore spatial frequencies beyond the sampling passband out to the spatial frequency response cutoff of the optical aperture.
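For orientation, the single-channel minimum mean-square-error (Wiener) restoration that the paper generalizes can be sketched in one dimension. The parallel-channel, aliasing-aware formulation in the paper is more elaborate; the signal and noise spectra below are illustrative scalars, not the paper's models.

```python
import numpy as np

def wiener_restore(observed, otf, signal_psd, noise_psd):
    """Frequency-domain Wiener restoration of a blurred, noisy 1-D signal,
    given the system's optical transfer function and power spectra."""
    G = np.fft.fft(observed)
    H = otf
    # Wiener filter: conj(H) * S / (|H|^2 * S + N)
    W = np.conj(H) * signal_psd / (np.abs(H) ** 2 * signal_psd + noise_psd)
    return np.real(np.fft.ifft(W * G))

# Toy example: restore a sinusoid blurred by a known 3-tap moving-average "OTF".
n = 64
x = np.sin(2 * np.pi * 3 * np.arange(n) / n)
otf = np.fft.fft(np.r_[np.ones(3) / 3, np.zeros(n - 3)])
blurred = np.real(np.fft.ifft(otf * np.fft.fft(x)))
restored = wiener_restore(blurred, otf, signal_psd=1.0, noise_psd=1e-3)
```

The noise term in the denominator is what keeps the filter bounded where the transfer function is weak; the paper's contribution is extending this trade-off to multiple channels and to the aliased signal components.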

  18. 6th Yahya Cohen Lecture: visual experience during cataract surgery.

    PubMed

    Au Eong, K G

    2002-09-01

    The visual sensations many patients experience during cataract surgery under local anaesthesia have received little attention until recently. This paper reviews the recent studies on this phenomenon, discusses its clinical significance and suggests novel approaches to reduce its negative impact on the surgery. Literature review. Many patients who have cataract surgery under retrobulbar, peribulbar or topical anaesthesia experience a variety of visual sensations in their operated eye during surgery. These visual sensations include perception of light, movements, flashes, one or more colours, surgical instruments, the surgeon's hand/fingers, the surgeon and changes in light brightness. Some patients experience transient no light perception, even if the operation is performed under topical anaesthesia. The clinical significance of this phenomenon lies in the fact that approximately 7.1% to 15.4% of patients find their visual experience frightening. This fear and anxiety may cause some patients to become uncooperative during surgery and trigger a sympathetic surge, causing such undesirable effects as hypertension, tachycardia, ischaemic strain on the heart, hyperventilation and acute panic attack. Several approaches to reduce the negative impact of patients' visual experience are suggested, including appropriate preoperative counselling and reducing the ability of patients to see during surgery. The findings that some patients find their intraoperative visual experience distressing have a major impact on the way ophthalmologists manage their cataract patients. To reduce its negative impact, surgeons should consider incorporating appropriate preoperative counselling on potential intraoperative visual experience when obtaining informed consent for surgery.

  19. Weather dissemination and public usage

    NASA Technical Reports Server (NTRS)

    Stacey, M. S.

    1973-01-01

    The existing public usage of weather information was examined. A survey was conducted to substantiate the general public's needs for dissemination of current (0-12 hours) weather information, needs which, in a previous study, were found to be extensive and urgent. The goal of the study was to discover how the general public obtains weather information, what information they seek and why they seek it, to what use this information is put, and to further ascertain the public's attitudes and beliefs regarding weather reporting and the diffusion of weather information. Major findings from the study include: 1. The public has a real need for weather information in the 0-6 hour bracket. 2. The visual medium is preferred, but owing to the lack of frequent (0-6 hour) forecasts, audio-only media, i.e., telephone recordings and radio weathercasts, were used more frequently. 3. Weather information usage is sporadic.

  20. A lack of vision: evidence for poor communication of visual problems and support needs in education statements/plans for children with SEN.

    PubMed

    Little, J-A; Saunders, K J

    2015-02-01

    Visual dysfunction is more common in children with neurological impairments and previous studies have recommended such children receive visual and refractive assessment. In the UK, children with neurological impairment often have educational statementing for Special Educational Needs (SEN) and the statement should detail all health care and support needs to ensure the child's needs are met during school life. This study examined the representation of visual information in statements of SEN and compared this to orthoptic visual information from school visual assessments for children in a special school in Northern Ireland, UK. The parents of 115 school children in a special school were informed about the study via written information. Participation involved parents permitting the researchers to access their child's SEN educational statement and orthoptic clinical records. Statement information was accessed for 28 participants aged between four and 19 years; 25 contained visual information. Two participants were identified in their statements as having a certification of visual impairment. An additional 10 children had visual acuity ≥ 0.3 logMAR. This visual deficit was not reported in statements in eight out of these 12 cases (67%). 11 participants had significant refractive error and wore spectacles, but only five (45%) had this requirement recorded in their statement. Overall, 10 participants (55%) had either reduced visual acuity or significant refractive error which was not recorded in their statement. Despite additional visual needs being common, and described in clinical records, the majority of those with reduced vision and/or spectacle requirements did not have this information included in their statement. If visual limitations are not recognized by educational services, the child's needs may not be met during school life. 
    More comprehensive eye care services, embedded with stakeholder communication and links to education, are necessary to improve understanding of vision for children with neurological impairments. Copyright © 2014 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  1. How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation.

    PubMed

    Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul

    2016-02-01

    The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
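    The view-matching strategy the simulations rely on can be sketched in one dimension: treat the panorama as a ring of pixels and take the rotation that minimizes the image difference as the heading estimate. A minimal numpy sketch (a toy 360-pixel panorama, not the authors' simulation):

```python
import numpy as np

def best_heading(stored, current):
    """Return the pixel shift of `current` that best matches the stored
    panoramic snapshot, by minimizing the root-mean-square difference
    over all rotations (a rotational image-difference function)."""
    diffs = [np.sqrt(np.mean((np.roll(current, s) - stored) ** 2))
             for s in range(len(stored))]
    return int(np.argmin(diffs))

# Toy 360-pixel panorama; the agent has turned, so its view is shifted.
rng = np.random.default_rng(1)
stored = rng.random(360)
current = np.roll(stored, -30)
shift = best_heading(stored, current)   # recovers the 30-pixel rotation
```

    Lowering the resolution (e.g., averaging neighboring pixels) broadens the difference-function minimum, which is one way to picture the specificity/generalisation trade-off the abstract describes.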

  2. Securing information display by use of visual cryptography.

    PubMed

    Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo

    2003-09-01

    We propose a secure display technique based on visual cryptography. The proposed technique ensures the security of visual information. The display employs a decoding mask based on visual cryptography. Without the decoding mask, the displayed information cannot be viewed. The viewing zone is limited by the decoding mask so that only one person can view the information. We have developed a set of encryption codes to maintain the designed viewing zone and have demonstrated a display that provides a limited viewing zone.
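    The classical (2, 2) construction behind such displays is easy to sketch: each secret pixel is expanded into two subpixels per share, and stacking transparencies acts as a pixelwise OR. A toy numpy illustration (a generic scheme, not the authors' viewing-zone encryption codes):

```python
import numpy as np

# Subpixel patterns for a (2, 2) visual-cryptography scheme (1 = opaque).
PATTERNS = [np.array([1, 0]), np.array([0, 1])]

def make_shares(secret, rng):
    """Expand each secret pixel (1 = black) into two subpixels per share.
    White pixels get identical patterns in both shares; black pixels get
    complementary patterns, so stacking turns them fully opaque."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=int)
    s2 = np.zeros((h, 2 * w), dtype=int)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]   # random choice hides the secret
            s1[i, 2 * j:2 * j + 2] = p
            s2[i, 2 * j:2 * j + 2] = (1 - p) if secret[i, j] else p
    return s1, s2

rng = np.random.default_rng(0)
secret = np.array([[0, 1], [1, 0]])
s1, s2 = make_shares(secret, rng)
stacked = s1 | s2   # superimposing transparencies acts as a pixelwise OR
```

    Black pixels stack to two opaque subpixels and white pixels to one, producing the contrast that reveals the secret; each share alone is a random pattern.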

  3. Comparison of Fully Automated Computer Analysis and Visual Scoring for Detection of Coronary Artery Disease from Myocardial Perfusion SPECT in a Large Population

    PubMed Central

    Arsanjani, Reza; Xu, Yuan; Hayes, Sean W.; Fish, Mathews; Lemley, Mark; Gerlach, James; Dorbala, Sharmila; Berman, Daniel S.; Germano, Guido; Slomka, Piotr

    2012-01-01

    We compared the performance of fully automated quantification of attenuation-corrected (AC) and non-corrected (NC) myocardial perfusion single photon emission computed tomography (MPS) with the corresponding performance of experienced readers for the detection of coronary artery disease (CAD). Methods 995 rest/stress 99mTc-sestamibi MPS studies [650 consecutive cases with coronary angiography and 345 with likelihood of CAD < 5% (LLk)] were obtained by MPS with AC. Total perfusion deficit (TPD) for AC and NC data was compared to the visual summed stress and rest scores of 2 experienced readers. Visual reads were performed in 4 consecutive steps with the following information progressively revealed: NC data, AC+NC data, computer results, and all clinical information. Results The diagnostic accuracy of TPD for detection of CAD was similar to that of both readers (NC: 82% vs. 84%, AC: 86% vs. 85–87%, p = NS), with the exception of the second reader when using clinical information (89%, p < 0.05). The receiver-operating-characteristic areas under the curve (ROC-AUC) for TPD were significantly better than those of visual reads for NC (0.91 vs. 0.87 and 0.89, p < 0.01) and AC (0.92 vs. 0.90, p < 0.01), and comparable to visual reads incorporating all clinical information. Per-vessel accuracy of TPD was superior to that of one reader for NC (81% vs. 77%, p < 0.05) and AC (83% vs. 78%, p < 0.05) and equivalent to that of the second reader [NC (79%) and AC (81%)]. Per-vessel ROC-AUCs of TPD for NC (0.83) and AC (0.84) were better than one reader's (0.78–0.80, p < 0.01) and comparable to the second reader's (0.82–0.84, p = NS) for all steps. Conclusion For the detection of ≥ 70% stenosis based on angiographic criteria, fully automated computer analysis of NC and AC MPS data is equivalent to expert analysis on a per-patient basis and can be superior for per-vessel analysis. PMID:23315665

  4. Objective evaluation of the visual acuity in human eyes

    NASA Astrophysics Data System (ADS)

    Rosales, M. A.; López-Olazagasti, E.; Ramírez-Zavaleta, G.; Varillas, G.; Tepichín, E.

    2009-08-01

    Traditionally, the quality of human vision is evaluated by a subjective test in which the examiner asks the patient to read a series of characters of different sizes, located at a certain distance from the patient. Typically, we need to ensure a subtended angle of vision of 5 minutes of arc, which implies an object 8.8 mm high located at 6 meters (normal or 20/20 visual acuity). These characters constitute what is known as the Snellen chart, universally used to evaluate the spatial resolution of the human eye. The identification of the characters is carried out by means of the eye-brain system, giving an evaluation of subjective visual performance. In this work we consider the eye as an isolated image-forming system, and show that it is possible to separate the function of the eye from that of the brain in this process. By knowing the impulse response of the eye's optical system, we can obtain, in advance, the image of the Snellen chart as formed by the eye alone. From this information, we obtain the objective performance of the eye as the optical system under test. This type of result might help to detect anomalous conditions of human vision, like the so-called "cerebral myopia".
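    The geometry quoted above is a one-line calculation: the height subtending 5 minutes of arc at 6 m comes out near 8.7 mm, consistent with the rounded 8.8 mm figure. A quick check in Python:

```python
import math

# A 20/20 optotype subtends 5 minutes of arc; at 6 m that corresponds
# to a letter height of about 8.7 mm.
distance_m = 6.0
angle_rad = math.radians(5.0 / 60.0)             # 5 arcmin in radians
height_mm = 1000.0 * distance_m * math.tan(angle_rad)
```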

  5. FAST TRACK COMMUNICATION Determination of atomic site susceptibility tensors from neutron diffraction data on polycrystalline samples

    NASA Astrophysics Data System (ADS)

    Gukasov, A.; Brown, P. J.

    2010-12-01

    Polarized neutron diffraction can provide information about the atomic site susceptibility tensor χij characterizing the magnetic response of individual atoms to an external magnetic field (Gukasov and Brown 2002 J. Phys.: Condens. Matter 14 8831). The six independent atomic susceptibility parameters (ASPs) can be determined from polarized neutron flipping ratio measurements on single crystals and visualized as magnetic ellipsoids, analogous to the thermal ellipsoids obtained from atomic displacement parameters (ADPs). We now demonstrate that information about the local magnetic susceptibility at different magnetic sites in a crystal can also be obtained from polarized and unpolarized neutron diffraction measurements on magnetized powder samples. The validity of the method is illustrated by the results of such measurements on a polycrystalline sample of Tb2Sn2O7.

  6. Smell or vision? The use of different sensory modalities in predator discrimination.

    PubMed

    Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara

    2017-01-01

    Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster, and were more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues did not affect the escape response or the duration of the recovery phase. This suggests that N. pulcher are able to discriminate heterospecific cues with similar acuity when using vision or olfaction. We discuss how this ability may be advantageous in aquatic environments where the visibility conditions strongly vary over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals. 
In seasonally fluctuating environments, sensory conditions may change over the year, which may make the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of the visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that the seasonal variation of water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.

  7. Visual Knowledge in Tactical Planning: Preliminary Knowledge Acquisition Phase 1 Technical Report

    DTIC Science & Technology

    1990-04-05

    MANAGEMENT INFORMATION, COMMUNICATIONS, AND COMPUTER SCIENCES Visual Knowledge in Tactical Planning: Preliminary Knowledge Acquisition Phase I Technical...perceived provides information in multiple modalities and, in fact, we may rely on a non-verbal mode for much of our understanding of the situation...some tasks, almost all the pertinent information is provided via diagrams, maps, and other illustrations. Visual Knowledge Visual experience forms a

  8. Experience and information loss in auditory and visual memory.

    PubMed

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  9. The GRIDView Visualization Package

    NASA Astrophysics Data System (ADS)

    Kent, B. R.

    2011-07-01

    Large three-dimensional data cubes, catalogs, and spectral line archives are increasingly important elements of the data discovery process in astronomy. Visualization of large data volumes is of vital importance for the success of large spectral line surveys. Examples of data reduction utilizing the GRIDView software package are shown. The package allows users to manipulate data cubes, extract spectral profiles, and measure line properties. The package and included graphical user interfaces (GUIs) are designed with pipeline infrastructure in mind. The software has been used with great success analyzing spectral line and continuum data sets obtained from large radio survey collaborations. The tools are also important for multi-wavelength cross-correlation studies and incorporate Virtual Observatory client applications for overlaying database information in real time as cubes are examined by users.

  10. The magnifying glass - A feature space local expansion for visual analysis [and image enhancement]

    NASA Technical Reports Server (NTRS)

    Juday, R. D.

    1981-01-01

    The Magnifying Glass Transformation (MGT) technique is proposed as a multichannel spectral operation yielding visual imagery enhanced in a specified spectral vicinity, guided by the statistics of training samples. An application example is increasing the discrimination among spectral neighbors within an interactive display without altering distant object appearances or overall interpretation. A direct histogram specification technique is applied to the channels within the multispectral image so that a subset of the spectral domain occupies an increased fraction of the domain. The transformation is carried out by obtaining the training information, establishing the condition of the covariance matrix, determining the influenced solid, and initializing the lookup table. Finally, the image is transformed.

  11. IUE observations of symbiotic stars

    NASA Technical Reports Server (NTRS)

    Hack, M.

    1982-01-01

    The main photometric and spectroscopic characteristics in the ultraviolet and visual range of the most extensively studied symbiotic stars are reviewed. The main data obtained with IUE concern: (1) the determination of the shape of the UV continuum, which, in some cases, proves without doubt the presence of a hot companion; and the determination of the interstellar extinction by means of the lambda 2200 feature; (2) the measurement of emission lines, which enables us to derive the electron temperature and density of the circumstellar envelope, and, taken together with those lines observed in the visual, give more complete information on which spectroscopic mechanisms operate in the envelope; (3) the observation of absorption lines in the UV, which are present in just a few cases.

  12. High visual resolution matters in audiovisual speech perception, but only for some.

    PubMed

    Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G

    2016-07-01

    The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.

  13. Influences of selective adaptation on perception of audiovisual speech

    PubMed Central

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  14. Visual working memory capacity for color is independent of representation resolution.

    PubMed

    Ye, Chaoxiong; Zhang, Lingcong; Liu, Taosheng; Li, Hong; Liu, Qiang

    2014-01-01

    The relationship between visual working memory (VWM) capacity and the resolution of representation has been extensively investigated. Several recent ERP studies using orientation (or arrow) stimuli suggest that there is an inverse relationship between VWM capacity and representation resolution. However, different results have been obtained in studies using color stimuli. This could be due to important differences in the experimental paradigms used in previous studies. We examined whether the same relationship between capacity and resolution holds for color information. Participants performed a color change detection task while their electroencephalography was recorded. We manipulated representation resolution by asking participants to detect either a salient change (low-resolution) or a subtle change (high-resolution) in color. We used an ERP component known as contralateral delay activity (CDA) to index the amount of information maintained in VWM. The results demonstrated the same pattern for both low- and high-resolution conditions, with no difference between conditions. This result suggests that VWM always represents a fixed number of approximately 3-4 colors regardless of the resolution of representation.

  15. PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features

    PubMed Central

    Zhao, Ji; Guo, Yue; He, Wenhao; Yuan, Kui

    2018-01-01

    To address the problem of estimating camera trajectory and to build a structural three-dimensional (3D) map based on inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system exploiting both point and line features. Compared with point features, lines provide significantly more geometrical structure information on the environment. To obtain both computation simplicity and representational compactness of a 3D spatial line, Plücker coordinates and orthonormal representation for the line are employed. To tightly and efficiently fuse the information from inertial measurement units (IMUs) and visual sensors, we optimize the states by minimizing a cost function which combines the pre-integrated IMU error term together with the point and line re-projection error terms in a sliding window optimization framework. The experiments evaluated on public datasets demonstrate that the PL-VIO method that combines point and line features outperforms several state-of-the-art VIO systems which use point features only. PMID:29642648
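    Plücker coordinates, mentioned above for their compactness, represent a 3D line by a direction vector and a moment vector. A minimal construction from two points (an illustrative sketch, not the PL-VIO code):

```python
import numpy as np

def plucker_from_points(p, q):
    """Plücker coordinates of the 3D line through points p and q:
    direction d = q - p and moment m = p x q, satisfying d . m = 0."""
    return q - p, np.cross(p, q)

# The line x = 1, z = 0, built from two of its points:
p = np.array([1.0, 0.0, 0.0])
q = np.array([1.0, 1.0, 0.0])
d, m = plucker_from_points(p, q)
dist = np.linalg.norm(m) / np.linalg.norm(d)   # distance of line from origin
```

    The orthonormal representation used in the paper then re-parameterizes the pair (d, m) with the four degrees of freedom a 3D line actually has, which suits optimization.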

  16. Neuro-inspired smart image sensor: analog Hmax implementation

    NASA Astrophysics Data System (ADS)

    Paindavoine, Michel; Dubois, Jérôme; Musa, Purnawarman

    2015-03-01

    The neuro-inspired vision approach, based on models from biology, reduces computational complexity. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes the V1, V2 and V4 areas. From the computational point of view, V1 corresponds to the stage of directional filters (for example Sobel, Gabor or wavelet filters). This information is then processed in area V2 to obtain local maxima. This new information is then sent to an artificial neural network. This neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. To realize autonomous vision systems (consuming a few milliwatts) with such processing on-chip, we studied and fabricated prototypes of two image sensors in 0.35 μm CMOS technology that perform the V1 and V2 processing of the Hmax model.
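    The V1/V2 stages described above can be sketched in software as oriented filtering followed by local max pooling. A toy numpy version with Sobel kernels (one of the directional-filter choices the abstract lists; the actual sensors implement these stages in analog circuitry):

```python
import numpy as np

def sobel_responses(img):
    """V1-like stage: two oriented (Sobel) filters, 'valid' correlation."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((2, h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[0, i, j] = np.sum(patch * kx)   # vertical-edge response
            out[1, i, j] = np.sum(patch * ky)   # horizontal-edge response
    return out

def local_max_pool(resp, size=2):
    """V2-like stage: local maxima over non-overlapping windows."""
    c, h, w = resp.shape
    h2, w2 = h // size, w // size
    blocks = resp[:, :h2 * size, :w2 * size].reshape(c, h2, size, w2, size)
    return blocks.max(axis=(2, 4))

img = np.zeros((8, 8))
img[:, 4:] = 1.0            # a vertical edge
v1 = sobel_responses(img)   # shape (2, 6, 6)
v2 = local_max_pool(v1)     # shape (2, 3, 3)
```

    On this test image only the vertical-edge channel responds; the pooled maxima are what a subsequent (V4-like) classifier would consume.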

  17. Simultaneous reconstruction of 3D refractive index, temperature, and intensity distribution of combustion flame by double computed tomography technologies based on spatial phase-shifting method

    NASA Astrophysics Data System (ADS)

    Guo, Zhenyan; Song, Yang; Yuan, Qun; Wulan, Tuya; Chen, Lei

    2017-06-01

    In this paper, a transient multi-parameter three-dimensional (3D) reconstruction method is proposed to diagnose and visualize a combustion flow field. Emission and transmission tomography based on spatial phase-shifting technology are combined to simultaneously reconstruct various physical parameter distributions of a propane flame. Two cameras triggered in internal trigger mode capture the projection information of the emission and moiré tomography, respectively. A two-step spatial phase-shifting method is applied to extract the phase distribution from the moiré fringes. Using the filtered back-projection algorithm, we reconstruct the 3D refractive-index distribution of the combustion flow field. The 3D temperature distribution of the flame is then obtained from the refractive-index distribution using the Gladstone-Dale equation. Meanwhile, the 3D intensity distribution is reconstructed from the radiation projections of the emission tomography. The structure and edge information of the propane flame are thus well visualized.
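    The refractive-index-to-temperature step uses the Gladstone-Dale relation. A hedged sketch for a gas at ambient pressure (the constants below are textbook values for air, not the gas-specific values a propane flame would require):

```python
# Gladstone-Dale relation: n - 1 = K * rho. With the ideal-gas law
# rho = p / (R_s * T), temperature follows as T = K * p / (R_s * (n - 1)).
K = 2.26e-4      # m^3/kg, Gladstone-Dale constant for air (visible light)
R_s = 287.0      # J/(kg K), specific gas constant of dry air
p = 101325.0     # Pa, assumed ambient pressure

def temperature_from_n(n):
    return K * p / (R_s * (n - 1.0))

T_ambient = temperature_from_n(1.000268)   # roughly room temperature
```

    As the flame heats the gas, the density and hence n - 1 drop, so hot regions show up as regions of low refractive index.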

  18. Sensible Success

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Commercial remote sensing uses satellite imagery to provide valuable information about the planet's features. By capturing light reflected from the Earth's surface with cameras or sensor systems, usually mounted on an orbiting satellite, data is obtained for business enterprises with an interest in land feature distribution. Remote sensing is practical when applied to large-area coverage, such as agricultural monitoring, regional mapping, environmental assessment, and infrastructure planning. For example, cellular service providers use satellite imagery to select the most ideal location for a communication tower. Crowsey Incorporated has the ability to use remote sensing capabilities to conduct spatial geographic visualizations and other remote-sensing services. Presently, the company has found a demand for these services in the area of litigation support. By using spatial information and analyses, Crowsey helps litigators understand and visualize complex issues and then communicate a clear argument with complete, indisputable evidence. Crowsey Incorporated is a proud partner in NASA's Mississippi Space Commerce Initiative, with research offices at the John C. Stennis Space Center.

  19. Multiscale infrared and visible image fusion using gradient domain guided image filtering

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia

    2018-03-01

    For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition with guided image filtering and gradient domain guided image filtering is first applied to the source images; the weight maps at each scale are then obtained using a saliency detection technique and filtering, with three different fusion rules applied at different scales. The three types of fusion rules are for the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Visual effects can therefore be improved by using the proposed method.
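    Whatever the rule, each scale is ultimately fused by a pixelwise weighted combination of the source layers. A minimal numpy sketch (the HMSD-GDGF weight maps come from saliency detection and guided filtering; here the weight map is simply an assumed input):

```python
import numpy as np

def fuse_with_weight_maps(ir, vis, w_ir):
    """Pixelwise convex combination of an infrared layer and a visible
    layer; w_ir is a weight map in [0, 1] (here an assumed input)."""
    w_ir = np.clip(w_ir, 0.0, 1.0)
    return w_ir * ir + (1.0 - w_ir) * vis

ir = np.full((4, 4), 0.8)    # bright (salient) infrared layer
vis = np.full((4, 4), 0.2)   # dimmer visible layer
w = np.full((4, 4), 0.5)     # equal weighting everywhere
fused = fuse_with_weight_maps(ir, vis, w)
```

    In the full method this combination is applied per decomposition level, so salient infrared targets dominate where their weights are high while visible-band detail survives elsewhere.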

  20. Lagrangian model of nitrogen kinetics in the Chattahoochee River

    USGS Publications Warehouse

    Jobson, H.E.

    1987-01-01

    A Lagrangian reference frame is used to solve the convection-dispersion equation and interpret water-quality data obtained from the Chattahoochee River. The model was calibrated using unsteady concentrations of organic nitrogen, ammonia, and nitrite plus nitrate obtained during June 1977 and verified using data obtained during August 1976. Reaction kinetics of the cascade type are shown to provide a reasonable description of the nitrogen-species processes in the Chattahoochee River. The conceptual model is easy to visualize in the physical sense, and the output includes information that is not easily determined from an Eulerian approach but which is very helpful in model calibration and data interpretation. For example, the model output allows one to determine which data are of most value in model calibration or verification.
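    Cascade-type kinetics means each nitrogen species decays into the next by first-order reactions. A schematic forward-Euler integration for one Lagrangian parcel (rate constants are illustrative, not the calibrated Chattahoochee values):

```python
def cascade_kinetics(c0, k1, k2, t, dt=0.01):
    """First-order cascade: organic N -(k1)-> ammonia -(k2)-> NO2+NO3,
    integrated by forward Euler over a parcel's travel time t."""
    org, amm, nox = c0
    for _ in range(int(t / dt)):
        r1 = k1 * org   # organic N decays to ammonia
        r2 = k2 * amm   # ammonia oxidizes to nitrite plus nitrate
        org -= r1 * dt
        amm += (r1 - r2) * dt
        nox += r2 * dt
    return org, amm, nox

# Illustrative (uncalibrated) rate constants; concentrations in mg/L.
org, amm, nox = cascade_kinetics((1.0, 0.0, 0.0), k1=0.5, k2=0.3, t=2.0)
```

    Because each transfer subtracts from one species exactly what it adds to the next, total nitrogen is conserved by construction, which mirrors the appeal of the cascade picture in the abstract.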

  1. What Drives Memory-Driven Attentional Capture? The Effects of Memory Type, Display Type, and Search Type

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.

    2009-01-01

    An important question is whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. Some past research has indicated that they do: Singleton distractors interfered more strongly with a visual search task when they…

  2. Supporting Visual Literacy in the School Library Media Center: Developmental, Socio-Cultural, and Experiential Considerations and Scenarios

    ERIC Educational Resources Information Center

    Cooper, Linda Z.

    2008-01-01

    Children are natural visual learners--they have been absorbing information visually since birth. They welcome opportunities to learn via images as well as to generate visual information themselves, and these opportunities present themselves every day. The importance of visual literacy can be conveyed through conversations and the teachable moment,…

  3. Information processing in the primate visual system - An integrated systems perspective

    NASA Technical Reports Server (NTRS)

    Van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.

    1992-01-01

    The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.

  4. Information Visualization in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Kwak, Dochan (Technical Monitor)

    2001-01-01

    Virtual environments provide a natural setting for a wide range of information visualization applications, particularly when the information to be visualized is defined on a three-dimensional domain (Bryson, 1996). This chapter provides an overview of the issues that arise when designing and implementing an information visualization application in a virtual environment. Many design issues that arise, such as those of display and user tracking, are common to any application of virtual environments. In this chapter we focus on those issues that are special to information visualization applications, as issues of wider concern are addressed elsewhere in this book.

  5. Processing reafferent and exafferent visual information for action and perception.

    PubMed

    Reichenbach, Alexandra; Diedrichsen, Jörn

    2015-01-01

    A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were made either by watching the visual scene without moving or simultaneously with the reaching task, such that in the latter case the perceptual processing stream could also profit from the specialized processing of reafferent information. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than that of the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

  6. Guidance of attention by information held in working memory.

    PubMed

    Calleja, Marissa Ortiz; Rich, Anina N

    2013-05-01

    Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object's image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object's image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.

  7. Perceived visual informativeness (PVI): construct and scale development to assess visual information in printed materials.

    PubMed

    King, Andy J; Jensen, Jakob D; Davis, LaShara A; Carcioppolo, Nick

    2014-01-01

    There is a paucity of research on the visual images used in health communication messages and campaign materials. Even though many studies suggest further investigation of these visual messages and their features, few studies provide specific constructs or assessment tools for evaluating the characteristics of visual messages in health communication contexts. The authors conducted 2 studies to validate a measure of perceived visual informativeness (PVI), a message construct assessing visual messages presenting statistical or indexical information. In Study 1, a 7-item scale was created that demonstrated good internal reliability (α = .91), as well as convergent and divergent validity with related message constructs such as perceived message quality, perceived informativeness, and perceived attractiveness. PVI also converged with a preference for visual learning but was unrelated to a person's actual vision ability. In addition, PVI exhibited concurrent validity with a number of important constructs including perceived message effectiveness, decisional satisfaction, and three key behavioral predictors from public health theory: perceived benefits, perceived barriers, and self-efficacy. Study 2 provided further evidence that PVI is an internally reliable measure and demonstrated that PVI is a modifiable message feature that can be tested in future experimental work. PVI provides an initial step to assist in the evaluation and testing of visual messages in campaign and intervention materials promoting informed decision making and behavior change.
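    The internal reliability reported for the 7-item scale (α = .91) is Cronbach's alpha. A small sketch of the standard formula, applied to simulated 7-item response data (all values hypothetical, purely to illustrate the computation):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Simulated data: highly consistent 7-item responses (shared latent score
# plus small noise) versus unrelated random responses.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
consistent = np.repeat(base, 7, axis=1) + 0.1 * rng.normal(size=(200, 7))
noise = rng.normal(size=(200, 7))  # unrelated items -> alpha near zero
```

    Consistent items drive alpha toward 1, while independent items drive it toward 0, which is why α = .91 is read as good internal reliability.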

  8. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria - a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are also discussed. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described.
This comparative study discusses the advantages and disadvantages of these three approaches and their integration in multiple domains, such as web-based 3D city modelling, tourism and architectural 3D visualization. It was concluded that image-based modelling and panoramic visualisation are simple, fast and effective techniques suitable for simultaneous virtual representation of many objects. However, additional measurements or CAD information will be beneficial for obtaining higher accuracy.

  9. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  10. The informativity of sound modulates crossmodal facilitation of visual discrimination: an fMRI study.

    PubMed

    Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin

    2017-01-18

    Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced the uncertainty of the timing of the visual display and improved perceptual responses (informative sound). However, the neural mechanism by which the informativity of sound affects crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of informativity of sound in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed activation enhancements induced by sound informativity in regions including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that activity in the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli that was learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.

  11. Visual and haptic integration in the estimation of softness of deformable objects

    PubMed Central

    Cellini, Cristiano; Kaim, Lukas; Drewing, Knut

    2013-01-01

    Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme. PMID:25165510
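    The "optimal integration" benchmark this abstract tests against is the standard maximum-likelihood cue-combination model, in which each sense is weighted by its reliability (inverse variance) and the fused estimate is more reliable than either cue alone. A minimal sketch, with hypothetical single-cue discrimination thresholds:

```python
def optimal_integration(sigma_v, sigma_h):
    """Maximum-likelihood cue combination: each sense is weighted by its
    reliability r = 1/sigma^2; the fused estimate has variance 1/(r_v + r_h)."""
    r_v, r_h = 1.0 / sigma_v**2, 1.0 / sigma_h**2
    w_v = r_v / (r_v + r_h)                 # predicted visual weight
    sigma_vh = (1.0 / (r_v + r_h)) ** 0.5   # predicted bisensory threshold
    return w_v, sigma_vh

# Hypothetical thresholds: haptics twice as precise as vision.
w_v, sigma_vh = optimal_integration(sigma_v=2.0, sigma_h=1.0)
```

    An observed visual weight above this prediction, together with bisensory judgments less reliable than sigma_vh, is the pattern the authors interpret as a vision-biased, possibly fixed weighting scheme.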

  12. Mandarin Visual Speech Information

    ERIC Educational Resources Information Center

    Chen, Trevor H.

    2010-01-01

    While the auditory-only aspects of Mandarin speech are heavily-researched and well-known in the field, this dissertation addresses its lesser-known aspects: The visual and audio-visual perception of Mandarin segmental information and lexical-tone information. Chapter II of this dissertation focuses on the audiovisual perception of Mandarin…

  13. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.

  14. Decoding information about dynamically occluded objects in visual cortex

    PubMed Central

    Erlikhman, Gennady; Caplovitz, Gideon P.

    2016-01-01

    During dynamic occlusion, an object passes behind an occluding surface and then later reappears. Even when completely occluded from view, such objects are experienced as continuing to exist or persist behind the occluder, even though they are no longer visible. The contents and neural basis of this persistent representation remain poorly understood. Questions remain as to whether there is information maintained about the object itself (i.e. its shape or identity) or, non-object-specific information such as its position or velocity as it is tracked behind an occluder as well as which areas of visual cortex represent such information. Recent studies have found that early visual cortex is activated by “invisible” objects during visual imagery and by unstimulated regions along the path of apparent motion, suggesting that some properties of dynamically occluded objects may also be neurally represented in early visual cortex. We applied functional magnetic resonance imaging in human subjects to examine the representation of information within visual cortex during dynamic occlusion. For gradually occluded, but not for instantly disappearing objects, there was an increase in activity in early visual cortex (V1, V2, and V3). This activity was spatially-specific, corresponding to the occluded location in the visual field. However, the activity did not encode enough information about object identity to discriminate between different kinds of occluded objects (circles vs. stars) using MVPA. In contrast, object identity could be decoded in spatially-specific subregions of higher-order, topographically organized areas such as ventral, lateral, and temporal occipital areas (VO, LO, and TO) as well as the functionally defined LOC and hMT+. These results suggest that early visual cortex may represent the dynamically occluded object’s position or motion path, while later visual areas represent object-specific information. PMID:27663987
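    The MVPA decoding reported here asks whether multi-voxel patterns discriminate the occluded object categories (circles vs. stars). A purely illustrative numpy sketch on simulated voxel patterns, using leave-one-out nearest-centroid classification (real MVPA pipelines typically use linear classifiers and run-wise cross-validation; nothing here reproduces the study's analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated voxel patterns: 20 trials x 50 voxels per condition, built from a
# condition-specific mean pattern buried in noise (all values hypothetical).
signal = {"circle": rng.normal(size=50), "star": rng.normal(size=50)}
trials = {c: m + rng.normal(scale=2.0, size=(20, 50)) for c, m in signal.items()}

def loo_nearest_centroid_accuracy(trials):
    """Leave-one-out decoding: classify each held-out trial by the nearest
    condition centroid (Euclidean), recomputing the true-condition centroid
    without the held-out trial."""
    labels = list(trials)
    correct = total = 0
    for true_label in labels:
        data = trials[true_label]
        for i in range(len(data)):
            centroids = {}
            for c in labels:
                rows = np.delete(trials[c], i, axis=0) if c == true_label else trials[c]
                centroids[c] = rows.mean(axis=0)
            pred = min(labels, key=lambda c: np.linalg.norm(data[i] - centroids[c]))
            correct += pred == true_label
            total += 1
    return correct / total
```

    Above-chance accuracy in a region (here, well above 50%) is the evidence that the region's patterns carry object-identity information.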

  15. Linear programming model to construct phylogenetic network for 16S rRNA sequences of photosynthetic organisms and influenza viruses.

    PubMed

    Mathur, Rinku; Adlakha, Neeru

    2014-06-01

    Phylogenetic trees give information about the vertical relationships of ancestors and descendants, but phylogenetic networks are used to visualize the horizontal relationships among different organisms. In order to predict reticulate events there is a need to construct phylogenetic networks. Here, a Linear Programming (LP) model has been developed for the construction of phylogenetic networks. The model is validated using data sets of chloroplast 16S rRNA sequences of photosynthetic organisms and of Influenza A/H5N1 viruses. The results obtained are in agreement with those of earlier researchers.

  16. Effects of body lean and visual information on the equilibrium maintenance during stance.

    PubMed

    Duarte, Marcos; Zatsiorsky, Vladimir M

    2002-09-01

    Maintenance of equilibrium was tested in conditions when humans assume different leaning postures during upright standing. Subjects ( n=11) stood in 13 different body postures specified by visual center of pressure (COP) targets within their base of support (BOS). Different types of visual information were tested: continuous presentation of visual target, no vision after target presentation, and with simultaneous visual feedback of the COP. The following variables were used to describe the equilibrium maintenance: the mean of the COP position, the area of the ellipse covering the COP sway, and the resultant median frequency of the power spectral density of the COP displacement. The variability of the COP displacement, quantified by the COP area variable, increased when subjects occupied leaning postures, irrespective of the kind of visual information provided. This variability also increased when vision was removed in relation to when vision was present. Without vision, drifts in the COP data were observed which were larger for COP targets farther away from the neutral position. When COP feedback was given in addition to the visual target, the postural control system did not control stance better than in the condition with only visual information. These results indicate that the visual information is used by the postural control system at both short and long time scales.
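    Two of the COP measures named here, the area of the ellipse covering the COP sway and the median frequency of the power spectral density, can be sketched as below. The 95% chi-square scaling of the covariance ellipse is one common convention, not necessarily the authors' exact procedure:

```python
import numpy as np

def sway_area_95(cop_xy):
    """Area of the 95% prediction ellipse of 2-D COP data, from the
    eigenvalues of the sample covariance matrix (chi-square scaling)."""
    cov = np.cov(np.asarray(cop_xy, dtype=float).T)
    eig_small, eig_large = np.linalg.eigvalsh(cov)
    chi2_95 = 5.991  # chi-square critical value, 2 dof, p = .95
    return np.pi * chi2_95 * np.sqrt(eig_small * eig_large)

def median_frequency(signal, fs):
    """Frequency below which half of the signal's spectral power lies."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()
    power = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    cum = np.cumsum(power)
    return freqs[np.searchsorted(cum, 0.5 * cum[-1])]
```

    A larger ellipse area indicates more variable sway (as in the leaning and no-vision conditions), while shifts in median frequency summarize how quickly the COP oscillates.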

  17. Attachment affects social information processing: Specific electrophysiological effects of maternal stimuli.

    PubMed

    Wu, Lili; Gu, Ruolei; Zhang, Jianxin

    2016-01-01

    Attachment is critical to each individual. It affects the cognitive-affective processing of social information. The present study examines how attachment affects the processing of social information, specifically maternal information. We assessed the behavioral and electrophysiological responses to maternal information (compared to non-specific others) in a Go/No-go Association Task (GNAT) with 22 participants. The results illustrated that attachment affected maternal information processing during three sequential stages. First, attachment affected visual perception, reflected by enhanced P100 and N170 components elicited by maternal information as compared to information about others. Second, maternal stimuli captured more attentional resources than others, reflected by faster behavioral responses to maternal information and larger P200 and P300 components. Finally, the mother was evaluated positively, reflected by a shorter P300 latency in the mother + good condition as compared to the mother + bad condition. These findings indicate that the processing of attachment-relevant information is neurologically differentiated from that of other types of social information, from an early stage of perceptual processing to late high-level processing.

  18. Can Visualizing Document Space Improve Users' Information Foraging?

    ERIC Educational Resources Information Center

    Song, Min

    1998-01-01

    This study shows how users access relevant information in a visualized document space and determine whether BiblioMapper, a visualization tool, strengthens an information retrieval (IR) system and makes it more usable. BiblioMapper, developed for a CISI collection, was evaluated by accuracy, time, and user satisfaction. Users' navigation…

  19. Assessment of visual communication by information theory

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.

    1994-01-01

    This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

  20. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work should further help improve current editing systems and identifies a need for future editing systems for 3D-TV, e.g., live editing and real-time alignment of visual information into 3D footage.

  1. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear.

    PubMed

    Willems, Roel M; Clevis, Krien; Hagoort, Peter

    2011-09-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.

  2. Multi-Voxel Decoding and the Topography of Maintained Information During Visual Working Memory

    PubMed Central

    Lee, Sue-Hyun; Baker, Chris I.

    2016-01-01

    The ability to maintain representations in the absence of external sensory stimulation, such as in working memory, is critical for guiding human behavior. Human functional brain imaging studies suggest that visual working memory can recruit a network of brain regions from visual to parietal to prefrontal cortex. In this review, we focus on the maintenance of representations during visual working memory and discuss factors determining the topography of those representations. In particular, we review recent studies employing multi-voxel pattern analysis (MVPA) that demonstrate decoding of the maintained content in visual cortex, providing support for a “sensory recruitment” model of visual working memory. However, there is some evidence that maintained content can also be decoded in areas outside of visual cortex, including parietal and frontal cortex. We suggest that the ability to maintain representations during working memory is a general property of cortex, not restricted to specific areas, and argue that it is important to consider the nature of the information that must be maintained. Such information-content is critically determined by the task and the recruitment of specific regions during visual working memory will be both task- and stimulus-dependent. Thus, the common finding of maintained information in visual, but not parietal or prefrontal, cortex may be more of a reflection of the need to maintain specific types of visual information and not of a privileged role of visual cortex in maintenance. PMID:26912997

  3. Audio-visual speech intelligibility benefits with bilateral cochlear implants when talker location varies.

    PubMed

    van Hoesel, Richard J M

    2015-04-01

    One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase that was spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode, and 3 dB for audition-alone. Comparison of bilateral performance for audio-visual and audition-alone showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations and indicates greater everyday speech benefits and improved cost-benefit than estimated to date.

  4. Amplitude interpretation and visualization of three-dimensional reflection data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enachescu, M.E.

    1994-07-01

    Digital recording and processing of modern three-dimensional surveys allow for relatively good preservation and correct spatial positioning of seismic reflection amplitude. A four-dimensional seismic reflection field matrix R (x,y,t,A), which can be computer visualized (i.e., real-time interactively rendered, edited, and animated), is now available to the interpreter. The amplitude contains encoded geological information indirectly related to lithologies and reservoir properties. The magnitude of the amplitude depends not only on the acoustic impedance contrast across a boundary, but is also strongly affected by the shape of the reflective boundary. This allows the interpreter to image subtle tectonic and structural elements not obvious on time-structure maps. The use of modern workstations allows for appropriate color coding of the total available amplitude range, routine on-screen time/amplitude extraction, and late display of horizon amplitude maps (horizon slices) or complex amplitude-structure spatial visualization. Stratigraphic, structural, tectonic, fluid distribution, and paleogeographic information are commonly obtained by displaying the amplitude variation A = A(x,y,t) associated with a particular reflective surface or seismic interval. As illustrated with several case histories, traditional structural and stratigraphic interpretation combined with a detailed amplitude study generally greatly enhances extraction of subsurface geological information from a reflection data volume. In the context of three-dimensional seismic surveys, the horizon amplitude map (horizon slice), amplitude attachment to structure, and "bright clouds" displays are very powerful tools available to the interpreter.

  5. Multilevel analysis of sports video sequences

    NASA Astrophysics Data System (ADS)

    Han, Jungong; Farin, Dirk; de With, Peter H. N.

    2006-01-01

    We propose a fully automatic and flexible framework for analysis and summarization of tennis broadcast video sequences, using visual features and specific game-context knowledge. Our framework can analyze a tennis video sequence at three levels, which provides a broad range of different analysis results. The proposed framework includes novel pixel-level and object-level tennis video processing algorithms, such as moving-player detection that takes both the color and the court (playing-field) information into account, and a player-position tracking algorithm based on a 3-D camera model. Additionally, we employ scene-level models for detecting events, like service, base-line rally, and net-approach, based on a number of real-world visual features. The system can summarize three forms of information: (1) all court-view playing frames in a game; (2) the moving trajectory and real speed of each player, as well as the relative position between the player and the court; (3) the semantic event segments in a game. The proposed framework is flexible in choosing the level of analysis that is desired. It is effective because the framework makes use of several visual cues obtained from the real-world domain to model important events like service, thereby increasing the accuracy of the scene-level analysis. The paper presents attractive experimental results highlighting the system efficiency and analysis capabilities.

  6. Extrafoveal preview benefit during free-viewing visual search in the monkey

    PubMed Central

    Krishna, B. Suresh; Ipata, Anna E.; Bisley, James W.; Gottlieb, Jacqueline; Goldberg, Michael E.

    2014-01-01

    Previous studies have shown that subjects require less time to process a stimulus at the fovea after a saccade if they have viewed the same stimulus in the periphery immediately prior to the saccade. This extrafoveal preview benefit indicates that information about the visual form of an extrafoveally viewed stimulus can be transferred across a saccade. Here, we extend these findings by demonstrating and characterizing a similar extrafoveal preview benefit in monkeys during a free-viewing visual search task. We trained two monkeys to report the orientation of a target among distractors by releasing one of two bars with their hand; monkeys were free to move their eyes during the task. Both monkeys took less time to indicate the orientation of the target after foveating it, when the target lay closer to the fovea during the previous fixation. An extrafoveal preview benefit emerged even if there was more than one intervening saccade between the preview and the target fixation, indicating that information about target identity could be transferred across more than one saccade and could be obtained even if the search target was not the goal of the next saccade. An extrafoveal preview benefit was also found for distractor stimuli. These results aid future physiological investigations of the extrafoveal preview benefit. PMID:24403392

  7. Designing human centered GeoVisualization application--the SanaViz--for telehealth users: a case study.

    PubMed

    Joshi, Ashish; de Araujo Novaes, Magdala; Machiavelli, Josiane; Iyengar, Sriram; Vogler, Robert; Johnson, Craig; Zhang, Jiajie; Hsu, Chiehwen E

    2012-01-01

    Public health data is typically organized by geospatial unit. GeoVisualization (GeoVis) allows users to see information visually on a map. We examined telehealth users' perceptions of existing public health GeoVis applications and obtained users' feedback about features important for the design and development of the human centered GeoVis application "the SanaViz". We employed a cross-sectional study design using a mixed methods approach for this pilot study. Twenty users involved with the NUTES telehealth center at Federal University of Pernambuco (UFPE), Recife, Brazil were enrolled. Open and closed ended questionnaires were used to gather data. We performed audio recording for the interviews. Information gathered included socio-demographics, prior spatial skills, and perceptions towards the use of GeoVis to evaluate telehealth services. Card sorting and sketching methods were employed. Univariate analysis was performed for the continuous and categorical variables. Qualitative analysis was performed for open ended questions. Existing public health GeoVis applications were found difficult to use. The interaction features zooming, linking, and brushing, and the representation features Google Maps, tables, and bar charts emerged as the most preferred GeoVis features. Early involvement of users is essential to identify the features necessary to be part of the human centered GeoVis application "the SanaViz".

  8. Locating Temporal Functional Dynamics of Visual Short-Term Memory Binding using Graph Modular Dirichlet Energy

    NASA Astrophysics Data System (ADS)

    Smith, Keith; Ricaud, Benjamin; Shahid, Nauman; Rhodes, Stephen; Starr, John M.; Ibáñez, Augustin; Parra, Mario A.; Escudero, Javier; Vandergheynst, Pierre

    2017-02-01

    Visual short-term memory binding tasks are a promising early marker for Alzheimer’s disease (AD). To uncover functional deficits of AD in these tasks it is meaningful to first study unimpaired brain function. Electroencephalogram recordings were obtained from encoding and maintenance periods of tasks performed by healthy young volunteers. We probe the task’s transient physiological underpinnings by contrasting shape only (Shape) and shape-colour binding (Bind) conditions, displayed in the left and right sides of the screen, separately. Particularly, we introduce and implement a novel technique named Modular Dirichlet Energy (MDE) which allows robust and flexible analysis of the functional network with unprecedented temporal precision. We find that connectivity in the Bind condition is less integrated with the global network than in the Shape condition in occipital and frontal modules during the encoding period of the right screen condition. Using MDE we are able to discern driving effects in the occipital module between 100-140 ms, coinciding with the P100 visually evoked potential, followed by a driving effect in the frontal module between 140-180 ms, suggesting that the differences found constitute an information processing difference between these modules. This provides temporally precise information over a heterogeneous population in promising tasks for the detection of AD.
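    For orientation, the quantity underlying the paper's Modular Dirichlet Energy is the standard graph Dirichlet energy E(x) = x'Lx, which is small for signals that vary smoothly over the network. The sketch below illustrates only this standard definition on a toy graph, not the authors' modular formulation or their EEG pipeline.

    ```python
    import numpy as np

    def dirichlet_energy(W, x):
        """Dirichlet energy x' L x of signal x on a graph with adjacency W."""
        L = np.diag(W.sum(axis=1)) - W  # combinatorial graph Laplacian
        return float(x @ L @ x)

    # A smooth signal on a 4-node path graph has much lower energy
    # than a jagged one, since the energy sums w_ij * (x_i - x_j)^2.
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    smooth = np.array([1.0, 1.1, 1.2, 1.3])
    jagged = np.array([1.0, -1.0, 1.0, -1.0])
    print(dirichlet_energy(W, smooth))  # 0.03
    print(dirichlet_energy(W, jagged))  # 12.0
    ```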

  9. Are face representations depth cue invariant?

    PubMed

    Dehmoobadsharifabadi, Armita; Farivar, Reza

    2016-06-01

    The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations-representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found robust face identity aftereffect in both experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.

  10. Social media and its dual use in biopreparedness: communication and visualization tools in an animal bioterrorism incident.

    PubMed

    Sjöberg, Elisabeth; Barker, Gary C; Landgren, Jonas; Griberg, Isaac; Skiby, Jeffrey E; Tubbin, Anna; von Stapelmohr, Anne; Härenstam, Malin; Jansson, Mikael; Knutsson, Rickard

    2013-09-01

    This article focuses on social media and interactive challenges for emergency organizations during a bioterrorism or agroterrorism incident, and it outlines the dual-use dilemma of social media. Attackers or terrorists can use social media as their modus operandi, and defenders, including emergency organizations in law enforcement and public and animal health, can use it for peaceful purposes. To get a better understanding of the uses of social media in these situations, a workshop was arranged in Stockholm, Sweden, to raise awareness about social media and animal bioterrorism threats. Fifty-six experts and crisis communicators from international and national organizations participated. As a result of the workshop, it was concluded that emergency organizations can collect valuable information and monitor social media before, during, and after an outbreak. In order to make use of interactive communication to obtain collective intelligence from the public, emergency organizations must adapt to social networking technologies, requiring multidisciplinary knowledge in the fields of information, communication, IT, and biopreparedness. Social network messaging during a disease outbreak can be visualized in stream graphs and networks showing clusters of Twitter and Facebook users. The visualization of social media can be an important preparedness tool in the response to bioterrorism and agroterrorism.

  11. Semantic integration of differently asynchronous audio-visual information in videos of real-world events in cognitive processing: an ERP study.

    PubMed

    Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang

    2011-07-01

    In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of differently asynchronous audio-visual information in cognitive processing using the event-related potential (ERP) method. Subjects were presented with videos of real-world events in which the auditory and visual information are temporally asynchronous. When the critical action was prior to the sound, sounds incongruous with the preceding critical actions elicited an N400 effect when compared to the congruous condition. This result demonstrates that the semantic contextual integration indexed by N400 also applies to cognitive processing of multisensory information. In addition, the N400 effect is early in latency when contrasted with other visually induced N400 studies, showing that cross-modal information is facilitated in time when contrasted with visual information in isolation. When the sound was prior to the critical action, a larger late positive wave was observed under the incongruous condition compared to the congruous condition. P600 might represent a reanalysis process, in which the mismatch between the critical action and the preceding sound was evaluated. This shows that environmental sound may affect the cognitive processing of a visual event. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  12. [Design and implementation of online statistical analysis function in information system of air pollution and health impact monitoring].

    PubMed

    Lü, Yiran; Hao, Shuxin; Zhang, Guoqing; Liu, Jie; Liu, Yue; Xu, Dongqun

    2018-01-01

    To implement an online statistical analysis function in the information system for air pollution and health impact monitoring, so that data analysis results can be obtained in real time. Descriptive statistics, time-series analysis, and multivariate regression analysis were implemented online on top of the database software using SQL and visualization tools. The system generates basic statistical tables and summary tables of air pollution exposure and health impact data online; generates trend charts for each data component with interactive connections to the database; and generates export sheets that can be loaded directly into R, SAS, and SPSS. The information system for air pollution and health impact monitoring implements its statistical analysis function online and can provide real-time analysis results to its users.
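    The kind of SQL-driven descriptive statistics such a system generates online can be sketched with Python's standard sqlite3 module; the table and column names below are hypothetical, not taken from the monitoring system itself.

    ```python
    import sqlite3

    # In-memory database standing in for the monitoring system's backend.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE pm25 (city TEXT, value REAL)")
    conn.executemany("INSERT INTO pm25 VALUES (?, ?)",
                     [("A", 35.0), ("A", 45.0), ("B", 80.0), ("B", 60.0)])

    # One SQL query yields a per-group summary table (count, mean, min, max).
    rows = conn.execute(
        "SELECT city, COUNT(*), AVG(value), MIN(value), MAX(value) "
        "FROM pm25 GROUP BY city ORDER BY city").fetchall()
    print(rows)  # [('A', 2, 40.0, 35.0, 45.0), ('B', 2, 70.0, 60.0, 80.0)]
    ```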

  13. A Prototype Search Toolkit

    NASA Astrophysics Data System (ADS)

    Knepper, Margaret M.; Fox, Kevin L.; Frieder, Ophir

    Information overload is now a reality. We no longer worry about obtaining a sufficient volume of data; we now are concerned with sifting and understanding the massive volumes of data available to us. To do so, we developed an integrated information processing toolkit that provides the user with a variety of ways to view their information. The views include keyword search results, a domain specific ranking system that allows for adaptively capturing topic vocabularies to customize and focus the search results, navigation pages for browsing, and a geospatial and temporal component to visualize results in time and space, and provide “what if” scenario playing. Integrating the information from different tools and sources gives the user additional information and another way to analyze the data. An example of the integration is illustrated on reports of the avian influenza (bird flu).

  14. The Ecological Approach to Text Visualization.

    ERIC Educational Resources Information Center

    Wise, James A.

    1999-01-01

    Presents both theoretical and technical bases on which to build a "science of text visualization." The Spatial Paradigm for Information Retrieval and Exploration (SPIRE) text-visualization system, which images information from free-text documents as natural terrains, serves as an example of the "ecological approach" in its visual metaphor, its…

  15. Visual homing with a pan-tilt based stereo camera

    NASA Astrophysics Data System (ADS)

    Nirmal, Paramesh; Lyons, Damian M.

    2013-01-01

    Visual homing is a navigation method based on comparing a stored image of the goal location and the current image (current view) to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining the distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale change information (Homing in Scale Space, HiSS) from SIFT. HiSS uses SIFT feature scale change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the keypoint vector to include a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.
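    The core idea of augmenting matched keypoints with a stereo depth value z can be illustrated with a toy computation: matched (x, y, z) features in the goal and current views yield a 3-D displacement, and hence a direction and distance to the goal. The simple averaging below is only a sketch of that idea, not the authors' algorithm.

    ```python
    import numpy as np

    def homing_vector(goal_pts, current_pts):
        """Mean 3-D displacement from the current view toward the goal.

        goal_pts, current_pts: (N, 3) arrays of matched (x, y, z) keypoints,
        in a shared metric frame (an assumption of this toy sketch)."""
        return (np.asarray(goal_pts) - np.asarray(current_pts)).mean(axis=0)

    # Two matched keypoints, seen at the goal and from the current pose.
    goal = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.5]])
    curr = np.array([[0.5, 0.0, 3.0], [1.5, 0.0, 3.5]])
    v = homing_vector(goal, curr)   # direction to move
    dist = np.linalg.norm(v)        # distance estimate to the goal
    print(v, dist)
    ```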

  16. iTTVis: Interactive Visualization of Table Tennis Data.

    PubMed

    Wu, Yingcai; Lan, Ji; Shu, Xinhuan; Ji, Chenyang; Zhao, Kejian; Wang, Jiachen; Zhang, Hui

    2018-01-01

    The rapid development of information technology paved the way for the recording of fine-grained data, such as stroke techniques and stroke placements, during a table tennis match. This data recording creates opportunities to analyze and evaluate matches from new perspectives. Nevertheless, the increasingly complex data poses a significant challenge to make sense of and gain insights into. Analysts usually employ tedious and cumbersome methods which are limited to watching videos and reading statistical tables. However, existing sports visualization methods cannot be applied to visualizing table tennis competitions due to different competition rules and particular data attributes. In this work, we collaborate with data analysts to understand and characterize the sophisticated domain problem of analysis of table tennis data. We propose iTTVis, a novel interactive table tennis visualization system, which to our knowledge, is the first visual analysis system for analyzing and exploring table tennis data. iTTVis provides a holistic visualization of an entire match from three main perspectives, namely, time-oriented, statistical, and tactical analyses. The proposed system with several well-coordinated views not only supports correlation identification through statistics and pattern detection of tactics with a score timeline but also allows cross analysis to gain insights. Data analysts have obtained several new insights by using iTTVis. The effectiveness and usability of the proposed system are demonstrated with four case studies.

  17. A solution to the online guidance problem for targeted reaches: proportional rate control using relative disparity tau.

    PubMed

    Anderson, Joe; Bingham, Geoffrey P

    2010-09-01

    We provide a solution to a major problem in visually guided reaching. Research has shown that binocular vision plays an important role in the online visual guidance of reaching, but the visual information and strategy used to guide a reach remains unknown. We propose a new theory of visual guidance of reaching including a new information variable, tau(alpha) (relative disparity tau), and a novel control strategy that allows actors to guide their reach trajectories visually by maintaining a constant proportion between tau(alpha) and its rate of change. The dynamical model couples the information to the reaching movement to generate trajectories characteristic of human reaching. We tested the theory in two experiments in which participants reached under conditions of darkness to guide a visible point either on a sliding apparatus or on their finger to a point-light target in depth. Slider apparatus controlled for a simple mapping from visual to proprioceptive space. When reaching with their finger, participants were forced, by perturbation of visual information used for feedforward control, to use online control with only binocular disparity-based information for guidance. Statistical analyses of trajectories strongly supported the theory. Simulations of the model were compared statistically to actual reaching trajectories. The results supported the theory, showing that tau(alpha) provides a source of information for the control of visually guided reaching and that participants use this information in a proportional rate control strategy.
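    To make the tau variable concrete: for a gap x closed at rate x-dot, tau = x / x-dot estimates the time remaining until the gap closes. The sketch below illustrates only this generic property for a constant-velocity approach; it is not the authors' relative disparity variable tau(alpha) or their proportional rate control law.

    ```python
    import numpy as np

    def tau(gap, gap_rate):
        """Time-to-closure estimate of a gap closing at gap_rate."""
        return gap / gap_rate

    # Constant-velocity reach: a 0.4 m gap closed at 0.2 m/s.
    dt, v = 0.01, 0.2
    times = np.arange(0.0, 2.0, dt)
    gaps = 0.4 - v * times
    taus = tau(gaps, v)   # remaining time-to-contact at each moment
    print(taus[0])        # 2.0 s at the start of the reach
    ```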

  18. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.

  19. Availability of point-of-purchase nutrition information at a fast-food restaurant.

    PubMed

    Wootan, Margo G; Osborn, Melissa; Malloy, Claudia J

    2006-12-01

    Given the link between eating out, poor diets, and obesity, we assessed the availability of point-of-purchase nutrition information at the largest fast-food restaurant in the U.S., McDonald's. In August 2004, we visited 29 of 33 (88%) of the McDonald's outlets in Washington, DC and visually inspected the premises, as well as asked cashiers or restaurant managers whether they had nutrition information available in the restaurant. In Washington, DC, 59% of McDonald's outlets provided in-store nutrition information for the majority of their standard menu items. In 62% of the restaurants, it was necessary to ask two or more employees in order to obtain a copy of that information. We found that even at the largest chain restaurant in the country, nutrition information at the point of decision-making is often difficult to find or completely absent.

  20. Early visual analysis tool using magnetoencephalography for treatment and recovery of neuronal dysfunction.

    PubMed

    Rasheed, Waqas; Neoh, Yee Yik; Bin Hamid, Nor Hisham; Reza, Faruque; Idris, Zamzuri; Tang, Tong Boon

    2017-10-01

    Functional neuroimaging modalities play an important role in deciding the diagnosis and course of treatment of neuronal dysfunction and degeneration. This article presents an analytical tool with visualization by exploiting the strengths of the MEG (magnetoencephalographic) neuroimaging technique. The tool automates MEG data import (in tSSS format), channel information extraction, time/frequency decomposition, and circular graph visualization (connectogram) for simple result inspection. For advanced users, the tool also provides magnitude squared coherence (MSC) values allowing personalized threshold levels, and the computation of a default model from MEG data of a control population. The default model obtained from healthy population data serves as a useful benchmark to diagnose and monitor neuronal recovery during treatment. The proposed tool further provides optional labels with international 10-10 system nomenclature in order to facilitate comparison studies with EEG (electroencephalography) sensor space. Potential applications in epilepsy and traumatic brain injury studies are also discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
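    Magnitude squared coherence itself can be sketched with SciPy's standard coherence routine: two channels sharing a rhythm show high MSC at that frequency. The sampling rate, channel construction, and threshold below are illustrative, not the tool's actual MEG pipeline.

    ```python
    import numpy as np
    from scipy.signal import coherence

    rng = np.random.default_rng(0)
    fs, n = 250.0, 5000
    t = np.arange(n) / fs

    # Two synthetic "channels" sharing a 10 Hz rhythm plus independent noise.
    shared = np.sin(2 * np.pi * 10 * t)
    ch1 = shared + 0.5 * rng.standard_normal(n)
    ch2 = shared + 0.5 * rng.standard_normal(n)

    # Welch-based magnitude squared coherence between the channels.
    f, msc = coherence(ch1, ch2, fs=fs, nperseg=512)
    peak = f[np.argmax(msc)]
    print(peak)  # close to 10 Hz, where the channels share a rhythm
    ```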

  1. Does visual short-term memory have a high-capacity stage?

    PubMed

    Matsukura, Michi; Hollingworth, Andrew

    2011-12-01

    Visual short-term memory (VSTM) has long been considered a durable, limited-capacity system for the brief retention of visual information. However, a recent work by Sligte et al. (Plos One 3:e1699, 2008) reported that, relatively early after the removal of a memory array, a cue allowed participants to access a fragile, high-capacity stage of VSTM that is distinct from iconic memory. In the present study, we examined whether this stage division is warranted by attempting to corroborate the existence of an early, high-capacity form of VSTM. The results of four experiments did not support Sligte et al.'s claim, since we did not obtain evidence for VSTM retention that exceeded traditional estimates of capacity. However, performance approaching that observed in Sligte et al. can be achieved through extensive practice, providing a clear explanation for their findings. Our evidence favors the standard view of VSTM as a limited-capacity system that maintains a few object representations in a relatively durable form.

  2. Oscillatory network with self-organized dynamical connections for synchronization-based image segmentation.

    PubMed

    Kuzmina, Margarita; Manykin, Eduard; Surina, Irina

    2004-01-01

    An oscillatory network of columnar architecture located in a 3D spatial lattice was recently designed by the authors as an oscillatory model of the brain's visual cortex. Each network oscillator is a relaxational neural oscillator with internal dynamics tunable by visual image characteristics - local brightness and elementary bar orientation. It is able to demonstrate either an activity state (stable undamped oscillations) or "silence" (quickly damped oscillations). Self-organized nonlocal dynamical connections of oscillators depend on oscillator activity levels and on the orientations of cortical receptive fields. Network performance consists in transfer into a state of clusterized synchronization. At the current stage, grey-level image segmentation tasks are carried out by a 2D oscillatory network obtained as a limiting version of the source model. Owing to an added control of network coupling strength, the reduced 2D network provides synchronization-based image segmentation. New results on segmentation of brightness and texture images presented in the paper demonstrate accurate network performance and informative visualization of segmentation results, inherent in the model.

  3. A top-down manner-based DCNN architecture for semantic image segmentation.

    PubMed

    Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin

    2017-01-01

    Given their powerful feature representation for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of the visual mechanism, we conclude that DCNNs operating in a bottom-up manner are not enough, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentation results are substantially improved. We also quantitatively obtain about 2%-3% intersection over union (IoU) accuracy improvement on the PASCAL VOC 2011 and 2012 test sets.
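    The reported gains are in intersection over union (IoU), the standard segmentation accuracy metric; a minimal per-class computation on toy label maps might look like this:

    ```python
    import numpy as np

    def iou(pred, target, cls):
        """IoU of class `cls` between two integer label maps."""
        p, t = (pred == cls), (target == cls)
        union = np.logical_or(p, t).sum()
        # Convention: empty union (class absent from both maps) scores 1.0.
        return np.logical_and(p, t).sum() / union if union else 1.0

    pred   = np.array([[0, 1], [1, 1]])
    target = np.array([[0, 1], [1, 0]])
    print(iou(pred, target, 1))  # 2 overlapping pixels / 3 in union
    ```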

  4. Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.

    PubMed

    Souto, David; Kerzel, Dirk

    2013-02-06

    Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular combination of rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects whose rotational and translational motion was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition without a rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually-driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.
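    The congruent stimuli obey rolling without slipping, which ties translational speed to rotation rate by v = omega * r; a quick check of that kinematic constraint (the numbers are illustrative, not the study's stimulus parameters):

    ```python
    import math

    def rolling_translation_speed(omega, radius):
        """Translational speed of a ball rolling without slipping,
        given angular speed omega (rad/s) and radius (m)."""
        return omega * radius

    omega = 2 * math.pi   # one revolution per second
    radius = 0.1          # a 10 cm ball
    v = rolling_translation_speed(omega, radius)
    print(v)  # the congruent translational speed for this rotation rate
    ```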

  5. Lingual electrotactile stimulation as an alternative sensory feedback pathway for brain-computer interface applications

    NASA Astrophysics Data System (ADS)

    Wilson, J. Adam; Walton, Léo M.; Tyler, Mitch; Williams, Justin

    2012-08-01

    This article describes a new method of providing feedback during a brain-computer interface movement task using a non-invasive, high-resolution electrotactile vision substitution system. We compared the accuracy and movement times during a center-out cursor movement task, and found that the task performance with tactile feedback was comparable to visual feedback for 11 participants. These subjects were able to modulate the chosen BCI EEG features during both feedback modalities, indicating that the type of feedback chosen does not matter provided that the task information is clearly conveyed through the chosen medium. In addition, we tested a blind subject with the tactile feedback system, and found that the training time, accuracy, and movement times were indistinguishable from results obtained from subjects using visual feedback. We believe that BCI systems with alternative feedback pathways should be explored, allowing individuals with severe motor disabilities and accompanying reduced visual and sensory capabilities to effectively use a BCI.

  6. Microscope and spectacle: on the complexities of using new visual technologies to communicate about wildlife conservation.

    PubMed

    Verma, Audrey; van der Wal, René; Fischer, Anke

    2015-11-01

    Wildlife conservation-related organisations increasingly employ new visual technologies in their science communication and public engagement efforts. Here, we examine the use of such technologies for wildlife conservation campaigns. We obtained empirical data from four UK-based organisations through semi-structured interviews and participant observation. Visual technologies were used to provide the knowledge and generate the emotional responses perceived by organisations as being necessary for motivating a sense of caring about wildlife. We term these two aspects 'microscope' and 'spectacle', metaphorical concepts denoting the duality through which these technologies speak to both the cognitive and the emotional. As conservation relies on public support, organisations have to be seen to deliver information that is not only sufficiently detailed and scientifically credible but also spectacular enough to capture public interest. Our investigation showed that balancing science and entertainment is a difficult undertaking for wildlife-related organisations as there are perceived risks of contriving experiences of nature and obscuring conservation aims.

  7. 3D visualization software to analyze topological outcomes of topoisomerase reactions

    PubMed Central

    Darcy, I. K.; Scharein, R. G.; Stasiak, A.

    2008-01-01

    The action of various DNA topoisomerases frequently results in characteristic changes in DNA topology. Important information for understanding mechanistic details of action of these topoisomerases can be provided by investigating the knot types resulting from topoisomerase action on circular DNA forming a particular knot type. Depending on the topological bias of a given topoisomerase reaction, one observes different subsets of knotted products. To establish the character of topological bias, one needs to be aware of all possible topological outcomes of intersegmental passages occurring within a given knot type. However, it is not trivial to systematically enumerate topological outcomes of strand passage from a given knot type. We present here a 3D visualization software (TopoICE-X in KnotPlot) that incorporates topological analysis methods in order to visualize, for example, knots that can be obtained from a given knot by one intersegmental passage. The software has several other options for the topological analysis of mechanisms of action of various topoisomerases. PMID:18440983

  8. Modeling human comprehension of data visualizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie

This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.
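The core idea of a saliency model, predicting which features draw a viewer's attention, can be illustrated with a minimal center-surround contrast computation. This is a generic difference-of-Gaussians sketch, not the Data Visualization Saliency Model itself; all function names and parameter values here are illustrative:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with 1-D convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    pad = radius
    out = np.pad(img, pad, mode="reflect")  # reflect padding keeps edges sane
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out[pad:-pad, pad:-pad]

def saliency_map(img, sigma_center=1.0, sigma_surround=5.0):
    """Center-surround contrast: |fine scale - coarse scale| intensity."""
    s = np.abs(gaussian_blur(img, sigma_center) - gaussian_blur(img, sigma_surround))
    return s / (s.max() + 1e-12)  # normalize to [0, 1]

# A dark image with one bright blob: the blob should be the most salient region.
img = np.zeros((64, 64))
img[30:34, 40:44] = 1.0
sal = saliency_map(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)  # peak lies inside the blob
print(peak)
```

Real saliency models add color- and orientation-contrast channels and combine them across scales, but the center-surround principle above is the common core.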

  9. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

In recent years audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command-recognition system that uses audio-visual information. The system is intended to control the da Vinci laparoscopic robot. The audio signal is processed using the Mel Frequency Cepstral Coefficients (MFCC) parametrization method. In addition, features based on the points that define the outer contour of the mouth, according to the MPEG-4 standard, are used to extract the visual speech information.
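The MFCC front end mentioned above follows a standard pipeline: pre-emphasis, windowed framing, power spectrum, triangular mel filterbank, log compression, and a DCT. The sketch below is a minimal self-contained illustration of that pipeline; the frame sizes and filter counts are common defaults, not the paper's exact configuration:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_filters=26, n_ceps=13):
    # Pre-emphasis boosts high frequencies.
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Frame the signal with a Hamming window.
    n_frames = 1 + (len(sig) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = sig[idx] * np.hamming(frame_len)
    # Power spectrum of each frame.
    nfft = 512
    power = np.abs(np.fft.rfft(frames, nfft)) ** 2 / nfft
    # Triangular mel filterbank: filters equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((nfft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, nfft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)
    # DCT-II decorrelates the log filterbank energies; keep n_ceps coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_filters)))
    return log_energy @ dct.T

# One second of a 440 Hz tone as a toy input.
t = np.arange(16000) / 16000.0
feats = mfcc(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # (n_frames, 13)
```

In practice one would use a tested implementation (e.g. librosa or python_speech_features), but the steps above are what those libraries compute.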

  10. An Empirical Comparison of Visualization Tools To Assist Information Retrieval on the Web.

    ERIC Educational Resources Information Center

    Heo, Misook; Hirtle, Stephen C.

    2001-01-01

    Discusses problems with navigation in hypertext systems, including cognitive overload, and describes a study that tested information visualization techniques to see which best represented the underlying structure of Web space. Considers the effects of visualization techniques on user performance on information searching tasks and the effects of…

  11. Perception of Elementary Students of Visuals on the Web.

    ERIC Educational Resources Information Center

    El-Tigi, Manal A.; And Others

    The way information is visually designed and synthesized greatly affects how people understand and use that information. Increased use of the World Wide Web as a teaching tool makes it imperative to question how visual/verbal information presented via the Web can increase or restrict understanding. The purpose of this study was to examine…

  12. On the assessment of visual communication by information theory

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1993-01-01

    This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

  13. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.

    PubMed

    Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han

    2017-09-07

Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%.
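The role of the working distance becomes clear from simple pinhole-camera geometry: one pixel spans roughly (Z × p / f) mm on a surface at distance Z, for pixel pitch p and focal length f. The sketch below shows this conversion under that assumption; it is a generic geometric illustration, not the calibration procedure actually used in the paper, and all parameter values are made up:

```python
def crack_width_mm(n_pixels, working_distance_mm, focal_length_mm, pixel_pitch_mm):
    """Convert a crack width measured in pixels to millimetres using a
    pinhole-camera model: one pixel spans (Z * p / f) mm on the target
    surface at working distance Z."""
    mm_per_pixel = working_distance_mm * pixel_pitch_mm / focal_length_mm
    return n_pixels * mm_per_pixel

# Hypothetical example: a 3-pixel-wide crack, 500 mm working distance (from
# the ultrasonic sensor), 16 mm lens, 0.004 mm pixel pitch.
# Scale: 500 * 0.004 / 16 = 0.125 mm per pixel, so the crack is 0.375 mm wide.
w = crack_width_mm(3, 500.0, 16.0, 0.004)
print(round(w, 3))  # 0.375
```

This is why the on-board distance sensor matters: without Z, the pixel count alone cannot be turned into a physical width.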

  14. Early Auditory Evoked Potential Is Modulated by Selective Attention and Related to Individual Differences in Visual Working Memory Capacity

    PubMed Central

    Giuliano, Ryan J.; Karns, Christina M.; Neville, Helen J.; Hillyard, Steven A.

    2015-01-01

    A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual’s capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70–90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals. PMID:25000526

  15. Overview: The Design, Adoption, and Analysis of a Visual Document Mining Tool for Investigative Journalists.

    PubMed

    Brehmer, Matthew; Ingram, Stephen; Stray, Jonathan; Munzner, Tamara

    2014-12-01

    For an investigative journalist, a large collection of documents obtained from a Freedom of Information Act request or a leak is both a blessing and a curse: such material may contain multiple newsworthy stories, but it can be difficult and time consuming to find relevant documents. Standard text search is useful, but even if the search target is known it may not be possible to formulate an effective query. In addition, summarization is an important non-search task. We present Overview, an application for the systematic analysis of large document collections based on document clustering, visualization, and tagging. This work contributes to the small set of design studies which evaluate a visualization system "in the wild", and we report on six case studies where Overview was voluntarily used by self-initiated journalists to produce published stories. We find that the frequently-used language of "exploring" a document collection is both too vague and too narrow to capture how journalists actually used our application. Our iterative process, including multiple rounds of deployment and observations of real world usage, led to a much more specific characterization of tasks. We analyze and justify the visual encoding and interaction techniques used in Overview's design with respect to our final task abstractions, and propose generalizable lessons for visualization design methodology.

  16. The Chinese American Eye Study: Design and Methods

    PubMed Central

    Varma, Rohit; Hsu, Chunyi; Wang, Dandan; Torres, Mina; Azen, Stanley P.

    2016-01-01

Purpose To summarize the study design, operational strategies and procedures of the Chinese American Eye Study (CHES), a population-based assessment of the prevalence of visual impairment, ocular disease, and visual functioning in Chinese Americans. Methods This population-based, cross-sectional study included 4,570 Chinese, 50 years and older, residing in the city of Monterey Park, California. Each eligible participant completed a detailed interview and eye examination. The interview included an assessment of demographic, behavioral, and ocular risk factors and health-related and vision-related quality of life. The eye examination included measurements of visual acuity, intraocular pressure, visual fields, fundus and optic disc photography, a detailed anterior and posterior segment examination, and measurements of blood pressure, glycosylated hemoglobin levels, and blood glucose levels. Results The objectives of the CHES are to obtain prevalence estimates of visual impairment, refractive error, diabetic retinopathy, open-angle and angle-closure glaucoma, lens opacities, and age-related macular degeneration in Chinese Americans. In addition, outcomes include effect estimates for risk factors associated with eye diseases. Lastly, CHES will investigate the genetic determinants of myopia and glaucoma. Conclusion The CHES will provide information about the prevalence and risk factors of ocular diseases in one of the fastest growing minority groups in the United States. PMID:24044409

  17. Examining competing hypotheses for the effects of diagrams on recall for text.

    PubMed

    Ortegren, Francesca R; Serra, Michael J; England, Benjamin D

    2015-01-01

    Supplementing text-based learning materials with diagrams typically increases students' free recall and cued recall of the presented information. In the present experiments, we examined competing hypotheses for why this occurs. More specifically, although diagrams are visual, they also serve to repeat information from the text they accompany. Both visual presentation and repetition are known to aid students' recall of information. To examine to what extent diagrams aid recall because they are visual or repetitive (or both), we had college students in two experiments (n = 320) read a science text about how lightning storms develop before completing free-recall and cued-recall tests over the presented information. Between groups, we manipulated the format and repetition of target pieces of information in the study materials using a 2 (visual presentation of target information: diagrams present vs. diagrams absent) × 2 (repetition of target information: present vs. absent) between-participants factorial design. Repetition increased both the free recall and cued recall of target information, and this occurred regardless of whether that repetition was in the form of text or a diagram. In contrast, the visual presentation of information never aided free recall. Furthermore, visual presentation alone did not significantly aid cued recall when participants studied the materials once before the test (Experiment 1) but did when they studied the materials twice (Experiment 2). Taken together, the results of the present experiments demonstrate the important role of repetition (i.e., that diagrams repeat information from the text) over the visual nature of diagrams in producing the benefits of diagrams for recall.

  18. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  19. Preprocessing of emotional visual information in the human piriform cortex.

    PubMed

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study fills the gap relating to the question whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral and unpleasant) were primed by images of emotional facial expressions (happy, neutral and disgust). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  20. aGEM: an integrative system for analyzing spatial-temporal gene-expression information

    PubMed Central

    Jiménez-Lozano, Natalia; Segura, Joan; Macías, José Ramón; Vega, Juanjo; Carazo, José María

    2009-01-01

    Motivation: The work presented here describes the ‘anatomical Gene-Expression Mapping (aGEM)’ Platform, a development conceived to integrate phenotypic information with the spatial and temporal distributions of genes expressed in the mouse. The aGEM Platform has been built by extending the Distributed Annotation System (DAS) protocol, which was originally designed to share genome annotations over the WWW. DAS is a client-server system in which a single client integrates information from multiple distributed servers. Results: The aGEM Platform provides information to answer three main questions. (i) Which genes are expressed in a given mouse anatomical component? (ii) In which mouse anatomical structures are a given gene or set of genes expressed? And (iii) is there any correlation among these findings? Currently, this Platform includes several well-known mouse resources (EMAGE, GXD and GENSAT), hosting gene-expression data mostly obtained from in situ techniques together with a broad set of image-derived annotations. Availability: The Platform is optimized for Firefox 3.0 and it is accessed through a friendly and intuitive display: http://agem.cnb.csic.es Contact: natalia@cnb.csic.es Supplementary information: Supplementary data are available at http://bioweb.cnb.csic.es/VisualOmics/aGEM/home.html and http://bioweb.cnb.csic.es/VisualOmics/index_VO.html and Bioinformatics online. PMID:19592395

  1. How Animals Understand the Meaning of Indefinite Information from Environments?

    NASA Astrophysics Data System (ADS)

    Shimizu, H.; Yamaguchi, Y.

Animals, including human beings, have the ability to understand the meaning of indefinite information from their environments. Thanks to this ability, animals can behave flexibly in the face of environmental changes. Starting from the hypothesis that understanding of the input (Shannonian) information is based on the self-organization of a neuronal representation, that is, a spatio-temporal pattern constituted of coherent activities of neurons encoding a "figure" separated from the "background" encoded by incoherent activities, the conditions necessary for understanding indefinite information are discussed. The crucial conditions revealed are, first, that the neuronal system is incomplete or indefinite, in the sense that its rules for the self-organization of neuronal activities are completed only after the input of environmental information, and, second, that it has an additional system, the "self", which serves to relevantly self-organize dynamical "constraints" or "boundary conditions" for the self-organization of the representation. For the simultaneous self-organization of the relevant constraints and the representation, a global circulation of activities must be self-organized between these two kinds of neuronal systems. Moreover, for the performance of these functions, a specific kind of synergetic elements, "holon elements", is also necessary. By means of a neuronal model, the visual perception of indefinite input signals is demonstrated. The results obtained are consistent with those recently observed in the visual cortex of cats.

  2. Establishment and evolution of the Australian Inherited Retinal Disease Register and DNA Bank.

    PubMed

    De Roach, John N; McLaren, Terri L; Paterson, Rachel L; O'Brien, Emily C; Hoffmann, Ling; Mackey, David A; Hewitt, Alex W; Lamey, Tina M

    2013-07-01

Inherited retinal disease represents a significant cause of blindness and visual morbidity worldwide. With the development of emerging molecular technologies, accessible and well-governed repositories of data characterising inherited retinal disease patients are becoming increasingly important. This manuscript introduces such a repository. Participants were recruited from the Retina Australia membership, through the Royal Australian and New Zealand College of Ophthalmologists, and by recruitment of suitable patients attending the Sir Charles Gairdner Hospital visual electrophysiology clinic. Four thousand one hundred ninety-three participants were recruited. All participants were members of families in which the proband was diagnosed with an inherited retinal disease (excluding age-related macular degeneration). Clinical and family information was collected by interview with the participant and by examination of medical records. In 2001, we began collecting DNA from Western Australian participants; in 2009 this activity was extended Australia-wide. Genetic analysis results were stored in the register as they were obtained. The main outcome measurement was the number of DNA samples (with associated phenotypic information) collected from Australian inherited retinal disease-affected families. DNA was obtained from 2873 participants. Retinitis pigmentosa, Stargardt disease and Usher syndrome participants comprised 61.0%, 9.9% and 6.4% of the register, respectively. This resource is a valuable tool for investigating the aetiology of inherited retinal diseases. As new molecular technologies are translated into clinical applications, this well-governed repository of clinical and genetic information will become increasingly relevant for tasks such as identifying candidates for gene-specific clinical trials. © 2012 The Authors. Clinical and Experimental Ophthalmology © 2012 Royal Australian and New Zealand College of Ophthalmologists.

  3. Processing Of Visual Information In Primate Brains

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H.; Van Essen, David C.

    1991-01-01

    Report reviews and analyzes information-processing strategies and pathways in primate retina and visual cortex. Of interest both in biological fields and in such related computational fields as artificial neural networks. Focuses on data from macaque, which has superb visual system similar to that of humans. Authors stress concept of "good engineering" in understanding visual system.

  4. Enhanced Local Processing of Dynamic Visual Information in Autism: Evidence from Speed Discrimination

    ERIC Educational Resources Information Center

    Chen, Y.; Norton, D. J.; McBain, R.; Gold, J.; Frazier, J. A.; Coyle, J. T.

    2012-01-01

    An important issue for understanding visual perception in autism concerns whether individuals with this neurodevelopmental disorder possess an advantage in processing local visual information, and if so, what is the nature of this advantage. Perception of movement speed is a visual process that relies on computation of local spatiotemporal signals…

  5. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 4 : web-based bridge information database--visualization analytics and distributed sensing.

    DOT National Transportation Integrated Search

    2012-03-01

    This report introduces the design and implementation of a Web-based bridge information visual analytics system. This : project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result : combines a GIS ...

  6. Visualizing a High Recall Search Strategy Output for Undergraduates in an Exploration Stage of Researching a Term Paper.

    ERIC Educational Resources Information Center

    Cole, Charles; Mandelblatt, Bertie; Stevenson, John

    2002-01-01

    Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…

  7. Acoustic Signature from Flames as a Combustion Diagnostic Tool

    DTIC Science & Technology

    1983-11-01

empirical visual flame length had to be input to the computer for the inversion method to give good results. That is, if the experiment and inversion...method were asked to yield the flame length, poor results were obtained. Since this was part of the information sought for practical application of the...to small experimental uncertainty. The method gave reasonably good results for the open flame but substantial input (the flame length) had to be

  8. Thalamic nuclei convey diverse contextual information to layer 1 of visual cortex

    PubMed Central

    Imhof, Fabia; Martini, Francisco J.; Hofer, Sonja B.

    2017-01-01

    Sensory perception depends on the context within which a stimulus occurs. Prevailing models emphasize cortical feedback as the source of contextual modulation. However, higher-order thalamic nuclei, such as the pulvinar, interconnect with many cortical and subcortical areas, suggesting a role for the thalamus in providing sensory and behavioral context – yet the nature of the signals conveyed to cortex by higher-order thalamus remains poorly understood. Here we use axonal calcium imaging to measure information provided to visual cortex by the pulvinar equivalent in mice, the lateral posterior nucleus (LP), as well as the dorsolateral geniculate nucleus (dLGN). We found that dLGN conveys retinotopically precise visual signals, while LP provides distributed information from the visual scene. Both LP and dLGN projections carry locomotion signals. However, while dLGN inputs often respond to positive combinations of running and visual flow speed, LP signals discrepancies between self-generated and external visual motion. This higher-order thalamic nucleus therefore conveys diverse contextual signals that inform visual cortex about visual scene changes not predicted by the animal’s own actions. PMID:26691828

  9. Vision and quality-of-life.

    PubMed Central

    Brown, G C

    1999-01-01

    OBJECTIVE: To determine the relationship of visual acuity loss to quality of life. DESIGN: Three hundred twenty-five patients with visual loss to a minimum of 20/40 or greater in at least 1 eye were interviewed in a standardized fashion using a modified VF-14, questionnaire. Utility values were also obtained using both the time trade-off and standard gamble methods of utility assessment. MAIN OUTCOME MEASURES: Best-corrected visual acuity was correlated with the visual function score on the modified VF-14 questionnaire, as well as with utility values obtained using both the time trade-off and standard gamble methods. RESULTS: Decreasing levels of vision in the eye with better acuity correlated directly with decreasing visual function scores on the modified VF-14 questionnaire, as did decreasing utility values using the time trade-off method of utility evaluation. The standard gamble method of utility evaluation was not as directly correlated with vision as the time trade-off method. Age, level of education, gender, race, length of time of visual loss, and the number of associated systemic comorbidities did not significantly affect the time trade-off utility values associated with visual loss in the better eye. The level of reduced vision in the better eye, rather than the specific disease process causing reduced vision, was related to mean utility values. The average person with 20/40 vision in the better seeing eye was willing to trade 2 of every 10 years of life in return for perfect vision (utility value of 0.8), while the average person with counting fingers vision in the better eye was willing to trade approximately 5 of every 10 remaining years of life (utility value of 0.52) in return for perfect vision. CONCLUSIONS: The time trade-off method of utility evaluation appears to be an effective method for assessing quality of life associated with visual loss. 
Time trade-off utility values decrease in direct conjunction with decreasing vision in the better-seeing eye. Unlike the modified VF-14 test and its counterparts, utility values allow the quality of life associated with visual loss to be more readily compared to the quality of life associated with other health (disease) states. This information can be employed for cost-effective analyses that objectively compare evidence-based medicine, patient-based preferences and sound econometric principles across all specialties in health care. PMID:10703139
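The time trade-off arithmetic quoted above can be made explicit: if a patient is willing to trade t of y remaining years of life in exchange for perfect vision, the utility is 1 - t/y. A minimal sketch checking the figures in the abstract (the function name is illustrative, not taken from the paper):

```python
def time_tradeoff_utility(years_traded, years_remaining):
    """Time trade-off utility: 1 minus the fraction of remaining life a
    patient would give up in exchange for perfect vision."""
    return 1.0 - years_traded / years_remaining

# 20/40 vision in the better eye: trading 2 of every 10 years gives 0.8.
print(time_tradeoff_utility(2, 10))  # 0.8
# Counting-fingers vision: trading roughly 5 of every 10 years gives a
# utility near the reported 0.52 (about 4.8 of 10 years).
print(time_tradeoff_utility(4.8, 10))
```

A utility of 1.0 thus corresponds to unwillingness to trade any time, and lower values reflect greater perceived burden of the visual loss.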

  10. Innovative intelligent technology of distance learning for visually impaired people

    NASA Astrophysics Data System (ADS)

    Samigulina, Galina; Shayakhmetova, Assem; Nuysuppov, Adlet

    2017-12-01

The aim of the study is to develop innovative intelligent technology and information systems of distance education for people with impaired vision (PIV). To solve this problem, a comprehensive approach has been proposed that combines artificial intelligence methods with statistical analysis. Creating an accessible learning environment and identifying the intellectual, physiological, and psychophysiological characteristics of perception and information awareness by this category of people is based on a cognitive approach. On the basis of fuzzy logic, an individually oriented learning path for PIV is constructed with the aim of providing high-quality engineering education with modern equipment in joint-use laboratories.

  11. Approximating high angular resolution apparent diffusion coefficient profiles using spherical harmonics under BiGaussian assumption

    NASA Astrophysics Data System (ADS)

    Cao, Ning; Liang, Xuwei; Zhuang, Qi; Zhang, Jun

    2009-02-01

Magnetic Resonance Imaging (MRI) techniques have become highly important for providing visual and quantitative information about the human body. Diffusion MRI is the only non-invasive tool for obtaining information about the neural fiber networks of the human brain. Traditional Diffusion Tensor Imaging (DTI) is only capable of characterizing Gaussian diffusion. High Angular Resolution Diffusion Imaging (HARDI) extends its ability to model more complex diffusion processes. A spherical harmonic series truncated to a certain degree is used in recent studies to describe the measured non-Gaussian Apparent Diffusion Coefficient (ADC) profile. In this study, we use the sampling theorem on band-limited spherical harmonics to choose a suitable degree at which to truncate the spherical harmonic series in the sense of Signal-to-Noise Ratio (SNR), and use Monte Carlo integration to compute the spherical harmonic transform of human brain data obtained from an icosahedral sampling scheme.
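The Monte Carlo step mentioned above amounts to approximating each coefficient c_lm = ∫ f·Y_lm dΩ by (4π/N) Σ f(x_i)·Y_lm(x_i) over N points drawn uniformly on the sphere. A minimal sketch for two real harmonics (Y_00 and Y_20), restricted to an axially symmetric toy profile; this is an illustration of the integration technique, not the authors' code:

```python
import numpy as np

def Y00(theta):
    # Real spherical harmonic of degree 0: a constant.
    return np.full_like(theta, 0.5 / np.sqrt(np.pi))

def Y20(theta):
    # Real spherical harmonic of degree 2, order 0.
    return 0.25 * np.sqrt(5.0 / np.pi) * (3.0 * np.cos(theta) ** 2 - 1.0)

def mc_coefficient(f, Y, n_samples, rng):
    """Monte Carlo estimate of c = integral of f * Y over the unit sphere.
    Uniform sphere sampling: z ~ U(-1, 1), so theta = arccos(z)."""
    z = rng.uniform(-1.0, 1.0, n_samples)
    theta = np.arccos(z)
    return 4.0 * np.pi * np.mean(f(theta) * Y(theta))

rng = np.random.default_rng(0)
# Toy "ADC profile": an isotropic part plus a degree-2 anisotropy term.
f = lambda theta: 1.0 + Y20(theta)

c00 = mc_coefficient(f, Y00, 200_000, rng)  # exact value: 2*sqrt(pi) ~ 3.545
c20 = mc_coefficient(f, Y20, 200_000, rng)  # exact value: 1 (orthonormality)
print(round(c00, 2), round(c20, 2))
```

Because the real harmonics are orthonormal, each coefficient can be estimated independently this way, which is what makes the approach attractive for the scattered directional samples of an icosahedral acquisition scheme.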

  12. From Phonomecanocardiography to Phonocardiography computer aided

    NASA Astrophysics Data System (ADS)

    Granados, J.; Tavera, F.; López, G.; Velázquez, J. M.; Hernández, R. T.; López, G. A.

    2017-01-01

Because many heart disorders are difficult for physicians to identify by conventional auscultation, it is necessary to supplement this technique with an objective and methodological analysis. In order to obtain information on the performance of the heart and to diagnose heart disease through a simple, cost-effective procedure based on a data acquisition system, we have obtained phonocardiograms (PCG), which are graphic recordings of the sounds emitted by the heart. A program of acoustic and artificial vision recognition was developed to interpret them. Based on the results of previous research by cardiologists, a code for interpreting PCGs and associated diseases was elaborated. A site for experimental sampling of cardiac data was also created within the university campus. Computer-aided phonocardiography is a viable, low-cost procedure that provides additional medical information for diagnosing complex heart diseases. We show some preliminary results.

  13. Querying and Extracting Timeline Information from Road Traffic Sensor Data

    PubMed Central

    Imawan, Ardi; Indikawati, Fitri Indra; Kwon, Joonho; Rao, Praveen

    2016-01-01

    The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications offer only limited analysis results because they cannot cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system—a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits the spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset. PMID:27563900
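    The paper's TQ-index internals are not reproduced in the abstract; as an illustration of the kind of timeline range query such a system answers, here is a toy in-memory index over congestion events (the design, road names, and numbers are hypothetical, not taken from the paper):

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class TimelineIndex:
    """Toy timeline index: event start times are kept sorted so a time-range
    query can bisect instead of scanning every event."""
    starts: list = field(default_factory=list)
    events: list = field(default_factory=list)

    def add(self, start, end, road, level):
        i = bisect.bisect(self.starts, start)
        self.starts.insert(i, start)
        self.events.insert(i, (start, end, road, level))

    def query(self, t0, t1):
        """All events whose start falls in [t0, t1)."""
        lo = bisect.bisect_left(self.starts, t0)
        hi = bisect.bisect_left(self.starts, t1)
        return self.events[lo:hi]

idx = TimelineIndex()
idx.add(8, 10, "I-5N", "heavy")      # hours of day, invented events
idx.add(17, 19, "I-5N", "heavy")
idx.add(9, 10, "SR-99", "moderate")
morning = idx.query(7, 12)           # the two morning congestion events
```

    A real TQ-index would additionally exploit spatial structure (road segments) alongside the temporal axis.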

  14. Visual information mining in remote sensing image archives

    NASA Astrophysics Data System (ADS)

    Pelizzari, Andrea; Descargues, Vincent; Datcu, Mihai P.

    2002-01-01

    The present article focuses on the development of interactive exploratory tools for visually mining the image content of large remote sensing archives. Two aspects are treated: iconic visualization of the global information in the archive and progressive visualization of image details. The proposed methods are integrated in the Image Information Mining (I2M) system. Images and image structures in the I2M system are indexed based on a probabilistic approach, and the resulting links are managed by a relational database. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the database. Thus, new tools have been designed to visualize, in iconic representation, the relationships created during a query or information mining operation: query results positioned on the geographical map, a quick-looks gallery, a visualization of the goodness of the query, and a visualization of the image space for statistical evaluation purposes. Additionally, the I2M system is enhanced with progressive detail visualization to allow better access for operator inspection. I2M is a three-tier Java architecture optimized for the Internet.

  15. Advocating for a Population-Specific Health Literacy for People With Visual Impairments.

    PubMed

    Harrison, Tracie; Lazard, Allison

    2015-01-01

    Health literacy, the ability to access, process, and understand health information, is enhanced by the visual senses among people who are typically sighted. Emotions, meaning, speed of knowledge transfer, level of attention, and degree of relevance are all manipulated by the visual design of health information when people can see. When consumers of health information are blind or visually impaired, they access, process, and understand their health information through a multitude of methods, using a variety of accommodations depending upon the severity and type of their impairment. They are taught, or they learn how, to accommodate their differences by using alternative sensory experiences and interpretations. In this article, we argue that due to the unique and powerful aspects of visual learning, and due to the differences in knowledge creation when people are not visually oriented, health literacy must be considered a unique construct for people with visual impairment, one that requires a distinctive theoretical basis for determining the impact of their mind-constructed representations of health.

  16. Real-time scalable visual analysis on mobile devices

    NASA Astrophysics Data System (ADS)

    Pattath, Avin; Ebert, David S.; May, Richard A.; Collins, Timothy F.; Pike, William

    2008-02-01

    Interactive visual presentation of information can help an analyst gain faster and better insight from data. When combined with situational or context information, visualization on mobile devices is invaluable to in-field responders and investigators. However, several challenges are posed by the form-factor of mobile devices in developing such systems. In this paper, we classify these challenges into two broad categories - issues in general mobile computing and issues specific to visual analysis on mobile devices. Using NetworkVis and Infostar as example systems, we illustrate some of the techniques that we employed to overcome many of the identified challenges. NetworkVis is an OpenVG-based real-time network monitoring and visualization system developed for Windows Mobile devices. Infostar is a flash-based interactive, real-time visualization application intended to provide attendees access to conference information. Linked time-synchronous visualization, stylus/button-based interactivity, vector graphics, overview-context techniques, details-on-demand and statistical information display are some of the highlights of these applications.

  17. Defining the cortical visual systems: "what", "where", and "how"

    NASA Technical Reports Server (NTRS)

    Creem, S. H.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)

    2001-01-01

    The visual system historically has been defined as consisting of at least two broad subsystems subserving object and spatial vision. These visual processing streams have been organized both structurally as two distinct pathways in the brain, and functionally for the types of tasks that they mediate. The classic definition by Ungerleider and Mishkin labeled a ventral "what" stream to process object information and a dorsal "where" stream to process spatial information. More recently, Goodale and Milner redefined the two visual systems with a focus on the different ways in which visual information is transformed for different goals. They relabeled the dorsal stream as a "how" system for transforming visual information using an egocentric frame of reference in preparation for direct action. This paper reviews recent research from psychophysics, neurophysiology, neuropsychology and neuroimaging to define the roles of the ventral and dorsal visual processing streams. We discuss a possible solution that allows for both "where" and "how" systems that are functionally and structurally organized within the posterior parietal lobe.

  18. Four types of ensemble coding in data visualizations.

    PubMed

    Szafir, Danielle Albers; Haroz, Steve; Gleicher, Michael; Franconeri, Steven

    2016-01-01

    Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of cross-pollination between the two fields in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.
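    As a concrete illustration, each of the four task categories can be phrased as a computation over the values behind a scatterplot. The mapping of tasks to statistics below is my own reading of the categories, on synthetic data, not the authors' formalization:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)                   # synthetic scatterplot data:
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, 50)     # a noisy upward trend

# 1. Identification: single out one value in the set (e.g., the largest y).
peak = float(y.max())

# 2. Summarization: collapse the ensemble to one statistic.
mean_y = float(y.mean())

# 3. Segmentation: split the collection into subgroups (here, by an x cut).
left, right = y[x < 5.0], y[x >= 5.0]

# 4. Estimation of structure: the trend a viewer would eyeball.
slope = float(np.polyfit(x, y, 1)[0])
```

    A viewer performing ensemble coding is, in effect, approximating these quantities perceptually rather than numerically.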

  19. Brain processing of visual information during fast eye movements maintains motor performance.

    PubMed

    Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis

    2013-01-01

    Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.

  20. Methods study for the relocation of visual information in central scotoma cases

    NASA Astrophysics Data System (ADS)

    Scherlen, Anne-Catherine; Gautier, Vincent

    2005-03-01

    In this study, we test the benefit, for reading performance, of different ways of relocating the visual information hidden by the scotoma. Relocation (or unmasking) compensates for the loss of information and prevents the patient from developing gaze strategies poorly adapted to reading. Eight healthy subjects were tested on a reading task; for each, a central scotoma of various sizes was simulated. We then evaluated reading speed (words/min) under three relocation methods: all masked information relocated (1) on both sides of the scotoma or (2) to the right of the scotoma, and (3) only the letters essential for word recognition relocated to the right of the scotoma. We compared these reading speeds against the pathological condition, i.e., without relocating the visual information. Our results show that the unmasking strategy improves reading speed when all the visual information is unmasked to the right of the scotoma, but only for large scotomas. Taking word morphology into account, perceiving only certain letters outside the scotoma can be sufficient to improve reading speed. A deeper study of reading processes in the presence of a scotoma will open new perspectives for visual information unmasking. The multidisciplinary competences of engineers, ophthalmologists, linguists, and clinicians would allow the reading benefit brought by unmasking to be optimized.

  1. Differential effects of non-informative vision and visual interference on haptic spatial processing

    PubMed Central

    van Rheede, Joram J.; Postma, Albert; Kappers, Astrid M. L.

    2008-01-01

    The primary purpose of this study was to examine the effects of non-informative vision and visual interference upon haptic spatial processing, which supposedly derives from an interaction between an allocentric and egocentric reference frame. To this end, a haptic parallelity task served as baseline to determine the participant-dependent biasing influence of the egocentric reference frame. As expected, large systematic participant-dependent deviations from veridicality were observed. In the second experiment we probed the effect of non-informative vision on the egocentric bias. Moreover, orienting mechanisms (gazing directions) were studied with respect to the presentation of haptic information in a specific hemispace. Non-informative vision proved to have a beneficial effect on haptic spatial processing. No effect of gazing direction or hemispace was observed. In the third experiment we investigated the effect of simultaneously presented interfering visual information on the haptic bias. Interfering visual information parametrically influenced haptic performance. The interplay of reference frames that subserves haptic spatial processing was found to be related to both the effects of non-informative vision and visual interference. These results suggest that spatial representations are influenced by direct cross-modal interactions; inter-participant differences in the haptic modality resulted in differential effects of the visual modality. PMID:18553074

  2. Resources for Designing, Selecting and Teaching with Visualizations in the Geoscience Classroom

    NASA Astrophysics Data System (ADS)

    Kirk, K. B.; Manduca, C. A.; Ormand, C. J.; McDaris, J. R.

    2009-12-01

    Geoscience is a highly visual field, and effective use of visualizations can enhance student learning, appeal to students’ emotions and help them acquire skills for interpreting visual information. The On the Cutting Edge website, “Teaching Geoscience with Visualizations” presents information of interest to faculty who are teaching with visualizations, as well as those who are designing visualizations. The website contains best practices for effective visualizations, drawn from the educational literature and from experts in the field. For example, a case is made for careful selection of visualizations so that faculty can align the correct visualization with their teaching goals and audience level. Appropriate visualizations will contain the desired geoscience content without adding extraneous information that may distract or confuse students. Features such as labels, arrows and contextual information can help guide students through imagery and help to explain the relevant concepts. Because students learn by constructing their own mental image of processes, it is helpful to select visualizations that reflect the same type of mental picture that students should create. A host of recommended readings and presentations from the On the Cutting Edge visualization workshops can provide further grounding for the educational uses of visualizations. Several different collections of visualizations, datasets with visualizations and visualization tools are available on the website. Examples include animations of tsunamis, El Nino conditions, braided stream formation and mountain uplift. These collections are grouped by topic and range from simple animations to interactive models. A series of example activities that incorporate visualizations into classroom and laboratory activities illustrate various tactics for using these materials in different types of settings. 
Activities cover topics such as ocean circulation, land use changes, earthquake simulations and the use of Google Earth to explore geologic processes. These materials can be found at http://serc.carleton.edu/NAGTWorkshops/visualization. Faculty and developers of visualization tools are encouraged to submit teaching activities, references or visualizations to the collections.

  3. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study.

    PubMed

    Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek

    2018-04-26

    Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. 
Notable leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with keen interest on multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. ©Gabriel Lizarraga, Chunfei Li, Mercedes Cabrerizo, Warren Barker, David A Loewenstein, Ranjan Duara, Malek Adjouadi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 26.04.2018.

  4. 3D Simulation of External Flooding Events for the RISMC Pathway

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prescott, Steven; Mandelli, Diego; Sampath, Ramprasad

    2015-09-01

    Incorporating 3D simulations as part of the Risk-Informed Safety Margins Characterization (RISMC) Toolkit allows analysts to obtain a more complete picture of complex system behavior for events including external plant hazards. External events such as flooding have become more important recently; they can be analyzed with existing, validated physics simulation toolkits. In this report, we describe approaches specific to flooding-based analysis using a method called Smoothed Particle Hydrodynamics. The theory, validation, and example applications of the 3D flooding simulation are described. Integrating these 3D simulation methods into computational risk analysis adds a spatial/visual aspect to the design, improves the realism of results, and can provide visual understanding to validate the flooding analysis.
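    The report's solver details are not given in the abstract; the core of any Smoothed Particle Hydrodynamics method is the kernel-weighted density sum, sketched below for a small block of water-like particles. Spacing, mass, and smoothing length are illustrative assumptions, not the report's configuration:

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 3D cubic-spline SPH smoothing kernel (integrates to 1)."""
    q = r / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w / (np.pi * h**3)

def sph_density(pos, m, h):
    """Basic SPH density estimate: rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return (m * cubic_spline_W(r, h)).sum(axis=1)

# A 10x10x10 block of particles at spacing dx, with the per-particle mass
# chosen so that the recovered density is ~1000 kg/m^3 (water-like).
dx, h = 0.1, 0.13
g = np.arange(10) * dx
pos = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
m = 1000.0 * dx**3
rho = sph_density(pos, m, h)
# Interior particles recover ~1000; surface particles show the usual
# boundary deficit, which full solvers correct for.
```

    A flooding simulation then advances particle velocities from pressure gradients computed with the same kernel; the density sum above is the building block.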

  5. A far-field-viewing sensor for making analytical measurements in remote locations.

    PubMed

    Michael, K L; Taylor, L C; Walt, D R

    1999-07-15

    We demonstrate a far-field-viewing GRINscope sensor for making analytical measurements in remote locations. The GRINscope was fabricated by permanently affixing a micro gradient-index (GRIN) lens on the distal face of a 350-micron-diameter optical imaging fiber. The GRINscope can obtain both chemical and visual information. In one application, a thin, pH-sensitive polymer layer was immobilized on the distal end of the GRINscope. The ability of the GRINscope to visually image its far-field surroundings and concurrently detect pH changes in a flowing stream was demonstrated. In a different application, the GRINscope was used to image pH- and O2-sensitive particles on a remote substrate and simultaneously measure their fluorescence intensity in response to pH or pO2 changes.

  6. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    PubMed

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Disentangling brain activity related to the processing of emotional visual information and emotional arousal.

    PubMed

    Kuniecki, Michał; Wołoszyn, Kinga; Domagalik, Aleksandra; Pilarczyk, Joanna

    2018-05-01

    Processing of emotional visual information engages cognitive functions and induces arousal. We aimed to examine the modulatory role of emotional valence on brain activations linked to the processing of visual information and those linked to arousal. Participants were scanned and their pupil size was measured while viewing negative and neutral images. Visual noise was added to the images in various proportions to parametrically manipulate the amount of visual information. Pupil size was used as an index of physiological arousal. We show that arousal induced by the negative images, as compared to the neutral ones, is primarily related to greater amygdala activity, while increasing the visibility of negative content is related to enhanced activity in the lateral occipital complex (LOC). We argue that more intense visual processing of negative scenes can occur irrespective of the level of arousal. This may suggest that higher areas of the visual stream are fine-tuned to process emotionally relevant objects. Both arousal and the processing of emotional visual information modulated activity within the ventromedial prefrontal cortex (vmPFC). Overlapping activations within the vmPFC may reflect the integration of these aspects of emotional processing. Additionally, we show that emotionally evoked pupil dilations are related to activations in the amygdala, vmPFC, and LOC.

  8. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear

    PubMed Central

    Clevis, Krien; Hagoort, Peter

    2011-01-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information. PMID:20530540

  9. Infants' prospective control during object manipulation in an uncertain environment.

    PubMed

    Gottwald, Janna M; Gredebäck, Gustaf

    2015-08-01

    This study investigates how infants use visual and sensorimotor information to prospectively control their actions. We gave 14-month-olds two objects of different weight and observed how high they were lifted, using a Qualisys Motion Capture System. In one condition, the two objects were visually distinct (different color condition); in another, they were visually identical (same color condition). Lifting amplitudes of the first movement unit were analyzed in order to assess prospective control. Results demonstrate that infants lifted a light object higher than a heavy object, especially when vision could be used to assess weight (different color condition). When confronted with two visually identical objects of different weight (same color condition), infants showed a different lifting pattern than in the different color condition, expressed by a significant interaction effect between object weight and color condition on lifting amplitude. These results indicate that (a) visual information about object weight can be used to prospectively control lifting actions and that (b) infants are able to prospectively control their lifting actions even without visual information about object weight. We argue that infants, in the absence of reliable visual information about object weight, heighten their dependence on non-visual information (tactile, sensorimotor memory) in order to estimate weight and pre-adjust their lifting actions in a prospective manner.
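    In motion-capture kinematics, a "movement unit" is conventionally the segment between successive minima of the velocity profile. A sketch of extracting the first unit's amplitude from a synthetic vertical trajectory (the sampling rate and kinematics are invented illustrations, not the study's data or its exact procedure):

```python
import numpy as np

fs = 240                       # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)

# Synthetic vertical velocity with two "movement units" (two bumps).
v = np.exp(-((t - 0.3) / 0.08) ** 2) + 0.4 * np.exp(-((t - 0.7) / 0.08) ** 2)
z = np.cumsum(v) / fs          # vertical position by numerical integration

# Strict local extrema of the velocity profile.
peaks = np.where((v[1:-1] > v[:-2]) & (v[1:-1] > v[2:]))[0] + 1
troughs = np.where((v[1:-1] < v[:-2]) & (v[1:-1] < v[2:]))[0] + 1

# First movement unit: from onset to the first velocity minimum after the
# first velocity peak; its amplitude is the displacement over that span.
end_of_unit = troughs[troughs > peaks[0]][0]
amplitude = z[end_of_unit] - z[0]
```

    Comparing this amplitude across weight and color conditions is the kind of analysis the study reports.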

  10. False memories and lexical decision: even twelve primes do not cause long-term semantic priming.

    PubMed

    Zeelenberg, René; Pecher, Diane

    2002-03-01

    Semantic priming effects are usually obtained only if the prime is presented shortly before the target stimulus. Recent evidence obtained with the so-called false memory paradigm suggests, however, that in both explicit and implicit memory tasks semantic relations between words can result in long-lasting effects when multiple 'primes' are presented. The aim of the present study was to investigate whether these effects would generalize to lexical decision. In four experiments we showed that even as many as 12 primes do not cause long-term semantic priming. In all experiments, however, a repetition priming effect was obtained. The present results are consistent with a number of other results showing that semantic information plays a minimal role in long-term priming in visual word recognition.

  11. The vision guidance and image processing of AGV

    NASA Astrophysics Data System (ADS)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    First, the principle of AGV vision guidance is introduced, and the lateral deviation and deflection angle are measured in the image coordinate system. The visual guidance image processing platform is then introduced. Because the AGV guidance image contains considerable noise, the image is first smoothed by statistical sorting. Since images sampled by the AGV have different optimal threshold segmentation points, two-dimensional maximum entropy image segmentation is used to solve this problem. We extract the foreground guide band by selecting contours by area, and we obtain the centre line with a least squares fitting algorithm. With the mapping between image and physical coordinates, the guidance information is obtained.
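    Two of the steps above, entropy-based thresholding and least-squares centre-line fitting, can be sketched on a synthetic guidance image. For brevity the sketch uses the 1-D maximum-entropy (Kapur) threshold; the paper's 2-D variant adds a neighbourhood-mean axis to the histogram. The image and its guide band are invented for illustration:

```python
import numpy as np

def max_entropy_threshold(img):
    """1-D maximum-entropy (Kapur) threshold over a 256-bin histogram."""
    p = np.bincount(img.ravel(), minlength=256).astype(float)
    p /= p.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = cdf[t], 1.0 - cdf[t]
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        p0, p1 = p[:t + 1] / w0, p[t + 1:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:                 # maximize summed class entropies
            best_h, best_t = h0 + h1, t
    return best_t

# Synthetic guidance image: dark floor plus a bright, slanted guide band.
rng = np.random.default_rng(2)
img = rng.integers(20, 60, (120, 160))
for r in range(120):
    c = int(40 + 0.5 * r)                    # true centre line: col = 40 + 0.5*row
    img[r, c - 3:c + 4] = rng.integers(180, 220, 7)

mask = img > max_entropy_threshold(img.astype(np.uint8))

# Centre line by least squares: fit col = a*row + b to the foreground pixels.
rows, cols = np.nonzero(mask)
a, b = np.polyfit(rows, cols, 1)             # recovers a ~ 0.5, b ~ 40
```

    The fitted line's slope and offset relative to the image centre give the deflection angle and lateral deviation used for steering.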

  12. The Visual Uncertainty Paradigm for Controlling Screen-Space Information in Visualization

    ERIC Educational Resources Information Center

    Dasgupta, Aritra

    2012-01-01

    The information visualization pipeline serves as a lossy communication channel for presentation of data on a screen-space of limited resolution. The lossy communication is not just a machine-only phenomenon due to information loss caused by translation of data, but also a reflection of the degree to which the human user can comprehend visual…

  13. Alcohol consumption and visual impairment in a rural Northern Chinese population.

    PubMed

    Li, Zhijian; Xu, Keke; Wu, Shubin; Sun, Ying; Song, Zhen; Jin, Di; Liu, Ping

    2014-12-01

    To investigate alcohol drinking status and the association between drinking patterns and visual impairment in an adult population in northern China. Cluster sampling was used to select samples. The protocol consisted of an interview, pilot study, visual acuity (VA) testing and a clinical examination. Visual impairment was defined as presenting VA worse than 20/60 in any eye. Drinking patterns included drinking quantity (standard drinks per week) and frequency (drinking days in the past week). Information on alcohol consumption was obtained from 8445 subjects, 963 (11.4%) of whom reported consuming alcohol. In multivariate analysis, alcohol consumption was significantly associated with older age (p < 0.001), male sex (p < 0.001), and higher education level (p < 0.01). Heavy intake (>14 drinks/week) was associated with higher odds of visual impairment. However, moderate intake (>1-14 drinks/week) was significantly associated with lower odds (adjusted odds ratio, OR, 0.7, 95% confidence interval, CI, 0.5-1.0) of visual impairment (p = 0.03). Higher drinking frequency was significantly associated with higher odds of visual impairment. Multivariate analysis showed that older age, male sex, and higher education level were associated with visual impairment among current drinkers. Age- and sex-adjusted ORs for the association of cataract and alcohol intake showed that higher alcohol consumption was not significantly associated with an increased prevalence of cataract (OR 1.2, 95% CI 0.4-3.6), whereas light and moderate alcohol consumption appeared to reduce incidence of cataract. Drinking patterns were associated with visual impairment. Heavy intake had negative effects on distance vision; meanwhile, moderate intake had a positive effect on distance vision.
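    The adjusted odds ratios above come from multivariate models; the basic unadjusted calculation behind an odds ratio and its 95% confidence interval can be sketched from a 2x2 table. The counts below are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table via the log (Woolf) method.
    a/b: cases/non-cases among the exposed; c/d: among the unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical counts: visual impairment vs not, among moderate drinkers
# vs abstainers.
or_, lo, hi = odds_ratio_ci(30, 300, 120, 800)
# An OR below 1 with a CI reaching 1.0, as in the abstract's moderate-intake
# result (OR 0.7, 95% CI 0.5-1.0), indicates lower but borderline odds.
```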

  14. Mapping and characterization of positive and negative BOLD responses to visual stimulation in multiple brain regions at 7T.

    PubMed

    Jorge, João; Figueiredo, Patrícia; Gruetter, Rolf; van der Zwaag, Wietske

    2018-06-01

    External stimuli and tasks often elicit negative BOLD responses in various brain regions, and growing experimental evidence supports that these phenomena are functionally meaningful. In this work, the high sensitivity available at 7T was explored to map and characterize both positive (PBRs) and negative BOLD responses (NBRs) to visual checkerboard stimulation, occurring in various brain regions within and beyond the visual cortex. Recently-proposed accelerated fMRI techniques were employed for data acquisition, and procedures for exclusion of large draining vein contributions, together with ICA-assisted denoising, were included in the analysis to improve response estimation. Besides the visual cortex, significant PBRs were found in the lateral geniculate nucleus and superior colliculus, as well as the pre-central sulcus; in these regions, response durations increased monotonically with stimulus duration, in tight covariation with the visual PBR duration. Significant NBRs were found in the visual cortex, auditory cortex, default-mode network (DMN) and superior parietal lobule; NBR durations also tended to increase with stimulus duration, but were significantly less sustained than the visual PBR, especially for the DMN and superior parietal lobule. Responses in visual and auditory cortex were further studied for checkerboard contrast dependence, and their amplitudes were found to increase monotonically with contrast, linearly correlated with the visual PBR amplitude. Overall, these findings suggest the presence of dynamic neuronal interactions across multiple brain regions, sensitive to stimulus intensity and duration, and demonstrate the richness of information obtainable when jointly mapping positive and negative BOLD responses at a whole-brain scale, with ultra-high field fMRI. © 2018 Wiley Periodicals, Inc.

  15. Social media interruption affects the acquisition of visually, not aurally, acquired information during a pathophysiology lecture.

    PubMed

    Marone, Jane R; Thakkar, Shivam C; Suliman, Neveen; O'Neill, Shannon I; Doubleday, Alison F

    2018-06-01

    Poor academic performance from extensive social media usage appears to be due to students' inability to multitask between distractions and academic work. However, the degree to which visually distracted students can acquire lecture information presented aurally is unknown. This study examined the ability of students visually distracted by social media to acquire information presented during a voice-over PowerPoint lecture, and to compare performance on examination questions derived from information presented aurally vs. that presented visually. Students (n = 20) listened to a 42-min cardiovascular pathophysiology lecture containing embedded cartoons while taking notes. The experimental group (n = 10) was visually, but not aurally, distracted by social media during times when cartoon information was presented, ~40% of total lecture time. Overall performance among distracted students on a follow-up, open-note quiz was 30% poorer than that for controls (P < 0.001). When the modality of presentation (visual vs. aural) was compared, performance decreased on examination questions from information presented visually. However, performance on questions from information presented aurally was similar to that of controls. Our findings suggest the ability to acquire information during lecture may vary, depending on the degree of competition between the modalities of the distraction and the lecture presentation. Within the context of current literature, our findings also suggest that timing of the distraction relative to delivery of material examined affects performance more than total distraction time. Therefore, when delivering lectures, instructors should incorporate organizational cues and active learning strategies that assist students in maintaining focus and acquiring relevant information.

  16. Visual performance modeling in the human operator simulator

    NASA Technical Reports Server (NTRS)

    Strieb, M. I.

    1979-01-01

    A brief description of the history of the development of the human operator simulator (HOS) model is presented. Features of the HOS micromodels that affect the acquisition of visual performance data are discussed, along with preliminary details of a HOS pilot model designed to predict the results of visual performance workload data obtained through oculometer studies of pilots in real and simulated approaches and landings.

  17. Visualization of Secondary Flow Development in High Aspect Ratio Channels with Curvature

    NASA Technical Reports Server (NTRS)

    Meyer, Michael L.; Giuliani, James E.

    1994-01-01

    The results of an experimental project to visually examine the secondary flow structure that develops in curved, high aspect-ratio rectangular channels are presented. The results provide insight into the fluid dynamics within high aspect ratio channels. A water flow test rig constructed out of plexiglass, with an adjustable aspect ratio, was used for these experiments. Results were obtained for a channel geometry with a hydraulic diameter of 10.6 mm (0.417 in.), an aspect ratio of 5.0, and a hydraulic radius to curvature radius ratio of 0.0417. Flow conditions were varied to achieve Reynolds numbers up to 5,100. A new particle imaging velocimetry technique was developed which could resolve velocity information from particles entering and leaving the field of view. Time averaged secondary flow velocity vectors, obtained using this velocimetry technique, are presented for 30 degrees, 60 degrees, and 90 degrees into a 180 degrees bend and at a Reynolds number of 5,100. The secondary flow results suggest the coexistence of both the classical curvature induced vortex pair flow structure and the eddies seen in straight turbulent channel flow.
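
    The channel parameters quoted above (hydraulic diameter 10.6 mm, aspect ratio 5.0, Re up to 5,100) are tied together by standard duct-flow definitions. The sketch below is a hedged illustration, not part of the original study: it recovers the channel dimensions from the hydraulic diameter and estimates the mean water velocity needed to reach the reported Reynolds number, assuming a kinematic viscosity of about 1.0e-6 m^2/s for room-temperature water.

```python
def hydraulic_diameter(width, height):
    """D_h = 4A/P for a rectangular duct of the given cross-section."""
    return 4 * width * height / (2 * (width + height))

# Aspect ratio 5.0: short side a, long side 5a, so D_h = 2(5a*a)/(6a) = 5a/3
a = 3 * 10.6e-3 / 5            # short side (m) matching D_h = 10.6 mm
Dh = hydraulic_diameter(5 * a, a)

# Re = V * D_h / nu  ->  mean velocity V for Re = 5,100
nu = 1.0e-6                    # assumed kinematic viscosity of water (m^2/s)
V = 5100 * nu / Dh
print(a * 1e3, Dh * 1e3, V)    # short side (mm), D_h (mm), velocity (m/s)
```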

  18. Parafoveal preview benefit in reading is only obtained from the saccade goal.

    PubMed

    McDonald, Scott A

    2006-12-01

    Previous research has demonstrated that reading is less efficient when parafoveal visual information about upcoming words is invalid or unavailable; the benefit from a valid preview is realised as reduced reading times on the subsequently foveated word, and has been explained with reference to the allocation of attentional resources to parafoveal words. This paper presents eyetracking evidence that preview benefit is obtained only for words that are selected as the saccade target. Using a gaze-contingent display change paradigm (Rayner, K. (1975). The perceptual span and peripheral cues in reading. Cognitive Psychology, 7, 65-81), the position of the triggering boundary was set near the middle of the pretarget word. When a refixation saccade took the eye across the boundary in the pretarget word, there was no reliable effect of the validity of the target word preview. However, when the triggering boundary was positioned just after the pretarget word, a robust preview benefit was observed, replicating previous research. The current results complement findings from studies of basic visual function, suggesting that for the case of preview benefit in reading, attentional and oculomotor processes are obligatorily coupled.

  19. Satiation or availability? Effects of attention, memory, and imagery on the perception of ambiguous figures

    NASA Technical Reports Server (NTRS)

    Horlitz, Krista L.; O'Leary, Ann

    1993-01-01

    The prolonged-inspection technique has been used to demonstrate effects of satiation on the perception of ambiguous figures. We propose that the inspection phase, in which subjects view an unambiguous version of the stimulus prior to observing the ambiguous figure, does not create neural fatigue but rather provides a context in which the alternative percept is apprehended and gains perceptual strength through processes such as imagination or memory. The consequent availability of the alternative organization drives the perceptual phenomena that have been thought to reflect satiation. In Experiment 1, we demonstrated that (1) preexperimental exposure to the target figures and (2) allocation of attention to the inspection figures were both necessary in order to obtain results similar to those predicted by the satiation model. In Experiment 2, we obtained similar results, finding that effects of prior inspection were greater the greater the amount and availability of information regarding the alternative percept during the inspection phase. Subjects who generated visual images of the noninspected alternative during inspection yielded results comparable to those from subjects to whom both versions were presented visually.

  20. Gamma/x-ray linear pushbroom stereo for 3D cargo inspection

    NASA Astrophysics Data System (ADS)

    Zhu, Zhigang; Hu, Yu-Chi

    2006-05-01

    When a non-intrusive gamma-ray or X-ray imaging system is used to evaluate the contents of trucks, containers, cargo, and passenger vehicles for the possible presence of contraband, three-dimensional (3D) measurements can provide more information than 2D measurements. In this paper, a linear pushbroom scanning model is built for such a commonly used gamma-ray or x-ray cargo inspection system. Accurate 3D measurements of the objects inside a cargo container can be obtained by using two such scanning systems with different scanning angles to construct a pushbroom stereo system. A simple but robust calibration method is proposed to find the important parameters of the linear pushbroom sensors. Then, a fast and automated stereo matching algorithm based on free-form deformable registration is developed to obtain 3D measurements of the objects under inspection. A user interface is designed for 3D visualization of the objects of interest. Experimental results of sensor calibration, stereo matching, and 3D measurement and visualization of a cargo container and the objects inside are presented.
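
    The geometry behind pushbroom stereo can be sketched simply: a point seen by two linear scans with different tilt angles shifts along the scan direction in proportion to its depth. The toy model below illustrates that triangulation principle only; the variable names and the simplified imaging model (x_i = x0 + z*tan(theta_i)) are assumptions for illustration, not the calibration model of the paper.

```python
import math

def pushbroom_depth(x1, x2, theta1, theta2):
    """Depth z of a point from its positions x1, x2 in two linear scans
    tilted at theta1, theta2 (radians), assuming x_i = x0 + z * tan(theta_i)."""
    return (x2 - x1) / (math.tan(theta2) - math.tan(theta1))

# A point at depth 2.0 m and base position 1.0 m, scans tilted +/- 10 degrees
t1, t2 = math.radians(-10.0), math.radians(10.0)
z_true = 2.0
x1 = 1.0 + z_true * math.tan(t1)
x2 = 1.0 + z_true * math.tan(t2)
print(pushbroom_depth(x1, x2, t1, t2))  # recovers the depth, 2.0
```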

  1. Perceptual color difference metric including a CSF based on the perception threshold

    NASA Astrophysics Data System (ADS)

    Rosselli, Vincent; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2008-01-01

    The study of the Human Visual System (HVS) is central to quantifying the quality of a picture, predicting which information will be perceived in it, and applying suitably adapted tools. The Contrast Sensitivity Function (CSF) is one of the major ways to integrate HVS properties into an imaging system. It characterizes the sensitivity of the visual system to spatial and temporal frequencies and predicts the behavior of the three channels. Common constructions of the CSF have been performed by estimating the detection threshold beyond which it is possible to perceive a stimulus. In this work, we developed a novel approach for spatio-chromatic construction based on matching experiments to estimate the perception threshold. It consists of matching the contrast of a test stimulus with that of a reference one. The results obtained differ from those of the standard approaches: the chromatic CSFs show band-pass rather than low-pass behavior. The obtained model has been integrated into a perceptual color difference metric inspired by the s-CIELAB. The metric is then evaluated with both objective and subjective procedures.
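
    For context, band-pass behavior is the classical shape of the achromatic CSF. The snippet below evaluates the well-known Mannos-Sakrison luminance CSF, a standard textbook fit and not the matching-based model constructed in this record; it is included only to illustrate what "band-pass" means here: sensitivity peaks at mid spatial frequencies and falls off at both ends.

```python
import math

def csf_mannos_sakrison(f):
    """Normalized achromatic contrast sensitivity at spatial frequency f
    (cycles/degree), using the classical Mannos-Sakrison fit."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-((0.114 * f) ** 1.1))

# Band-pass: mid frequencies are seen with the highest sensitivity
for f in (1.0, 8.0, 30.0):
    print(f, round(csf_mannos_sakrison(f), 3))
```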

  2. SU-G-JeP3-05: Geometry Based Transperineal Ultrasound Probe Positioning for Image Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camps, S; With, P de; Verhaegen, F

    2016-06-15

    Purpose: The use of ultrasound (US) imaging in radiotherapy is not widespread, primarily due to the need for skilled operators performing the scans. Automation of probe positioning has the potential to remove this need and minimize operator dependence. We introduce an algorithm for obtaining a US probe position that allows good anatomical structure visualization based on clinical requirements. The first application is on 4D transperineal US images of prostate cancer patients. Methods: The algorithm calculates the probe position and orientation using anatomical information provided by a reference CT scan, always available in radiotherapy workflows. As an initial test, we apply the algorithm to a CIRS pelvic US phantom to obtain a set of possible probe positions. Subsequently, five of these positions are randomly chosen and used to acquire actual US volumes of the phantom. Visual inspection of these volumes reveals whether the whole prostate and the adjacent edges of bladder and rectum are fully visualized, as clinically required. In addition, structure positions on the acquired US volumes are compared to predictions of the algorithm. Results: All acquired volumes fulfill the clinical requirements as specified in the previous section. Preliminary quantitative evaluation was performed on thirty consecutive slices of two volumes, on which the structures are easily recognizable. The mean absolute distances (MAD) between actual anatomical structure positions and positions predicted by the algorithm were calculated. This resulted in a MAD of 2.4±0.4 mm for the prostate, 3.2±0.9 mm for the bladder and 3.3±1.3 mm for the rectum. Conclusion: Visual inspection and quantitative evaluation show that the algorithm is able to propose probe positions that fulfill all clinical requirements. The obtained MAD is on average 2.9 mm. However, during evaluation we assumed no errors in structure segmentation and probe positioning. In future steps, accurate estimation of these errors will allow for better evaluation of the achieved accuracy.
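
    The MAD figures reported in this record are straightforward to reproduce in form: average the Euclidean distances between predicted and actual structure positions across slices. The sketch below uses hypothetical centroid coordinates, not the study's measurements.

```python
import math

def mean_absolute_distance(predicted, actual):
    """Mean and standard deviation of Euclidean distances between paired
    2D structure positions (e.g. per-slice centroids, in mm)."""
    d = [math.dist(p, a) for p, a in zip(predicted, actual)]
    mean = sum(d) / len(d)
    std = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
    return mean, std

# Hypothetical per-slice prostate centroids: algorithm prediction vs. US volume
pred = [(10.0, 20.0), (10.5, 20.5), (11.0, 21.0)]
act  = [(12.0, 21.5), (12.0, 22.0), (13.5, 21.5)]
mad, spread = mean_absolute_distance(pred, act)
print(f"MAD = {mad:.1f} ± {spread:.1f} mm")
```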

  3. The Role of Sensory-Motor Information in Object Recognition: Evidence from Category-Specific Visual Agnosia

    ERIC Educational Resources Information Center

    Wolk, D.A.; Coslett, H.B.; Glosser, G.

    2005-01-01

    The role of sensory-motor representations in object recognition was investigated in experiments involving AD, a patient with mild visual agnosia who was impaired in the recognition of visually presented living as compared to non-living entities. AD named visually presented items for which sensory-motor information was available significantly more…

  4. Application of Frameworks in the Analysis and (Re)design of Interactive Visual Learning Tools

    ERIC Educational Resources Information Center

    Liang, Hai-Ning; Sedig, Kamran

    2009-01-01

    Interactive visual learning tools (IVLTs) are software environments that encode and display information visually and allow learners to interact with the visual information. This article examines the application and utility of frameworks in the analysis and design of IVLTs at the micro level. Frameworks play an important role in any design. They…

  5. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  6. Visual perception and imagery: a new molecular hypothesis.

    PubMed

    Bókkon, I

    2009-05-01

    Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather a very strict mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representations) in retinotopically organized cytochrome oxidase-rich visual areas during visual imagery and visual perception. Long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes.
This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible by regulated redox and bioluminescent reactions in the visual system during visual perception and visual imagery.

  7. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
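
    The abstract does not spell out the fusion rule, so the sketch below uses a weighted-sum rule, one common choice for decision-level fusion: each modality's matcher produces a normalized score per enrolled subject, and the highest fused score picks the identity. The scores and subject names are hypothetical.

```python
def fuse_decisions(visual_scores, thermal_scores, w_visual=0.5):
    """Decision-level fusion by weighted sum of per-subject match scores
    from the visual and thermal pipelines; the highest fused score wins."""
    fused = {s: w_visual * visual_scores[s] + (1.0 - w_visual) * thermal_scores[s]
             for s in visual_scores}
    return max(fused, key=fused.get), fused

# Hypothetical normalized match scores for three enrolled subjects
visual  = {"A": 0.62, "B": 0.70, "C": 0.55}
thermal = {"A": 0.80, "B": 0.52, "C": 0.58}
winner, fused = fuse_decisions(visual, thermal)
print(winner, fused)  # "A" wins: strong thermal evidence outweighs visual
```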

  8. The processing of images of biological threats in visual short-term memory.

    PubMed

    Quinlan, Philip T; Yue, Yue; Cohen, Dale J

    2017-08-30

    The idea that there is enhanced memory for negatively, emotionally charged pictures was examined. Performance was measured under rapid serial visual presentation (RSVP) conditions in which, on every trial, a sequence of six photo-images was presented. Briefly after the offset of the sequence, two alternative images (a target and a foil) were presented and participants attempted to choose which image had occurred in the sequence. Images were of threatening and non-threatening cats and dogs. Either the target depicted an animal expressing an emotion distinct from the other images, or the sequence contained only images depicting the same emotional valence. Enhanced memory was found for targets that differed in emotional valence from the other sequence images, compared to targets that expressed the same emotional valence. Further controls in stimulus selection were then introduced and the same emotional distinctiveness effect was obtained. In ruling out possible visual and attentional accounts of the data, an informal dual-route topic model is discussed. This places emphasis on how visual short-term memory reveals a sensitivity to the emotional content of the input as it unfolds over time. Items that present with a distinctive emotional content stand out in memory. © 2017 The Author(s).

  9. Visualization of Sources in the Universe

    NASA Astrophysics Data System (ADS)

    Kafatos, M.; Cebral, J. R.

    1993-12-01

    We have begun to develop a series of visualization tools of importance to the display of astronomical data and have applied these to the visualization of cosmological sources in the recently formed Institute for Computational Sciences and Informatics at GMU. One can use a three-dimensional perspective plot of the density surface for three-dimensional data; in this case the iso-level contours are three-dimensional surfaces. Sophisticated rendering algorithms combined with multiple source lighting allow us to look carefully at such density contours and to see fine structure on their surfaces. Stereoscopic and transparent rendering can give an even more sophisticated view, with multi-layered surfaces providing information at different levels. We have applied these methods to density surfaces of 3-D data such as 100 clusters of galaxies and 2500 galaxies in the CfA redshift survey. The plots presented are based on three variables: right ascension, declination and redshift. We have also obtained density structures in 2-D for the distribution of gamma-ray bursts (where distances are unknown) and the distribution of a variety of sources such as clusters of galaxies. Our techniques allow correlations to be examined visually.

  10. Use of images in shelf life assessment of fruit salad.

    PubMed

    Manzocco, Lara; Rumignani, Alberto; Lagazio, Corrado

    2012-07-01

    Fruit salads stored for different lengths of time, as well as their images, were used to estimate sensory shelf life by survival analysis. Shelf life estimates obtained using fruit salad images were longer than those achieved by analyzing the real product. This was attributed to the fact that images are 2-dimensional representations of real food, probably not comprehensive of all the visual information needed by the panelists to produce an acceptability/unacceptability judgment. Images were also subjected to image analysis and to the analysis of overall visual quality by a trained panel. These indices proved to be highly correlated with consumer rejection of the fruit salad and could be exploited for routine shelf life assessment of analogous products. In this regard, a failure criterion of 25% consumer rejection could be considered equivalent to a score of 3 on a 5-point overall visual quality scale. Food images can be used to assess product shelf life. In the case of fruit salads, the overall visual quality assessed by a trained panel on product images and the percentage of brown pixels in digital images can be exploited to estimate the shelf life corresponding to a selected consumer rejection. © 2012 Institute of Food Technologists®
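
    The failure criterion described in this record (shelf life = the storage time at which consumer rejection reaches 25%) can be sketched as a simple interpolation over rejection data. The storage times and rejection fractions below are hypothetical, chosen only to show the mechanics.

```python
def shelf_life(times, rejection, criterion=0.25):
    """Storage time at which consumer rejection first reaches `criterion`,
    by linear interpolation between consecutive measurements."""
    points = list(zip(times, rejection))
    for (t0, r0), (t1, r1) in zip(points, points[1:]):
        if r0 <= criterion <= r1:
            return t0 + (criterion - r0) * (t1 - t0) / (r1 - r0)
    return None  # criterion never reached within the observed storage period

# Hypothetical fruit-salad data: storage days vs. fraction of consumers rejecting
days = [0, 2, 4, 6, 8]
rej  = [0.00, 0.05, 0.15, 0.35, 0.60]
print(shelf_life(days, rej))  # crosses the 25% criterion between days 4 and 6
```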

  11. Visualization of evolving laser-generated structures by frequency domain tomography

    NASA Astrophysics Data System (ADS)

    Chang, Yenyu; Li, Zhengyan; Wang, Xiaoming; Zgadzaj, Rafal; Downer, Michael

    2011-10-01

    We introduce frequency domain tomography (FDT) for single-shot visualization of time-evolving refractive index structures (e.g. laser wakefields, nonlinear index structures) moving at light-speed. Previous researchers demonstrated single-shot frequency domain holography (FDH), in which a probe-reference pulse pair co-propagates with the laser-generated structure, to obtain snapshot-like images. However, in FDH, information about the structure's evolution is averaged. To visualize an evolving structure, we use several frequency domain streak cameras (FDSCs), in each of which a probe-reference pulse pair propagates at an angle to the propagation direction of the laser-generated structure. The combination of several FDSCs constitutes the FDT system. We will present experimental results for a 4-probe FDT system that has imaged the whole-beam self-focusing of a pump pulse propagating through glass in a single laser shot. Combining temporal and angle multiplexing methods, we successfully processed data from four probe pulses in one spectrometer in a single shot. The output of data processing is a multi-frame movie of the self-focusing pulse. Our results point to the possibility of visualizing evolving laser wakefield structures that underlie laser-plasma accelerators used for multi-GeV electron acceleration.

  12. A Survey on Multimedia-Based Cross-Layer Optimization in Visual Sensor Networks

    PubMed Central

    Costa, Daniel G.; Guedes, Luiz Affonso

    2011-01-01

    Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSNs applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks. PMID:22163908

  13. Nonvisual influences on visual-information processing in the superior colliculus.

    PubMed

    Stein, B E; Jiang, W; Wallace, M T; Stanford, T R

    2001-01-01

    Although visually responsive neurons predominate in the deep layers of the superior colliculus (SC), the majority of them also receive sensory inputs from nonvisual sources (i.e. auditory and/or somatosensory). Most of these 'multisensory' neurons are able to synthesize their cross-modal inputs and, as a consequence, their responses to visual stimuli can be profoundly enhanced or depressed in the presence of a nonvisual cue. Whether response enhancement or response depression is produced by this multisensory interaction is predictable based on several factors. These include: the organization of a neuron's visual and nonvisual receptive fields; the relative spatial relationships of the different stimuli (to their respective receptive fields and to one another); and whether or not the neuron is innervated by a select population of cortical neurons. The response enhancement or depression of SC neurons via multisensory integration has significant survival value via its profound impact on overt attentive/orientation behaviors. Nevertheless, these multisensory processes are not present at birth, and require an extensive period of postnatal maturation. It seems likely that the sensory experiences obtained during this period play an important role in crafting the processes underlying these multisensory interactions.

  14. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening.
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Sight over sound in the judgment of music performance.

    PubMed

    Tsay, Chia-Jung

    2013-09-03

    Social judgments are made on the basis of both visual and auditory information, with consequential implications for our decisions. To examine the impact of visual information on expert judgment and its predictive validity for performance outcomes, this set of seven experiments in the domain of music offers a conservative test of the relative influence of vision versus audition. People consistently report that sound is the most important source of information in evaluating performance in music. However, the findings demonstrate that people actually depend primarily on visual information when making judgments about music performance. People reliably select the actual winners of live music competitions based on silent video recordings, but neither musical novices nor professional musicians were able to identify the winners based on sound recordings or recordings with both video and sound. The results highlight our natural, automatic, and nonconscious dependence on visual cues. The dominance of visual information emerges to the degree that it is overweighted relative to auditory information, even when sound is consciously valued as the core domain content.

  16. Sight over sound in the judgment of music performance

    PubMed Central

    Tsay, Chia-Jung

    2013-01-01

Social judgments are made on the basis of both visual and auditory information, with consequential implications for our decisions. To examine the impact of visual information on expert judgment and its predictive validity for performance outcomes, this set of seven experiments in the domain of music offers a conservative test of the relative influence of vision versus audition. People consistently report that sound is the most important source of information in evaluating performance in music. However, the findings demonstrate that people actually depend primarily on visual information when making judgments about music performance. People reliably select the actual winners of live music competitions based on silent video recordings, but neither musical novices nor professional musicians are able to identify the winners based on sound recordings or recordings with both video and sound. The results highlight our natural, automatic, and nonconscious dependence on visual cues. The dominance of visual information emerges to the degree that it is overweighted relative to auditory information, even when sound is consciously valued as the core domain content. PMID:23959902

  17. Visual Analytics for Law Enforcement: Deploying a Service-Oriented Analytic Framework for Web-based Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowson, Scott T.; Bruce, Joseph R.; Best, Daniel M.

    2009-04-14

This paper presents key components of the Law Enforcement Information Framework (LEIF) that provides communications, situational awareness, and visual analytics tools in a service-oriented architecture supporting web-based desktop and handheld device users. LEIF simplifies interfaces and visualizations of well-established visual analytical techniques to improve usability. Advanced analytics capability is maintained by enhancing the underlying processing to support the new interface. LEIF development is driven by real-world user feedback gathered through deployments at three operational law enforcement organizations in the US. LEIF incorporates a robust information ingest pipeline supporting a wide variety of information formats. LEIF also insulates interface and analytical components from information sources, making it easier to adapt the framework for many different data repositories.

  18. Data Visualization in Information Retrieval and Data Mining (SIG VIS).

    ERIC Educational Resources Information Center

    Efthimiadis, Efthimis

    2000-01-01

    Presents abstracts that discuss using data visualization for information retrieval and data mining, including immersive information space and spatial metaphors; spatial data using multi-dimensional matrices with maps; TREC (Text Retrieval Conference) experiments; users' information needs in cartographic information retrieval; and users' relevance…

  19. The shift-invariant discrete wavelet transform and application to speech waveform analysis.

    PubMed

    Enders, Jörg; Geng, Weihua; Li, Peijun; Frazier, Michael W; Scholl, David J

    2005-04-01

    The discrete wavelet transform may be used as a signal-processing tool for visualization and analysis of nonstationary, time-sampled waveforms. The highly desirable property of shift invariance can be obtained at the cost of a moderate increase in computational complexity, and accepting a least-squares inverse (pseudoinverse) in place of a true inverse. A new algorithm for the pseudoinverse of the shift-invariant transform that is easier to implement in array-oriented scripting languages than existing algorithms is presented together with self-contained proofs. Representing only one of the many and varied potential applications, a recorded speech waveform illustrates the benefits of shift invariance with pseudoinvertibility. Visualization shows the glottal modulation of vowel formants and frication noise, revealing secondary glottal pulses and other waveform irregularities. Additionally, performing sound waveform editing operations (i.e., cutting and pasting sections) on the shift-invariant wavelet representation automatically produces quiet, click-free section boundaries in the resulting sound. The capabilities of this wavelet-domain editing technique are demonstrated by changing the rate of a recorded spoken word. Individual pitch periods are repeated to obtain a half-speed result, and alternate individual pitch periods are removed to obtain a double-speed result. The original pitch and formant frequencies are preserved. In informal listening tests, the results are clear and understandable.
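The trade-off described above, redundancy in exchange for shift invariance, with a least-squares pseudoinverse for reconstruction, can be illustrated with a minimal one-level undecimated Haar transform. The normalization (averages and differences rather than 1/√2 scaling), the circular boundary handling, and the function names are simplifying assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def swt_haar(x):
    """One-level undecimated (shift-invariant) Haar transform with circular
    boundaries: N input samples yield 2N coefficients (redundant by design)."""
    xr = np.roll(x, -1)                    # circular neighbour x[(n+1) % N]
    approx = (x + xr) / 2.0                # local averages
    detail = (x - xr) / 2.0                # local differences
    return approx, detail

def iswt_haar(approx, detail):
    """Least-squares (pseudo)inverse: each sample has two redundant
    reconstructions from overlapping coefficient pairs; average them."""
    rec_a = approx + detail                # estimate of x[n] from pair n
    rec_b = np.roll(approx - detail, 1)    # estimate of x[n] from pair n-1
    return (rec_a + rec_b) / 2.0

x = np.array([4.0, 2.0, 6.0, 3.0])
approx, detail = swt_haar(x)
restored = iswt_haar(approx, detail)       # exact when coefficients are unedited
```

Shift invariance here means that transforming a circularly shifted signal yields circularly shifted coefficients, which is what lets cut-and-paste edits on the coefficients land cleanly back in the time domain.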

  20. Measurements and computational analysis of heat transfer and flow in a simulated turbine blade internal cooling passage

    NASA Technical Reports Server (NTRS)

    Russell, Louis M.; Thurman, Douglas R.; Simonyi, Patricia S.; Hippensteele, Steven A.; Poinsatte, Philip E.

    1993-01-01

    Visual and quantitative information was obtained on heat transfer and flow in a branched-duct test section that had several significant features of an internal cooling passage of a turbine blade. The objective of this study was to generate a set of experimental data that could be used to validate computer codes for internal cooling systems. Surface heat transfer coefficients and entrance flow conditions were measured at entrance Reynolds numbers of 45,000, 335,000, and 726,000. The heat transfer data were obtained using an Inconel heater sheet attached to the surface and coated with liquid crystals. Visual and quantitative flow field results using particle image velocimetry were also obtained for a plane at mid channel height for a Reynolds number of 45,000. The flow was seeded with polystyrene particles and illuminated by a laser light sheet. Computational results were determined for the same configurations and at matching Reynolds numbers; these surface heat transfer coefficients and flow velocities were computed with a commercially available code. The experimental and computational results were compared. Although some general trends did agree, there were inconsistencies in the temperature patterns as well as in the numerical results. These inconsistencies strongly suggest the need for further computational studies on complicated geometries such as the one studied.
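The surface heat transfer coefficients mentioned above are conventionally formed as a net surface heat flux divided by a surface-to-reference temperature difference, then nondimensionalized as a Nusselt number. The sketch below uses hypothetical flux, loss, and temperature values, not the paper's measured data:

```python
def heat_transfer_coefficient(q_heater, q_loss, t_surface, t_reference):
    """h = net surface heat flux / (surface - reference temperature),
    in W/(m^2 K) for fluxes in W/m^2 and temperatures in K (or degC)."""
    return (q_heater - q_loss) / (t_surface - t_reference)

def nusselt(h, d_hydraulic, k_fluid):
    """Dimensionless Nusselt number Nu = h * Dh / k."""
    return h * d_hydraulic / k_fluid

# Hypothetical values: 2500 W/m^2 heater flux, 250 W/m^2 estimated losses,
# a liquid-crystal-indicated surface at 45 degC, and 25 degC inlet air.
h = heat_transfer_coefficient(2500.0, 250.0, 45.0, 25.0)   # 112.5 W/(m^2 K)
nu = nusselt(h, d_hydraulic=0.05, k_fluid=0.026)
```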

  1. The shift-invariant discrete wavelet transform and application to speech waveform analysis

    NASA Astrophysics Data System (ADS)

    Enders, Jörg; Geng, Weihua; Li, Peijun; Frazier, Michael W.; Scholl, David J.

    2005-04-01

The discrete wavelet transform may be used as a signal-processing tool for visualization and analysis of nonstationary, time-sampled waveforms. The highly desirable property of shift invariance can be obtained at the cost of a moderate increase in computational complexity, and accepting a least-squares inverse (pseudoinverse) in place of a true inverse. A new algorithm for the pseudoinverse of the shift-invariant transform that is easier to implement in array-oriented scripting languages than existing algorithms is presented together with self-contained proofs. Representing only one of the many and varied potential applications, a recorded speech waveform illustrates the benefits of shift invariance with pseudoinvertibility. Visualization shows the glottal modulation of vowel formants and frication noise, revealing secondary glottal pulses and other waveform irregularities. Additionally, performing sound waveform editing operations (i.e., cutting and pasting sections) on the shift-invariant wavelet representation automatically produces quiet, click-free section boundaries in the resulting sound. The capabilities of this wavelet-domain editing technique are demonstrated by changing the rate of a recorded spoken word. Individual pitch periods are repeated to obtain a half-speed result, and alternate individual pitch periods are removed to obtain a double-speed result. The original pitch and formant frequencies are preserved. In informal listening tests, the results are clear and understandable.

  2. Disaster Emergency Rapid Assessment Based on Remote Sensing and Background Data

    NASA Astrophysics Data System (ADS)

    Han, X.; Wu, J.

    2018-04-01

The period from disaster onset to stable conditions is an important stage of disaster development. In addition to collecting and reporting information on the disaster situation, remote sensing images from satellites and drones and monitoring results from the stricken areas should be obtained. Fusing multi-source background data, such as population, geography, and topography, with remote sensing monitoring information allows geographic information system analysis to assess the disaster quickly and objectively. According to the characteristics of different hazards, models and methods driven by the requirements of rapid assessment are tested and screened. From remote sensing images, the features of exposed elements quickly determine the affected areas and intensity levels; key disaster information about affected hospitals and schools as well as cultivated land and crops is extracted; and decisions after emergency response are supported with visual assessment results.
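The fusion step described here amounts to overlaying a remote-sensing-derived hazard-intensity raster on background rasters such as population. A toy sketch with hypothetical grids (real workflows use co-registered GIS layers and calibrated intensity classes) might look like:

```python
import numpy as np

# Hypothetical co-registered rasters: hazard intensity class and residents per cell.
intensity = np.array([[0, 1, 2],
                      [1, 3, 2],
                      [0, 0, 1]])
population = np.array([[100, 250,  80],
                       [ 60, 500, 120],
                       [ 30,  10,  40]])

affected_mask = intensity >= 2             # cells at or above a damage threshold
affected_pop = int(population[affected_mask].sum())
by_level = {lvl: int(population[intensity == lvl].sum())
            for lvl in np.unique(intensity)}   # exposure broken down by intensity
```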

  3. The Hype over Hyperbolic Browsers.

    ERIC Educational Resources Information Center

    Allen, Maryellen Mott

    2002-01-01

    Considers complaints about the usability in the human-computer interaction aspect of information retrieval and discusses information visualization, the Online Library of Information Visualization Environments, hyperbolic information structure, subject searching, real-world applications, relational databases and hyperbolic trees, and the future of…

  4. The Case of the Missing Visual Details: Occlusion and Long-Term Visual Memory

    ERIC Educational Resources Information Center

    Williams, Carrick C.; Burkle, Kyle A.

    2017-01-01

    To investigate the critical information in long-term visual memory representations of objects, we used occlusion to emphasize 1 type of information or another. By occluding 1 solid side of the object (e.g., top 50%) or by occluding 50% of the object with stripes (like a picket fence), we emphasized visible information about the object, processing…

  5. Enabling Efficient Intelligence Analysis in Degraded Environments

    DTIC Science & Technology

    2013-06-01

    Magnets Grid widget for multidimensional information exploration ; and a record browser of Visual Summary Cards widget for fast visual identification of...evolution analysis; a Magnets Grid widget for multi- dimensional information exploration ; and a record browser of Visual Summary Cards widget for fast...attention and inattentional blindness. It also explores and develops various techniques to represent information in a salient way and provide efficient

  6. An occlusion paradigm to assess the importance of the timing of the quiet eye fixation.

    PubMed

    Vine, Samuel J; Lee, Don Hyung; Walters-Symons, Rosanna; Wilson, Mark R

    2017-02-01

    The aim of the study was to explore the significance of the 'timing' of the quiet eye (QE), and the relative importance of late (online control) or early (pre-programming) visual information for accuracy. Twenty-seven skilled golfers completed a putting task using an occlusion paradigm with three conditions: early (prior to backswing), late (during putter stroke), and no (control) occlusion of vision. Performance, QE, and kinematic variables relating to the swing were measured. Results revealed that providing only early visual information (occluding late visual information) had a significant detrimental effect on performance and kinematic measures, compared to the control condition (no occlusion), despite QE durations being maintained. Conversely, providing only late visual information (occluding early visual information) was not significantly detrimental to performance or kinematics, with results similar to those in the control condition. These findings imply that the visual information extracted during movement execution - the late proportion of the QE - is critical when golf putting. The results challenge the predominant view that the QE serves only a pre-programming function. We propose that the different proportions of the QE (before and during movement) may serve different functions in supporting accuracy in golf putting.

  7. Modeling and visualizing borehole information on virtual globes using KML

    NASA Astrophysics Data System (ADS)

    Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing

    2014-01-01

Advances in virtual globes and Keyhole Markup Language (KML) are providing Earth scientists with universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts and tube models representing strata. Subsequently, the level-of-detail based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, is developed to automatically convert borehole data into KML documents. A case study of using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable to visualize, integrate and disseminate borehole information on the Internet. The method we have developed has potential use in delivering geological information as a public service.
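A minimal version of the borehole-to-KML conversion (drilling locations as point placemarks) can be sketched with the standard library. The function name and record layout below are illustrative, not Borehole2KML's actual interface:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def boreholes_to_kml(boreholes):
    """Convert (name, lon, lat, depth_m) records into a minimal KML document
    with one point Placemark per drilling location."""
    ET.register_namespace("", KML_NS)                 # default KML namespace
    kml = ET.Element("{%s}kml" % KML_NS)
    doc = ET.SubElement(kml, "{%s}Document" % KML_NS)
    for name, lon, lat, depth in boreholes:
        pm = ET.SubElement(doc, "{%s}Placemark" % KML_NS)
        ET.SubElement(pm, "{%s}name" % KML_NS).text = name
        ET.SubElement(pm, "{%s}description" % KML_NS).text = f"depth: {depth} m"
        point = ET.SubElement(pm, "{%s}Point" % KML_NS)
        ET.SubElement(point, "{%s}coordinates" % KML_NS).text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

kml_text = boreholes_to_kml([("BH-01", 121.47, 31.23, 55.0)])
```

The resulting string can be saved as a `.kml` file and opened directly in a virtual globe application.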

  8. The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.

    PubMed

    Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal

    2016-01-01

Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (valid or invalid) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were slower in all cue conditions when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of the gaze cue stimuli.
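Stimuli of this kind are commonly produced by band-pass filtering in the 2-D Fourier domain. The sketch below uses a hard annular mask and arbitrary cutoff frequencies as simplifying assumptions (published stimuli typically use smooth filters and calibrated cycles-per-face cutoffs):

```python
import numpy as np

def spatial_frequency_filter(img, low, high):
    """Keep only spatial frequencies between `low` and `high` cycles/image
    via a hard annular mask in the 2-D Fourier domain (a simplified stand-in
    for the band-pass filtering used to make LSF/HSF/mid-band face stimuli)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)      # radial frequency per bin
    mask = (r >= low) & (r <= high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

rng = np.random.default_rng(0)
face = rng.standard_normal((64, 64))            # stand-in for a face image
lsf = spatial_frequency_filter(face, 0, 8)      # global (low) frequencies
hsf = spatial_frequency_filter(face, 24, 32)    # local (high) frequencies
```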

  9. Web-GIS platform for monitoring and forecasting of regional climate and ecological changes

    NASA Astrophysics Data System (ADS)

    Gordov, E. P.; Krupchatnikov, V. N.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Shulgina, T. M.

    2012-12-01

The growing volume of environmental data from sensors and model outputs makes the development of a software infrastructure, based on modern information and telecommunication technologies, for the support of integrated research in the Earth sciences an urgent and important task (Gordov et al., 2012; van der Wel, 2005). The original heterogeneity of datasets obtained from different sources and institutions not only hampers the interchange of data and analysis results but also complicates their intercomparison, decreasing the reliability of analysis results. Modern geophysical data processing techniques, however, allow different technological solutions to be combined when organizing such information resources. It is now generally accepted that an information-computational infrastructure should rely on the combined use of web and GIS technologies for creating applied information-computational web systems (Titov et al., 2009; Gordov et al., 2010; Gordov, Okladnikov and Titov, 2011). Using these approaches to develop Internet-accessible thematic information-computational systems, and arranging data and knowledge interchange between them, is a very promising way to create a distributed information-computational environment supporting multidisciplinary regional and global research in the Earth sciences, including analysis of climate changes and their impact on the spatial-temporal distribution and state of vegetation. 
An experimental software and hardware platform is presented that operates a web-oriented production and research center for regional climate change investigations, combining a modern Web 2.0 approach, GIS functionality, and capabilities for running climate and meteorological models, processing large geophysical datasets, visualization, joint software development by distributed research groups, scientific analysis, and the education of undergraduate and postgraduate students. The platform software (Shulgina et al., 2012; Okladnikov et al., 2012) includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Data preprocessing, execution, and visualization of results for the WRF and Planet Simulator models integrated into the platform are also provided. All functions of the center are accessible to a user through a web portal from a common graphical web browser, via an interactive graphical user interface that provides, in particular, visualization of processing results, selection of a geographical region of interest (pan and zoom), and manipulation of data layers (order, enable/disable, feature extraction). The platform gives users the capability to analyze heterogeneous geophysical data, including high-resolution data, and to discover tendencies in climatic and ecosystem changes within different multidisciplinary studies (Shulgina et al., 2011). Even an unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological, and satellite monitoring datasets through the unified graphical web interface.

  10. WISP information display system user's manual

    NASA Technical Reports Server (NTRS)

    Alley, P. L.; Smith, G. R.

    1978-01-01

The wind shears program (WISP) supports the collection of data on magnetic tape for permanent storage or analysis. The document describes: (1) the hardware and software configuration required to execute the WISP system, and the start-up procedure from a power-down condition; (2) the data collection task, calculations performed on the incoming data, and a description of the magnetic tape format; (3) the data display task and examples of displays obtained from execution of the real-time simulation program; and (4) the raw data dump task and examples of operator actions required to obtain the desired format. The procedures outlined herein allow continuous data collection at the expense of real-time visual displays.

  11. Transesophageal echocardiographic evaluation of baboons during microgravity induced by parabolic flight

    NASA Technical Reports Server (NTRS)

    Vernalis, Marina N.; Latham, Ricky D.; Fanton, John W.; Geffney, F. Andrew

    1993-01-01

Transthoracic echocardiography (TTE) is a feasible method to noninvasively examine cardiac anatomy during parabolic flight. However, transducer placement on the chest wall is very difficult to maintain during the transition to microgravity. In addition, TTE requires the use of low-frequency transducers, which limits resolution. Transesophageal echocardiography (TEE) is an established imaging technique that obtains echocardiographic information from the esophagus. It is a safe procedure and provides higher-quality images of cardiac structure than TTE. This study was designed to determine whether TEE is feasible to perform during parabolic flight, and whether acute central volume responses occur in the transition to zero gravity, by direct visualization of the cardiac chambers.

  12. Bandwidth Optimization On Design Of Visual Display Information System Based Networking At Politeknik Negeri Bali

    NASA Astrophysics Data System (ADS)

    Sudiartha, IKG; Catur Bawa, IGNB

    2018-01-01

Information cannot be separated from the social life of a community, especially in the world of education. One such field is academic information: the academic calendar, activity agendas, announcements, and campus news. In line with technological developments, text-based information boards are becoming obsolete. This calls for creativity in presenting information more quickly, accurately, and attractively by exploiting developments in digital technology and the Internet. This paper develops an application for delivering information as a visual display, applied to a computer network system with multimedia applications. The network-based application allows easy updating of data through Internet services and attractive presentation with multimedia support. The application "Networking Visual Display Information Unit" can serve as a medium that provides information services for students and academic employees that is more engaging and easier to update than a bulletin board. The information, presented as running text, latest information, agendas, the academic calendar, and video, offers an attractive presentation in line with technological developments at the Politeknik Negeri Bali. Through this research we aim to create the "Networking Visual Display Information Unit" software with optimal bandwidth usage by combining local data sources and data obtained through the network. The research delivers a visual display design with optimal bandwidth usage and the supporting software.

  13. A Visual Profile of Queensland Indigenous Children.

    PubMed

    Hopkins, Shelley; Sampson, Geoff P; Hendicott, Peter L; Wood, Joanne M

    2016-03-01

    Little is known about the prevalence of refractive error, binocular vision, and other visual conditions in Australian Indigenous children. This is important given the association of these visual conditions with reduced reading performance in the wider population, which may also contribute to the suboptimal reading performance reported in this population. The aim of this study was to develop a visual profile of Queensland Indigenous children. Vision testing was performed on 595 primary schoolchildren in Queensland, Australia. Vision parameters measured included visual acuity, refractive error, color vision, nearpoint of convergence, horizontal heterophoria, fusional vergence range, accommodative facility, AC/A ratio, visual motor integration, and rapid automatized naming. Near heterophoria, nearpoint of convergence, and near fusional vergence range were used to classify convergence insufficiency (CI). Although refractive error (Indigenous, 10%; non-Indigenous, 16%; p = 0.04) and strabismus (Indigenous, 0%; non-Indigenous, 3%; p = 0.03) were significantly less common in Indigenous children, CI was twice as prevalent (Indigenous, 10%; non-Indigenous, 5%; p = 0.04). Reduced visual information processing skills were more common in Indigenous children (reduced visual motor integration [Indigenous, 28%; non-Indigenous, 16%; p < 0.01] and slower rapid automatized naming [Indigenous, 67%; non-Indigenous, 59%; p = 0.04]). The prevalence of visual impairment (reduced visual acuity) and color vision deficiency was similar between groups. Indigenous children have less refractive error and strabismus than their non-Indigenous peers. However, CI and reduced visual information processing skills were more common in this group. Given that vision screenings primarily target visual acuity assessment and strabismus detection, this is an important finding as many Indigenous children with CI and reduced visual information processing may be missed. 
Emphasis should be placed on identifying children with CI and reduced visual information processing given the potential effect of these conditions on school performance.

  14. An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.

    PubMed

    Brito da Silva, Leonardo Enzo; Wunsch, Donald C

    2018-06-01

    Improved data visualization will be a significant tool to enhance cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the IT-vis cluster revealing capabilities, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
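The similarity measure at the heart of the IT-vis, Renyi's quadratic cross entropy via the cross information potential, can be estimated from two sample sets with Parzen Gaussian windows. This is a generic estimator sketch for 1-D samples, not the paper's implementation:

```python
import numpy as np

def gaussian(d2, sigma):
    """Isotropic Gaussian kernel evaluated on squared distances d2."""
    return np.exp(-d2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

def cross_information_potential(x, y, sigma=1.0):
    """Parzen-window estimate of the CIP of two 1-D sample sets: the mean
    pairwise kernel with bandwidth sigma*sqrt(2) (the two windows convolved)."""
    d2 = (x[:, None] - y[None, :]) ** 2
    return gaussian(d2, sigma * np.sqrt(2.0)).mean()

def renyi_quadratic_cross_entropy(x, y, sigma=1.0):
    """H2(p; q) = -log CIP(p, q): small when the two distributions overlap."""
    return -np.log(cross_information_potential(x, y, sigma))
```

In the IT-vis setting, `x` and `y` would be the data subsets mapped to two adjacent SOM neurons; a large cross entropy between them marks a likely cluster boundary.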

  15. Eye-Hand Synergy and Intermittent Behaviors during Target-Directed Tracking with Visual and Non-visual Information

    PubMed Central

    Huang, Chien-Ting; Hwang, Ing-Shiou

    2012-01-01

    Visual feedback and non-visual information play different roles in tracking of an external target. This study explored the respective roles of the visual and non-visual information in eleven healthy volunteers who coupled the manual cursor to a rhythmically moving target of 0.5 Hz under three sensorimotor conditions: eye-alone tracking (EA), eye-hand tracking with visual feedback of manual outputs (EH tracking), and the same tracking without such feedback (EHM tracking). Tracking error, kinematic variables, and movement intermittency (saccade and speed pulse) were contrasted among tracking conditions. The results showed that EHM tracking exhibited larger pursuit gain, less tracking error, and less movement intermittency for the ocular plant than EA tracking. With the vision of manual cursor, EH tracking achieved superior tracking congruency of the ocular and manual effectors with smaller movement intermittency than EHM tracking, except that the rate precision of manual action was similar for both types of tracking. The present study demonstrated that visibility of manual consequences altered mutual relationships between movement intermittency and tracking error. The speed pulse metrics of manual output were linked to ocular tracking error, and saccade events were time-locked to the positional error of manual tracking during EH tracking. In conclusion, peripheral non-visual information is critical to smooth pursuit characteristics and rate control of rhythmic manual tracking. Visual information adds to eye-hand synchrony, underlying improved amplitude control and elaborate error interpretation during oculo-manual tracking. PMID:23236498
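Two of the reported measures, pursuit gain at the 0.5-Hz driving frequency and tracking error, can be computed from target and response time series. The simulated signals and the lag below are hypothetical, chosen only to illustrate the computation:

```python
import numpy as np

fs, f0, dur = 100.0, 0.5, 20.0                      # sample rate (Hz), target freq, seconds
t = np.arange(0, dur, 1.0 / fs)
target = np.sin(2 * np.pi * f0 * t)                 # rhythmic target at 0.5 Hz
response = 0.9 * np.sin(2 * np.pi * f0 * t - 0.2)   # smaller, lagging tracking output

def amplitude_at(signal, freq, fs):
    """Single-sided spectral amplitude at the bin nearest `freq`."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return 2.0 * np.abs(spec[np.argmin(np.abs(freqs - freq))]) / len(signal)

pursuit_gain = amplitude_at(response, f0, fs) / amplitude_at(target, f0, fs)
rms_error = np.sqrt(np.mean((response - target) ** 2))
```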

  16. Motor-visual neurons and action recognition in social interactions.

    PubMed

    de la Rosa, Stephan; Bülthoff, Heinrich H

    2014-04-01

Cook et al. suggest that motor-visual neurons originate from associative learning. This suggestion has interesting implications for the processing of socially relevant visual information in social interactions. Here, we discuss two aspects of the associative learning account that seem particularly relevant for the visual recognition of social information in social interactions, namely context-specific and contingency-based learning.

  17. The dissociations of visual processing of "hole" and "no-hole" stimuli: A functional magnetic resonance imaging study.

    PubMed

    Meng, Qianli; Huang, Yan; Cui, Ding; He, Lixia; Chen, Lin; Ma, Yuanye; Zhao, Xudong

    2018-05-01

"Where to begin" is a fundamental question of vision. A "global-first" topological approach proposed that the first step in object representation is to extract topological properties, especially whether the object has a hole or not. Numerous psychophysical studies found that the hole (closure) could be rapidly recognized by the visual system as a primitive property. However, neuroimaging studies showed that the temporal lobe (IT), which lies at a late stage of the ventral pathway, was involved as a dedicated region. It appeared paradoxical that IT served as a key region for processing this early component of visual information. Is there a distinct fast route that transmits hole information to IT? We hypothesized that a fast noncortical pathway might participate in processing holes. To address this issue, a backward masking paradigm combined with functional magnetic resonance imaging (fMRI) was applied to measure neural responses to hole and no-hole stimuli in anatomically defined cortical and subcortical regions of interest (ROIs) under different visual awareness levels by modulating masking delays. For no-hole stimuli, the neural activation of cortical sites was greatly attenuated when the no-hole perception was impaired by strong masking, whereas an enhanced neural response to hole stimuli in non-cortical sites was obtained when the stimulus was rendered less visible. The results suggested that whereas the cortical route is required to drive a perceptual response for no-hole stimuli, a subcortical route might be involved in coding the hole feature, resulting in rapid hole perception in primitive vision.

  18. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
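The Growing When Required mechanism the model builds on (Marsland et al., 2002) inserts a node when the best-matching unit is both a poor match (low activity) and already well trained (low habituation). A much-simplified sketch of that insertion rule follows; the thresholds, habituation decay, and seeding are illustrative, and edges, ages, and neighbor updates are omitted:

```python
import numpy as np

class GWRSketch:
    """Toy Growing When Required network: prototypes plus habituation counters."""

    def __init__(self, dim, act_thresh=0.8, hab_thresh=0.3):
        self.w = [np.zeros(dim), np.ones(dim)]    # two seed prototypes
        self.hab = [1.0, 1.0]                     # habituation, decays with use
        self.act_thresh, self.hab_thresh = act_thresh, hab_thresh

    def step(self, x, eps=0.1, tau=0.3):
        dists = [np.linalg.norm(x - w) for w in self.w]
        b = int(np.argmin(dists))                 # best-matching unit (BMU)
        activity = np.exp(-dists[b])              # 1 for a perfect match
        if activity < self.act_thresh and self.hab[b] < self.hab_thresh:
            self.w.append((self.w[b] + x) / 2.0)  # insert node between BMU and x
            self.hab.append(1.0)
        else:
            self.w[b] = self.w[b] + eps * self.hab[b] * (x - self.w[b])
            self.hab[b] = max(0.0, self.hab[b] - tau)  # simplified linear decay
        return len(self.w)                        # current node count
```

Feeding a sample far from both seeds first trains the nearest prototype; once that prototype's habituation drops below threshold while the match remains poor, a new node is inserted, which is how the topology grows to match the input space.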

  19. PolarBRDF: A general purpose Python package for visualization and quantitative analysis of multi-angular remote sensing measurements

    NASA Astrophysics Data System (ADS)

    Singh, Manoj K.; Gautam, Ritesh; Gatebe, Charles K.; Poudyal, Rajesh

    2016-11-01

    The Bidirectional Reflectance Distribution Function (BRDF) is a fundamental concept for characterizing the reflectance property of a surface, and helps in the analysis of remote sensing data from satellite, airborne and surface platforms. Multi-angular remote sensing measurements are required for the development and evaluation of BRDF models for improved characterization of surface properties. However, multi-angular data and the associated BRDF models are typically multidimensional, involving multi-angular and multi-wavelength information. Effective visualization of such complex multidimensional measurements for different wavelength combinations is presently somewhat lacking in the literature, and could serve as a potentially useful research and teaching tool in aiding both interpretation and analysis of BRDF measurements. This article describes a newly developed software package in Python (PolarBRDF) to help visualize and analyze multi-angular data in polar and False Color Composite (FCC) forms. PolarBRDF also includes functionality for computing important multi-angular reflectance/albedo parameters including spectral albedo, principal plane reflectance and spectral reflectance slope. Application of PolarBRDF is demonstrated using various case studies obtained from airborne multi-angular remote sensing measurements using NASA's Cloud Absorption Radiometer (CAR). Our visualization program also provides functionality for untangling complex surface/atmosphere features embedded in pixel-based remote sensing measurements, such as the FCC imagery generation of BRDF measurements of grasslands in the presence of wildfire smoke and clouds. Furthermore, PolarBRDF also provides quantitative information on the angular distribution of scattered surface/atmosphere radiation, in the form of relevant BRDF variables such as sunglint, hotspot and scattering statistics.
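    The polar visualization this record describes (reflectance as a function of view zenith angle and relative azimuth) can be sketched with standard matplotlib; PolarBRDF itself is a separate package, and the data below are synthetic, with an illustrative "hotspot" peak placed in the backscatter direction purely as an assumption for the demo.

    ```python
    import numpy as np
    import matplotlib
    matplotlib.use("Agg")  # render off-screen, no display needed
    import matplotlib.pyplot as plt

    # Synthetic multi-angular reflectance grid:
    # radius = view zenith angle, angle = relative azimuth.
    vza = np.radians(np.linspace(0, 80, 41))    # view zenith angles
    raa = np.radians(np.linspace(0, 360, 73))   # relative azimuth angles
    R, A = np.meshgrid(vza, raa)

    # Toy BRDF with a hotspot peak at vza=30 deg, raa=180 deg (synthetic).
    brdf = 0.2 + 0.3 * np.exp(
        -((np.degrees(R) - 30) ** 2) / 200
        - ((np.degrees(A) - 180) ** 2) / 800
    )

    # Polar plot: azimuth as angle, zenith angle (in degrees) as radius.
    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    pcm = ax.pcolormesh(A, np.degrees(R), brdf, shading="auto")
    ax.set_theta_zero_location("N")
    fig.colorbar(pcm, ax=ax, label="Reflectance")
    fig.savefig("brdf_polar.png")
    ```

    The same grid layout also makes the quantitative parameters mentioned in the abstract straightforward: a principal-plane cut, for instance, is simply the two azimuth rows at 0 and 180 degrees.
    
    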

  20. PolarBRDF: A general purpose Python package for visualization and quantitative analysis of multi-angular remote sensing measurements

    NASA Astrophysics Data System (ADS)

    Poudyal, R.; Singh, M.; Gautam, R.; Gatebe, C. K.

    2016-12-01

    The Bidirectional Reflectance Distribution Function (BRDF) is a fundamental concept for characterizing the reflectance property of a surface, and helps in the analysis of remote sensing data from satellite, airborne and surface platforms. Multi-angular remote sensing measurements are required for the development and evaluation of BRDF models for improved characterization of surface properties. However, multi-angular data and the associated BRDF models are typically multidimensional, involving multi-angular and multi-wavelength information. Effective visualization of such complex multidimensional measurements for different wavelength combinations is presently somewhat lacking in the literature, and could serve as a potentially useful research and teaching tool in aiding both interpretation and analysis of BRDF measurements. This article describes a newly developed software package in Python (PolarBRDF) to help visualize and analyze multi-angular data in polar and False Color Composite (FCC) forms. PolarBRDF also includes functionality for computing important multi-angular reflectance/albedo parameters including spectral albedo, principal plane reflectance and spectral reflectance slope. Application of PolarBRDF is demonstrated using various case studies obtained from airborne multi-angular remote sensing measurements using NASA's Cloud Absorption Radiometer (CAR; http://car.gsfc.nasa.gov/). Our visualization program also provides functionality for untangling complex surface/atmosphere features embedded in pixel-based remote sensing measurements, such as the FCC imagery generation of BRDF measurements of grasslands in the presence of wildfire smoke and clouds. Furthermore, PolarBRDF also provides quantitative information on the angular distribution of scattered surface/atmosphere radiation, in the form of relevant BRDF variables such as sunglint, hotspot and scattering statistics.
