Sample records for visual information acquired

  1. Social media interruption affects the acquisition of visually, not aurally, acquired information during a pathophysiology lecture.

    PubMed

    Marone, Jane R; Thakkar, Shivam C; Suliman, Neveen; O'Neill, Shannon I; Doubleday, Alison F

    2018-06-01

    Poor academic performance from extensive social media usage appears to be due to students' inability to multitask between distractions and academic work. However, the degree to which visually distracted students can acquire lecture information presented aurally is unknown. This study examined the ability of students visually distracted by social media to acquire information presented during a voice-over PowerPoint lecture, and to compare performance on examination questions derived from information presented aurally vs. that presented visually. Students ( n = 20) listened to a 42-min cardiovascular pathophysiology lecture containing embedded cartoons while taking notes. The experimental group ( n = 10) was visually, but not aurally, distracted by social media during times when cartoon information was presented, ~40% of total lecture time. Overall performance among distracted students on a follow-up, open-note quiz was 30% poorer than that for controls ( P < 0.001). When the modality of presentation (visual vs. aural) was compared, performance decreased on examination questions from information presented visually. However, performance on questions from information presented aurally was similar to that of controls. Our findings suggest the ability to acquire information during lecture may vary, depending on the degree of competition between the modalities of the distraction and the lecture presentation. Within the context of current literature, our findings also suggest that timing of the distraction relative to delivery of material examined affects performance more than total distraction time. Therefore, when delivering lectures, instructors should incorporate organizational cues and active learning strategies that assist students in maintaining focus and acquiring relevant information.

  2. Social Media Interruption Affects the Acquisition of Visually, Not Aurally, Acquired Information during a Pathophysiology Lecture

    ERIC Educational Resources Information Center

    Marone, Jane R.; Thakkar, Shivam C.; Suliman, Neveen; O'Neill, Shannon I.; Doubleday, Alison F.

    2018-01-01

    Poor academic performance from extensive social media usage appears to be due to students' inability to multitask between distractions and academic work. However, the degree to which visually distracted students can acquire lecture information presented aurally is unknown. This study examined the ability of students visually distracted by social…

  3. Acquired Codes of Meaning in Data Visualization and Infographics: Beyond Perceptual Primitives.

    PubMed

    Byrne, Lydia; Angus, Daniel; Wiles, Janet

    2016-01-01

    While information visualization frameworks and heuristics have traditionally been reluctant to include acquired codes of meaning, designers are making use of them in a wide variety of ways. Acquired codes leverage a user's experience to understand the meaning of a visualization. They range from figurative visualizations which rely on the reader's recognition of shapes, to conventional arrangements of graphic elements which represent particular subjects. In this study, we used content analysis to codify acquired meaning in visualization. We applied the content analysis to a set of infographics and data visualizations which are exemplars of innovative and effective design. 88% of the infographics and 71% of data visualizations in the sample contain at least one use of figurative visualization. Conventions on the arrangement of graphics are also widespread in the sample. In particular, a comparison of representations of time and other quantitative data showed that conventions can be specific to a subject. These results suggest that there is a need for information visualization research to expand its scope beyond perceptual channels, to include social and culturally constructed meaning. Our paper demonstrates a viable method for identifying figurative techniques and graphic conventions and integrating them into heuristics for visualization design.

  4. Information efficiency in visual communication

    NASA Astrophysics Data System (ADS)

    Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1993-08-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  5. Information efficiency in visual communication

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1993-01-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.
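
    The preceding two records (the same paper indexed twice) contrast energy-based and information-based bit allocation across frequency bands. Below is a minimal numerical sketch of that contrast, not the authors' formulation; the per-band statistics and bit budget are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical per-band statistics for an acquired signal: signal variance and
    # noise variance in each spatial-frequency band (illustrative numbers only).
    signal_var = np.array([4.0, 2.0, 1.0, 0.5, 0.25, 0.1])
    noise_var = np.full_like(signal_var, 0.05)
    total_bits = 12  # total bit budget to distribute across bands

    # Energy-based allocation: bits proportional to band energy (variance).
    bits_energy = total_bits * signal_var / signal_var.sum()

    # Information-based allocation: bits proportional to each band's capacity,
    # C_k = 0.5 * log2(1 + SNR_k), so bands carrying little information get few bits.
    capacity = 0.5 * np.log2(1.0 + signal_var / noise_var)
    bits_info = total_bits * capacity / capacity.sum()

    for k, (be, bi) in enumerate(zip(bits_energy, bits_info)):
        print(f"band {k}: energy-based {be:.2f} bits, information-based {bi:.2f} bits")
    ```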

  6. Behavior Selection of Mobile Robot Based on Integration of Multimodal Information

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kaneko, Masahide

    Recently, biologically inspired robots have been developed that acquire the capacity for directing visual attention to salient stimuli in the audiovisual environment. To realize this behavior, a general method is to calculate saliency maps representing how strongly external information attracts the robot's visual attention, taking into account both the audiovisual information and the robot's motion status. In this paper, we present a visual attention model in which three modalities, namely audio information, visual information and the robot's motor status, are considered, whereas previous research has not considered all of them together. Firstly, we introduce a 2-D density map whose values denote how much attention the robot pays to each spatial location. We then model the attention density using a Bayesian network in which the robot's motion status is involved. Secondly, the information from the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot directs its attention to the locations where the integrate-and-fire neurons fire. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that robots can acquire the visual information related to their behaviors by using the attention model that considers motion status. The robot can select behaviors that adapt to the dynamic environment as well as switch to another task according to the recognition results of visual attention.
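
    A speculative, highly simplified sketch of the pipeline this record describes, with stand-in saliency maps and invented integrate-and-fire parameters (none of these values come from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    H, W = 32, 32

    def normalize(m):
        return (m - m.min()) / (m.max() - m.min() + 1e-9)

    # Stand-in saliency maps (in the paper these would come from audio localization,
    # visual features, and a Bayesian model of the robot's motion status).
    audio_saliency = normalize(rng.random((H, W)))
    visual_saliency = normalize(rng.random((H, W)))
    attention_density = normalize(rng.random((H, W)))  # motion-dependent prior

    # Leaky integrate-and-fire accumulation over time; a location "fires"
    # (captures attention) once its membrane potential crosses the threshold.
    potential = np.zeros((H, W))
    leak, threshold = 0.9, 5.0
    for t in range(20):
        drive = attention_density * (audio_saliency + visual_saliency)
        potential = leak * potential + drive
        fired = potential >= threshold
        if fired.any():
            y, x = np.unravel_index(np.argmax(potential), potential.shape)
            print(f"t={t}: attention directed to location ({y}, {x})")
            potential[fired] = 0.0  # reset fired neurons
    ```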

  7. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558

  8. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
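
    The preceding two records (the same paper indexed in PMC and PubMed) describe learning audiovisual categories and cue weights from distributional statistics with Gaussian mixture models. A toy sketch of that general approach, using invented cue distributions rather than the authors' data:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)

    # Two hypothetical phonological categories, each emitting an auditory cue
    # (e.g. voice onset time) and a visual cue (e.g. lip aperture), with noise.
    cat_a = rng.normal(loc=[10.0, 0.2], scale=[5.0, 0.05], size=(500, 2))
    cat_b = rng.normal(loc=[50.0, 0.6], scale=[8.0, 0.05], size=(500, 2))
    cues = np.vstack([cat_a, cat_b])

    # Unsupervised statistical learning: a 2-component GMM over the joint
    # audiovisual cue space recovers the categories without labels.
    gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
    gmm.fit(cues)

    # Component variances act as (inverse) cue reliabilities: the smaller the
    # within-category variance of a cue, the more weight it earns in perception.
    print("component means:\n", gmm.means_)
    print("per-cue variances (lower = more reliable):\n", gmm.covariances_)

    # Classify a mismatched token: auditory cue near category A, visual cue near
    # category B; the posterior shows how the two cues are weighed against each other.
    token = np.array([[12.0, 0.55]])
    print("posterior over categories:", gmm.predict_proba(token))
    ```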

  9. How scientists develop competence in visual communication

    NASA Astrophysics Data System (ADS)

    Ostergren, Marilyn

    Visuals (maps, charts, diagrams and illustrations) are an important tool for communication in most scientific disciplines, which means that scientists benefit from having strong visual communication skills. This dissertation examines the nature of competence in visual communication and the means by which scientists acquire this competence. This examination takes the form of an extensive multi-disciplinary integrative literature review and a series of interviews with graduate-level science students. The results are presented as a conceptual framework that lays out the components of competence in visual communication, including the communicative goals of science visuals, the characteristics of effective visuals, the skills and knowledge needed to create effective visuals and the learning experiences that promote the acquisition of these forms of skill and knowledge. This conceptual framework can be used to inform pedagogy and thus help graduate students achieve a higher level of competency in this area; it can also be used to identify aspects of acquiring competence in visual communication that need further study.

  10. Mapping the navigational knowledge of individually foraging ants, Myrmecia croslandi

    PubMed Central

    Narendra, Ajay; Gourmaud, Sarah; Zeil, Jochen

    2013-01-01

    Ants are efficient navigators, guided by path integration and visual landmarks. Path integration is the primary strategy in landmark-poor habitats, but landmarks are readily used when available. The landmark panorama provides reliable information about heading direction, routes and specific location. Visual memories for guidance are often acquired along routes or near to significant places. Over what area can such locally acquired memories provide information for reaching a place? This question is unusually approachable in the solitary foraging Australian jack jumper ant, since individual foragers typically travel to one or two nest-specific foraging trees. We find that within 10 m from the nest, ants both with and without home vector information available from path integration return directly to the nest from all compass directions, after briefly scanning the panorama. By reconstructing panoramic views within the successful homing range, we show that in the open woodland habitat of these ants, snapshot memories acquired close to the nest provide sufficient navigational information to determine nest-directed heading direction over a surprisingly large area, including areas that animals may have not visited previously. PMID:23804615
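
    A common computational reading of snapshot-based homing, offered here only as an illustration (the rotational image difference function is a standard tool, not necessarily the authors' analysis): compare the current panoramic view with a stored nest snapshot at every rotation and head in the best-matching direction.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic 1-D panoramic "skyline" stored at the nest (360 columns = degrees).
    nest_snapshot = np.convolve(rng.random(360), np.ones(15) / 15, mode="same")

    # Current view: the same panorama seen from a heading rotated by 137 degrees,
    # plus noise standing in for the displacement from the nest.
    true_rotation = 137
    current_view = np.roll(nest_snapshot, true_rotation) + rng.normal(0, 0.01, 360)

    # Rotational image difference function: RMS difference between the current
    # view and the snapshot for every candidate heading; the minimum gives the
    # nest-directed heading.
    ridf = [np.sqrt(np.mean((np.roll(current_view, -r) - nest_snapshot) ** 2))
            for r in range(360)]
    print("estimated heading offset:", int(np.argmin(ridf)), "deg (true:", true_rotation, "deg)")
    ```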

  11. Photogrammetry for Archaeology: Collecting Pieces Together

    NASA Astrophysics Data System (ADS)

    Chibunichev, A. G.; Knyaz, V. A.; Zhuravlev, D. V.; Kurkov, V. M.

    2018-05-01

    The complexity of retrieving and understanding archaeological data requires applying different techniques, tools and sensors for information gathering, processing and documenting. Archaeological research now has an interdisciplinary nature, involving technologies based on different physical principles for retrieving information about archaeological findings. An important part of archaeological data is visual and spatial information, which allows reconstructing the appearance of the findings and the relations between them. Photogrammetry has great potential for the accurate acquisition of spatial and visual data at different scales and levels of detail, allowing the creation of archaeological documents of a new type and quality. The aim of the presented study is to develop an approach for creating new forms of archaeological documents, together with a pipeline for producing them and collecting them into one holistic model describing an archaeological site. A set of techniques is developed for acquiring and integrating spatial and visual data at different levels of detail. The application of the developed techniques is demonstrated for documenting the Bosporus archaeological expedition of the Russian State Historical Museum.

  12. What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    PubMed

    Akolkar, Himanshu; Meyer, Cedric; Clady, Xavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad

    2015-03-01

    This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based model of biological visual systems. The use of images naturally leads to generating incorrect, artificial and redundant spike timings and, more importantly, contradicts biological findings indicating that visual processing is massively parallel and asynchronous with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in output that is optimally sparse in space and time, pixel-individually and precisely timed only when new (previously unknown) information is available (event based). This letter uses the high-temporal-resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
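
    The record above describes pixel-individual, asynchronous level-crossing sampling. A minimal single-pixel sketch of that sampling scheme (the threshold and signal are invented for illustration):

    ```python
    import numpy as np

    # Simulated log-intensity trace for a single pixel (arbitrary units).
    t = np.linspace(0, 1, 1000)
    rng = np.random.default_rng(3)
    intensity = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t) + 0.02 * rng.normal(size=t.size)

    # Level-crossing (event-based) sampling: emit an event only when the signal
    # has changed by a fixed contrast step since the last event; each event keeps
    # the precise time of the crossing rather than a frame timestamp.
    threshold = 0.1
    events = []            # (time, polarity) pairs: +1 = ON (increase), -1 = OFF
    reference = intensity[0]
    for ti, xi in zip(t, intensity):
        while xi - reference >= threshold:
            reference += threshold
            events.append((ti, +1))
        while reference - xi >= threshold:
            reference -= threshold
            events.append((ti, -1))

    print(f"{len(events)} events from {t.size} samples "
          f"(sparse, precisely timed, no redundant frames)")
    ```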

  13. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.

    PubMed

    Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E

    2010-11-01

    Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices and car horns and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia it is advantageous to develop a superior use of voices for person identity recognition in everyday life.

  14. Parallel and Serial Processes in Visual Search

    ERIC Educational Resources Information Center

    Thornton, Thomas L.; Gilden, David L.

    2007-01-01

    A long-standing issue in the study of how people acquire visual information centers around the scheduling and deployment of attentional resources: Is the process serial, or is it parallel? A substantial empirical effort has been dedicated to resolving this issue. However, the results remain largely inconclusive because the methodologies that have…

  15. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    PubMed Central

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants—younger (< 50 years) normally sighted, older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  16. Visual perception and social foraging in birds.

    PubMed

    Fernández-Juricic, Esteban; Erichsen, Jonathan T; Kacelnik, Alex

    2004-01-01

    Birds gather information about their environment mainly through vision by scanning their surroundings. Many prevalent models of social foraging assume that foraging and scanning are mutually exclusive. Although this assumption is valid for birds with narrow visual fields, these models have also been applied to species with wide fields. In fact, available models do not make precise predictions for birds with large visual fields, in which the head-up, head-down dichotomy is not accurate and, moreover, do not consider the effects of detection distance and limited attention. Studies of how different types of visual information are acquired as a function of body posture and of how information flows within flocks offer new insights into the costs and benefits of living in groups.

  17. An infrared/video fusion system for military robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.W.; Roberts, R.S.

    1997-08-05

    Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information, including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images; they are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
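
    One simple way to let the infrared channel enhance the visual image rather than obscure it, sketched here with synthetic co-registered frames; this illustrates the stated goal, not the system's actual fusion algorithm.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(4)

    # Stand-ins for co-registered frames: a visible-light image and an IR image
    # (real use would load calibrated, aligned sensor frames).
    visual = rng.random((240, 320))
    infrared = rng.random((240, 320))

    # Keep the visual image as the base; add only the IR detail (high-pass
    # structure such as warm targets) with a modest gain, so the IR channel
    # enhances the visual image instead of obscuring it.
    ir_detail = infrared - gaussian_filter(infrared, sigma=5)
    fused = np.clip(visual + 0.4 * ir_detail, 0.0, 1.0)

    print("fused frame:", fused.shape, fused.dtype)
    ```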

  18. Terminology model discovery using natural language processing and visualization techniques.

    PubMed

    Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol

    2006-12-01

    Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.

  19. An Acquired Deficit of Audiovisual Speech Processing

    ERIC Educational Resources Information Center

    Hamilton, Roy H.; Shenton, Jeffrey T.; Coslett, H. Branch

    2006-01-01

    We report a 53-year-old patient (AWF) who has an acquired deficit of audiovisual speech integration, characterized by a perceived temporal mismatch between speech sounds and the sight of moving lips. AWF was less accurate on an auditory digit span task with vision of a speaker's face as compared to a condition in which no visual information from…

  20. The singular nature of auditory and visual scene analysis in autism

    PubMed Central

    Lin, I.-Fan; Shirama, Aya; Kato, Nobumasa

    2017-01-01

    Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue 'Auditory and visual scene analysis'. PMID:28044025

  21. The Assessment of Professional Standard Competence of Teachers of Students with Visual Impairments

    ERIC Educational Resources Information Center

    Lee, Hae-Gyun; Kim, Jung-Hyun; Kang, Jong-Gu

    2008-01-01

    The purpose of this study was to assess the level of competence needed for teachers of the visually impaired. The assessment was based on Professional Standard Competence developed by the Council for Exceptional Children (CEC) for special education teachers in 2001. The researchers used questionnaires to acquire information about 190 South Korean…

  22. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    PubMed

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.

  23. Impaired capacity of cerebellar patients to perceive and learn two-dimensional shapes based on kinesthetic cues.

    PubMed

    Shimansky, Y; Saling, M; Wunderlich, D A; Bracha, V; Stelmach, G E; Bloedel, J R

    1997-01-01

    This study addresses the issue of the role of the cerebellum in the processing of sensory information by determining the capability of cerebellar patients to acquire and use kinesthetic cues received via the active or passive tracing of an irregular shape while blindfolded. Patients with cerebellar lesions and age-matched healthy controls were tested on four tasks: (1) learning to discriminate a reference shape from three others through the repeated tracing of the reference template; (2) reproducing the reference shape from memory by drawing blindfolded; (3) performing the same task with vision; and (4) visually recognizing the reference shape. The cues used to acquire and then to recognize the reference shape were generated under four conditions: (1) "active kinesthesia," in which cues were acquired by the blindfolded subject while actively tracing a reference template; (2) "passive kinesthesia," in which the tracing was performed while the hand was guided passively through the template; (3) "sequential vision," in which the shape was visualized by the serial exposure of small segments of its outline; and (4) "full vision," in which the entire shape was visualized. The sequential vision condition was employed to emulate the sequential way in which kinesthetic information is acquired while tracing the reference shape. The results demonstrate a substantial impairment of cerebellar patients in their capability to perceive two-dimensional irregular shapes based only on kinesthetic cues. There also is evidence that this deficit in part relates to a reduced capacity to integrate temporal sequences of sensory cues into a complete image useful for shape discrimination tasks or for reproducing the shape through drawing. Consequently, the cerebellum has an important role in this type of sensory information processing even when it is not directly associated with the execution of movements.

  24. Patient-tailored multimodal neuroimaging, visualization and quantification of human intra-cerebral hemorrhage

    NASA Astrophysics Data System (ADS)

    Goh, Sheng-Yang M.; Irimia, Andrei; Vespa, Paul M.; Van Horn, John D.

    2016-03-01

    In traumatic brain injury (TBI) and intracerebral hemorrhage (ICH), the heterogeneity of lesion sizes and types necessitates a variety of imaging modalities to acquire a comprehensive perspective on injury extent. Although it is advantageous to combine imaging modalities and to leverage their complementary benefits, there are difficulties in integrating information across imaging types. Thus, it is important that efforts be dedicated to the creation and sustained refinement of resources for multimodal data integration. Here, we propose a novel approach to the integration of neuroimaging data acquired from human patients with TBI/ICH using various modalities; we also demonstrate the integrated use of multimodal magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) data for TBI analysis based on both visual observations and quantitative metrics. 3D models of healthy-appearing tissues and TBI-related pathology are generated, both of which are derived from multimodal imaging data. MRI volumes acquired using FLAIR, SWI, and T2 GRE are used to segment pathology. Healthy tissues are segmented using user-supervised tools, and results are visualized using a novel graphical approach called a 'connectogram', where brain connectivity information is depicted within a circle of radially aligned elements. Inter-region connectivity and its strength are represented by links of variable opacities drawn between regions, where opacity reflects the percentage longitudinal change in brain connectivity density. Our method for integrating, analyzing and visualizing structural brain changes due to TBI and ICH can promote knowledge extraction and enhance the understanding of mechanisms underlying recovery.
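
    The connectogram described above maps the percentage longitudinal change in connectivity density to link opacity. A minimal sketch of that mapping, with invented connectivity matrices and an assumed opacity range:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical inter-region connectivity density matrices at two time points
    # (acute and follow-up scans) for a handful of brain regions.
    baseline = rng.uniform(0.1, 1.0, size=(6, 6))
    followup = baseline * rng.uniform(0.5, 1.5, size=(6, 6))

    # Percentage longitudinal change per region pair, mapped to link opacity:
    # larger changes draw more opaque links in the connectogram.
    pct_change = 100.0 * (followup - baseline) / baseline
    opacity = np.clip(np.abs(pct_change) / 100.0, 0.05, 1.0)

    i, j = np.unravel_index(np.argmax(np.abs(pct_change)), pct_change.shape)
    print(f"largest change: regions {i}-{j}, {pct_change[i, j]:+.1f}% -> opacity {opacity[i, j]:.2f}")
    ```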

  25. The next public health revolution: public health information fusion and social networks.

    PubMed

    Khan, Ali S; Fleischauer, Aaron; Casani, Julie; Groseclose, Samuel L

    2010-07-01

    Social, political, and economic disruptions caused by natural and human-caused public health emergencies have catalyzed public health efforts to expand the scope of biosurveillance and increase the timeliness, quality, and comprehensiveness of disease detection, alerting, response, and prediction. Unfortunately, efforts to acquire, render, and visualize the diversity of health intelligence information are hindered by its wide distribution across disparate fields, multiple levels of government, and the complex interagency environment. Achieving this new level of situation awareness within public health will require a fundamental cultural shift in methods of acquiring, analyzing, and disseminating information. The notion of information "fusion" may provide opportunities to expand data access, analysis, and information exchange to better inform public health action.

  26. The Next Public Health Revolution: Public Health Information Fusion and Social Networks

    PubMed Central

    Fleischauer, Aaron; Casani, Julie; Groseclose, Samuel L.

    2010-01-01

    Social, political, and economic disruptions caused by natural and human-caused public health emergencies have catalyzed public health efforts to expand the scope of biosurveillance and increase the timeliness, quality, and comprehensiveness of disease detection, alerting, response, and prediction. Unfortunately, efforts to acquire, render, and visualize the diversity of health intelligence information are hindered by its wide distribution across disparate fields, multiple levels of government, and the complex interagency environment. Achieving this new level of situation awareness within public health will require a fundamental cultural shift in methods of acquiring, analyzing, and disseminating information. The notion of information “fusion” may provide opportunities to expand data access, analysis, and information exchange to better inform public health action. PMID:20530760

  27. When kinesthetic information is neglected in learning a novel bimanual rhythmic coordination.

    PubMed

    Zhu, Qin; Mirich, Todd; Huang, Shaochen; Snapp-Childs, Winona; Bingham, Geoffrey P

    2017-08-01

    Many studies have shown that rhythmic interlimb coordination involves perception of the coupled limb movements, and different sensory modalities can be used. Using visual displays to inform the coupled bimanual movement, novel bimanual coordination patterns can be learned with practice. A recent study showed that similar learning occurred without vision when a coach provided manual guidance during practice. The information provided via the two different modalities may be same (amodal) or different (modality specific). If it is different, then learning with both is a dual task, and one source of information might be used in preference to the other in performing the task when both are available. In the current study, participants learned a novel 90° bimanual coordination pattern without or with visual information in addition to kinesthesis. In posttest, all participants were tested without and with visual information in addition to kinesthesis. When tested with visual information, all participants exhibited performance that was significantly improved by practice. When tested without visual information, participants who practiced using only kinesthetic information showed improvement, but those who practiced with visual information in addition showed remarkably less improvement. The results indicate that (1) the information is not amodal, (2) use of a single type of information was preferred, and (3) the preferred information was visual. We also hypothesized that older participants might be more likely to acquire dual task performance given their greater experience of the two sensory modes in combination, but results were replicated with both 20- and 50-year-olds.

  28. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
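
    The edge-enhancing, band-pass image-plane response described above is commonly approximated by a difference-of-Gaussians filter; the sketch below uses assumed filter scales and a synthetic test image, purely as an illustration.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(5)

    # Test image: a bright square on a dark background plus sensor-like noise.
    image = np.zeros((128, 128))
    image[40:90, 40:90] = 1.0
    image += 0.05 * rng.normal(size=image.shape)

    # Band-pass (difference-of-Gaussians) image-plane processing: enhances edges
    # and compresses the dynamic range of flat regions, much like early human vision.
    center = gaussian_filter(image, sigma=1.0)
    surround = gaussian_filter(image, sigma=3.0)
    edge_enhanced = center - surround

    print("value range before:", image.min(), image.max())
    print("value range after band-pass:", edge_enhanced.min(), edge_enhanced.max())
    ```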

  29. Exploiting visual search theory to infer social interactions

    NASA Astrophysics Data System (ADS)

    Rota, Paolo; Dang-Nguyen, Duc-Tien; Conci, Nicola; Sebe, Nicu

    2013-03-01

    In this paper we propose a new method to infer human social interactions using typical techniques adopted in literature for visual search and information retrieval. The main piece of information we use to discriminate among different types of interactions is provided by proxemics cues acquired by a tracker, and used to distinguish between intentional and casual interactions. The proxemics information has been acquired through the analysis of two different metrics: on the one hand we observe the current distance between subjects, and on the other hand we measure the O-space synergy between subjects. The obtained values are taken at every time step over a temporal sliding window, and processed in the Discrete Fourier Transform (DFT) domain. The features are eventually merged into an unique array, and clustered using the K-means algorithm. The clusters are reorganized using a second larger temporal window into a Bag Of Words framework, so as to build the feature vector that will feed the SVM classifier.
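
    A schematic re-implementation of the pipeline as described in this record (proxemics features over sliding windows, DFT, K-means words, bag-of-words histograms, SVM), using synthetic tracks; window lengths, cluster counts and labels are placeholders, not the authors' settings.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)

    # Synthetic proxemics features per time step for several tracked pairs:
    # inter-subject distance and an O-space "synergy" value.
    def make_track(interacting, length=256):
        base = 1.0 if interacting else 3.0
        dist = base + 0.3 * rng.normal(size=length)
        synergy = (0.8 if interacting else 0.1) + 0.1 * rng.normal(size=length)
        return np.stack([dist, synergy], axis=1)

    tracks = [make_track(i % 2 == 0) for i in range(20)]
    labels = [i % 2 for i in range(20)]   # 1/0 ~ intentional vs casual (placeholder)

    # Step 1: short sliding windows -> DFT magnitudes -> K-means "words".
    win = 32
    def windows(track):
        feats = []
        for s in range(0, len(track) - win, win // 2):
            w = track[s:s + win]
            feats.append(np.abs(np.fft.rfft(w, axis=0)).ravel())
        return np.array(feats)

    all_windows = np.vstack([windows(tr) for tr in tracks])
    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(all_windows)

    # Step 2: bag-of-words histogram per track over a larger temporal window,
    # then an SVM classifier on the histograms.
    def bow(track):
        words = kmeans.predict(windows(track))
        return np.bincount(words, minlength=8).astype(float)

    X = np.array([bow(tr) for tr in tracks])
    clf = SVC(kernel="linear").fit(X, labels)
    print("training accuracy (toy data):", clf.score(X, labels))
    ```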

  30. Multisource data fusion for documenting archaeological sites

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir; Chibunichev, Alexander; Zhuravlev, Denis

    2017-10-01

    The quality of archaeological site documentation is of great importance for preserving and investigating cultural heritage. The progress in developing new techniques and systems for data acquisition and processing creates an excellent basis for achieving a new quality of archaeological site documentation and visualization. Archaeological data have some specific features which have to be taken into account during acquisition, processing and management. First of all, it is necessary to gather as full information about findings as possible, with no loss of information and no damage to artifacts. Remote sensing technologies are the most adequate and powerful means of satisfying this requirement. An approach to archaeological data acquisition and fusion based on remote sensing is proposed. It combines a set of photogrammetric techniques for obtaining geometrical and visual information at different scales and levels of detail with a pipeline for documenting, structuring, fusing and analysing archaeological data. The proposed approach is applied to documenting the Bosporus archaeological expedition of the Russian State Historical Museum.

  31. Neuronal Representation of Ultraviolet Visual Stimuli in Mouse Primary Visual Cortex

    PubMed Central

    Tan, Zhongchao; Sun, Wenzhi; Chen, Tsai-Wen; Kim, Douglas; Ji, Na

    2015-01-01

    The mouse has become an important model for understanding the neural basis of visual perception. Although it has long been known that mouse lens transmits ultraviolet (UV) light and mouse opsins have absorption in the UV band, little is known about how UV visual information is processed in the mouse brain. Using a custom UV stimulation system and in vivo calcium imaging, we characterized the feature selectivity of layer 2/3 neurons in mouse primary visual cortex (V1). In adult mice, a comparable percentage of the neuronal population responds to UV and visible stimuli, with similar pattern selectivity and receptive field properties. In young mice, the orientation selectivity for UV stimuli increased steadily during development, but not direction selectivity. Our results suggest that, by expanding the spectral window through which the mouse can acquire visual information, UV sensitivity provides an important component for mouse vision. PMID:26219604

  32. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  33. Brain processing of visual information during fast eye movements maintains motor performance.

    PubMed

    Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis

    2013-01-01

    Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.

  34. 'Visual' parsing can be taught quickly without visual experience during critical periods

    PubMed Central

    Reich, Lior; Amedi, Amir

    2015-01-01

    Cases of invasive sight-restoration in congenital blind adults demonstrated that acquiring visual abilities is extremely challenging, presumably because visual-experience during critical-periods is crucial for learning visual-unique concepts (e.g. size constancy). Visual rehabilitation can also be achieved using sensory-substitution-devices (SSDs) which convey visual information non-invasively through sounds. We tested whether one critical concept – visual parsing, which is highly-impaired in sight-restored patients – can be learned using SSD. To this end, congenitally blind adults participated in a unique, relatively short (~70 hours), SSD-‘vision’ training. Following this, participants successfully parsed 2D and 3D visual objects. Control individuals naïve to SSDs demonstrated that while some aspects of parsing with SSD are intuitive, the blind’s success could not be attributed to auditory processing alone. Furthermore, we had a unique opportunity to compare the SSD-users’ abilities to those reported for sight-restored patients who performed similar tasks visually, and who had months of eyesight. Intriguingly, the SSD-users outperformed the patients on most criteria tested. These suggest that with adequate training and technologies, key high-order visual features can be quickly acquired in adulthood, and lack of visual-experience during critical-periods can be somewhat compensated for. Practically, these highlight the potential of SSDs as standalone-aids or combined with invasive restoration approaches. PMID:26482105

  35. Value-driven attentional capture in the auditory domain.

    PubMed

    Anderson, Brian A

    2016-01-01

    It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering more strongly with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.

  36. Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.

    PubMed

    Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei

    2016-05-01

    Extracting general rules from specific examples is important, as we must face the same challenge displayed in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio-visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning is not only dependent on better statistical probability and redundant sensory information, but also the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ.

  37. Standing postural reaction to visual and proprioceptive stimulation in chronic acquired demyelinating polyneuropathy.

    PubMed

    Provost, Clement P; Tasseel-Ponche, Sophie; Lozeron, Pierre; Piccinini, Giulia; Quintaine, Victorine; Arnulf, Bertrand; Kubis, Nathalie; Yelnik, Alain P

    2018-02-28

    To investigate the weight of visual and proprioceptive inputs, measured indirectly in standing position control, in patients with chronic acquired demyelinating polyneuropathy (CADP). Prospective case study. Twenty-five patients with CADP and 25 healthy controls. Posture was recorded on a double force platform. Stimulations were optokinetic (60°/s) for visual input and vibration (50 Hz) for proprioceptive input. Visual stimulation involved 4 tests (upward, downward, rightward and leftward) and proprioceptive stimulation 2 tests (triceps surae and tibialis anterior). A composite score, previously published and slightly modified, was used for the recorded postural signals from the different stimulations. Despite their sensitivity deficits, patients with CADP were more sensitive to proprioceptive stimuli than were healthy controls (mean composite score 13.9 (standard deviation (SD) 4.8) vs 18.4 (SD 4.8), p = 0.002). As expected, they were also more sensitive to visual stimuli (mean composite score 10.5 (SD 8.7) vs 22.9 (SD 7.5), p < 0.0001). These results encourage balance rehabilitation of patients with CADP, aimed at promoting the use of proprioceptive information, thereby reducing too-early development of visual compensation while proprioception is still available.

  38. The primate amygdala represents the positive and negative value of visual stimuli during learning

    PubMed Central

    Paton, Joseph J.; Belova, Marina A.; Morrison, Sara E.; Salzman, C. Daniel

    2008-01-01

    Visual stimuli can acquire positive or negative value through their association with rewards and punishments, a process called reinforcement learning. Although we now know a great deal about how the brain analyses visual information, we know little about how visual representations become linked with values. To study this process, we turned to the amygdala, a brain structure implicated in reinforcement learning [1-5]. We recorded the activity of individual amygdala neurons in monkeys while abstract images acquired either positive or negative value through conditioning. After monkeys had learned the initial associations, we reversed image value assignments. We examined neural responses in relation to these reversals in order to estimate the relative contribution to neural activity of the sensory properties of images and their conditioned values. Here we show that changes in the values of images modulate neural activity, and that this modulation occurs rapidly enough to account for, and correlates with, monkeys' learning. Furthermore, distinct populations of neurons encode the positive and negative values of visual stimuli. Behavioural and physiological responses to visual stimuli may therefore be based in part on the plastic representation of value provided by the amygdala. PMID:16482160

  39. Crossmodal association of auditory and visual material properties in infants.

    PubMed

    Ujiie, Yuta; Yamashita, Wakayo; Fujisaki, Waka; Kanazawa, So; Yamaguchi, Masami K

    2018-06-18

    The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, we demonstrated for the first time the presence of a mapping of the auditory material property with visual material ("Metal" and "Wood") in the right temporal region in preverbal 4- to 8-month-old infants, using near-infrared spectroscopy (NIRS). Furthermore, we found that infants acquired the audio-visual mapping for a property of the "Metal" material later than for the "Wood" material, since infants form the visual property of "Metal" material after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that the material's familiarity might facilitate the development of multisensory processing during the first year of life.

  40. EyeMusic: Introducing a "visual" colorful experience for the blind using auditory sensory substitution.

    PubMed

    Abboud, Sami; Hanassy, Shlomi; Levy-Tzedek, Shelly; Maidenbaum, Shachar; Amedi, Amir

    2014-01-01

    Sensory-substitution devices (SSDs) provide auditory or tactile representations of visual information. These devices often generate unpleasant sensations and mostly lack color information. We present here a novel SSD aimed at addressing these issues. We developed the EyeMusic, a novel visual-to-auditory SSD for the blind, providing both shape and color information. Our design uses musical notes on a pentatonic scale generated by natural instruments to convey the visual information in a pleasant manner. A short behavioral protocol was utilized to train the blind to extract shape and color information, and test their acquired abilities. Finally, we conducted a survey and a comparison task to assess the pleasantness of the generated auditory stimuli. We show that basic shape and color information can be decoded from the generated auditory stimuli. High performance levels were achieved by all participants following as little as 2-3 hours of training. Furthermore, we show that users indeed found the stimuli pleasant and potentially tolerable for prolonged use. The novel EyeMusic algorithm provides an intuitive and relatively pleasant way for the blind to extract shape and color information. We suggest that this might help facilitating visual rehabilitation because of the added functionality and enhanced pleasantness.
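
    An illustrative sketch of the kind of image-to-sound mapping this record describes; the scan order, pentatonic pitch set and colour-to-instrument table below are assumptions, not the published EyeMusic algorithm.

    ```python
    import numpy as np

    # Toy 8x8 RGB image: a red diagonal on black (stand-in for a camera frame).
    img = np.zeros((8, 8, 3))
    for i in range(8):
        img[i, i] = [1.0, 0.0, 0.0]

    # Illustrative mappings (assumptions, not the published EyeMusic scheme):
    # column -> time, row -> pentatonic pitch (higher rows = higher notes),
    # dominant colour channel -> instrument name, brightness -> loudness.
    pentatonic_midi = [57, 60, 62, 64, 67, 69, 72, 74]      # A minor pentatonic
    instruments = {0: "trumpet(red)", 1: "marimba(green)", 2: "strings(blue)"}

    notes = []
    for col in range(img.shape[1]):                 # scan left to right over time
        for row in range(img.shape[0]):
            pixel = img[row, col]
            loudness = float(pixel.max())
            if loudness < 0.1:                      # skip near-black pixels
                continue
            pitch = pentatonic_midi[img.shape[0] - 1 - row]
            instrument = instruments[int(np.argmax(pixel))]
            notes.append((col, pitch, instrument, round(loudness, 2)))

    for n in notes:
        print("time=%d midi=%d %s loudness=%.2f" % n)
    ```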

  41. Force sensor attachable to thin fiberscopes/endoscopes utilizing high elasticity fabric.

    PubMed

    Watanabe, Tetsuyou; Iwai, Takanobu; Fujihira, Yoshinori; Wakako, Lina; Kagawa, Hiroyuki; Yoneyama, Takeshi

    2014-03-12

    An endoscope/fiberscope is a minimally invasive tool used for directly observing tissues in areas deep inside the human body where access is limited. However, this tool only yields visual information. If force feedback information were also available, endoscope/fiberscope operators would be able to detect indurated areas that are visually hard to recognize. Furthermore, obtaining such feedback information from tissues in areas where collecting visual information is a challenge would be highly useful. The major obstacle is that such force information is difficult to acquire. This paper presents a novel force sensing system that can be attached to a very thin fiberscope/endoscope. To ensure a small size, high resolution, easy sterilization, and low cost, the proposed force visualization-based system uses a highly elastic material: panty stocking fabric. The paper also presents the methodology for deriving the force value from the captured image. The system has a resolution of less than 0.01 N and sensitivity of greater than 600 pixels/N within the force range of 0-0.2 N.
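
    The sensor infers force from how the elastic fabric deforms in the captured image. A minimal calibration sketch with made-up data, fitting a linear map from an image-derived displacement measure to force:

    ```python
    import numpy as np

    # Hypothetical calibration data: deformation measured from the captured image
    # (e.g. marker displacement in pixels) against reference forces in newtons.
    displacement_px = np.array([0, 25, 60, 130, 195, 260, 330, 390, 460, 520, 590, 620])
    force_n = np.array([0.00, 0.01, 0.02, 0.04, 0.06, 0.08, 0.10, 0.12, 0.14, 0.16, 0.18, 0.20])

    # Least-squares linear calibration: force ~ a * displacement + b.
    a, b = np.polyfit(displacement_px, force_n, deg=1)
    print(f"calibration: force = {a:.5f} * px + {b:.4f}")

    # With more than 600 px of displacement over a 0.2 N range, one pixel of image
    # displacement corresponds to well under 0.01 N, consistent with the stated resolution.
    print(f"force per pixel: {a:.5f} N")
    ```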

  42. Visual discrimination transfer and modulation by biogenic amines in honeybees.

    PubMed

    Vieira, Amanda Rodrigues; Salles, Nayara; Borges, Marco; Mota, Theo

    2018-05-10

    For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals cannot be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of this, different attempts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performance remains considerably poorer than that for free-flying animals. Here, we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant chromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+), thus showing that visual memories acquired by honeybees are resistant to context changes between conditioning and the retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performance, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees.

  3. Supervised guiding long-short term memory for image caption generation based on object classes

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan

    2018-03-01

    The present models of image caption generation have the problems of image visual semantic information attenuation and errors in guidance information. In order to solve these problems, we propose a supervised guiding Long Short Term Memory model based on object classes, named S-gLSTM for short. It uses the object detection results from R-FCN as supervisory information with high confidence, and updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the current interested information from the image visual semantic information based on the guidance word set. The interested information is fed into the S-gLSTM at each iteration as guidance information, to guide the caption generation. To acquire the text-related visual semantic information, the S-gLSTM fine-tunes the weights of the network through the back-propagation of the guiding loss. Complementing guidance information at each iteration solves the problem of visual semantic information attenuation in the traditional LSTM model. Besides, the supervised guidance information in our model can reduce the impact of mismatched words on the caption generation. We test our model on the MSCOCO2014 dataset, and obtain better performance than the state-of-the-art models.
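
    The bookkeeping described for the guidance word set can be illustrated independently of the network itself: detected object classes act as supervisory words, and a word is removed from the guidance set once the decoder has emitted it. The sketch below uses a placeholder decoder; the function names and the toy decoder are invented for illustration and are not the S-gLSTM implementation.

        # Sketch of supervised guidance bookkeeping for caption generation.
        # detections: object-class words from a detector (e.g., R-FCN output), assumed given.
        # generate_next_word: stand-in for the S-gLSTM decoder step (not implemented here).

        def update_guidance(guidance: set, last_word: str) -> set:
            """Remove a guidance word once the decoder has emitted it; otherwise keep the set."""
            return guidance - {last_word} if last_word in guidance else guidance

        def caption_with_guidance(detections, generate_next_word, max_len=20):
            guidance = set(detections)          # high-confidence supervisory words
            caption = []
            for _ in range(max_len):
                word = generate_next_word(caption, guidance)   # guidance fed at every step
                if word == "<eos>":
                    break
                caption.append(word)
                guidance = update_guidance(guidance, word)
            return caption

        # Toy decoder that simply emits the remaining guidance words, for illustration only.
        if __name__ == "__main__":
            toy = lambda cap, g: sorted(g)[0] if g else "<eos>"
            print(caption_with_guidance({"dog", "frisbee"}, toy))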

  4. Design and Construction of a Portable Oculometer for Use in Transportation Oriented Human Factors Studies

    DOT National Transportation Integrated Search

    1971-08-01

    The report describes development of an instrument designed to acquire and process information about human visual performance. The instrument has the following features: it can be operated in a variety of transportation environments including simulato...

  5. Pilot visual acquisition of traffic: operational communications from air traffic control.

    DOT National Transportation Integrated Search

    2001-05-01

    Avionics devices designed to provide pilots with graphically displayed traffic information will enable pilots to acquire and verify the identity of any intruder aircraft within the general area, either before or in accordance with a controller-issued...

  6. An Active Vision Approach to Understanding and Improving Visual Training in the Geosciences

    NASA Astrophysics Data System (ADS)

    Voronov, J.; Tarduno, J. A.; Jacobs, R. A.; Pelz, J. B.; Rosen, M. R.

    2009-12-01

    Experience in the field is a fundamental aspect of geologic training, and its effectiveness is largely unchallenged because of anecdotal evidence of its success among expert geologists. However, there have been only a few quantitative studies based on large data collection efforts to investigate how Earth scientists learn in the field. In a recent collaboration between Earth scientists, cognitive scientists, and imaging scientists at the University of Rochester and the Rochester Institute of Technology, we are conducting such a study. Within Cognitive Science, one school of thought, referred to as the Active Vision approach, emphasizes that visual perception is an active process requiring us to move our eyes to acquire new information about our environment. The Active Vision approach identifies the perceptual skills that experts possess and that novices will need to acquire to achieve expert performance. We describe data collection efforts using portable eye-trackers to assess how novice and expert geologists acquire visual knowledge in the field. We also discuss our efforts to collect images for use in a semi-immersive classroom environment, useful for further testing of novices and experts using eye-tracking technologies.

  7. A Conceptual Model of the Cognitive Processing of Environmental Distance Information

    NASA Astrophysics Data System (ADS)

    Montello, Daniel R.

    I review theories and research on the cognitive processing of environmental distance information by humans, particularly that acquired via direct experience in the environment. The cognitive processes I consider for acquiring and thinking about environmental distance information include working-memory, nonmediated, hybrid, and simple-retrieval processes. Based on my review of the research literature, and additional considerations about the sources of distance information and the situations in which it is used, I propose an integrative conceptual model to explain the cognitive processing of distance information that takes account of the plurality of possible processes and information sources, and describes conditions under which particular processes and sources are likely to operate. The mechanism of summing vista distances is identified as widely important in situations with good visual access to the environment. Heuristics based on time, effort, or other information are likely to play their most important role when sensory access is restricted.

  8. Simultaneous visualization of transonic buffet on a rocket fairing model using unsteady PSP measurement and Schlieren method

    NASA Astrophysics Data System (ADS)

    Nakakita, K.

    2017-02-01

    A simultaneous visualization technique combining unsteady pressure-sensitive paint (PSP) and Schlieren measurements was introduced. It was applied to a wind tunnel test of a rocket fairing model in the JAXA 2 m x 2 m transonic wind tunnel. The quantitative unsteady pressure field was acquired by the unsteady PSP measurement system, which consisted of a high-speed camera, a high-power laser diode, and related equipment. The qualitative flow structure was acquired by Schlieren measurement using a high-speed camera and a xenon lamp with a blue optical filter. Simultaneous visualization was achieved at a 1.6 kfps frame rate and revealed the detailed structure of the unsteady flow fields caused by shock-wave oscillation due to shock-wave/boundary-layer interaction around the juncture between the cone and the cylinder of the model. The simultaneous measurement results were merged into a movie that includes the surface pressure distribution on the rocket fairing and the spatial structure of the shock-wave system related to transonic buffet. The constructed movie provides a visual, time-series, global view of the transonic buffet flow field on the rocket fairing model.

  9. Learning feedback and feedforward control in a mirror-reversed visual environment.

    PubMed

    Kasuga, Shoko; Telgen, Sebastian; Ushiba, Junichi; Nozaki, Daichi; Diedrichsen, Jörn

    2015-10-01

    When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relates to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as a common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control only adaptively changed in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers. Copyright © 2015 the American Physiological Society.

  10. Learning feedback and feedforward control in a mirror-reversed visual environment

    PubMed Central

    Kasuga, Shoko; Telgen, Sebastian; Ushiba, Junichi; Nozaki, Daichi

    2015-01-01

    When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relates to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as a common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control only adaptively changed in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers. PMID:26245313

  11. Acquisition and Visualization Techniques of Human Motion Using Master-Slave System and Haptograph

    NASA Astrophysics Data System (ADS)

    Katsura, Seiichiro; Ohishi, Kiyoshi

    Artificial acquisition and reproduction of human sensations are basic technologies of communication engineering. For example, auditory information is obtained by a microphone, and a speaker reproduces it by artificial means. Furthermore, a video camera and a television make it possible to transmit visual sensation by broadcasting. In contrast, since tactile or haptic information is subject to Newton's “law of action and reaction” in the real world, a device which acquires, transmits, and reproduces this information has not been established. From this point of view, real-world haptics is a key technology for future haptic communication engineering. This paper proposes a novel acquisition method for haptic information named the “haptograph”. The haptograph visualizes haptic information in the way a photograph captures a visual scene. Since temporal and spatial analyses are conducted to represent haptic information as a haptograph, the information can be recognized and evaluated intuitively. In this paper, the proposed haptograph is applied to the visualization of human motion. It can represent motion characteristics such as an expert's skill or a personal habit; in other words, a personal encyclopedia is attained. Once such a personal encyclopedia is stored in a ubiquitous environment, future human support technologies can be developed.
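
    One way to read the haptograph idea, purely as an illustration and not the authors' definition, is as a spatiotemporal force map: a recorded (time, position, force) trajectory is binned into a position-versus-time grid of average force so the motion can be inspected like an image. The binning scheme and all values below are assumptions.

        # Illustrative "haptograph": bin a recorded (time, position, force) trajectory into a
        # position-versus-time grid of average force, so the motion can be inspected like an image.
        import numpy as np

        def haptograph(times, positions, forces, t_bins=50, x_bins=30):
            t_idx = np.digitize(times, np.linspace(times.min(), times.max(), t_bins)) - 1
            x_idx = np.digitize(positions, np.linspace(positions.min(), positions.max(), x_bins)) - 1
            grid = np.zeros((x_bins, t_bins))
            counts = np.zeros((x_bins, t_bins))
            idx = (x_idx.clip(0, x_bins - 1), t_idx.clip(0, t_bins - 1))
            np.add.at(grid, idx, forces)     # accumulate force per (position, time) cell
            np.add.at(counts, idx, 1)        # count samples per cell
            return np.divide(grid, counts, out=np.zeros_like(grid), where=counts > 0)

        if __name__ == "__main__":
            t = np.linspace(0, 10, 2000)
            x = np.sin(t)                    # a repetitive hand motion
            f = 0.5 + 0.5 * np.cos(2 * t)    # the force profile applied along the way
            print("haptograph shape:", haptograph(t, x, f).shape)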

  12. Acquiring Semantically Meaningful Models for Robotic Localization, Mapping and Target Recognition

    DTIC Science & Technology

    2014-12-21

    [Only fragments of this record's abstract survive alongside standard report-form boilerplate. Recoverable topics: point-feature tracking; recovery of relative motion and visual odometry; loop closure; environment models as sparse point clouds; and object-level segmentation of objects against background context, evaluated with the Jaccard index.]

  13. Target dependence of orientation and direction selectivity of corticocortical projection neurons in the mouse V1

    PubMed Central

    Matsui, Teppei; Ohki, Kenichi

    2013-01-01

    Higher order visual areas that receive input from the primary visual cortex (V1) are specialized for the processing of distinct features of visual information. However, it is still incompletely understood how this functional specialization is acquired. Here we used in vivo two-photon calcium imaging in the mouse visual cortex to investigate whether this functional distinction exists as early as at the level of projections from V1 to two higher order visual areas, AL and LM. Specifically, we examined whether the sharpness of orientation and direction selectivity and the optimal spatial and temporal frequency of projection neurons from V1 to higher order visual areas match those of the target areas. We found that the V1 input to higher order visual areas was indeed functionally distinct: AL preferentially received inputs from V1 that were more orientation and direction selective and tuned for lower spatial frequency compared to the projection from V1 to LM, consistent with functional differences between AL and LM. The present findings suggest that selective projections from V1 to higher order visual areas initiate parallel processing of sensory information in the visual cortical network. PMID:24068987

  14. Two-year-olds can begin to acquire verb meanings in socially impoverished contexts.

    PubMed

    Arunachalam, Sudha

    2013-12-01

    By two years of age, toddlers are adept at recruiting social, observational, and linguistic cues to discover the meanings of words. Here, we ask how they fare in impoverished contexts in which linguistic cues are provided, but no social or visual information is available. Novel verbs are presented in a stream of syntactically informative sentences, but the sentences are not embedded in a social context, and no visual access to the verb's referent is provided until the test phase. The results provide insight into how toddlers may benefit from overhearing contexts in which they are not directly attending to the ambient speech, and in which no conversational context, visual referent, or child-directed conversation is available. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. An amodal shared resource model of language-mediated visual attention

    PubMed Central

    Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk

    2013-01-01

    Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. PMID:23966967

  16. Application of hyperspectral imaging for characterization of intramuscular fat distribution in beef

    USDA-ARS?s Scientific Manuscript database

    In this study, a hyperspectral imaging system in the spectral region of 400–1000 nm was used for visualization and determination of intramuscular fat concentration in beef samples. Hyperspectral images were acquired for beef samples, and spectral information was then extracted from each single sampl...
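
    A common route from per-pixel spectra to a concentration map, offered here only as a hedged illustration of this kind of analysis, is to calibrate a partial least squares regression on reference samples and then apply it to every pixel of the hyperspectral cube. The data below are synthetic, and the band count, component number, and preprocessing are assumptions rather than the study's actual calibration.

        # Sketch: per-pixel spectra -> fat concentration via PLS regression, then a 2D map.
        # Synthetic data only; the real study's wavelengths, preprocessing, and model differ.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n_bands = 121                               # e.g., 400-1000 nm sampled every 5 nm
        X_train = rng.normal(size=(200, n_bands))   # calibration spectra (synthetic)
        y_train = X_train[:, 40] * 2.0 + rng.normal(scale=0.1, size=200)  # synthetic fat values

        pls = PLSRegression(n_components=8)
        pls.fit(X_train, y_train)

        # Hyperspectral cube: rows x cols x bands; predict a fat value for every pixel.
        cube = rng.normal(size=(64, 64, n_bands))
        fat_map = pls.predict(cube.reshape(-1, n_bands)).reshape(64, 64)
        print("fat map shape:", fat_map.shape, "range:", fat_map.min(), fat_map.max())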

  17. Active sensing in the categorization of visual patterns

    PubMed Central

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
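
    The core of a Bayesian active sensor can be summarized as choosing the next fixation that maximizes expected information gain about the category. The sketch below uses a deliberately tiny generative model with made-up likelihoods; it illustrates the selection rule only and is not the algorithm used in the study.

        # Toy Bayesian active-sensing sketch: choose the next location to reveal by
        # maximizing expected information gain about the pattern category.
        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        def expected_information_gain(prior, likelihoods, loc):
            """likelihoods[c, loc, o]: P(observation o at loc | category c)."""
            gain = 0.0
            for o in range(likelihoods.shape[2]):
                p_o = np.sum(prior * likelihoods[:, loc, o])          # predictive probability
                if p_o > 0:
                    posterior = prior * likelihoods[:, loc, o] / p_o  # Bayes update
                    gain += p_o * (entropy(prior) - entropy(posterior))
            return gain

        rng = np.random.default_rng(1)
        n_cat, n_loc, n_obs = 2, 25, 2
        lik = rng.dirichlet(np.ones(n_obs), size=(n_cat, n_loc))      # toy per-location likelihoods
        prior = np.full(n_cat, 1.0 / n_cat)
        best = max(range(n_loc), key=lambda l: expected_information_gain(prior, lik, l))
        print("next fixation location:", best)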

  18. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming increasingly important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display that allows surgeons to observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n = 5). The system can be utilized in telemedicine, surgical education, surgical planning, navigation, etc., to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
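
    The warm-start ICP idea can be sketched compactly: nearest-neighbour correspondences plus an SVD-based rigid update, initialized from an externally supplied coarse transform. The point-cloud selection step and convergence criteria of the paper are not reproduced here; this is a generic point-to-point ICP under those assumptions.

        # Minimal point-to-point ICP with a warm-start initial transform (R0, t0).
        # SciPy's cKDTree provides nearest-neighbour correspondences.
        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation/translation mapping src onto dst (Kabsch/SVD)."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:             # fix a possible reflection
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(source, target, R0=np.eye(3), t0=np.zeros(3), iters=30):
            tree = cKDTree(target)
            R, t = R0, t0                        # warm start, e.g. from a coarse registration
            for _ in range(iters):
                moved = source @ R.T + t
                _, idx = tree.query(moved)       # closest target point for each source point
                R, t = best_rigid_transform(source, target[idx])
            return R, t

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            tgt = rng.normal(size=(500, 3))
            true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
            src = (tgt - np.array([0.1, 0.2, 0.0])) @ true_R   # a rotated/translated copy
            R, t = icp(src, tgt)
            print(np.round(R, 2), np.round(t, 2))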

  19. Frequency of Behçet's disease among a group of visually impaired adults.

    PubMed

    Uyaroglu, Oguz Abdullah; Seyhoglu, Emrah; Erden, Abdulsamet; Vahabov, Cevanşir; Babaoglu, Hakan; Armagan, Berkan; Sari, Alper; Kilic, Levent; Tatar, Olcay; Bilgen, Sule Apras; Karadag, Omer; Kalyoncu, Umut

    2018-03-09

    Behçet's disease (BD) is one of the causes of acquired visual impairment among young adults. Ocular involvement is a significant cause of disability in BD. The objective of this study is to assess the prevalence of BD among a group of adults who have visual impairment. Ankara Metropolitan Municipality Education and Technology Center is one of the official institutions which records and follows the demographic data of visually impaired people in Turkey. In November 2014, there were 675 visually impaired people recorded at this center. Medical history was taken from 294 adults by phone in November and December of 2014. Participants were asked if the visual impairment had been either acquired or congenital. If the patients had BD or suspected BD, they were recalled for detailed investigation which would be carried out by an internist, a rheumatologist and an ophthalmologist. Two hundred thirteen of 294 (72.4%) visually impaired adults were male. One hundred nine of 294 (37.1%) had acquired visual impairment. Six (5.5%) of those 109 patients had BD. The overall prevalence of BD in the study group was 2.04%. The median age of people with BD was 35 years. The median age at BD diagnosis was 16.5 years and the median duration from diagnosis to visual loss was 2.5 years. BD is still one of the causes of acquired visual impairment in Turkey. In this study, BD prevalence among a visually impaired adult group was 2.04%. BD accounted for 5.5% among adults who had acquired visual impairment. In a study from 1965, BD prevalence among people with acquired blindness was 12%; however, that study was conducted in the pre-immunosuppressive era. Our prevalence is clearly lower than in those studies. Extended population-based studies are needed for population estimations.

  20. Resources for Designing, Selecting and Teaching with Visualizations in the Geoscience Classroom

    NASA Astrophysics Data System (ADS)

    Kirk, K. B.; Manduca, C. A.; Ormand, C. J.; McDaris, J. R.

    2009-12-01

    Geoscience is a highly visual field, and effective use of visualizations can enhance student learning, appeal to students’ emotions and help them acquire skills for interpreting visual information. The On the Cutting Edge website, “Teaching Geoscience with Visualizations” presents information of interest to faculty who are teaching with visualizations, as well as those who are designing visualizations. The website contains best practices for effective visualizations, drawn from the educational literature and from experts in the field. For example, a case is made for careful selection of visualizations so that faculty can align the correct visualization with their teaching goals and audience level. Appropriate visualizations will contain the desired geoscience content without adding extraneous information that may distract or confuse students. Features such as labels, arrows and contextual information can help guide students through imagery and help to explain the relevant concepts. Because students learn by constructing their own mental image of processes, it is helpful to select visualizations that reflect the same type of mental picture that students should create. A host of recommended readings and presentations from the On the Cutting Edge visualization workshops can provide further grounding for the educational uses of visualizations. Several different collections of visualizations, datasets with visualizations and visualization tools are available on the website. Examples include animations of tsunamis, El Nino conditions, braided stream formation and mountain uplift. These collections are grouped by topic and range from simple animations to interactive models. A series of example activities that incorporate visualizations into classroom and laboratory activities illustrate various tactics for using these materials in different types of settings. Activities cover topics such as ocean circulation, land use changes, earthquake simulations and the use of Google Earth to explore geologic processes. These materials can be found at http://serc.carleton.edu/NAGTWorkshops/visualization. Faculty and developers of visualization tools are encouraged to submit teaching activities, references or visualizations to the collections.

  1. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content. Copyright © 2017 the authors 0270-6474/17/376638-10$15.00/0.

  2. Visualizing Airborne and Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Bierwirth, Victoria A.

    2011-01-01

    Remote sensing is a process able to provide information about Earth to better understand Earth's processes and assist in monitoring Earth's resources. The Cloud Absorption Radiometer (CAR) is one remote sensing instrument dedicated to the cause of collecting data on anthropogenic influences on Earth as well as assisting scientists in understanding land-surface and atmospheric interactions. Landsat is a satellite program dedicated to collecting repetitive coverage of the continental Earth surfaces in seven regions of the electromagnetic spectrum. Combining these two aircraft and satellite remote sensing instruments will provide a detailed and comprehensive data collection able to provide influential information and improve predictions of changes in the future. This project acquired, interpreted, and created composite images from satellite data acquired from Landsat 4-5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper plus (ETM+). Landsat images were processed for areas covered by CAR during the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS), Cloud and Land Surface Interaction Campaign (CLASIC), Intercontinental Chemical Transport Experiment-Phase B (INTEX-B), and Southern African Regional Science Initiative (SAFARI) 2000 missions. The acquisition of Landsat data will provide supplemental information to assist in visualizing and interpreting airborne and satellite imagery.

  3. Multispectral images of flowers reveal the adaptive significance of using long-wavelength-sensitive receptors for edge detection in bees.

    PubMed

    Vasas, Vera; Hanley, Daniel; Kevan, Peter G; Chittka, Lars

    2017-04-01

    Many pollinating insects acquire their entire nutrition from visiting flowers, and they must therefore be efficient both at detecting flowers and at recognizing familiar rewarding flower types. A crucial first step in recognition is the identification of edges and the segmentation of the visual field into areas that belong together. Honeybees and bumblebees acquire visual information through three types of photoreceptors; however, they only use a single receptor type, the one sensitive to longer wavelengths, for edge detection and movement detection. Here, we show that these long-wavelength receptors (peak sensitivity at ~544 nm, i.e., green) provide the most consistent signals in response to natural objects. Using our multispectral image database of flowering plants, we found that long-wavelength receptor responses had, depending on the specific scenario, up to four times higher signal-to-noise ratios than the short- and medium-wavelength receptors. The reliability of the long-wavelength receptors emerges from an intricate interaction between flower coloration and the bee's visual system. This finding highlights the adaptive significance of bees using only long-wavelength receptors to locate flowers among leaves, before using information provided by all three receptors to distinguish the rewarding flower species through trichromatic color vision.

  4. Automated Identification and Characterization of Secondary & Tertiary gamma’ Precipitates in Nickel-Based Superalloys (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    [Only fragments of this record's abstract survive. Recoverable content: γ’ precipitates were identified using a combination of visual inspection and intensity information from the EFTEM images; a noted strength of the approach is its ability to automate segmentation of precipitates in a reproducible manner for acquiring microstructural statistics; the microstructural statistics obtained from the segmented γ’ precipitates agreed with those of the ...]

  5. Visual cortex activity predicts subjective experience after reading books with colored letters.

    PubMed

    Colizoli, Olympia; Murre, Jaap M J; Scholte, H Steven; van Es, Daniel M; Knapen, Tomas; Rouw, Romke

    2016-07-29

    One of the most astonishing properties of synesthesia is that the evoked concurrent experiences are perceptual. Is it possible to acquire similar effects after learning cross-modal associations that resemble synesthetic mappings? In this study, we examine whether brain activation in early visual areas can be directly related to letter-color associations acquired by training. Non-synesthetes read specially prepared books with colored letters for several weeks and were scanned using functional magnetic resonance imaging. If the acquired letter-color associations were visual in nature, then brain activation in visual cortex while viewing the trained black letters (compared to untrained black letters) should predict the strength of the associations, the quality of the color experience, or the vividness of visual mental imagery. Results showed that training-related activation of area V4 was correlated with differences in reported subjective color experience. Trainees who were classified as having stronger 'associator' types of color experiences also had more negative activation for trained compared to untrained achromatic letters in area V4. In contrast, the strength of the acquired associations (measured as the Stroop effect) was not reliably reflected in visual cortex activity. The reported vividness of visual mental imagery was related to veridical color activation in early visual cortex, but not to the acquired color associations. We show for the first time that subjective experience related to a synesthesia-training paradigm was reflected in visual brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Physical activity among older people with sight loss: a qualitative research study to inform policy and practice.

    PubMed

    Phoenix, C; Griffin, M; Smith, B

    2015-02-01

    To investigate the ways in which participation in physical activity is prevented or facilitated among older people with acquired sight loss later in life. Qualitative research. Interviews were conducted with 48 visually impaired adults aged 60+ years, recruited from a range of settings including local sight loss organisations and via talking newspaper advertisements. Visual impairment was defined by self-report. Data were analysed using thematic analysis. This research represents a first step toward the development of empirically based practical suggestions for decision-makers and health professionals in terms of supporting, when required, visually impaired older adults' participation in physical activity. Six themes were identified that captured why physical activity was prevented or facilitated: disabling environments; organisational opportunities; transport; lack of information; confidence, fear and personal safety; and exercise as medicine. Recommendations for policy change need to be focused at the societal level. This includes developing more accessible and inclusive environments and providing meaningful information about physical activity to older adults with a visual impairment, and visual impairment in older age to physical activity providers. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  7. Acquired pit of the optic nerve: a risk factor for progression of glaucoma.

    PubMed

    Ugurlu, S; Weitzman, M; Nduaguba, C; Caprioli, J

    1998-04-01

    To examine acquired pit of the optic nerve as a risk factor for progression of glaucoma. In a retrospective longitudinal study, 25 open-angle glaucoma patients with acquired pit of the optic nerve were compared with a group of 24 open-angle glaucoma patients without acquired pit of the optic nerve. The patients were matched for age, mean intraocular pressure, baseline ratio of neuroretinal rim area to disk area, visual field damage, and duration of follow-up. Serial optic disk photographs and visual fields of both groups were evaluated by three independent observers for glaucomatous progression. Of 46 acquired pits of the optic nerve in 37 eyes of 25 patients, 36 pits were located inferiorly (76%) and 11 superiorly (24%; P < .001). Progression of optic disk damage occurred in 16 patients (64%) in the group with acquired pit and in three patients (12.5%) in the group without acquired pit (P < .001). Progression of visual field loss occurred in 14 patients (56%) in the group with acquired pit and in six (25%) in the group without pit (P=.04). Bilateral acquired pit of the optic nerve was present in 12 patients (48%). Disk hemorrhages were observed more frequently in the group with acquired pit (10 eyes, 40%) compared with the group without pit (two eyes, 8%; P=.02). Among patients with glaucoma, patients with acquired pit of the optic nerve represent a subgroup who are at increased risk for progressive optic disk damage and visual field loss.

  8. Real-Time Aerodynamic Flow and Data Visualization in an Interactive Virtual Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2005-01-01

    Significant advances have been made in non-intrusive flow field diagnostics in the past decade. Camera-based techniques are now capable of determining physical quantities such as surface deformation, surface pressure and temperature, flow velocities, and molecular species concentration. In each case, extracting the pertinent information from the large volume of acquired data requires powerful and efficient data visualization tools. The additional requirement for real-time visualization is fueled by an increased emphasis on minimizing test time in expensive facilities. This paper will address a capability titled LiveView3D, which is the first step in the development phase of an in-depth, real-time data visualization and analysis tool for use in aerospace testing facilities.

  9. Top-down contextual knowledge guides visual attention in infancy.

    PubMed

    Tummeltshammer, Kristen; Amso, Dima

    2017-10-26

    The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.

  10. Grief and Needs of Adults with Acquired Visual Impairments

    ERIC Educational Resources Information Center

    Murray, Shirley A.; McKay, Robert C.; Nieuwoudt, Johan M.

    2010-01-01

    This report aims to illuminate the complex phenomenon of grief and the needs experienced by adults with acquired visual impairments throughout the time course of their impairments. The study applied a phenomenological research strategy using 10 case studies of South African adults, visually impaired within and beyond six years. Qualitative…

  11. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

    The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
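
    The converging multilayer idea behind such pyramidal architectures can be illustrated with a plain image pyramid: each level is a blurred, downsampled copy of the one below, so higher levels carry coarser, lower-bandwidth representations. The box blur below is a stand-in for a proper Gaussian kernel and the level count is arbitrary; this is not the authors' neuro-vision system.

        # Build a simple image pyramid: each level is a blurred, 2x-downsampled copy of the
        # previous one, mimicking the converging multilayer structure described above.
        import numpy as np

        def box_blur(img):
            """3x3 box blur with edge padding (stand-in for a Gaussian kernel)."""
            padded = np.pad(img, 1, mode="edge")
            out = np.zeros_like(img, dtype=float)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    out += padded[1 + dy : 1 + dy + img.shape[0],
                                  1 + dx : 1 + dx + img.shape[1]]
            return out / 9.0

        def build_pyramid(img, levels=4):
            pyramid = [img.astype(float)]
            for _ in range(levels - 1):
                smoothed = box_blur(pyramid[-1])
                pyramid.append(smoothed[::2, ::2])      # keep every second row/column
            return pyramid

        if __name__ == "__main__":
            image = np.random.default_rng(0).random((64, 64))
            for level, p in enumerate(build_pyramid(image)):
                print(f"level {level}: shape {p.shape}")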

  12. Noninvasive CT to Iso-C3D registration for improved intraoperative visualization in computer assisted orthopedic surgery

    NASA Astrophysics Data System (ADS)

    Rudolph, Tobias; Ebert, Lars; Kowal, Jens

    2006-03-01

    Supporting surgeons in performing minimally invasive surgeries can be considered one of the major goals of computer assisted surgery. Excellent intraoperative visualization is a prerequisite to achieve this aim. The Siremobil Iso-C 3D has become a widely used imaging device, which, in combination with a navigation system, enables the surgeon to directly navigate within the acquired 3D image volume without any extra registration steps. However, the image quality is rather low compared to a CT scan and the volume size (approx. 12 cm³) limits its application. A regularly used alternative in computer assisted orthopedic surgery is to use a preoperatively acquired CT scan to visualize the operating field. However, the additional registration step necessary to use CT stacks for navigation is quite invasive. Therefore the objective of this work is to develop a noninvasive registration technique. In this article, a solution is proposed that registers a preoperatively acquired CT scan to the intraoperatively acquired Iso-C 3D image volume, thereby registering the CT to the tracked anatomy. The procedure aligns both image volumes by maximizing the mutual information, an approach that has already been applied to similar registration problems and demonstrated good results. Furthermore, the accuracy of such a registration method was investigated in a clinical setup, integrating a navigated Iso-C 3D in combination with a tracking system. Initial tests based on cadaveric animal bone resulted in mean errors ranging from 0.63 mm to 1.55 mm.
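
    The similarity measure named here, mutual information, can be sketched from a joint intensity histogram of the two volumes once they have been resampled onto a common grid for a candidate transform. The optimizer and resampling steps are omitted, and the synthetic volumes below are only for illustration.

        # Mutual information between two intensity volumes via a joint histogram.
        # Assumes the volumes have already been resampled onto a common grid for the
        # current candidate transform; the optimizer that proposes transforms is omitted.
        import numpy as np

        def mutual_information(vol_a, vol_b, bins=64):
            joint, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
            p_ab = joint / joint.sum()
            p_a = p_ab.sum(axis=1, keepdims=True)
            p_b = p_ab.sum(axis=0, keepdims=True)
            nz = p_ab > 0
            return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            ct = rng.normal(size=(32, 32, 32))
            isoc = 0.7 * ct + 0.3 * rng.normal(size=ct.shape)   # a noisy, correlated second volume
            shuffled = rng.permutation(isoc.ravel()).reshape(ct.shape)
            print("MI (correlated):", mutual_information(ct, isoc))
            print("MI (shuffled):  ", mutual_information(ct, shuffled))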

  13. Hadoop-based implementation of processing medical diagnostic records for visual patient system

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Shi, Liehang; Xie, Zhe; Zhang, Jianguo

    2018-03-01

    We introduced the Visual Patient (VP) concept and method for visually representing and indexing patient imaging diagnostic records (IDR) at last year's SPIE Medical Imaging (SPIE MI 2017); this enables a doctor to review a large number of a patient's IDR in a limited appointment time slot. In this presentation, we present a new data processing architecture for the VP system (VPS) that acquires, processes, and stores various kinds of IDR to build a VP instance for each patient in a hospital environment, based on a Hadoop distributed processing structure. We designed this system architecture, called the Medical Information Processing System (MIPS), as a combination of a Hadoop batch processing architecture and a Storm stream processing architecture. The MIPS implements efficient parallel processing of various kinds of clinical data coming from disparate hospital information systems such as PACS, RIS, LIS, and HIS.
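
    One simple way to plug Python into the batch layer of such an architecture is a Hadoop Streaming mapper that keys incoming diagnostic records by patient ID and source system so a downstream reducer can assemble per-patient instances. The tab-separated record format below is an invented stand-in for the PACS/RIS/LIS/HIS feeds, not the MIPS implementation.

        #!/usr/bin/env python3
        # Hadoop Streaming mapper sketch: read tab-separated diagnostic records from stdin and
        # emit "patient_id<TAB>source|payload" so a reducer can group records per patient.
        import sys

        def main():
            for line in sys.stdin:
                parts = line.rstrip("\n").split("\t")
                if len(parts) < 3:
                    continue                                   # skip malformed records
                patient_id, source = parts[0], parts[1]
                payload = "\t".join(parts[2:])
                if source not in {"PACS", "RIS", "LIS", "HIS"}:
                    continue                                   # ignore unknown source systems
                print(f"{patient_id}\t{source}|{payload}")

        if __name__ == "__main__":
            main()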

  14. Solid object visualization of 3D ultrasound data

    NASA Astrophysics Data System (ADS)

    Nelson, Thomas R.; Bailey, Michael J.

    2000-04-01

    Visualization of volumetric medical data is challenging. Rapid-prototyping (RP) equipment producing solid object prototype models of computer-generated structures is directly applicable to visualization of medical anatomic data. The purpose of this study was to develop methods for transferring 3D Ultrasound (3DUS) data to RP equipment for visualization of patient anatomy. 3DUS data were acquired using research and clinical scanning systems. Scaling information was preserved and the data were segmented using threshold and local operators to extract features of interest, converted from voxel raster coordinate format to a set of polygons representing an iso-surface, and transferred to the RP machine to create a solid 3D object. Fabrication required 30 to 60 minutes depending on object size and complexity. After creation, the model could be touched and viewed. A '3D visualization hardcopy device' has advantages for conveying spatial relations compared to visualization using computer display systems. The hardcopy model may be used for teaching or therapy planning. Objects may be produced at the exact dimensions of the original object or scaled up (or down) to match the viewer's reference frame more closely. RP models represent a useful means of communicating important information in a tangible fashion to patients and physicians.
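
    The voxel-to-polygon step described here is typically done with a marching cubes iso-surface extraction followed by export to a format a rapid-prototyping machine accepts, such as STL. The sketch below uses scikit-image's marching cubes on a toy sphere and writes a plain ASCII STL; the threshold level and spacing are placeholders, not the study's values.

        # Voxel data -> iso-surface triangles (marching cubes) -> ASCII STL for a 3D printer.
        # scikit-image provides the iso-surface extraction; the level and spacing values
        # here are placeholders, not those of the 3DUS study.
        import numpy as np
        from skimage import measure

        def volume_to_stl(volume, level, spacing, path):
            verts, faces, normals, _ = measure.marching_cubes(volume, level=level, spacing=spacing)
            with open(path, "w") as f:
                f.write("solid ultrasound\n")
                for tri in faces:
                    n = normals[tri].mean(axis=0)
                    f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
                    for v in verts[tri]:
                        f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
                    f.write("    endloop\n  endfacet\n")
                f.write("endsolid ultrasound\n")

        if __name__ == "__main__":
            # Toy volume: a solid sphere stands in for a segmented 3DUS structure.
            z, y, x = np.mgrid[-32:32, -32:32, -32:32]
            sphere = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(float)
            volume_to_stl(sphere, level=0.5, spacing=(0.5, 0.5, 0.5), path="model.stl")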

  15. Dynamic visual attention: motion direction versus motion magnitude

    NASA Astrophysics Data System (ADS)

    Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.

    2008-02-01

    Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing the advantages and drawbacks of each method as well as its preferred domain of application.

  16. Does touch inhibit visual imagery? A case study on acquired blindness.

    PubMed

    von Trott Zu Solz, Jana; Paolini, Marco; Silveira, Sarita

    2017-06-01

    In a single-case study of acquired blindness, differential brain activation patterns for visual imagery of familiar objects with and without tactile exploration as well as of tactilely explored unfamiliar objects were observed. Results provide new insight into retrieval of visual images from episodic memory and point toward a potential tactile inhibition of visual imagery. © 2017 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  17. VisualEyes: a modular software system for oculomotor experimentation.

    PubMed

    Guo, Yi; Kim, Eun H; Alvarez, Tara L

    2011-03-25

    Eye movement studies have provided a strong foundation for understanding how the brain acquires visual information in both the normal and dysfunctional brain.(1) However, development of a platform to stimulate and store eye movements can require substantial programming, time and costs. Many systems do not offer the flexibility to program numerous stimuli for a variety of experimental needs. However, the VisualEyes System has a flexible architecture, allowing the operator to choose any background and foreground stimulus, program one or two screens for tandem or opposing eye movements and stimulate the left and right eye independently. This system can significantly reduce the programming development time needed to conduct an oculomotor study. The VisualEyes System will be discussed in three parts: 1) the oculomotor recording device to acquire eye movement responses, 2) the VisualEyes software written in LabView, to generate an array of stimuli and store responses as text files and 3) offline data analysis. Eye movements can be recorded by several types of instrumentation, such as a limbus tracking system, a sclera search coil, or a video image system. Typical eye movement stimuli such as saccadic steps, vergent ramps and vergent steps with the corresponding responses will be shown. In this video report, we demonstrate the flexibility of a system to create numerous visual stimuli and record eye movements that can be utilized by basic scientists and clinicians to study healthy as well as clinical populations.

  18. Integration of information and volume visualization for analysis of cell lineage and gene expression during embryogenesis

    NASA Astrophysics Data System (ADS)

    Cedilnik, Andrej; Baumes, Jeffrey; Ibanez, Luis; Megason, Sean; Wylie, Brian

    2008-01-01

    Dramatic technological advances in the field of genomics have made it possible to sequence the complete genomes of many different organisms. With this overwhelming amount of data at hand, biologists are now confronted with the challenge of understanding the function of the many different elements of the genome. One of the best places to start gaining insight on the mechanisms by which the genome controls an organism is the study of embryogenesis. There are multiple and inter-related layers of information that must be established in order to understand how the genome controls the formation of an organism. One is cell lineage which describes how patterns of cell division give rise to different parts of an organism. Another is gene expression which describes when and where different genes are turned on. Both of these data types can now be acquired using fluorescent laser-scanning (confocal or 2-photon) microscopy of embryos tagged with fluorescent proteins to generate 3D movies of developing embryos. However, analyzing the wealth of resulting images requires tools capable of interactively visualizing several different types of information as well as being scalable to terabytes of data. This paper describes how the combination of existing large data volume visualization and the new Titan information visualization framework of the Visualization Toolkit (VTK) can be applied to the problem of studying the cell lineage of an organism. In particular, by linking the visualization of spatial and temporal gene expression data with novel ways of visualizing cell lineage data, users can study how the genome regulates different aspects of embryonic development.

  19. Visual Sensing for Urban Flood Monitoring

    PubMed Central

    Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han

    2015-01-01

    With the increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analyses of water level fluctuation were proposed as value-added intelligent sensing applications to turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras have the ability to sense and analyze the local situation of flood events. This can solve the current problem that image-based flood monitoring heavily relies on continuous manned monitoring. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way for determining the water fluctuation and measuring its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze the flood status, and therefore, it can serve as an active flood warning system. PMID:26287201
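
    A toy version of image-based water-level sensing, offered only to illustrate the general idea rather than the method evaluated here, compares the current frame with a dry reference frame row by row and treats the highest strongly changed row as the waterline, which a camera calibration then converts to an elevation. The threshold and calibration constants below are invented.

        # Toy waterline estimation from a fixed monitoring camera.
        # A dry reference frame is compared with the current frame; the highest row with a
        # large mean change is treated as the waterline. Calibration values are invented.
        import numpy as np

        M_PER_ROW = 0.004          # metres of elevation per image row (assumed calibration)
        REF_ROW_AT_ZERO = 420      # image row corresponding to the zero gauge level (assumed)

        def waterline_row(reference, current, threshold=25.0):
            diff = np.abs(current.astype(float) - reference.astype(float)).mean(axis=1)
            changed = np.nonzero(diff > threshold)[0]
            return int(changed.min()) if changed.size else None   # smallest row index = highest point

        def water_level_m(reference, current):
            row = waterline_row(reference, current)
            if row is None:
                return 0.0
            return (REF_ROW_AT_ZERO - row) * M_PER_ROW

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            ref = rng.integers(0, 40, size=(480, 640))
            cur = ref.copy()
            cur[300:, :] += 120                      # flood water fills the lower half of the frame
            print(f"estimated level: {water_level_m(ref, cur):.2f} m")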

  20. Double Dissociation of Conditioning and Declarative Knowledge Relative to the Amygdala and Hippocampus in Humans

    NASA Astrophysics Data System (ADS)

    Bechara, Antoine; Tranel, Daniel; Damasio, Hanna; Adolphs, Ralph; Rockland, Charles; Damasio, Antonio R.

    1995-08-01

    A patient with selective bilateral damage to the amygdala did not acquire conditioned autonomic responses to visual or auditory stimuli but did acquire the declarative facts about which visual or auditory stimuli were paired with the unconditioned stimulus. By contrast, a patient with selective bilateral damage to the hippocampus failed to acquire the facts but did acquire the conditioning. Finally, a patient with bilateral damage to both amygdala and hippocampal formation acquired neither the conditioning nor the facts. These findings demonstrate a double dissociation of conditioning and declarative knowledge relative to the human amygdala and hippocampus.

  1. Visual servoing in medical robotics: a survey. Part I: endoscopic and direct vision imaging - techniques and applications.

    PubMed

    Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V

    2014-09-01

    Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.
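
    At the heart of image-based visual servoing is the classic proportional control law v = -lambda * L+ * e, where e is the image-feature error and L the interaction (image Jacobian) matrix. The sketch below uses the standard point-feature interaction matrix with assumed feature depths; it is a generic textbook formulation, not any specific system surveyed here.

        # Classic image-based visual servoing (IBVS) step: v = -gain * pinv(L) @ e.
        # Point features (x, y) are in normalized image coordinates; depths Z are assumed known.
        import numpy as np

        def interaction_matrix(points, depths):
            """Stack the standard 2x6 point-feature interaction matrices."""
            rows = []
            for (x, y), Z in zip(points, depths):
                rows.append([-1 / Z, 0, x / Z, x * y, -(1 + x**2), y])
                rows.append([0, -1 / Z, y / Z, 1 + y**2, -x * y, -x])
            return np.array(rows)

        def ibvs_velocity(current, desired, depths, gain=0.5):
            e = (np.asarray(current) - np.asarray(desired)).ravel()   # feature error
            L = interaction_matrix(current, depths)
            return -gain * np.linalg.pinv(L) @ e                      # 6-DoF camera velocity twist

        if __name__ == "__main__":
            current = [(0.12, 0.05), (-0.10, 0.07), (0.02, -0.11), (-0.05, -0.04)]
            desired = [(0.10, 0.10), (-0.10, 0.10), (0.10, -0.10), (-0.10, -0.10)]
            v = ibvs_velocity(current, desired, depths=[0.5] * 4)
            print("camera velocity (vx, vy, vz, wx, wy, wz):", np.round(v, 3))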

  2. The development of visual speech perception in Mandarin Chinese-speaking children.

    PubMed

    Chen, Liang; Lei, Jianghua

    2017-01-01

    The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13, after which the accuracy rate either stagnates or drops; and (2) a U-shaped development pattern in speed of perception with peak performance in 13-year-olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell for simplex finals, complex finals and initials. These findings suggest that (1) visual speech perception in Chinese is a developmental process that is acquired over time and is still fine-tuned well into late adolescence; (2) factors other than cross-linguistic differences in phonological complexity and degrees of reliance on visual information are involved in the development of visual speech perception.

  3. Soldier-worn augmented reality system for tactical icon visualization

    NASA Astrophysics Data System (ADS)

    Roberts, David; Menozzi, Alberico; Clipp, Brian; Russler, Patrick; Cook, James; Karl, Robert; Wenger, Eric; Church, William; Mauger, Jennifer; Volpe, Chris; Argenta, Chris; Wille, Mark; Snarski, Stephen; Sherrill, Todd; Lupo, Jasper; Hobson, Ross; Frahm, Jan-Michael; Heinly, Jared

    2012-06-01

    This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier's view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina - Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°×30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.
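
    The overlay step described here reduces to projecting a geo-registered world point through the estimated head pose and the display intrinsics to obtain pixel coordinates for the icon. The pose, intrinsics, and waypoint values in the sketch are placeholders; the ULTRA-Vis sensor fusion that produces the pose is not reproduced.

        # Project a geo-registered world point into see-through display pixels.
        # R, t describe the estimated head pose (world -> camera); K holds display intrinsics.
        # All numeric values below are placeholders for illustration.
        import numpy as np

        def project_icon(world_point, R, t, K):
            p_cam = R @ world_point + t            # world -> camera coordinates
            if p_cam[2] <= 0:                      # behind the viewer: nothing to draw
                return None
            uvw = K @ p_cam
            return uvw[:2] / uvw[2]                # pixel coordinates on the display

        if __name__ == "__main__":
            K = np.array([[800.0, 0.0, 320.0],     # focal lengths and principal point (pixels)
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
            R = np.eye(3)                          # head orientation from the sensor-fusion filter
            t = np.zeros(3)
            waypoint = np.array([2.0, -1.0, 25.0]) # a navigation waypoint 25 m ahead
            print("icon pixel position:", project_icon(waypoint, R, t, K))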

  4. Improvement of The Ability of Junior High School Students Thinking Through Visual Learning Assisted GeoGebra Tutorial

    NASA Astrophysics Data System (ADS)

    Elvi, M.; Nurjanah

    2017-02-01

    This research is motivated by the issue that visual thinking ability, a basic ability that students must have in learning geometry, is often lacking. The purpose of this research is to investigate and elucidate: 1) the enhancement of the visual thinking ability of students who received learning assisted with a GeoGebra tutorial; 2) the differences in the improvement of visual thinking ability between students who received learning assisted with GeoGebra and students who received regular learning, across KAM levels (high, medium, and low). The research population was grade VII students at a junior high school in Bandung. The instruments used to collect data in this study consisted of a test and an observation sheet. The data obtained were analyzed using tests of mean differences, i.e., the t-test and one-way and two-way ANOVA. The results showed that: 1) the attainment and enhancement of visual thinking ability of students who received learning assisted with the GeoGebra tutorial were better than those of students who received regular learning; 2) there were differences in the improvement of visual thinking ability between students who received GeoGebra-assisted tutorial learning and students who received regular learning across KAM levels (high, medium, and low).

  5. Image gathering and coding for digital restoration: Information efficiency and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Mccormick, Judith A.; Narayanswamy, Ramkumar

    1989-01-01

    Image gathering and coding are commonly treated as tasks separate from each other and from the digital processing used to restore and enhance the images. The goal is to develop a method that allows us to assess quantitatively the combined performance of image gathering and coding for the digital restoration of images with high visual quality. Digital restoration is often interactive because visual quality depends on perceptual rather than mathematical considerations, and these considerations vary with the target, the application, and the observer. The approach is based on the theoretical treatment of image gathering as a communication channel (J. Opt. Soc. Am. A 2, 1644 (1985); 5, 285 (1988)). Initial results suggest that the practical upper limit of the information contained in the acquired image data ranges typically from approximately 2 to 4 binary information units (bifs) per sample, depending on the design of the image-gathering system. The associated information efficiency of the transmitted data (i.e., the ratio of information over data) ranges typically from approximately 0.3 to 0.5 bif per bit without coding to approximately 0.5 to 0.9 bif per bit with lossless predictive compression and Huffman coding. The visual quality that can be attained with interactive image restoration improves perceptibly as the available information increases to approximately 3 bifs per sample. However, the perceptual improvements that can be attained with further increases in information are very subtle and depend on the target and the desired enhancement.
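
    The information-efficiency figures quoted above can be made concrete with a small numerical illustration: lossless predictive coding lowers the entropy of the transmitted data without discarding acquired information, so the ratio of information to data (bif per bit) rises. The sketch below uses synthetic data, an assumed first-order horizontal predictor, and an assumed 3 bifs/sample of acquired information; it is only a schematic illustration of the quantity, not the authors' radiometric model.

      import numpy as np

      def entropy_bits(values):
          # Shannon entropy (bits per sample) of a discrete-valued array.
          _, counts = np.unique(values, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      rng = np.random.default_rng(0)

      # Synthetic "acquired image": a smooth scene plus mild noise, quantized to 8 bits.
      x = np.linspace(0, 8 * np.pi, 256)
      scene = 128 + 80 * np.sin(x)[None, :] * np.cos(x)[:, None]
      image = np.clip(scene + rng.normal(0, 3, scene.shape), 0, 255).astype(np.uint8)

      raw_bits = entropy_bits(image)                      # bits/sample without coding
      residual = np.diff(image.astype(np.int16), axis=1)  # first-order horizontal predictor
      coded_bits = entropy_bits(residual)                 # bits/sample after predictive coding

      # With an assumed 3 bifs of acquired information per sample (a value inside the
      # range reported above), information efficiency is acquired information per bit.
      acquired_bifs = 3.0
      print(f"raw data:   {raw_bits:.2f} bits/sample, {acquired_bifs / raw_bits:.2f} bif/bit")
      print(f"predictive: {coded_bits:.2f} bits/sample, {acquired_bifs / coded_bits:.2f} bif/bit")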

  6. Right Visual Field Advantage in Parafoveal Processing: Evidence from Eye-Fixation-Related Potentials

    ERIC Educational Resources Information Center

    Simola, Jaana; Holmqvist, Kenneth; Lindgren, Magnus

    2009-01-01

    Readers acquire information outside the current eye fixation. Previous research indicates that having only the fixated word available slows reading, but when the next word is visible, reading is almost as fast as when the whole line is seen. Parafoveal-on-foveal effects are interpreted to reflect that the characteristics of a parafoveal word can…

  7. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).
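
    As a rough illustration of the kind of gradient-based similarity such a registration maximizes, the sketch below scores a simulated projection against a "fluoroscopic" image by comparing image gradients, then sweeps a one-dimensional translation to show the score peaking at the true offset. The metric here is a simplified stand-in for the gradient information similarity named in the abstract, and the synthetic images and the brute-force scan (rather than a full 6-DoF CMA-ES search) are assumptions.

      import numpy as np

      def gradient_similarity(fixed, moving, eps=1e-8):
          # Mean cosine between image gradients, weighted by the smaller gradient
          # magnitude: a simplified cousin of gradient-information metrics.
          gfy, gfx = np.gradient(fixed.astype(float))
          gmy, gmx = np.gradient(moving.astype(float))
          mag_f = np.hypot(gfx, gfy)
          mag_m = np.hypot(gmx, gmy)
          cos = (gfx * gmx + gfy * gmy) / (mag_f * mag_m + eps)
          weight = np.minimum(mag_f, mag_m)
          return float((cos * weight).sum() / (weight.sum() + eps))

      rng = np.random.default_rng(1)
      drr = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)   # smooth random "projection"
      fluoro = np.roll(drr, 2, axis=1) + rng.normal(0, 0.1, drr.shape)  # shifted, noisy "fluoro" image

      # Sweep a 1-D translation; a real registration would search the full 6-DoF pose,
      # e.g. with CMA-ES, instead of this brute-force scan.
      for dx in range(-4, 5):
          s = gradient_similarity(fluoro, np.roll(drr, dx, axis=1))
          print(f"dx={dx:+d}  similarity={s:.3f}")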

  8. Binocular stereo matching method based on structure tensor

    NASA Astrophysics Data System (ADS)

    Song, Xiaowei; Yang, Manyi; Fan, Yubo; Yang, Lei

    2016-10-01

    In a binocular visual system, the most important step in recovering the three-dimensional information of an object is to acquire matching points. The structure tensor summarizes the gradient structure within each point's local neighborhood. It therefore performs well in detecting local structure and is well suited to detecting specific objects such as pedestrians, cars, and road signs in an image. In this paper, the structure tensor is combined with luminance information to form an extended structure tensor. The directional derivatives of luminance in the x and y directions are calculated so that the local structure of the image is more prominent. Meanwhile, the Euclidean distance between the eigenvectors of key points is used as the similarity metric for matching key points between the two images. By matching, the coordinates of the matching points on the detected target are precisely acquired. Experiments were performed on captured left and right images: after binocular calibration, image matching was done to acquire the matching points, and the target depth was then calculated from these matching points. The comparison shows that the structure tensor can accurately acquire matching points in binocular stereo matching.
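
    A minimal sketch of the two ingredients described above, a per-pixel structure tensor built from luminance derivatives and key-point matching by Euclidean distance between feature vectors, is given below. The synthetic data, the Gaussian smoothing window, and the corner-selection threshold are assumptions for illustration; this is not the authors' implementation.

      import numpy as np
      from scipy import ndimage

      def structure_tensor_eigvals(image, sigma=1.5):
          # Per-pixel 2x2 structure tensor from luminance derivatives in x and y,
          # smoothed by a Gaussian window; returns its two eigenvalues.
          gy, gx = np.gradient(image.astype(float))
          jxx = ndimage.gaussian_filter(gx * gx, sigma)
          jxy = ndimage.gaussian_filter(gx * gy, sigma)
          jyy = ndimage.gaussian_filter(gy * gy, sigma)
          trace = jxx + jyy
          root = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
          return (trace + root) / 2, (trace - root) / 2   # eigenvalues of [[jxx, jxy], [jxy, jyy]]

      def match_keypoints(desc_left, desc_right):
          # Match descriptor vectors by smallest Euclidean distance, as in the abstract.
          d = np.linalg.norm(desc_left[:, None, :] - desc_right[None, :, :], axis=2)
          return np.argmin(d, axis=1)   # index of the best right-image match per left key point

      # Toy usage: pick corner-like points in a synthetic image; real use would build
      # descriptors around key points in calibrated left/right images and triangulate depth.
      rng = np.random.default_rng(0)
      img = rng.random((64, 64))
      lam1, lam2 = structure_tensor_eigvals(img)
      corners = np.argwhere(lam2 > np.percentile(lam2, 95))   # top 5% most corner-like pixels
      print(f"{len(corners)} corner-like points; example matches:",
            match_keypoints(rng.random((5, 8)), rng.random((7, 8))))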

  9. The neuropsychological rehabilitation of visual agnosia and Balint's syndrome.

    PubMed

    Heutink, Joost; Indorf, Dana L; Cordes, Christina

    2018-01-24

    Visual agnosia and Balint's syndrome are complex neurological disorders of the higher visual system that can have a remarkable impact on individuals' lives. Rehabilitation of these individuals is important to enable participation in everyday activities despite the impairment. However, the literature about the rehabilitation of these disorders remains scarce. Therefore, the aim of this systematic review is to give an overview of the available literature describing treatment approaches and their effectiveness with regard to these disorders. The PsycINFO, AMED, and MEDLINE databases were searched, resulting in 22 articles meeting the criteria for inclusion. Only articles describing acquired disorders were considered. These articles revealed that there is some information available on the major subtypes of visual agnosia as well as on Balint's syndrome that practising clinicians can consult for guidance. With regard to the type of rehabilitation, compensatory strategies have proven to be beneficial in most of the cases. Restorative training, on the other hand, has produced mixed results. In conclusion, although still scarce, a scientific foundation for the rehabilitation of visual agnosia and Balint's syndrome is evolving. The available approaches give valuable information that can be built upon in the future.

  10. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

    The construction of 3D geological visualization systems has attracted increasing attention in the GIS, computer modeling, simulation, and visualization fields. Such systems not only effectively support geological interpretation and analysis work, but can also help improve professional geoscience education. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim of this paper is to explore a rapid and low-cost development method for constructing a web-based 3D geological system. First, borehole data stored in Excel spreadsheets were extracted and then stored in a SQL Server database on a web server. Second, the JDBC data access component was used to provide access to the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Finally, borehole data acquired from a geological survey were used to test the system, and the results show that the methods described in this paper have practical application value.

  11. Multichannel optical mapping: investigation of depth information

    NASA Astrophysics Data System (ADS)

    Sase, Ichiro; Eda, Hideo; Seiyama, Akitoshi; Tanabe, Hiroki C.; Takatsuki, Akira; Yanagida, Toshio

    2001-06-01

    Near infrared (NIR) light has become a powerful tool for non-invasive imaging of human brain activity. Many systems have been developed to capture the changes in regional brain blood flow and hemoglobin oxygenation, which occur in the human cortex in response to neural activity. We have developed a multi-channel reflectance imaging system, which can be used as a "mapping device" and also as a "multi-channel spectrophotometer". In the present study, we visualized changes in the hemodynamics of the human occipital region in multiple ways. (1) Stimulating the left and right primary visual cortex independently, by showing sector-shaped checkerboards sequentially over the contralateral visual field, resulted in corresponding changes in the hemodynamics observed by the "mapping" measurement. (2) Simultaneous measurement of functional MRI and NIR (changes in total hemoglobin) during visual stimulation showed good spatial and temporal correlation with each other. (3) Placing multiple channels densely over the occipital region demonstrated spatial patterns more precisely, and depth information was also acquired by placing each pair of illumination and detection fibers at various distances. These results indicate that the optical method can provide data for 3D analysis of human brain functions.

  12. Localization Using Visual Odometry and a Single Downward-Pointing Camera

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.

    2012-01-01

    Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.

  13. Visualizing Cloud Properties and Satellite Imagery: A Tool for Visualization and Information Integration

    NASA Astrophysics Data System (ADS)

    Chee, T.; Nguyen, L.; Smith, W. L., Jr.; Spangenberg, D.; Palikonda, R.; Bedka, K. M.; Minnis, P.; Thieman, M. M.; Nordeen, M.

    2017-12-01

    Providing public access to research products, including cloud macro- and microphysical properties and satellite imagery, is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a web-based visualization tool and API that allow end users to easily create customized, dynamically generated cloud product imagery, satellite imagery, ground site data, and satellite ground track information. The tool has two uses: one to visualize the dynamically created imagery, and the other to provide direct access to the dynamically generated imagery at a later time. Internally, we leverage our practical experience with large, scalable applications to develop a system with strong potential for scalability and the ability to be deployed on the cloud. We build upon the NASA Langley Cloud and Radiation Group's experience with making real-time and historical satellite cloud product information, satellite imagery, ground site data, and satellite track information accessible and easily searchable. This tool is the culmination of our prior experience with dynamic imagery generation and provides a way to build a "mash-up" of dynamically generated imagery and related kinds of information that are visualized together to add value to disparate but related information. In support of NASA strategic goals, our group aims to make as much scientific knowledge, observations, and products as possible available to the citizen-science, research, and interested communities, and to allow automated systems to acquire the same information for data mining or other analytic purposes. This tool and the underlying APIs provide a valuable research tool to a wide audience, both as a standalone research tool and as an easily accessed data source that can be mined or used with existing tools.

  14. Electro-Optical Design for Efficient Visual Communication

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.; Jobson, Daniel J.; Rahman, Zia-Ur

    1995-01-01

    Visual communication, in the form of telephotography and television, for example, can be regarded as efficient only if the amount of information that it conveys about the scene to the observer approaches the maximum possible and the associated cost approaches the minimum possible. Elsewhere we have addressed the problem of assessing the end-to-end performance of visual communication systems in terms of their efficiency in this sense by integrating the critical limiting factors that constrain image gathering into classical communication theory. We use this approach to assess the electro-optical design of image-gathering devices as a function of the f number and apodization of the objective lens and the aperture size and sampling geometry of the photodetection mechanism. Results show that an image-gathering device that is designed to optimize information capacity performs similarly to the human eye. For both, the performance approaches the maximum possible, in terms of the efficiency with which the acquired information can be transmitted as decorrelated data, and the fidelity, sharpness, and clarity with which fine detail can be restored.

  15. Supralinear and Supramodal Integration of Visual and Tactile Signals in Rats: Psychophysics and Neuronal Mechanisms.

    PubMed

    Nikbakht, Nader; Tafreshiha, Azadeh; Zoccolan, Davide; Diamond, Mathew E

    2018-02-07

    To better understand how object recognition can be triggered independently of the sensory channel through which information is acquired, we devised a task in which rats judged the orientation of a raised, black and white grating. They learned to recognize two categories of orientation: 0° ± 45° ("horizontal") and 90° ± 45° ("vertical"). Each trial required a visual (V), a tactile (T), or a visual-tactile (VT) discrimination; VT performance was better than that predicted by optimal linear combination of V and T signals, indicating synergy between sensory channels. We examined posterior parietal cortex (PPC) and uncovered key neuronal correlates of the behavioral findings: PPC carried both graded information about object orientation and categorical information about the rat's upcoming choice; single neurons exhibited identical responses under the three modality conditions. Finally, a linear classifier of neuronal population firing replicated the behavioral findings. Taken together, these findings suggest that PPC is involved in the supramodal processing of shape. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Electro-optical design for efficient visual communication

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.; Jobson, Daniel J.; Rahman, Zia-ur

    1995-03-01

    Visual communication, in the form of telephotography and television, for example, can be regarded as efficient only if the amount of information that it conveys about the scene to the observer approaches the maximum possible and the associated cost approaches the minimum possible. Elsewhere we have addressed the problem of assessing the end-to-end performance of visual communication systems in terms of their efficiency in this sense by integrating the critical limiting factors that constrain image gathering into classical communication theory. We use this approach to assess the electro-optical design of image-gathering devices as a function of the f number and apodization of the objective lens and the aperture size and sampling geometry of the photodetection mechanism. Results show that an image-gathering device that is designed to optimize information capacity performs similarly to the human eye. For both, the performance approaches the maximum possible, in terms of the efficiency with which the acquired information can be transmitted as decorrelated data, and the fidelity, sharpness, and clarity with which fine detail can be restored.

  17. Autonomous Visual Navigation of an Indoor Environment Using a Parsimonious, Insect Inspired Familiarity Algorithm

    PubMed Central

    Brayfield, Brad P.

    2016-01-01

    The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects’ brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path’s end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery. PMID:27119720
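
    The core of the scene-familiarity approach, pixel-by-pixel comparison of the current view against stored training snapshots, can be sketched in a few lines. The example below uses random panoramic images and a simple sum-of-squared-differences score; the snapshot sizes, the heading discretization, and the function names are assumptions for illustration, not the study's algorithm or data.

      import numpy as np

      def familiarity(view, stored_views):
          # Familiarity score = smallest pixel-by-pixel sum of squared differences (SSD)
          # between the current view and any stored training snapshot.
          diffs = ((stored_views - view) ** 2).sum(axis=(1, 2))
          return float(diffs.min())

      def best_heading(panorama, stored_views, n_headings=36):
          # Scan candidate headings by rotating the panorama column-wise and keep the
          # heading whose rotated view is most familiar.
          width = panorama.shape[1]
          scores = [familiarity(np.roll(panorama, k * width // n_headings, axis=1), stored_views)
                    for k in range(n_headings)]
          return int(np.argmin(scores)) * 360 // n_headings

      # Toy usage: 50 stored training snapshots (30 x 120 grayscale panoramas); the agent
      # is back at the 20th location but rotated by 10 columns (30 degrees).
      rng = np.random.default_rng(2)
      training_route = rng.random((50, 30, 120))
      current = np.roll(training_route[20], 10, axis=1)
      print("best heading correction:", best_heading(current, training_route), "degrees")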

  18. A Three-Dimensional Simulation and Visualization System for UAV Photogrammetry

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Qu, Y.; Cui, T.

    2017-08-01

    Nowadays UAVs have been widely used for large-scale surveying and mapping. Compared with manned aircraft, UAVs are more cost-effective and responsive. However, UAVs are usually more sensitive to wind conditions, which greatly influence their positions and orientations. The flight height of a UAV is relatively low, and the relief of the terrain may result in serious occlusions. Moreover, the observations acquired by the Position and Orientation System (POS) are usually less accurate than those acquired in manned aerial photogrammetry. All of these factors introduce uncertainties into UAV photogrammetry. To investigate these uncertainties, a three-dimensional simulation and visualization system has been developed. The system is demonstrated with flight plan evaluation, image matching, POS-supported direct georeferencing, and ortho-mosaicking. Experimental results show that the presented system is effective for flight plan evaluation. The generated image pairs are accurate, and false matches can be effectively filtered. The presented system dynamically visualizes the results of direct georeferencing in three dimensions, which is informative and effective for real-time target tracking and positioning. The dynamically generated orthomosaic can be used in emergency applications. The presented system has also been used for teaching the theory and applications of UAV photogrammetry.

  19. Distribution of Potential Hydrothermally Altered Rocks in Central Colorado Derived From Landsat Thematic Mapper Data: A Geographic Information System Data Set

    USGS Publications Warehouse

    Knepper, Daniel H.

    2010-01-01

    As part of the Central Colorado Mineral Resource Assessment Project, the digital image data for four Landsat Thematic Mapper scenes covering central Colorado between Wyoming and New Mexico were acquired and band ratios were calculated after masking pixels dominated by vegetation, snow, and terrain shadows. Ratio values were visually enhanced by contrast stretching, revealing only those areas with strong responses (high ratio values). A color-ratio composite mosaic was prepared for the four scenes so that the distribution of potentially hydrothermally altered rocks could be visually evaluated. To provide a more useful input to a Geographic Information System-based mineral resource assessment, the information contained in the color-ratio composite raster image mosaic was converted to vector-based polygons after thresholding to isolate the strongest ratio responses and spatial filtering to reduce vector complexity and isolate the largest occurrences of potentially hydrothermally altered rocks.
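
    The band-ratio, contrast-stretch, and thresholding steps described above can be illustrated with a short sketch on synthetic data. The band pairing, the masking rule, and the percentile threshold below are placeholders, not the specific Thematic Mapper ratios or thresholds used in the assessment.

      import numpy as np

      def strong_ratio_mask(band_a, band_b, exclude, percentile=98):
          # Band ratio over unmasked pixels, linear contrast stretch, then keep only the
          # strongest responses (above the given percentile).
          ratio = np.where(exclude, np.nan, band_a / np.maximum(band_b, 1e-6))
          lo, hi = np.nanpercentile(ratio, [2, 98])
          stretched = np.clip((ratio - lo) / (hi - lo), 0, 1)
          thr = np.nanpercentile(stretched, percentile)
          return np.nan_to_num(stretched, nan=0.0) >= thr

      # Toy usage with synthetic reflectance bands; `exclude` marks vegetation/snow/shadow pixels.
      rng = np.random.default_rng(3)
      tm_band_x = rng.uniform(0.1, 0.6, (200, 200))
      tm_band_y = rng.uniform(0.1, 0.6, (200, 200))
      exclude = rng.random((200, 200)) < 0.2
      flagged = strong_ratio_mask(tm_band_x, tm_band_y, exclude)
      print("pixels flagged as potentially altered:", int(flagged.sum()))
      # A GIS workflow would then convert this boolean raster to vector polygons after
      # spatial filtering, as described in the abstract.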

  20. Eye Choice for Acquisition of Targets in Alternating Strabismus

    PubMed Central

    Economides, John R.; Adams, Daniel L.

    2014-01-01

    In strabismus, potentially either eye can inform the brain about the location of a target so that an accurate saccade can be made. Sixteen human subjects with alternating exotropia were tested dichoptically while viewing stimuli on a tangent screen. Each trial began with a fixation cross visible to only one eye. After the subject fixated the cross, a peripheral target visible to only one eye flashed briefly. The subject's task was to look at it. As a rule, the eye to which the target was presented was the eye that acquired the target. However, when stimuli were presented in the far nasal visual field, subjects occasionally performed a “crossover” saccade by placing the other eye on the target. This strategy avoided the need to make a large adducting saccade. In such cases, information about target location was obtained by one eye and used to program a saccade for the other eye, with a corresponding latency increase. In 10/16 subjects, targets were presented on some trials to both eyes. Binocular sensory maps were also compiled to delineate the portions of the visual scene perceived with each eye. These maps were compared with subjects' pattern of eye choice for target acquisition. There was a correspondence between suppression scotoma maps and the eye used to acquire peripheral targets. In other words, targets were fixated by the eye used to perceive them. These studies reveal how patients with alternating strabismus, despite eye misalignment, manage to localize and capture visual targets in their environment. PMID:25355212

  1. Bimanual Coordination Learning with Different Augmented Feedback Modalities and Information Types

    PubMed Central

    Chiou, Shiau-Chuen; Chang, Erik Chihhung

    2016-01-01

    Previous studies have shown that bimanual coordination learning is more resistant to the removal of augmented feedback when acquired with auditory than with visual channel. However, it is unclear whether this differential “guidance effect” between feedback modalities is due to enhanced sensorimotor integration via the non-dominant auditory channel or strengthened linkage to kinesthetic information under rhythmic input. The current study aimed to examine how modalities (visual vs. auditory) and information types (continuous visuospatial vs. discrete rhythmic) of concurrent augmented feedback influence bimanual coordination learning. Participants either learned a 90°-out-of-phase pattern for three consecutive days with Lissajous feedback indicating the integrated position of both arms, or with visual or auditory rhythmic feedback reflecting the relative timing of the movement. The results showed diverse performance change after practice when the feedback was removed between Lissajous and the other two rhythmic groups, indicating that the guidance effect may be modulated by the type of information provided during practice. Moreover, significant performance improvement in the dual-task condition where the irregular rhythm counting task was applied as a secondary task also suggested that lower involvement of conscious control may result in better performance in bimanual coordination. PMID:26895286
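
    To make the Lissajous-style feedback concrete: plotting the position of one arm against the other traces a circle when the movement is exactly 90° out of phase, and the trace distorts as the relative phase drifts. The sketch below is an assumed visualization of that general idea, not the experimental display used in the study.

      import numpy as np
      import matplotlib.pyplot as plt

      t = np.linspace(0, 4 * np.pi, 1000)
      left_arm = np.sin(t)                                          # left-arm position
      right_ideal = np.sin(t + np.pi / 2)                           # exact 90-degree lead: circular trace
      right_drift = np.sin(t + np.pi / 2 + 0.5 * np.sin(0.3 * t))   # drifting phase error

      fig, ax = plt.subplots(figsize=(4, 4))
      ax.plot(left_arm, right_ideal, label="90 deg out of phase (circle)")
      ax.plot(left_arm, right_drift, ":", label="drifting phase (distorted)")
      ax.set_xlabel("left arm position")
      ax.set_ylabel("right arm position")
      ax.set_aspect("equal")
      ax.legend(loc="upper right", fontsize=8)
      plt.show()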

  2. Bimanual Coordination Learning with Different Augmented Feedback Modalities and Information Types.

    PubMed

    Chiou, Shiau-Chuen; Chang, Erik Chihhung

    2016-01-01

    Previous studies have shown that bimanual coordination learning is more resistant to the removal of augmented feedback when acquired with auditory than with visual channel. However, it is unclear whether this differential "guidance effect" between feedback modalities is due to enhanced sensorimotor integration via the non-dominant auditory channel or strengthened linkage to kinesthetic information under rhythmic input. The current study aimed to examine how modalities (visual vs. auditory) and information types (continuous visuospatial vs. discrete rhythmic) of concurrent augmented feedback influence bimanual coordination learning. Participants either learned a 90°-out-of-phase pattern for three consecutive days with Lissajous feedback indicating the integrated position of both arms, or with visual or auditory rhythmic feedback reflecting the relative timing of the movement. The results showed diverse performance change after practice when the feedback was removed between Lissajous and the other two rhythmic groups, indicating that the guidance effect may be modulated by the type of information provided during practice. Moreover, significant performance improvement in the dual-task condition where the irregular rhythm counting task was applied as a secondary task also suggested that lower involvement of conscious control may result in better performance in bimanual coordination.

  3. Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model

    PubMed Central

    Li, Min; Konstantinova, Jelizaveta; Xu, Guanghua; He, Bo; Aminzadeh, Vahid; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2017-01-01

    Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of probed soft tissue is presented via a graphical interface. The force feedback is provided, aided by a master haptic device. The haptic device uses data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for 2 modes (force feedback and visual stiffness feedback) of stiffness feedback on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation. PMID:28248996
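
    A toy version of the dynamically updated stiffness color map described above is sketched below: scattered sliding-indentation samples (position plus a force-over-depth stiffness estimate) are binned onto a grid and rendered as a color map. The probing geometry, the nodule location, and the stiffness model are invented for illustration and are not the authors' data or software.

      import numpy as np
      import matplotlib.pyplot as plt

      # Hypothetical sliding-indentation samples: probe (x, y) positions in mm and a local
      # stiffness estimate (reaction force / indentation depth) in N/mm, with one buried
      # stiff nodule raising the stiffness around (40, 25).
      rng = np.random.default_rng(4)
      xy = rng.uniform(0, 60, size=(400, 2))
      stiffness = 0.8 + 2.5 * np.exp(-((xy - [40.0, 25.0]) ** 2).sum(axis=1) / 40.0)
      stiffness += rng.normal(0, 0.05, len(xy))

      # Bin the scattered samples onto a grid; the mean stiffness per cell plays the role
      # of the dynamically updated color map shown to the operator.
      bins, extent = 24, [[0, 60], [0, 60]]
      wsum, xe, ye = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins, range=extent, weights=stiffness)
      count, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins, range=extent)
      stiff_map = np.divide(wsum, count, out=np.full_like(wsum, np.nan), where=count > 0)

      plt.pcolormesh(xe, ye, stiff_map.T, shading="auto", cmap="viridis")
      plt.colorbar(label="estimated stiffness (N/mm)")
      plt.xlabel("x (mm)")
      plt.ylabel("y (mm)")
      plt.title("Sliding-indentation stiffness map (toy data)")
      plt.show()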

  4. Problem solving in great apes (Pan paniscus, Pan troglodytes, Gorilla gorilla, and Pongo abelii): the effect of visual feedback.

    PubMed

    Völter, Christoph J; Call, Josep

    2012-09-01

    What kind of information animals use when solving problems is a controversial topic. Previous research suggests that, in some situations, great apes prefer to use causally relevant cues over arbitrary ones. To further examine to what extent great apes are able to use information about causal relations, we presented three different puzzle box problems to the four nonhuman great ape species. Of primary interest here was a comparison between one group of apes that received visual access to the functional mechanisms of the puzzle boxes and one group that did not. Apes' performance in the first two, less complex puzzle boxes revealed that they are able to solve such problems by means of trial-and-error learning, requiring no information about the causal structure of the problem. However, visual inspection of the functional mechanisms of the puzzle boxes reduced the amount of time needed to solve the problems. In the case of the most complex problem, which required the use of a crank, visual feedback about what happened when the handle of the crank was turned was necessary for the apes to solve the task. Once the solution was acquired, however, visual feedback was no longer required. We conclude that visual feedback about the consequences of their actions helps great apes to solve complex problems. As the crank task matches the basic requirements of vertical string pulling in birds, the present results are discussed in light of recent findings with corvids.

  5. Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model.

    PubMed

    Li, Min; Konstantinova, Jelizaveta; Xu, Guanghua; He, Bo; Aminzadeh, Vahid; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2017-01-01

    Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of probed soft tissue is presented via a graphical interface. The force feedback is provided, aided by a master haptic device. The haptic device uses data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for 2 modes (force feedback and visual stiffness feedback) of stiffness feedback on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation.

  6. Content-based Music Search and Recommendation System

    NASA Astrophysics Data System (ADS)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

    Recently, the volume of music data on the Internet has increased rapidly. This has increased the cost for users of finding music that suits their preferences in such a large data set. We propose a content-based music search and recommendation system. This system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By visualizing the feature space of the music and visualizing the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information that can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, when searching for music the system presents the user with information obtained from the user profile, and when editing the user profile it presents information obtained from the feature space of the music.

  7. Simultaneous Visual Discrimination in Asian Elephants

    ERIC Educational Resources Information Center

    Nissani, Moti; Hoefler-Nissani, Donna; Lay, U. Tin; Htun, U. Wan

    2005-01-01

    Two experiments explored the behavior of 20 Asian elephants ("Elephas aximus") in simultaneous visual discrimination tasks. In Experiment 1, 7 Burmese logging elephants acquired a white+/black- discrimination, reaching criterion in a mean of 2.6 sessions and 117 discrete trials, whereas 4 elephants acquired a black+/white- discrimination in 5.3…

  8. Are Current Insulin Pumps Accessible to Blind and Visually Impaired People?

    PubMed Central

    Burton, Darren M.; Uslan, Mark M.; Blubaugh, Morgan V.; Clements, Charles W.

    2009-01-01

    Background In 2004, Uslan and colleagues determined that insulin pumps (IPs) on the market were largely inaccessible to blind and visually impaired persons. The objective of this study is to determine if accessibility status changed in the ensuing 4 years. Methods Five IPs on the market in 2008 were acquired and analyzed for key accessibility traits such as speech and other audio output, tactual nature of control buttons, and the quality of visual displays. It was also determined whether or not a blind or visually impaired person could independently complete tasks such as programming the IP for insulin delivery, replacing batteries, and reading manuals and other documentation. Results It was found that IPs have not improved in accessibility since 2004. None have speech output, and with the exception of the Animas IR 2020, no significantly improved visual display characteristics were found. Documentation is still not completely accessible. Conclusion Insulin pumps are relatively complex devices, with serious health consequences resulting from improper use. For IPs to be used safely and independently by blind and visually impaired patients, they must include voice output to communicate all the information presented on their display screens. Enhancing display contrast and the size of the displayed information would also improve accessibility for visually impaired users. The IPs must also come with accessible user documentation in alternate formats. PMID:20144301

  9. Are current insulin pumps accessible to blind and visually impaired people?

    PubMed

    Burton, Darren M; Uslan, Mark M; Blubaugh, Morgan V; Clements, Charles W

    2009-05-01

    In 2004, Uslan and colleagues determined that insulin pumps (IPs) on the market were largely inaccessible to blind and visually impaired persons. The objective of this study is to determine if accessibility status changed in the ensuing 4 years. Five IPs on the market in 2008 were acquired and analyzed for key accessibility traits such as speech and other audio output, tactual nature of control buttons, and the quality of visual displays. It was also determined whether or not a blind or visually impaired person could independently complete tasks such as programming the IP for insulin delivery, replacing batteries, and reading manuals and other documentation. It was found that IPs have not improved in accessibility since 2004. None have speech output, and with the exception of the Animas IR 2020, no significantly improved visual display characteristics were found. Documentation is still not completely accessible. Insulin pumps are relatively complex devices, with serious health consequences resulting from improper use. For IPs to be used safely and independently by blind and visually impaired patients, they must include voice output to communicate all the information presented on their display screens. Enhancing display contrast and the size of the displayed information would also improve accessibility for visually impaired users. The IPs must also come with accessible user documentation in alternate formats. 2009 Diabetes Technology Society.

  10. Prediction of shot success for basketball free throws: visual search strategy.

    PubMed

    Uchida, Yusuke; Mizuguchi, Nobuaki; Honda, Masaaki; Kanosue, Kazuyuki

    2014-01-01

    In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The information acquired must be effective for observing players to select the subsequent action. The present study evaluated the effects of changes in the video replay speed on the spatial visual search strategy and ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at both slow and fast speeds. Experienced players gazed more on the lower part of the player's body when viewing a normal speed video than the novices. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such outcomes are sensitive to video speed.

  11. Bayesian networks and information theory for audio-visual perception modeling.

    PubMed

    Besson, Patricia; Richiardi, Jonas; Bourdin, Christophe; Bringoux, Lionel; Mestre, Daniel R; Vercher, Jean-Louis

    2010-09-01

    Thanks to their different senses, human observers acquire multiple streams of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous and systematic data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method by using data collected in an audio-visual localization task for human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
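
    Mutual-information analysis of this kind starts from pairwise dependence estimates between the recorded variables. The sketch below estimates mutual information from a joint histogram on synthetic localization data; the variable names and the generative model are placeholders, and this is a generic illustration of the quantity rather than the authors' analysis code.

      import numpy as np

      def mutual_information(x, y, bins=8):
          # Mutual information (bits) estimated from the joint histogram of the
          # discretized variables.
          joint, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

      # Toy audio-visual localization data (all names hypothetical): the reported location
      # depends strongly on the visual cue and only weakly on the auditory cue.
      rng = np.random.default_rng(5)
      visual_cue = rng.uniform(-30, 30, 2000)
      auditory_cue = rng.uniform(-30, 30, 2000)
      response = 0.8 * visual_cue + 0.1 * auditory_cue + rng.normal(0, 5, 2000)

      print("I(response; visual)   =", round(mutual_information(response, visual_cue), 3), "bits")
      print("I(response; auditory) =", round(mutual_information(response, auditory_cue), 3), "bits")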

  12. Correlated Imaging – A Grand Challenge in Chemical Analysis

    PubMed Central

    Masyuko, Rachel; Lanni, Eric; Sweedler, Jonathan V.; Bohn, Paul W.

    2013-01-01

    Correlated chemical imaging is an emerging strategy for acquisition of images by combining information from multiplexed measurement platforms to track, visualize, and interpret in situ changes in the structure, organization, and activities of interesting chemical systems, frequently spanning multiple decades in space and time. Acquiring and correlating information from complementary imaging experiments has the potential to expose complex chemical behavior in ways that are simply not available from single methods applied in isolation, thereby greatly amplifying the information gathering power of imaging experiments. However, in order to correlate image information across platforms, a number of issues must be addressed. First, signals are obtained from disparate experiments with fundamentally different figures of merit, including pixel size, spatial resolution, dynamic range, and acquisition rates. In addition, images are often acquired on different instruments in different locations, so the sample must be registered spatially so that the same area of the sample landscape is addressed. The signals acquired must be correlated in both spatial and temporal domains, and the resulting information has to be presented in a way that is readily understood. These requirements pose special challenges for image cross-correlation that go well beyond those posed in single technique imaging approaches. The special opportunities and challenges that attend correlated imaging are explored by specific reference to correlated mass spectrometric and Raman imaging, a topic of substantial and growing interest. PMID:23431559

  13. Information theoretic analysis of canny edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2011-06-01

    In general edge detection evaluation, edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission, and display processes that impact the quality of the acquired image and thus the resulting edge image. We propose a new information-theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. An edge detection algorithm is considered here to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift-variant, we perform the estimation for a set of different system environment conditions using simulations. In this paper we first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we assess the Canny operator using the information-theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
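
    One straightforward way to estimate an effective PSD for a non-linear, shift-variant operator (not necessarily the procedure used in the paper) is to apply it to an ensemble of random scenes and average the squared magnitude of the Fourier transform of its output, as sketched below. The random-scene statistics and the use of scikit-image's canny implementation are assumptions.

      import numpy as np
      from scipy import ndimage
      from skimage import feature

      def empirical_psd(operator, shape=(128, 128), trials=50, seed=0):
          # Average |FFT|^2 of the operator's output over an ensemble of random scenes,
          # giving an empirical power spectral density for a non-linear operator.
          rng = np.random.default_rng(seed)
          acc = np.zeros(shape)
          for _ in range(trials):
              scene = 10.0 * ndimage.gaussian_filter(rng.normal(size=shape), sigma=2.0)
              out = operator(scene).astype(float)
              acc += np.abs(np.fft.fft2(out - out.mean())) ** 2
          return np.fft.fftshift(acc / trials)

      canny_psd = empirical_psd(lambda img: feature.canny(img, sigma=1.5))
      print("PSD shape:", canny_psd.shape, "total power:", round(float(canny_psd.sum()), 1))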

  14. Live Interrogation and Visualization of Earth Systems (LIVES)

    NASA Astrophysics Data System (ADS)

    Nunn, J. A.; Anderson, L. C.

    2007-12-01

    Twenty tablet PCs and associated peripherals acquired through an HP Technology for Teaching grant are being used to redesign two freshman laboratory courses as well as a sophomore geobiology course in Geology and Geophysics at Louisiana State University. The two introductory laboratories serve approximately 750 students per academic year, including both majors and non-majors; the geobiology course enrolls about 35 students/year and is required for majors in the department's geology concentration. Limited enrollments and 3-hour labs make it possible to incorporate hands-on visualization, animation, GIS, manipulation of data and images, and access to geological data available online. Goals of the course redesigns include: enhancing visualization of earth materials, physical/chemical/biological processes, and biosphere/geosphere history; strengthening students' ability to acquire, manage, and interpret multifaceted geological information; fostering critical thinking, the scientific method, and an earth-system science perspective in ancient and modern environments (such as coastal erosion and restoration in Louisiana or the Snowball Earth hypothesis); improving student communication skills; and increasing the quantity, quality, and diversity of students pursuing Earth Science careers. IT resources available in the laboratory provide students with sophisticated visualization tools, allowing them to switch between 2-D and 3-D reconstructions more seamlessly, and enabling them to manipulate larger integrated data sets, thus permitting more time for critical thinking and hypothesis testing. IT resources also enable faculty and students to simultaneously work with simulation software to animate earth processes such as plate motions or groundwater flow and immediately test hypotheses formulated in the data analysis. Finally, tablet PCs make it possible for data gathering and analysis to take place outside a formal classroom. As a result, students will achieve fluency in using visualization and technology for informal and formal scientific communication. The equipment and exercises developed will also be used in additional upper-level undergraduate classes and two outreach programs: the NSF-funded Geoscience Alliance for Enhanced Minority Participation and the Shell Foundation-funded Shell Undergraduate Recruiting and Geoscience Education program.

  15. ICT integration in mathematics initial teacher training and its impact on visualization: the case of GeoGebra

    NASA Astrophysics Data System (ADS)

    Dockendorff, Monika; Solar, Horacio

    2018-01-01

    This case study investigates the impact of the integration of information and communications technology (ICT) on mathematics visualization skills and initial teacher education programmes. It reports on the influence GeoGebra dynamic software use has on promoting mathematical learning at secondary school and on its impact on teachers' conceptions about teaching and learning mathematics. This paper describes how GeoGebra-based dynamic applets - designed and used in an exploratory manner - promote mathematical processes such as making conjectures. It also refers to the changes prospective teachers experience regarding the relevance that dynamic visual representations acquire in teaching mathematics. This study observes a shift in school routines when incorporating technology into the mathematics classroom. Visualization appears as a basic competence associated with key mathematical processes. Implications of an early integration of ICT in mathematics initial teacher training and its impact on developing technological pedagogical content knowledge (TPCK) are drawn.

  16. Some Tests of Response Membership in Acquired Equivalence Classes

    ERIC Educational Resources Information Center

    Urcuioli, Peter J.; Lionello-DeNolf, Karen; Michalek, Sarah; Vasconcelos, Marco

    2006-01-01

    Pigeons were trained on many-to-one matching in which pairs of samples, each consisting of a visual stimulus and a distinctive pattern of center-key responding, occasioned the same reinforced comparison choice. Acquired equivalence between the visual and response samples then was evaluated by reinforcing new comparison choices to one set of…

  17. The Tölz Temporal Topography Study: mapping the visual field across the life span. Part I: the topography of light detection and temporal-information processing.

    PubMed

    Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-08-01

    Temporal performance parameters vary across the visual field. Their topographical distributions relative to each other and relative to basic visual performance measures and their relative change over the life span are unknown. Our goal was to characterize the topography and age-related change of temporal performance. We acquired visual field maps in 95 healthy participants (age: 10-90 years): perimetric thresholds, double-pulse resolution (DPR), reaction times (RTs), and letter contrast thresholds. DPR and perimetric thresholds increased with eccentricity and age; the periphery showed a more pronounced age-related increase than the center. RT increased only slightly and uniformly with eccentricity. It remained almost constant up to the age of 60, a marked change occurring only above 80. Overall, age was a poor predictor of functionality. Performance decline could be explained only in part by the aging of the retina and optic media. In Part II, we therefore examine higher visual and cognitive functions.

  18. Use of the internet as a resource for consumer health information: results of the second osteopathic survey of health care in America (OSTEOSURV-II).

    PubMed

    Licciardone, J C; Smith-Barbaro, P; Coleridge, S T

    2001-01-01

    The Internet offers consumers unparalleled opportunities to acquire health information. The emergence of the Internet, rather than more-traditional sources, for obtaining health information is worthy of ongoing surveillance, including identification of the factors associated with using the Internet for this purpose. To measure the prevalence of Internet use as a mechanism for obtaining health information in the United States; to compare such Internet use with newspapers or magazines, radio, and television; and to identify sociodemographic factors associated with using the Internet for acquiring health information. Data were acquired from the Second Osteopathic Survey of Health Care in America (OSTEOSURV-II), a national telephone survey using random-digit dialing within the United States during 2000. The target population consisted of adult, noninstitutionalized, household members. As part of the survey, data were collected on: facility with the Internet, sources of health information, and sociodemographic characteristics. Multivariate analysis was used to identify factors associated with acquiring health information on the Internet. A total of 499 (64% response rate) respondents participated in the survey. With the exception of an overrepresentation of women (66%), respondents were generally similar to national referents. Fifty percent of respondents either strongly agreed or agreed that they felt comfortable using the Internet as a health information resource. The prevalence rates of using the health information sources were: newspapers or magazines, 69%; radio, 30%; television, 56%; and the Internet, 32%. After adjusting for potential confounders, older respondents were more likely than younger respondents to use newspapers or magazines and television to acquire health information, but less likely to use the Internet. Higher education was associated with greater use of newspapers or magazines and the Internet as health information sources. Internet use was lower in rural than urban or suburban areas. The Internet has already surpassed radio as a source of health information but still lags substantially behind print media and television. Significant barriers to acquiring health information on the Internet remain among persons 60 years of age or older, those with 12 or fewer years of education, and those residing in rural areas. Stronger efforts are needed to ensure access to and facility with the Internet among all segments of the population. This includes user-friendly access for older persons with visual or other functional impairments, providing low-literacy Web sites, and expanding Internet infrastructure to reach all areas of the United States.

  19. Geometric Analysis, Visualization, and Conceptualization of 3D Image Data

    Science.gov Websites

    [Garbled web-page snippet; recoverable fragments: the object's shape is represented by a collection of geometric primitives (points, lines, polygons, etc.); human-supplied masks serve as hints for where to draw contour lines; the workflow includes (1) acquiring information about the inside of an object and generating a 3D image data set, and (2) defining the regions ...]

  20. Eye-tracking novice and expert geologist groups in the field and laboratory

    NASA Astrophysics Data System (ADS)

    Cottrell, R. D.; Evans, K. M.; Jacobs, R. A.; May, B. B.; Pelz, J. B.; Rosen, M. R.; Tarduno, J. A.; Voronov, J.

    2010-12-01

    We are using an Active Vision approach to learn how novices and expert geologists acquire visual information in the field. The Active Vision approach emphasizes that visual perception is an active process wherein new information is acquired about a particular environment through exploratory eye movements. Eye movements are not only influenced by physical stimuli, but are also strongly influenced by high-level perceptual and cognitive processes. Eye-tracking data were collected on ten novices (undergraduate geology students) and three experts during a 10-day field trip across California focused on neotectonics. In addition, high-resolution panoramic images were captured at each key locality for use in a semi-immersive laboratory environment. Examples of each data type will be presented. The number of observers will be increased in subsequent field trips, but expert/novice differences are already apparent in the first set of individual eye-tracking records, including gaze time, gaze pattern, and object recognition. We will review efforts to quantify these patterns and the development of semi-immersive environments to display geologic scenes. The research is a collaborative effort between Earth scientists, cognitive scientists, and imaging scientists at the University of Rochester and the Rochester Institute of Technology, with funding from the National Science Foundation.

  1. Landmark memories are more robust when acquired at the nest site than en route: experiments in desert ants.

    PubMed

    Bisch-Knaden, Sonja; Wehner, Rüdiger

    2003-03-01

    Foraging desert ants, Cataglyphis fortis, encounter different sequences of visual landmarks while navigating by path integration. This paper explores the question whether the storage of landmark information depends on the context in which the landmarks are learned during an ant's foraging journey. Two experimental set-ups were designed in which the ants experienced an artificial landmark panorama that was placed either around the nest entrance (nest marks) or along the vector route leading straight towards the feeder (route marks). The two training paradigms resulted in pronounced differences in the storage characteristics of the acquired landmark information: memory traces of nest marks were much more robust against extinction and/or suppression than those of route marks. In functional terms, this result is in accord with the observation that desert ants encounter new route marks during every foraging run but always pass the same landmarks when approaching the nest entrance.

  2. 3-D Flow Visualization with a Light-field Camera

    NASA Astrophysics Data System (ADS)

    Thurow, B.

    2012-12-01

    Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, 3C velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3-D structure of the turbulent boundary layer. Figures: a schematic illustrating the concept of a plenoptic camera, where each pixel represents both the position and angle of light rays entering the camera, allowing an image to be computationally refocused after acquisition; and an instantaneous 3D velocity field of a turbulent boundary layer determined from light-field data captured by a plenoptic camera.
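    As a rough sketch of the cross-correlation step described above (a generic FFT-based estimator, not the authors' code; interrogation-window handling and sub-voxel peak fitting are omitted), the displacement for one pair of interrogation volumes could be obtained as follows:

        import numpy as np

        def displacement_3d(vol_a, vol_b):
            """Return the integer (dz, dy, dx) particle displacement from vol_a to vol_b."""
            a = vol_a - vol_a.mean()   # zero-mean to suppress background bias
            b = vol_b - vol_b.mean()
            corr = np.fft.ifftn(np.fft.fftn(b) * np.conj(np.fft.fftn(a))).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap indices so displacements can be negative.
            return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))

    Dividing the two reconstructed particle volumes into interrogation windows, applying this estimator to each window pair, and dividing by the inter-frame time would yield a 3D/3C velocity field.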

  3. Classification by Using Multispectral Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Liao, C. T.; Huang, H. H.

    2012-07-01

    Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. Because the semantic information is clearly visualized, ground features can be readily recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images are highly dependent on lighting conditions, and their classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a principal technology for acquiring high-accuracy point cloud data. Its advantages are a high data-acquisition rate, independence from lighting conditions, and the ability to produce three-dimensional coordinates directly. Compared with multispectral images, however, its disadvantage is the lack of multispectral information, which remains a challenge for ground-feature classification from massive point cloud data. Consequently, combining the advantages of both LiDAR and multispectral imagery yields point cloud data with three-dimensional coordinates and multispectral information, providing an integrated solution for point cloud classification. This research therefore acquires visible-light and near-infrared images via close-range photogrammetry and matches the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, thresholds on height and color information are applied to classify the points.
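    A minimal sketch of the final thresholding step, assuming hypothetical height and spectral thresholds (the NDVI-style index is one possible use of the acquired near-infrared band, not necessarily the authors' exact rule):

        import numpy as np

        def classify_points(xyz, rgb, nir, ground_height, low_height=2.0, veg_ndvi=0.3):
            """Label each point: 0 = ground/low, 1 = elevated object, 2 = elevated vegetation."""
            height = xyz[:, 2] - ground_height                      # height above a reference ground level
            red = rgb[:, 0].astype(float)
            ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)     # spectral vegetation index
            labels = np.zeros(len(xyz), dtype=int)
            labels[height > low_height] = 1                         # elevated points
            labels[(height > low_height) & (ndvi > veg_ndvi)] = 2   # elevated and spectrally green
            return labels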

  4. Reward processing in the value-driven attention network: reward signals tracking cue identity and location.

    PubMed

    Anderson, Brian A

    2017-03-01

    Through associative reward learning, arbitrary cues acquire the ability to automatically capture visual attention. Previous studies have examined the neural correlates of value-driven attentional orienting, revealing elevated activity within a network of brain regions encompassing the visual corticostriatal loop [caudate tail, lateral occipital complex (LOC) and early visual cortex] and intraparietal sulcus (IPS). Such attentional priority signals raise a broader question concerning how visual signals are combined with reward signals during learning to create a representation that is sensitive to the confluence of the two. This study examines reward signals during the cued reward training phase commonly used to generate value-driven attentional biases. High, compared with low, reward feedback preferentially activated the value-driven attention network, in addition to regions typically implicated in reward processing. Further examination of these reward signals within the visual system revealed information about the identity of the preceding cue in the caudate tail and LOC, and information about the location of the preceding cue in IPS, while early visual cortex represented both location and identity. The results reveal teaching signals within the value-driven attention network during associative reward learning, and further suggest functional specialization within different regions of this network during the acquisition of an integrated representation of stimulus value. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  5. A further look at postview effects in reading: An eye-movements study of influences from the left of fixation.

    PubMed

    Jordan, Timothy R; McGowan, Victoria A; Kurtev, Stoyan; Paterson, Kevin B

    2016-02-01

    When reading from left to right, useful information acquired during each fixational pause is widely assumed to extend 14 to 15 characters to the right of fixation but just 3 to 4 characters to the left, and certainly no further than the beginning of the fixated word. However, this leftward extent is strikingly small and seems inconsistent with other aspects of reading performance and with the general horizontal symmetry of visual input. Accordingly, 2 experiments were conducted to examine the influence of text located to the left of fixation during each fixational pause using an eye-tracking paradigm in which invisible boundaries were created in sentence displays. Each boundary corresponded to the leftmost edge of each word so that, as each sentence was read, the normal letter content of text to the left of each fixated word was corrupted by letter replacements that were either visually similar or visually dissimilar to the originals. The proximity of corrupted text to the left of fixation was maintained at 1, 2, 3, or 4 words from the left boundary of each fixated word. In both experiments, relative to completely normal text, reading performance was impaired when each type of letter replacement was up to 2 words to the left of fixated words but letter replacements further from fixation produced no impairment. These findings suggest that key aspects of reading are influenced by information acquired during each fixational pause from much further leftward than is usually assumed. Some of the implications of these findings for reading are discussed. (c) 2016 APA, all rights reserved.

  6. Short Term Reproducibility of a High Contrast 3-D Isotropic Optic Nerve Imaging Sequence in Healthy Controls.

    PubMed

    Harrigan, Robert L; Smith, Alex K; Mawn, Louise A; Smith, Seth A; Landman, Bennett A

    2016-02-27

    The optic nerve (ON) plays a crucial role in human vision transporting all visual information from the retina to the brain for higher order processing. There are many diseases that affect the ON structure such as optic neuritis, anterior ischemic optic neuropathy and multiple sclerosis. Because the ON is the sole pathway for visual information from the retina to areas of higher level processing, measures of ON damage have been shown to correlate well with visual deficits. Increased intracranial pressure has been shown to correlate with the size of the cerebrospinal fluid (CSF) surrounding the ON. These measures are generally taken at an arbitrary point along the nerve and do not account for changes along the length of the ON. We propose a high contrast and high-resolution 3-D acquired isotropic imaging sequence optimized for ON imaging. We have acquired scan-rescan data using the optimized sequence and a current standard of care protocol for 10 subjects. We show that this sequence has superior contrast-to-noise ratio to the current standard of care while achieving a factor of 11 higher resolution. We apply a previously published automatic pipeline to segment the ON and CSF sheath and measure the size of each individually. We show that these measures of ON size have lower short-term reproducibility than the population variance and the variability along the length of the nerve. We find that the proposed imaging protocol is (1) useful in detecting population differences and local changes and (2) a promising tool for investigating biomarkers related to structural changes of the ON.

  7. Short term reproducibility of a high contrast 3-D isotropic optic nerve imaging sequence in healthy controls

    NASA Astrophysics Data System (ADS)

    Harrigan, Robert L.; Smith, Alex K.; Mawn, Louise A.; Smith, Seth A.; Landman, Bennett A.

    2016-03-01

    The optic nerve (ON) plays a crucial role in human vision transporting all visual information from the retina to the brain for higher order processing. There are many diseases that affect the ON structure such as optic neuritis, anterior ischemic optic neuropathy and multiple sclerosis. Because the ON is the sole pathway for visual information from the retina to areas of higher level processing, measures of ON damage have been shown to correlate well with visual deficits. Increased intracranial pressure has been shown to correlate with the size of the cerebrospinal fluid (CSF) surrounding the ON. These measures are generally taken at an arbitrary point along the nerve and do not account for changes along the length of the ON. We propose a high contrast and high-resolution 3-D acquired isotropic imaging sequence optimized for ON imaging. We have acquired scan-rescan data using the optimized sequence and a current standard of care protocol for 10 subjects. We show that this sequence has superior contrast-to-noise ratio to the current standard of care while achieving a factor of 11 higher resolution. We apply a previously published automatic pipeline to segment the ON and CSF sheath and measure the size of each individually. We show that these measures of ON size have lower short-term reproducibility than the population variance and the variability along the length of the nerve. We find that the proposed imaging protocol is (1) useful in detecting population differences and local changes and (2) a promising tool for investigating biomarkers related to structural changes of the ON.

  8. Electro-optical design for efficient visual communication

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.; Jobson, Daniel J.; Rahman, Zia-ur

    1994-06-01

    Visual communication can be regarded as efficient only if the amount of information that it conveys from the scene to the observer approaches the maximum possible and the associated cost approaches the minimum possible. To deal with this problem, Fales and Huck have integrated the critical limiting factors that constrain image gathering into classical concepts of communication theory. This paper uses this approach to assess the electro-optical design of the image gathering device. Design variables include the f-number and apodization of the objective lens, the aperture size and sampling geometry of the photodetection mechanism, and lateral inhibition and nonlinear radiance-to-signal conversion akin to the retinal processing in the human eye. It is an agreeable consequence of this approach that the image gathering device that is designed along the guidelines developed from communication theory behaves very much like the human eye. The performance approaches the maximum possible in terms of the information content of the acquired data, and thereby, the fidelity, sharpness and clarity with which fine detail can be restored, the efficiency with which the visual information can be transmitted in the form of decorrelated data, and the robustness of these two attributes to the temporal and spatial variations in scene illumination.

  9. Technical parameters for specifying imagery requirements

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.; Dunnette, Sheri J.

    1994-01-01

    Providing visual information acquired from remote events to various operators, researchers, and practitioners has become progressively more important as the application of special skills in alien or hazardous situations increases. To provide an understanding of the technical parameters required to specify imagery, we have identified, defined, and discussed seven salient characteristics of images: spatial resolution, linearity, luminance resolution, spectral discrimination, temporal discrimination, edge definition, and signal-to-noise ratio. We then describe a generalized imaging system and identify how various parts of the system affect the image data. To emphasize the different applications of imagery, we have contrasted the common television system with the significant parameters of a televisual imaging system for technical applications. Finally, we have established a method by which the required visual information can be specified by describing certain technical parameters that are directly related to the information content of the imagery. This method requires the user to complete a form listing all pertinent data requirements for the imagery.

  10. Visualization of flow by vector analysis of multidirectional cine MR velocity mapping.

    PubMed

    Mohiaddin, R H; Yang, G Z; Kilner, P J

    1994-01-01

    We describe a noninvasive method for visualization of flow and demonstrate its application in a flow phantom and in the great vessels of healthy volunteers and patients with aortic and pulmonary arterial disease. The technique uses multidirectional MR velocity mapping acquired in selected planes. Maps of orthogonal velocity components were then processed into a graphic form immediately recognizable as flow. Cine MR velocity maps of orthogonal velocity components in selected planes were acquired in a flow phantom, 10 healthy volunteers, and 13 patients with dilated great vessels. Velocities were presented by multiple computer-generated streaks whose orientation, length, and movement corresponded to velocity vectors in the chosen plane. The velocity vector maps allowed visualization of complex patterns of primary and secondary flow in the thoracic aorta and pulmonary arteries. The technique revealed coherent, helical forward blood movements in the normal thoracic aorta during midsystole and a reverse flow during early diastole. Abnormal flow patterns with secondary vortices were seen in patients with dilated arteries. The potential of MR velocity vector mapping for in vitro and in vivo visualization of flow patterns is demonstrated. Although this study was limited to two-directional flow in a single anatomical plane, the method provides information that might advance our understanding of the human vascular system in health and disease. Further developments to reduce the acquisition time and the handling and presenting of three-directional velocity data are required to enhance the capability of this method.
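    For illustration, the in-plane velocity components of one cine frame could be rendered as a vector field in the following way (a generic matplotlib sketch, not the clinical software used in the study):

        import numpy as np
        import matplotlib.pyplot as plt

        def plot_velocity_vectors(vx, vy, pixel_spacing_mm=1.0, step=4):
            """vx, vy: 2D arrays of in-plane velocity components (e.g. cm/s) for one cine frame."""
            ny, nx = vx.shape
            x, y = np.meshgrid(np.arange(nx) * pixel_spacing_mm, np.arange(ny) * pixel_spacing_mm)
            s = (slice(None, None, step), slice(None, None, step))   # subsample so vectors stay visible
            plt.quiver(x[s], y[s], vx[s], vy[s], np.hypot(vx[s], vy[s]), angles="xy")
            plt.gca().set_aspect("equal")
            plt.colorbar(label="speed")
            plt.show()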

  11. MRI segmentation by active contours model, 3D reconstruction, and visualization

    NASA Astrophysics Data System (ADS)

    Lopez-Hernandez, Juan M.; Velasquez-Aguilar, J. Guadalupe

    2005-02-01

    Advances in 3D data modelling methods are becoming increasingly important in biology, chemistry and medical applications. The Nuclear Magnetic Resonance Imaging (NMRI) technique has progressed at a spectacular rate over the past few years, and its uses span many applications throughout the body in both anatomical and functional investigations. In this paper we apply Zernike polynomials to build a 3D mesh model of the head from contours extracted from cross-sectional slices by an active contour model, and we propose visualization of the combined 2D-3D (slice-surface) information with OpenGL 3D graphics as a diagnostic aid in medical applications.

  12. Sequential vs simultaneous encoding of spatial information: a comparison between the blind and the sighted.

    PubMed

    Ruotolo, Francesco; Ruggiero, Gennaro; Vinciguerra, Michela; Iachini, Tina

    2012-02-01

    The aim of this research is to assess whether the crucial factor in determining the characteristics of blind people's spatial mental images is the visual impairment per se or the processing style imposed by the dominant perceptual modality used to acquire spatial information, i.e. simultaneous (vision) vs sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions in the path. The crucial manipulation concerned the sequential sighted group: their visual exploration was made sequential by placing visual obstacles within the pathway in such a way that they could not see the positions along the pathway simultaneously. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially congenitally blind ones. Sequential sighted and congenitally blind participants showed an almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5 m). This threshold effect could reveal processing limitations due to the need to integrate and update spatial information. Overall, the results suggest that the characteristics of the processing style, rather than the visual impairment per se, affect blind people's spatial mental images. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. A Collaborative Decision Environment for UAV Operations

    NASA Technical Reports Server (NTRS)

    D'Ortenzio, Matthew V.; Enomoto, Francis Y.; Johan, Sandra L.

    2005-01-01

    NASA is developing Intelligent Mission Management (IMM) technology for science missions employing long-endurance unmanned aerial vehicles (UAVs). The IMM ground-based component is the Collaborative Decision Environment (CDE), a ground system that provides the Mission/Science team with situational awareness, collaboration, and decision-making tools. The CDE is used for pre-flight planning, mission monitoring, and visualization of acquired data. It integrates external data products used for planning and executing a mission, such as weather, satellite data products, and topographic maps, by leveraging established and emerging Open Geospatial Consortium (OGC) standards to acquire external data products via the Internet, together with an industry-standard geographic information system (GIS) toolkit for visualization. Because a Science/Mission team may be geographically dispersed, the CDE is capable of providing access to remote users across wide area networks using Web Services technology. A prototype CDE is being developed for an instrument checkout flight on a manned aircraft in the fall of 2005, in preparation for a full deployment in support of the US Forest Service and NASA Ames Western States Fire Mission in 2006.
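    As an indication of how a CDE-style client can pull an external data product through an OGC standard, the sketch below issues a WMS 1.3.0 GetMap request; the endpoint URL and layer name are hypothetical, and error handling is omitted:

        from urllib.parse import urlencode
        from urllib.request import urlopen

        params = urlencode({
            "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
            "LAYERS": "satellite_fire_detections",   # hypothetical layer name
            "STYLES": "", "CRS": "EPSG:4326",
            "BBOX": "32.0,-125.0,42.0,-114.0",        # lat/lon extent (WMS 1.3.0 axis order)
            "WIDTH": "1024", "HEIGHT": "768", "FORMAT": "image/png",
        })
        url = "https://example.org/wms?" + params     # hypothetical WMS endpoint
        with urlopen(url) as resp, open("fire_layer.png", "wb") as out:
            out.write(resp.read())                    # rendered map image for display in the GIS view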

  14. Formation of visual memories controlled by gamma power phase-locked to alpha oscillations.

    PubMed

    Park, Hyojin; Lee, Dong Soo; Kang, Eunjoo; Kang, Hyejin; Hahm, Jarang; Kim, June Sic; Chung, Chun Kee; Jiang, Haiteng; Gross, Joachim; Jensen, Ole

    2016-06-16

    Neuronal oscillations provide a window for understanding the brain dynamics that organize the flow of information from sensory to memory areas. While it has been suggested that gamma power reflects feedforward processing and alpha oscillations feedback control, it remains unknown how these oscillations dynamically interact. Magnetoencephalography (MEG) data was acquired from healthy subjects who were cued to either remember or not remember presented pictures. Our analysis revealed that in anticipation of a picture to be remembered, alpha power decreased while the cross-frequency coupling between gamma power and alpha phase increased. A measure of directionality between alpha phase and gamma power predicted individual ability to encode memory: stronger control of alpha phase over gamma power was associated with better memory. These findings demonstrate that encoding of visual information is reflected by a state determined by the interaction between alpha and gamma activity.
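    The cross-frequency coupling described here can be illustrated with a generic phase-amplitude coupling measure (a sketch of one common approach, not the authors' MEG pipeline; frequency bands are assumed):

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def bandpass(x, lo, hi, fs, order=4):
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def alpha_gamma_coupling(x, fs):
            """Mean-vector-length coupling between alpha (8-12 Hz) phase and gamma (60-90 Hz) power."""
            alpha_phase = np.angle(hilbert(bandpass(x, 8, 12, fs)))       # instantaneous alpha phase
            gamma_power = np.abs(hilbert(bandpass(x, 60, 90, fs))) ** 2   # instantaneous gamma power
            # Gamma power acts as a weight on the distribution of alpha phases.
            return np.abs(np.mean(gamma_power * np.exp(1j * alpha_phase))) / np.mean(gamma_power)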

  15. Formation of visual memories controlled by gamma power phase-locked to alpha oscillations

    PubMed Central

    Park, Hyojin; Lee, Dong Soo; Kang, Eunjoo; Kang, Hyejin; Hahm, Jarang; Kim, June Sic; Chung, Chun Kee; Jiang, Haiteng; Gross, Joachim; Jensen, Ole

    2016-01-01

    Neuronal oscillations provide a window for understanding the brain dynamics that organize the flow of information from sensory to memory areas. While it has been suggested that gamma power reflects feedforward processing and alpha oscillations feedback control, it remains unknown how these oscillations dynamically interact. Magnetoencephalography (MEG) data was acquired from healthy subjects who were cued to either remember or not remember presented pictures. Our analysis revealed that in anticipation of a picture to be remembered, alpha power decreased while the cross-frequency coupling between gamma power and alpha phase increased. A measure of directionality between alpha phase and gamma power predicted individual ability to encode memory: stronger control of alpha phase over gamma power was associated with better memory. These findings demonstrate that encoding of visual information is reflected by a state determined by the interaction between alpha and gamma activity. PMID:27306959

  16. Formation of visual memories controlled by gamma power phase-locked to alpha oscillations

    NASA Astrophysics Data System (ADS)

    Park, Hyojin; Lee, Dong Soo; Kang, Eunjoo; Kang, Hyejin; Hahm, Jarang; Kim, June Sic; Chung, Chun Kee; Jiang, Haiteng; Gross, Joachim; Jensen, Ole

    2016-06-01

    Neuronal oscillations provide a window for understanding the brain dynamics that organize the flow of information from sensory to memory areas. While it has been suggested that gamma power reflects feedforward processing and alpha oscillations feedback control, it remains unknown how these oscillations dynamically interact. Magnetoencephalography (MEG) data was acquired from healthy subjects who were cued to either remember or not remember presented pictures. Our analysis revealed that in anticipation of a picture to be remembered, alpha power decreased while the cross-frequency coupling between gamma power and alpha phase increased. A measure of directionality between alpha phase and gamma power predicted individual ability to encode memory: stronger control of alpha phase over gamma power was associated with better memory. These findings demonstrate that encoding of visual information is reflected by a state determined by the interaction between alpha and gamma activity.

  17. Anorexia nervosa and body dysmorphic disorder are associated with abnormalities in processing visual information.

    PubMed

    Li, W; Lai, T M; Bohon, C; Loo, S K; McCurdy, D; Strober, M; Bookheimer, S; Feusner, J

    2015-07-01

    Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities--event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI)--to test for abnormal activity associated with early visual signaling. We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.

  18. Rehabilitation of Reading and Visual Exploration in Visual Field Disorders: Transfer or Specificity?

    ERIC Educational Resources Information Center

    Schuett, Susanne; Heywood, Charles A.; Kentridge, Robert W.; Dauner, Ruth; Zihl, Josef

    2012-01-01

    Reading and visual exploration impairments in unilateral homonymous visual field disorders are frequent and disabling consequences of acquired brain injury. Compensatory therapies have been developed, which allow patients to regain sufficient reading and visual exploration performance through systematic oculomotor training. However, it is still…

  19. SU-G-JeP3-05: Geometry Based Transperineal Ultrasound Probe Positioning for Image Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camps, S; With, P de; Verhaegen, F

    2016-06-15

    Purpose: The use of ultrasound (US) imaging in radiotherapy is not widespread, primarily due to the need for skilled operators to perform the scans. Automation of probe positioning has the potential to remove this need and minimize operator dependence. We introduce an algorithm for obtaining a US probe position that allows good visualization of anatomical structures based on clinical requirements. The first application is to 4D transperineal US images of prostate cancer patients. Methods: The algorithm calculates the probe position and orientation using anatomical information provided by a reference CT scan, which is always available in radiotherapy workflows. As an initial test, we apply the algorithm to a CIRS pelvic US phantom to obtain a set of possible probe positions. Subsequently, five of these positions are randomly chosen and used to acquire actual US volumes of the phantom. Visual inspection of these volumes reveals whether the whole prostate and the adjacent edges of the bladder and rectum are fully visualized, as clinically required. In addition, structure positions on the acquired US volumes are compared to the predictions of the algorithm. Results: All acquired volumes fulfill the clinical requirements specified above. Preliminary quantitative evaluation was performed on thirty consecutive slices of two volumes, on which the structures are easily recognizable. The mean absolute distances (MAD) between actual anatomical structure positions and the positions predicted by the algorithm were calculated. This resulted in a MAD of 2.4±0.4 mm for the prostate, 3.2±0.9 mm for the bladder and 3.3±1.3 mm for the rectum. Conclusion: Visual inspection and quantitative evaluation show that the algorithm is able to propose probe positions that fulfill all clinical requirements. The obtained MAD is on average 2.9 mm. However, during evaluation we assumed no errors in structure segmentation and probe positioning. In future steps, accurate estimation of these errors will allow better evaluation of the achieved accuracy.
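    The reported MAD values amount to the mean and standard deviation of point-to-point distances; a minimal sketch (with hypothetical array names) is:

        import numpy as np

        def mean_absolute_distance(predicted_mm, measured_mm):
            """Both inputs are (N, 3) arrays of structure positions in millimetres."""
            d = np.linalg.norm(np.asarray(predicted_mm) - np.asarray(measured_mm), axis=1)
            return d.mean(), d.std()   # report as mean ± standard deviation, e.g. 2.4 ± 0.4 mm

        # prostate_mad, prostate_sd = mean_absolute_distance(pred_prostate, meas_prostate)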

  20. Use of optical aids by visually impaired students: social and cultural factors.

    PubMed

    Monteiro, Gelse Beatriz Martins; Temporini, Edméa Rita; de Carvalho, Keila Monteiro

    2006-01-01

    To identify conceptions and social and cultural factors regarding the use of optical aids by visually impaired students, and to present this information to health and educational professionals. Qualitative research using spontaneous theater (an interactive theater modality based on improvisation) as the research instrument. To analyze the data, an adapted form of the collective subject discourse technique (procedures for the organization of verbal data) was applied; scenes, gestures, expressions, silences, and behaviors were added to the original proposal. The study population included all visually impaired students from elementary public schools, aged 10 to 14 years, who attended a resource room in a São Paulo state city. The students were examined at a university low-vision service. Little knowledge about the impairment and difficult adaptation to the use of optical aids were identified. The students' behavior showed denial of their own problems, discomfort with public use of the aids, and lack of participation in their own health decisions. Analysis through spontaneous theater sessions allows the professional to gather information that cannot be acquired in the usual health care setting. Needs, difficulties, and barriers that users encountered before the prescribed treatment were identified.

  1. Top-down influence on the visual cortex of the blind during sensory substitution

    PubMed Central

    Murphy, Matthew C.; Nau, Amy C.; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S.; Chan, Kevin C.

    2017-01-01

    Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. PMID:26584776

  2. Dual-modality imaging of function and physiology

    NASA Astrophysics Data System (ADS)

    Hasegawa, Bruce H.; Iwata, Koji; Wong, Kenneth H.; Wu, Max C.; Da Silva, Angela; Tang, Hamilton R.; Barber, William C.; Hwang, Andrew B.; Sakdinawat, Anne E.

    2002-04-01

    Dual-modality imaging is a technique in which computed tomography or magnetic resonance imaging is combined with positron emission tomography or single-photon emission computed tomography to acquire structural and functional images with an integrated system. The data are acquired during a single procedure with the patient on a table viewed by both detectors, which facilitates correlation between the structural and functional images. The resulting data can be useful for localization and for more specific diagnosis of disease. In addition, the anatomical information can be used to compensate the correlated radionuclide data for physical perturbations such as photon attenuation, scattered radiation, and partial-volume errors. Thus, dual-modality imaging provides a priori information that can be used to improve both the visual quality and the quantitative accuracy of the radionuclide images. Dual-modality imaging systems are also being developed for biological research that involves small animals. The small-animal dual-modality systems offer advantages for measurements that currently are performed invasively using autoradiography and tissue sampling. By acquiring the required data noninvasively, dual-modality imaging has the potential to allow serial studies in a single animal, to perform measurements with fewer animals, and to improve the statistical quality of the data.

  3. Refractive Errors Affect the Vividness of Visual Mental Images

    PubMed Central

    Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia

    2013-01-01

    The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects not preventing the visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition where the refracting system of the eye fails to focus objects sharply on the retina. As a consequence refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. Present data are consistent with the hypothesis of equivalence between imagery and perception. PMID:23755186

  4. Refractive errors affect the vividness of visual mental images.

    PubMed

    Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia

    2013-01-01

    The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects not preventing the visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition where the refracting system of the eye fails to focus objects sharply on the retina. As a consequence refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. Present data are consistent with the hypothesis of equivalence between imagery and perception.

  5. Comparison Between RGB and RGB-D Cameras for Supporting Low-Cost GNSS Urban Navigation

    NASA Astrophysics Data System (ADS)

    Rossi, L.; De Gaetani, C. I.; Pagliari, D.; Realini, E.; Reguzzoni, M.; Pinto, L.

    2018-05-01

    Pure GNSS navigation is often unreliable in urban areas because of obstructions that prevent correct reception of the satellite signal. Bridging GNSS outages, as well as reconstructing the vehicle attitude, can be achieved by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated, with a focus on the comparison between RGB-D and RGB cameras. In particular, a Microsoft Kinect device (second generation) and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low cost, ease of use and raw-data accessibility. The latter was selected for the high quality of the acquired images and for the possibility of mounting fixed-focal-length lenses with lower weight and cost with respect to a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. The filter differs depending on the visual data acquisition system, because RGB-D cameras acquire both RGB and depth data, allowing the scale problem typical of image-only solutions to be resolved. The two systems and filtering approaches were assessed by ad hoc experimental tests, showing that the use of a Kinect device to support a u-blox low-cost receiver led to a trajectory with decimeter accuracy, about 15% better than that obtained using the Canon EOS M camera.
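    A highly simplified sketch of such a GNSS/vision fusion filter is given below (planar state, assumed noise values, visual odometry reduced to a heading increment and a step length; not the filter implemented in the paper):

        import numpy as np

        class PoseEKF:
            def __init__(self, x0, P0, q_pos=0.05, q_head=0.01, r_gnss=2.0):
                self.x = np.asarray(x0, dtype=float)        # state: [east, north, heading]
                self.P = np.asarray(P0, dtype=float)
                self.Q = np.diag([q_pos, q_pos, q_head])    # process noise (assumed values)
                self.R = np.eye(2) * r_gnss ** 2            # GNSS position noise (m^2)

            def predict(self, step, dheading):
                e, n, th = self.x
                th = th + dheading                          # heading increment from an image pair
                self.x = np.array([e + step * np.cos(th), n + step * np.sin(th), th])
                F = np.array([[1, 0, -step * np.sin(th)],   # Jacobian of the motion model
                              [0, 1,  step * np.cos(th)],
                              [0, 0, 1]])
                self.P = F @ self.P @ F.T + self.Q

            def update_gnss(self, enu_xy):
                H = np.array([[1.0, 0, 0], [0, 1.0, 0]])    # GNSS observes east/north only
                y = np.asarray(enu_xy) - H @ self.x
                S = H @ self.P @ H.T + self.R
                K = self.P @ H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(3) - K @ H) @ self.P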

  6. Combination of individual tree detection and area-based approach in imputation of forest variables using airborne laser data

    NASA Astrophysics Data System (ADS)

    Vastaranta, Mikko; Kankare, Ville; Holopainen, Markus; Yu, Xiaowei; Hyyppä, Juha; Hyyppä, Hannu

    2012-01-01

    The two main approaches to deriving forest variables from laser-scanning data are the statistical area-based approach (ABA) and individual tree detection (ITD). With ITD it is feasible to acquire single tree information, as in field measurements. Here, ITD was used for measuring training data for the ABA. In addition to automatic ITD (ITD auto), we tested a combination of ITD auto and visual interpretation (ITD visual). ITD visual had two stages: in the first, ITD auto was carried out and in the second, the results of the ITD auto were visually corrected by interpreting three-dimensional laser point clouds. The field data comprised 509 circular plots (r = 10 m) that were divided equally for testing and training. ITD-derived forest variables were used for training the ABA and the accuracies of the k-most similar neighbor (k-MSN) imputations were evaluated and compared with the ABA trained with traditional measurements. The root-mean-squared error (RMSE) in the mean volume was 24.8%, 25.9%, and 27.2% with the ABA trained with field measurements, ITD auto, and ITD visual, respectively. When ITD methods were applied in acquiring training data, the mean volume, basal area, and basal area-weighted mean diameter were underestimated in the ABA by 2.7-9.2%. This project constituted a pilot study for using ITD measurements as training data for the ABA. Further studies are needed to reduce the bias and to determine the accuracy obtained in imputation of species-specific variables. The method could be applied in areas with sparse road networks or when the costs of fieldwork must be minimized.
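    For orientation, the imputation and the relative RMSE reported above can be sketched with a plain k-nearest-neighbour regressor standing in for k-MSN (which uses a canonical-correlation-based distance); array names are hypothetical:

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        def relative_rmse(y_true, y_pred):
            return 100.0 * np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true)

        def impute_volume(X_train, y_train, X_test, k=5):
            # X_*: ALS metrics per plot; y_train: mean volume from field or ITD "measurements".
            model = KNeighborsRegressor(n_neighbors=k, weights="distance").fit(X_train, y_train)
            return model.predict(X_test)

        # rmse_pct = relative_rmse(y_test, impute_volume(X_train, y_train, X_test))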

  7. Therapy for nystagmus.

    PubMed

    Thurtell, Matthew J; Leigh, R John

    2010-12-01

    Pathological forms of nystagmus and their visual consequences can be treated using pharmacological, optical, and surgical approaches. Acquired periodic alternating nystagmus improves following treatment with baclofen, and downbeat nystagmus may improve following treatment with aminopyridines. Gabapentin and memantine are helpful in reducing acquired pendular nystagmus due to multiple sclerosis. Ocular oscillations in oculopalatal tremor may also improve following treatment with memantine or gabapentin. The infantile nystagmus syndrome (INS) may have only a minor impact on vision if "foveation periods" are well developed, but symptomatic patients may benefit from treatment with gabapentin, memantine, or base-out prisms to induce convergence. Several surgical therapies are also reported to improve INS, but selection of the optimal treatment depends on careful evaluation of visual acuity and nystagmus intensity in various gaze positions. Electro-optical devices are a promising and novel approach for treating the visual consequences of acquired forms of nystagmus.

  8. How Scientists Develop Competence in Visual Communication

    ERIC Educational Resources Information Center

    Ostergren, Marilyn

    2013-01-01

    Visuals (maps, charts, diagrams and illustrations) are an important tool for communication in most scientific disciplines, which means that scientists benefit from having strong visual communication skills. This dissertation examines the nature of competence in visual communication and the means by which scientists acquire this competence. This…

  9. Pollen structure visualization using high-resolution laboratory-based hard X-ray tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Qiong; Gluch, Jürgen; Krüger, Peter

    A laboratory-based X-ray microscope is used to investigate the 3D structure of unstained whole pollen grains. For the first time, high-resolution laboratory-based hard X-ray microscopy is applied to study pollen grains. Based on the efficient acquisition of statistically relevant, information-rich images using Zernike phase contrast, both surface and internal structures of pine pollen, including exine, intine and cellular structures, are clearly visualized. The specific volumes of these structures are calculated from the tomographic data. The systematic three-dimensional study of pollen grains provides morphological and structural information about taxonomic characters that are essential in palynology. Such studies have a direct impact on disciplines such as forestry, agriculture, horticulture, plant breeding and biodiversity. - Highlights: • Unstained whole pine pollen was visualized by high-resolution laboratory-based HXRM for the first time. • Pollen grains were compared across LM, SEM and high-resolution laboratory-based HXRM. • Phase contrast imaging provides significantly higher contrast in the raw images than absorption contrast imaging. • Surface and internal structures of the pine pollen, including exine, intine and cellular structures, are clearly visualized. • 3D volume data of unstained whole pollen grains are acquired and the specific volumes of the different layers are calculated.

  10. Changing viewer perspectives reveals constraints to implicit visual statistical learning.

    PubMed

    Jiang, Yuhong V; Swallow, Khena M

    2014-10-07

    Statistical learning, the learning of environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.

  11. Finding regions of interest in pathological images: an attentional model approach

    NASA Astrophysics Data System (ADS)

    Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo

    2009-02-01

    This paper introduces an automated method for finding diagnostic regions of interest (RoIs) in histopathological images. The method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists of a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions [1]. The pathologist's cognitive performance depends on inherent image visual cues (bottom-up information) and on acquired clinical knowledge (top-down mechanisms). Our model of the pathologist's visual attention integrates these two components. The selected bottom-up information includes local low-level features such as intensity, color, orientation, and texture. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm inspired by psychological grouping theories, whose parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating an index for each of the low-level characteristics inside a region, and relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually weighted evaluation criterion, finding a quality gain of 3 dB when compared to a classical bottom-up model of attention.
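    A minimal sketch of the integration step, assuming the feature maps have already been normalized to [0, 1] and that the decision threshold has been learned elsewhere (the threshold value here is a placeholder):

        import numpy as np

        def region_relevancy(feature_maps, region_mask):
            """feature_maps: list of 2D maps (intensity, color, orientation, texture); region_mask: boolean 2D array."""
            indexes = [float(fmap[region_mask].mean()) for fmap in feature_maps]  # one index per feature
            return float(np.mean(indexes))                                        # simple average across features

        def is_region_of_interest(feature_maps, region_mask, threshold=0.5):
            # threshold is an assumed value; the paper learns its decision rule from expert segmentations
            return region_relevancy(feature_maps, region_mask) > threshold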

  12. Top-down influence on the visual cortex of the blind during sensory substitution.

    PubMed

    Murphy, Matthew C; Nau, Amy C; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S; Chan, Kevin C

    2016-01-15

    Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Metabolic rate and body size are linked with perception of temporal information☆

    PubMed Central

    Healy, Kevin; McNally, Luke; Ruxton, Graeme D.; Cooper, Natalie; Jackson, Andrew L.

    2013-01-01

    Body size and metabolic rate both fundamentally constrain how species interact with their environment, and hence ultimately affect their niche. While many mechanisms leading to these constraints have been explored, their effects on the resolution at which temporal information is perceived have been largely overlooked. The visual system acts as a gateway to the dynamic environment and the relative resolution at which organisms are able to acquire and process visual information is likely to restrict their ability to interact with events around them. As both smaller size and higher metabolic rates should facilitate rapid behavioural responses, we hypothesized that these traits would favour perception of temporal change over finer timescales. Using critical flicker fusion frequency, the lowest frequency of flashing at which a flickering light source is perceived as constant, as a measure of the maximum rate of temporal information processing in the visual system, we carried out a phylogenetic comparative analysis of a wide range of vertebrates that supported this hypothesis. Our results have implications for the evolution of signalling systems and predator–prey interactions, and, combined with the strong influence that both body mass and metabolism have on a species' ecological niche, suggest that time perception may constitute an important and overlooked dimension of niche differentiation. PMID:24109147

  14. ON THE PERCEPTION OF PROBABLE THINGS

    PubMed Central

    Albright, Thomas D.

    2012-01-01

    Perception is influenced both by the immediate pattern of sensory inputs and by memories acquired through prior experiences with the world. Throughout much of its illustrious history, however, study of the cellular basis of perception has focused on neuronal structures and events that underlie the detection and discrimination of sensory stimuli. Relatively little attention has been paid to the means by which memories interact with incoming sensory signals. Building upon recent neurophysiological/behavioral studies of the cortical substrates of visual associative memory, I propose a specific functional process by which stored information about the world supplements sensory inputs to yield neuronal signals that can account for visual perceptual experience. This perspective represents a significant shift in the way we think about the cellular bases of perception. PMID:22542178

  15. Real-Time Enrollment Dashboard For Multisite Clinical Trials.

    PubMed

    Mattingly, William A; Kelley, Robert R; Wiemken, Timothy L; Chariker, Julia H; Peyrani, Paula; Guinn, Brian E; Binford, Laura E; Buckner, Kimberley; Ramirez, Julio

    2015-10-30

    Achieving patient recruitment goals is critical for the successful completion of a clinical trial. We designed and developed a web-based dashboard for assisting in the management of clinical trial screening and enrollment, and we use it to manage two observational studies of community-acquired pneumonia. Clinical research associates and managers using the dashboard were surveyed to determine its effectiveness compared with traditional direct communication. The dashboard has been in use since it was first introduced in May 2014. Of the 23 staff responding to the survey, 77% felt that it was easier or much easier to use the dashboard for communication than to rely on direct communication. We have designed and implemented a visualization dashboard for managing multi-site clinical trial enrollment in two community-acquired pneumonia studies. Information dashboards are a useful tool for clinical trial management; they can be used as a standalone trial information tool or incorporated into a larger management system.

  16. Audiovisual associations alter the perception of low-level visual motion

    PubMed Central

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influence on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions that isolate low-level, pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing plays some role. PMID:25873869

  17. Adaptive Optics Analysis of Visual Benefit with Higher-order Aberrations Correction of Human Eye - Poster Paper

    NASA Astrophysics Data System (ADS)

    Xue, Lixia; Dai, Yun; Rao, Xuejun; Wang, Cheng; Hu, Yiyun; Liu, Qian; Jiang, Wenhan

    2008-01-01

    Higher-order aberration correction can improve the visual performance of the human eye to some extent. To evaluate how much visual benefit can be obtained with higher-order aberration correction, we developed an adaptive optics vision simulator (AOVS). Dynamic, real-time optimized modal compensation was used to implement various customized higher-order ocular aberration correction strategies. The experimental results indicate that higher-order aberration correction can improve visual performance compared with lower-order correction alone, but the degree of improvement and the appropriate correction strategy differ between individuals. Some subjects acquired great visual benefit when higher-order aberrations were corrected, whereas others acquired little benefit even when all higher-order aberrations were corrected. Therefore, relative to a general lower-order correction strategy, a customized higher-order aberration correction strategy is needed to obtain the optimal visual improvement for each individual. The AOVS provides an effective tool for higher-order ocular aberration optometry and customized ocular aberration correction.
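    Modal compensation can be pictured as zeroing selected Zernike coefficients of the measured wavefront; with an orthonormal (Noll-indexed) basis, the residual wavefront RMS is the root-sum-square of the remaining coefficients. The sketch below uses hypothetical coefficients, not measurements from the AOVS:

        import numpy as np

        def residual_rms(zernike_um, corrected_modes):
            """zernike_um: {Noll index: coefficient in micrometres}; corrected_modes: iterable of Noll indexes."""
            residual = [c for j, c in zernike_um.items() if j not in set(corrected_modes)]
            return float(np.sqrt(sum(c ** 2 for c in residual)))

        eye = {4: 0.60, 5: 0.20, 6: 0.15, 7: 0.10, 8: 0.12, 11: 0.08}   # hypothetical eye
        print(residual_rms(eye, corrected_modes=[4, 5, 6]))              # lower-order correction only
        print(residual_rms(eye, corrected_modes=eye.keys()))             # all measured modes corrected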

  18. Haptograph Representation of Real-World Haptic Information by Wideband Force Control

    NASA Astrophysics Data System (ADS)

    Katsura, Seiichiro; Irie, Kouhei; Ohishi, Kiyoshi

    Artificial acquisition and reproduction of human sensations are basic technologies of communication engineering. For example, auditory information is obtained by a microphone and reproduced by a speaker, and a video camera and a television make it possible to transmit visual sensation by broadcasting. In contrast, because tactile or haptic information is subject to Newton's law of action and reaction in the real world, a device that acquires, transmits, and reproduces this information has not yet been established. From this point of view, real-world haptics is a key technology for future haptic communication engineering. This paper proposes a novel method for acquiring haptic information, named the "haptograph". The haptograph visualizes haptic information much as a photograph visualizes a scene. The proposed haptograph is applied to haptic recognition of the contact environment: a linear motor contacts the surface of the environment, and its reaction force is used to make a haptograph. Robust contact motion and sensorless sensing of the reaction force are attained by using a disturbance observer. As a result, an encyclopedia of contact environments is obtained. Because temporal and spatial analyses are conducted to represent haptic information as a haptograph, the information can be recognized and evaluated intuitively.
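    A generic, textbook-style form of the disturbance observer mentioned above (not the authors' implementation; motor parameters and the filter cutoff are placeholders) estimates the reaction force from the current reference and the measured velocity:

        import numpy as np

        def disturbance_observer(i_ref, velocity, kt, mass, g, dt):
            """Return a low-pass-filtered disturbance (reaction force) estimate per sample."""
            accel = np.gradient(np.asarray(velocity, dtype=float), dt)   # numerical derivative of velocity
            raw = kt * np.asarray(i_ref, dtype=float) - mass * accel     # thrust minus inertial force
            alpha = g * dt / (1.0 + g * dt)                              # first-order low-pass coefficient
            est = np.zeros_like(raw)
            for k in range(1, len(raw)):
                est[k] = est[k - 1] + alpha * (raw[k] - est[k - 1])
            return est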

  19. Helmet-mounted displays in long-range-target visual acquisition

    NASA Astrophysics Data System (ADS)

    Wilkins, Donald F.

    1999-07-01

    Aircrews have always sought a tactical advantage within the within-visual-range (WVR) arena, usually defined as "see the opponent first." Even with radar and identification friend or foe (IFF) systems, the pilot who visually acquires his opponent first has a significant advantage. A Helmet Mounted Cueing System (HMCS) equipped with a camera offers an opportunity to correct the problems with previous approaches. By utilizing real-time image enhancement techniques and feeding the image to the pilot on the HMD, the target can be visually acquired well beyond the range of the unaided eye. This paper explores the camera and display requirements for such a system and places those requirements within the context of other requirements, such as weight.

  20. Afferentation of the lateral nidopallium: A tracing study of a brain area involved in sexual imprinting in the zebra finch (Taeniopygia guttata).

    PubMed

    Sadananda, Monika; Bischof, Hans-Joachim

    2006-08-23

    The lateral forebrain of zebra finches that comprises parts of the lateral nidopallium and parts of the lateral mesopallium is supposed to be involved in the storage and processing of visual information acquired by an early learning process called sexual imprinting. This information is later used to select an appropriate sexual partner for courtship behavior. Being involved in such a complicated behavioral task, the lateral nidopallium should be an integrative area receiving input from many other regions of the brain. Our experiments indeed show that the lateral nidopallium receives input from a variety of telencephalic regions including the primary and secondary areas of both visual pathways, the globus pallidus, the caudolateral nidopallium functionally comparable to the prefrontal cortex, the caudomedial nidopallium involved in song perception and storage of song-related memories, and some parts of the arcopallium. There are also a number of thalamic, mesencephalic, and brainstem efferents including the catecholaminergic locus coeruleus and the unspecific activating reticular formation. The spatial distribution of afferents suggests a compartmentalization of the lateral nidopallium into several subdivisions. Based on its connections, the lateral nidopallium should be considered as an area of higher order processing of visual information coming from the tectofugal and the thalamofugal visual pathways. Other sensory modalities and also motivational factors from a variety of brain areas are also integrated here. These findings support the idea of an involvement of the lateral nidopallium in imprinting and the control of courtship behavior.

  1. Effects of action video game training on visual working memory.

    PubMed

    Blacker, Kara J; Curby, Kim M; Klobusicky, Elizabeth; Chein, Jason M

    2014-10-01

    The ability to hold visual information in mind over a brief delay is critical for acquiring information and navigating a complex visual world. Despite the ubiquitous nature of visual working memory (VWM) in our everyday lives, this system is fundamentally limited in capacity. Therefore, the potential to improve VWM through training is a growing area of research. An emerging body of literature suggests that extensive experience playing action video games yields a myriad of perceptual and attentional benefits. Several lines of converging work suggest that action video game play may influence VWM as well. The current study utilized a training paradigm to examine whether action video games cause improvements to the quantity and/or the quality of information stored in VWM. The results suggest that VWM capacity, as measured by a change detection task, is increased after action video game training, as compared with training on a control game, and that some improvement to VWM precision occurs with action game training as well. However, these findings do not appear to extend to a complex span measure of VWM, which is often thought to tap into higher-order executive skills. The VWM improvements seen in individuals trained on an action video game cannot be accounted for by differences in motivation or engagement, differential expectations, or baseline differences in demographics as compared with the control group used. In sum, action video game training represents a potentially unique and engaging platform by which this severely capacity-limited VWM system might be enhanced.
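
    Change-detection performance of this kind is commonly summarized with Cowan's K, computed as set size x (hit rate - false-alarm rate). The snippet below shows that standard estimator with invented pre- and post-training numbers; it illustrates the measure only and is not the study's actual analysis or data.

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K estimate of visual working memory capacity
    from a single-probe change-detection task."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical pre-/post-training performance at set size 6.
print(cowans_k(6, hit_rate=0.78, false_alarm_rate=0.22))  # pre:  ~3.4 items
print(cowans_k(6, hit_rate=0.85, false_alarm_rate=0.18))  # post: ~4.0 items
```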

  2. Use of the Photo-Electromyogram to Objectively Diagnose and Monitor Treatment of Post-TBI Light Sensitivity

    DTIC Science & Technology

    2012-10-01

    in place. Mark Ginsberg, one of our local jewelry store owners, has acquired 3D extruding printers for medical instrumentation applications and will... ...tested out our software, which was written to control the monitor brightness, duration, and color for each visual stimulus. The software has been...

  3. Current Therapy of Acquired Ocular Toxoplasmosis: A Review.

    PubMed

    Lima, Guilherme Sturzeneker Cerqueira; Saraiva, Patricia Grativol Costa; Saraiva, Fábio Petersen

    2015-11-01

    Caused by the parasite Toxoplasma gondii, ocular toxoplasmosis (OT) is the most common form of posterior infectious uveitis. Combined antiparasitic therapy is the standard treatment for OT, but several other schemes have been proposed. The purpose of the present study was to review the literature on the treatment of OT and provide ophthalmologists with up-to-date information to help reduce OT-related visual morbidity. In conclusion, no ideal treatment scheme was identified; currently prescribed therapeutic schemes yield statistically similar functional outcomes.

  4. An update on acquired nystagmus.

    PubMed

    Rucker, Janet C

    2008-01-01

    Proper evaluation and treatment of acquired nystagmus requires accurate characterization of nystagmus type and visual effects. This review addresses important historical and examination features of nystagmus and current concepts of pathogenesis and treatment of gaze-evoked nystagmus, nystagmus due to vision loss, acquired pendular nystagmus, peripheral and central vestibular nystagmus, and periodic alternating nystagmus.

  5. Automatic Estimation of Volcanic Ash Plume Height using WorldView-2 Imagery

    NASA Technical Reports Server (NTRS)

    McLaren, David; Thompson, David R.; Davies, Ashley G.; Gudmundsson, Magnus T.; Chien, Steve

    2012-01-01

    We explore the use of machine learning, computer vision, and pattern recognition techniques to automatically identify volcanic ash plumes and plume shadows in WorldView-2 imagery. Using the relative positions of the sun and spacecraft, together with terrain information in the form of a digital elevation map, the height of the ash plume can also be inferred from the classified plume and shadow regions. We present results from applying this approach to six scenes of the Eyjafjallajokull eruption in Iceland acquired on two separate days in April and May of 2010. These results show rough agreement with ash plume height estimates from visual and radar-based measurements.
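
    A hedged sketch of the underlying geometry: once plume and shadow pixels are classified, a first-order height estimate follows from the shadow length and the solar elevation, assuming flat terrain and near-nadir viewing. The numbers below are invented, and the paper's full method also uses the spacecraft position and the digital elevation map.

```python
import math

def plume_height_from_shadow(shadow_length_m, sun_elevation_deg):
    """Rough plume height from the horizontal length of its shadow,
    assuming flat terrain and a near-nadir view: h = L * tan(elevation)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# Hypothetical numbers: a 9 km shadow with the sun 35 degrees above the horizon.
print(f"{plume_height_from_shadow(9000.0, 35.0):.0f} m")  # ~6300 m
```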

  6. Infographic Development by Accelerated Bachelor of Science in Nursing Students: An Innovative Technology-Based Approach to Public Health Education.

    PubMed

    Falk, Nancy L

    Health communications and baccalaureate nursing education are increasingly impacted by new technological tools. This article describes how an Accelerated Bachelor of Science in Nursing program incorporates an infographic assignment into a graduate-level online health information and technology course. Students create colorful, engaging infographics using words and visuals to communicate public health information. The assignment, which incorporates the use of data and evidence, provides students the opportunity to acquire new research and technology skills while gaining confidence creating and innovating. The finished products may be disseminated, serving as vehicles to influence public health and well-being.

  7. Using digital colour to increase the realistic appearance of SEM micrographs of bloodstains.

    PubMed

    Hortolà, Policarp

    2010-10-01

    Although in the scientific-research literature micrographs from scanning electron microscopes (SEMs) are usually displayed in greyscale, the colour resources provided by SEM-coupled image-acquiring systems and, secondarily, by free image-manipulation software deserve to be explored as tools for colouring SEM micrographs of bloodstains. After greyscale SEM micrographs of a human blood smear (dark red to the naked eye) on grey chert were acquired, red-toned versions were obtained manually using both the SEM-coupled image-acquiring system and a free image-manipulation program, and thermal-toned versions were generated automatically using the SEM-coupled system. The red images obtained with the SEM-coupled system showed lower visual-discrimination capability than the other coloured images, whereas the red images generated with the free software conveyed more visual information than those generated with the SEM-coupled system. The thermal-toned images, although further from the real sample colour than the red ones, not only increased the realistic appearance relative to the greyscale images but also yielded the best visual-discrimination capability among all the coloured SEM micrographs, and considerably enhanced the relief effect relative to both the greyscale and the red images. Applying digital colour by means of an SEM-coupled image-acquiring system or, when required, free image-manipulation software provides a user-friendly, quick, and inexpensive way of obtaining coloured SEM micrographs of bloodstains, avoiding sophisticated, time-consuming colouring procedures. Although this work focused on bloodstains, other monochromatic or quasi-monochromatic samples could very probably also be given a more realistic appearance by colouring them with the simple methods used in this study.
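
    As a minimal illustration of the kind of red-toning that free image-manipulation software can perform (this is not the specific procedure used in the study, and the filenames are hypothetical), a greyscale micrograph can be pushed into the red channel of an RGB image:

```python
import numpy as np
from PIL import Image

def colourize_red(grey):
    """Map a greyscale micrograph (uint8 array) into a red tone: the grey values
    drive the red channel, with only a small green/blue admixture."""
    rgb = np.zeros(grey.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = grey                              # red channel carries the intensity
    rgb[..., 1] = (0.15 * grey).astype(np.uint8)    # small admixture desaturates slightly
    rgb[..., 2] = (0.15 * grey).astype(np.uint8)
    return Image.fromarray(rgb, mode="RGB")

# Synthetic stand-in for a greyscale SEM micrograph; a real one would be loaded
# with Image.open("micrograph.tif").convert("L") (hypothetical filename).
grey = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)
colourize_red(grey).save("micrograph_red.png")
```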

  8. Development of a vocabulary of object shapes in a child with a very-early-acquired visual agnosia: a unique case.

    PubMed

    Funnell, Elaine; Wilding, John

    2011-02-01

    We report a longitudinal study of an exceptional child (S.R.) whose early-acquired visual agnosia, following encephalitis at 8 weeks of age, did not prevent her from learning to construct an increasing vocabulary of visual object forms (drawn from different categories), albeit slowly. S.R. had problems perceiving subtle differences in shape; she was unable to segment local letters within global displays; and she would bring complex scenes close to her eyes: a symptom suggestive of an attempt to reduce visual crowding. Investigations revealed a robust ability to use the gestalt grouping factors of proximity and collinearity to detect fragmented forms in noisy backgrounds, compared with a very weak ability to segment fragmented forms on the basis of contrasts of shape. When contrasts in spatial grouping and shape were pitted against each other, shape made little contribution, consistent with problems in perceiving complex scenes, but when shape contrast was varied, and spatial grouping was held constant, S.R. showed the same hierarchy of difficulty as the controls, although her responses were slowed. This is the first report of a child's visual-perceptual development following very early neurological impairments to the visual cortex. Her ability to learn to perceive visual shape following damage at a rudimentary stage of perceptual development contrasts starkly with the loss of such ability in childhood cases of acquired visual agnosia that follow damage to the established perceptual system. Clearly, there is a critical period during which neurological damage to the highly active, early developing visual-perceptual system does not prevent but only impairs further learning.

  9. Attention modulates trans-saccadic integration.

    PubMed

    Stewart, Emma E M; Schütz, Alexander C

    2018-01-01

    With every saccade, humans must reconcile the low resolution peripheral information available before a saccade, with the high resolution foveal information acquired after the saccade. While research has shown that we are able to integrate peripheral and foveal vision in a near-optimal manner, it is still unclear which mechanisms may underpin this important perceptual process. One potential mechanism that may moderate this integration process is visual attention. Pre-saccadic attention is a well documented phenomenon, whereby visual attention shifts to the location of an upcoming saccade before the saccade is executed. While it plays an important role in other peri-saccadic processes such as predictive remapping, the role of attention in the integration process is as yet unknown. This study aimed to determine whether the presentation of an attentional distractor during a saccade impaired trans-saccadic integration, and to measure the time-course of this impairment. Results showed that presenting an attentional distractor impaired integration performance both before saccade onset, and during the saccade, in selected subjects who showed integration in the absence of a distractor. This suggests that visual attention may be a mechanism that facilitates trans-saccadic integration. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  10. Functional correlates of musical and visual ability in frontotemporal dementia.

    PubMed

    Miller, B L; Boone, K; Cummings, J L; Read, S L; Mishkin, F

    2000-05-01

    The emergence of new skills in the setting of dementia suggests that loss of function in one brain area can release new functions elsewhere. To characterise 12 patients with frontotemporal dementia (FTD) who acquired, or sustained, new musical or visual abilities despite progression of their dementia. Twelve patients with FTD who acquired or maintained musical or artistic ability were compared with 46 patients with FTD in whom new or sustained ability was absent. The group with musical or visual ability performed better on visual, but worse on verbal tasks than did the other patients with FTD. Nine had asymmetrical left anterior dysfunction. Nine showed the temporal lobe variant of FTD. Loss of function in the left anterior temporal lobe may lead to facilitation of artistic or musical skills. Patients with the left-sided temporal lobe variant of FTD offer an unexpected window into the neurological mediation of visual and musical talents.

  11. Mars @ ASDC

    NASA Astrophysics Data System (ADS)

    Carraro, Francesco

    "Mars @ ASDC" is a project born with the goal of using the new web technologies to assist researches involved in the study of Mars. This project employs Mars map and javascript APIs provided by Google to visualize data acquired by space missions on the planet. So far, visualization of tracks acquired by MARSIS and regions observed by VIRTIS-Rosetta has been implemented. The main reason for the creation of this kind of tool is the difficulty in handling hundreds or thousands of acquisitions, like the ones from MARSIS, and the consequent difficulty in finding observations related to a particular region. This led to the development of a tool which allows to search for acquisitions either by defining the region of interest through a set of geometrical parameters or by manually selecting the region on the map through a few mouse clicks The system allows the visualization of tracks (acquired by MARSIS) or regions (acquired by VIRTIS-Rosetta) which intersect the user defined region. MARSIS tracks can be visualized both in Mercator and polar projections while the regions observed by VIRTIS can presently be visualized only in Mercator projection. The Mercator projection is the standard map provided by Google. The polar projections are provided by NASA and have been developed to be used in combination with APIs provided by Google The whole project has been developed following the "open source" philosophy: the client-side code which handles the functioning of the web page is written in javascript; the server-side code which executes the searches for tracks or regions is written in PHP and the DB which undergoes the system is MySQL.

  12. Evidence for Non-Opponent Coding of Colour Information in Human Visual Cortex: Selective Loss of "Green" Sensitivity in a Subject with Damaged Ventral Occipito-Temporal Cortex.

    PubMed

    Rauscher, Franziska G; Plant, Gordon T; James-Galton, Merle; Barbur, John L

    2011-01-01

    Damage to ventral occipito-temporal extrastriate visual cortex leads to the syndrome of prosopagnosia often with coexisting cerebral achromatopsia. A patient with this syndrome resulting in a left upper homonymous quadrantanopia, prosopagnosia, and incomplete achromatopsia is described. Chromatic sensitivity was assessed at a number of locations in the intact visual field using a dynamic luminance contrast masking technique that isolates the use of colour signals. In normal subjects chromatic detection thresholds form an elliptical contour when plotted in the Commission Internationale d'Eclairage, (x-y), chromaticity diagram. Because the extraction of colour signals in early visual processing involves opponent mechanisms, subjects with Daltonism (congenital red/green loss of sensitivity) show symmetric increase in thresholds towards the long wavelength ("red") and middle wavelength ("green") regions of the spectrum locus. This is also the case with acquired loss of chromatic sensitivity as a result of retinal or optic nerve disease. Our patient's results were an exception to this rule. Whilst his chromatic sensitivity in the central region of the visual field was reduced symmetrically for both "red/green" and "yellow/blue" directions in colour space, the subject's lower left quadrant showed a marked asymmetry in "red/green" thresholds with the greatest loss of sensitivity towards the "green" region of the spectrum locus. This spatially localized asymmetric loss of "green" but not "red" sensitivity has not been reported previously in human vision. Such loss is consistent with selective damage of neural substrates in the visual cortex that process colour information, but are spectrally non-opponent.

  13. Music and words in the visual cortex: The impact of musical expertise.

    PubMed

    Mongelli, Valeria; Dehaene, Stanislas; Vinckier, Fabien; Peretz, Isabelle; Bartolomeo, Paolo; Cohen, Laurent

    2017-01-01

    How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Alterations in visual cortical activation and connectivity with prefrontal cortex during working memory updating in major depressive disorder.

    PubMed

    Le, Thang M; Borghi, John A; Kujawa, Autumn J; Klein, Daniel N; Leung, Hoi-Chung

    2017-01-01

    The present study examined the impacts of major depressive disorder (MDD) on visual and prefrontal cortical activity as well as their connectivity during visual working memory updating and related them to the core clinical features of the disorder. Impairment in working memory updating is typically associated with the retention of irrelevant negative information which can lead to persistent depressive mood and abnormal affect. However, performance deficits have been observed in MDD on tasks involving little or no demand on emotion processing, suggesting dysfunctions may also occur at the more basic level of information processing. Yet, it is unclear how various regions in the visual working memory circuit contribute to behavioral changes in MDD. We acquired functional magnetic resonance imaging data from 18 unmedicated participants with MDD and 21 age-matched healthy controls (CTL) while they performed a visual delayed recognition task with neutral faces and scenes as task stimuli. Selective working memory updating was manipulated by inserting a cue in the delay period to indicate which one or both of the two memorized stimuli (a face and a scene) would remain relevant for the recognition test. Our results revealed several key findings. Relative to the CTL group, the MDD group showed weaker postcue activations in visual association areas during selective maintenance of face and scene working memory. Across the MDD subjects, greater rumination and depressive symptoms were associated with more persistent activation and connectivity related to no-longer-relevant task information. Classification of postcue spatial activation patterns of the scene-related areas was also less consistent in the MDD subjects compared to the healthy controls. Such abnormalities appeared to result from a lack of updating effects in postcue functional connectivity between prefrontal and scene-related areas in the MDD group. In sum, disrupted working memory updating in MDD was revealed by alterations in activity patterns of the visual association areas, their connectivity with the prefrontal cortex, and their relationship with core clinical characteristics. These results highlight the role of information updating deficits in the cognitive control and symptomatology of depression.

  15. Small Aircraft Data Distribution System

    NASA Technical Reports Server (NTRS)

    Chazanoff, Seth L.; Dinardo, Steven J.

    2012-01-01

    The CARVE Small Aircraft Data Distribution System acquires the aircraft location and attitude data that is required by the various programs running on a distributed network. This system distributes the data it acquires to the data acquisition programs for inclusion in their data files. It uses UDP (User Datagram Protocol) to broadcast data over a LAN (Local Area Network) to any programs that might have a use for the data. The program is easily adaptable to acquire additional data and log that data to disk. The current version also drives displays using precision pitch and roll information to aid the pilot in maintaining a level-level attitude for radar/radiometer mapping beyond the degree available by flying visually or using a standard gyro-driven attitude indicator. The software is designed to acquire an array of data to help the mission manager make real-time decisions as to the effectiveness of the flight. This data is displayed for the mission manager and broadcast to the other experiments on the aircraft for inclusion in their data files. The program also drives real-time precision pitch and roll displays for the pilot and copilot to aid them in maintaining the desired attitude, when required, during data acquisition on mapping lines.
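
    As a hedged sketch of the broadcasting scheme described (the port number, message format and field names below are invented, not the CARVE system's actual protocol), a UDP broadcaster on the aircraft LAN could look like the following; each data-acquisition program would simply bind a UDP socket to the same port and call recvfrom() to log the datagrams.

```python
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 49152)   # hypothetical LAN broadcast address/port

def broadcast_attitude(sock, lat, lon, alt_m, pitch_deg, roll_deg):
    """Send one UDP datagram with the current aircraft state to the LAN."""
    msg = json.dumps({"t": time.time(), "lat": lat, "lon": lon,
                      "alt": alt_m, "pitch": pitch_deg, "roll": roll_deg})
    sock.sendto(msg.encode("utf-8"), BROADCAST_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)   # allow broadcast sends
broadcast_attitude(sock, 64.84, -147.72, 3200.0, pitch_deg=0.4, roll_deg=-1.2)
```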

  16. Fourier domain image fusion for differential X-ray phase-contrast breast imaging.

    PubMed

    Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne

    2017-04-01

    X-ray phase-contrast (XPC) imaging is a novel technology with great potential for clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology for combining and visualizing the relevant diagnostic features present in the X-ray attenuation, phase-shift and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method allows complementary information from the three acquired signals to be presented in a single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement of diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and of the fused image. This assessment confirmed that all of the relevant diagnostic features contained in the XPC images were also present in the fused image. Copyright © 2017 Elsevier B.V. All rights reserved.
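
    The abstract does not spell out the fusion rule, so the following is only a speculative illustration of a Fourier-domain fusion of three co-registered XPC signals: keep the low spatial frequencies of the attenuation image so the result still resembles a conventional radiograph, and add high-frequency detail from the phase and scattering channels. The cutoff and channel weights are invented.

```python
import numpy as np

def fuse_xpc(attenuation, phase, scatter, cutoff=0.08, w_phase=1.0, w_scatter=0.5):
    """Illustrative Fourier-domain fusion of three co-registered XPC signals:
    low frequencies come from the attenuation image, high-frequency detail
    from the phase and scattering channels."""
    fy = np.fft.fftfreq(attenuation.shape[0])[:, None]
    fx = np.fft.fftfreq(attenuation.shape[1])[None, :]
    lowpass = np.sqrt(fx**2 + fy**2) <= cutoff          # binary radial filter (illustrative)
    F = np.where(lowpass, np.fft.fft2(attenuation), 0)
    F += np.where(~lowpass, w_phase * np.fft.fft2(phase) + w_scatter * np.fft.fft2(scatter), 0)
    return np.real(np.fft.ifft2(F))

# Synthetic 256x256 arrays standing in for registered attenuation/phase/dark-field images.
rng = np.random.default_rng(0)
a, p, s = (rng.normal(size=(256, 256)) for _ in range(3))
print(fuse_xpc(a, p, s).shape)   # (256, 256)
```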

  17. Characteristics of implicit chaining in cotton-top tamarins (Saguinus oedipus).

    PubMed

    Locurto, Charles; Gagne, Matthew; Nutile, Lauren

    2010-07-01

    In human cognition there has been considerable interest in observing the conditions under which subjects learn material without explicit instructions to learn. In the present experiments, we adapted this issue to nonhumans by asking what subjects learn in the absence of explicit reinforcement for correct responses. Two experiments examined the acquisition of sequence information by cotton-top tamarins (Saguinus oedipus) when such learning was not demanded by the experimental contingencies. An implicit chaining procedure was used in which visual stimuli were presented serially on a touchscreen. Subjects were required to touch one stimulus to advance to the next stimulus. Stimulus presentations followed a pattern, but learning the pattern was not necessary for reinforcement. In Experiment 1 the chain consisted of five different visual stimuli that were presented in the same order on each trial. Each stimulus could occur at any one of six touchscreen positions. In Experiment 2 the same visual element was presented serially in the same five locations on each trial, thereby allowing a behavioral pattern to be correlated with the visual pattern. In this experiment two new tests, a Wild-Card test and a Running-Start test, were used to assess what was learned in this procedure. Results from both experiments indicated that tamarins acquired more information from an implicit chain than was required by the contingencies of reinforcement. These results contribute to the developing literature on nonhuman analogs of implicit learning.

  18. Visualizing morphogenesis in transgenic zebrafish embryos using BODIPY TR methyl ester dye as a vital counterstain for GFP.

    PubMed

    Cooper, Mark S; Szeto, Daniel P; Sommers-Herivel, Greg; Topczewski, Jacek; Solnica-Krezel, Lila; Kang, Hee-Chol; Johnson, Iain; Kimelman, David

    2005-02-01

    Green fluorescent protein (GFP) technology is rapidly advancing the study of morphogenesis, by allowing researchers to specifically focus on a subset of labeled cells within the living embryo. However, when imaging GFP-labeled cells using confocal microscopy, it is often essential to simultaneously visualize all of the cells in the embryo using dual-channel fluorescence to provide an embryological context for the cells expressing GFP. Although various counterstains are available, part of their fluorescence overlaps with the GFP emission spectra, making it difficult to clearly identify the cells expressing GFP. In this study, we report that a new fluorophore, BODIPY TR methyl ester dye, serves as a versatile vital counterstain for visualizing the cellular dynamics of morphogenesis within living GFP transgenic zebrafish embryos. The fluorescence of this photostable synthetic dye is spectrally separate from GFP fluorescence, allowing dual-channel, three-dimensional (3D) and four-dimensional (4D) confocal image data sets of living specimens to be easily acquired. These image data sets can be rendered subsequently into uniquely informative 3D and 4D visualizations using computer-assisted visualization software. We discuss a variety of immediate and potential applications of BODIPY TR methyl ester dye as a vital visualization counterstain for GFP in transgenic zebrafish embryos. Copyright 2004 Wiley-Liss, Inc.

  19. Towards clinical translation of augmented orthopedic surgery: from pre-op CT to intra-op x-ray via RGBD sensing

    NASA Astrophysics Data System (ADS)

    Tucker, Emerson; Fotouhi, Javad; Unberath, Mathias; Lee, Sing Chun; Fuerst, Bernhard; Johnson, Alex; Armand, Mehran; Osgood, Greg M.; Navab, Nassir

    2018-03-01

    Pre-operative CT data is available for several orthopedic and trauma interventions, and is mainly used to identify injuries and plan the surgical procedure. In this work we propose an intuitive augmented reality environment allowing visualization of pre-operative data during the intervention, with an overlay of the optical information from the surgical site. The pre-operative CT volume is first registered to the patient by acquiring a single C-arm X-ray image and using 3D/2D intensity-based registration. Next, we use an RGBD sensor on the C-arm to fuse the optical information of the surgical site with patient pre-operative medical data and provide an augmented reality environment. The 3D/2D registration of the pre- and intra-operative data allows us to maintain a correct visualization each time the C-arm is repositioned or the patient moves. An overall mean target registration error (mTRE) and standard deviation of 5.24 +/- 3.09 mm was measured averaged over 19 C-arm poses. The proposed solution enables the surgeon to visualize pre-operative data overlaid with information from the surgical site (e.g. surgeon's hands, surgical tools, etc.) for any C-arm pose, and negates issues of line-of-sight and long setup times, which are present in commercially available systems.

  20. Relational Learning in Children with Deafness and Cochlear Implants

    ERIC Educational Resources Information Center

    Almeida-Verdu, Ana Claudia; Huziwara, Edson M.; de Souza, Deisy G.; de Rose, Julio C.; Bevilacqua, Maria Cecilia; Lopes, Jair, Jr.; Alves, Cristiane O.; McIlvane, William J.

    2008-01-01

    This four-experiment series sought to evaluate the potential of children with neurosensory deafness and cochlear implants to exhibit auditory-visual and visual-visual stimulus equivalence relations within a matching-to-sample format. Twelve children who became deaf prior to acquiring language (prelingual) and four who became deaf afterwards…

  1. Mapping detailed 3D information onto high resolution SAR signatures

    NASA Astrophysics Data System (ADS)

    Anglberger, H.; Speck, R.

    2017-05-01

    Due to challenges in the visual interpretation of radar signatures and in the subsequent information extraction, fusion with other data sources can be beneficial. The most accurate basis for fusing any kind of remote sensing data is the mapping of the acquired 2D image space onto the true 3D geometry of the scene. In the case of radar images this is a challenging task because the coordinate system is based on the measured range, which causes ambiguous regions due to layover effects. This paper describes a method that accurately maps detailed 3D scene information to the slant-range-based coordinate system of imaging radars. Owing to this mapping, all the geometrical parts contributing to one resolution cell can be determined in 3D space. The proposed method is highly efficient because computationally expensive operations are performed directly on graphics hardware. The described approach provides an excellent basis for sophisticated methods that extract data from multiple complementary sensors, such as radar and optical images, especially because true 3D information for whole cities will become available in the near future. The performance of the developed method is demonstrated with high-resolution radar data acquired by the spaceborne SAR sensor TerraSAR-X.
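
    A much-simplified, hedged sketch of the core mapping (the real method runs on graphics hardware and handles the full SAR imaging geometry): project 3D scene points into (azimuth, slant-range) resolution cells, so that all points falling into the same cell, i.e. the layover contributors, can be identified. The sensor position, cell size and scene points below are invented.

```python
import numpy as np
from collections import defaultdict

def map_points_to_range_cells(points_xyz, sensor_xyz, cell_size_m, azimuth_axis=0):
    """Group 3D scene points by (azimuth, slant-range) resolution cell.
    Points sharing a cell are the layover contributors of that cell."""
    cells = defaultdict(list)
    for p in points_xyz:
        slant_range = np.linalg.norm(p - sensor_xyz)       # range from point to sensor
        az_bin = int(p[azimuth_axis] // cell_size_m)       # crude azimuth binning
        rg_bin = int(slant_range // cell_size_m)
        cells[(az_bin, rg_bin)].append(tuple(p))
    return cells

sensor = np.array([0.0, -500000.0, 600000.0])              # spaceborne-like geometry
pts = np.array([[10.0, 0.0, 30.0],                         # e.g. a rooftop point
                [10.0, 40.0, 0.0]])                        # e.g. a ground point, same azimuth
cells = map_points_to_range_cells(pts, sensor, cell_size_m=1.0)
print({k: len(v) for k, v in cells.items()})               # contributors per resolution cell
```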

  2. Vision In Stroke cohort: Profile overview of visual impairment.

    PubMed

    Rowe, Fiona J

    2017-11-01

    To profile the full range of visual disorders from a large prospective observation study of stroke survivors referred by stroke multidisciplinary teams to orthoptic services with suspected visual problems. Multicenter prospective study undertaken in 20 acute Trust hospitals. Standardized screening/referral forms and investigation forms documented data on referral signs and symptoms plus type and extent of visual impairment. Of 1,345 patients referred with suspected visual impairment, 915 were recruited (59% men; mean age at stroke onset 69 years [SD 14]). Initial visual assessment was at median 22 days post stroke onset. Eight percent had normal visual assessment. Of 92% with confirmed visual impairment, 24% had reduced central visual acuity <0.3 logMAR and 13.5% <0.5 logMAR. Acquired strabismus was noted in 16% and acquired ocular motility disorders in 68%. Peripheral visual field loss was present in 52%, most commonly homonymous hemianopia. Fifteen percent had visual inattention and 4.6% had other visual perceptual disorders. Overall 84% were visually symptomatic with visual field loss the most common complaint followed by blurred vision, reading difficulty, and diplopia. Treatment options were provided to all with confirmed visual impairment. Targeted advice was most commonly provided along with refraction, prisms, and occlusion. There are a wide range of visual disorders that occur following stroke and, frequently, with visual symptoms. There are equally a wide variety of treatment options available for these individuals. All stroke survivors require screening for visual impairment and warrant referral for specialist assessment and targeted treatment specific to the type of visual impairment.

  3. Aging affects the balance between goal-guided and habitual spatial attention.

    PubMed

    Twedell, Emily L; Koutstaal, Wilma; Jiang, Yuhong V

    2017-08-01

    Visual clutter imposes significant challenges to older adults in everyday tasks and often calls on selective processing of relevant information. Previous research has shown that both visual search habits and task goals influence older adults' allocation of spatial attention, but has not examined the relative impact of these two sources of attention when they compete. To examine how aging affects the balance between goal-driven and habitual attention, and to inform our understanding of different attentional subsystems, we tested young and older adults in an adapted visual search task involving a display laid flat on a desk. To induce habitual attention, unbeknownst to participants, the target was more often placed in one quadrant than in the others. All participants rapidly acquired habitual attention toward the high-probability quadrant. We then informed participants where the high-probability quadrant was and instructed them to search that screen location first-but pitted their habit-based, viewer-centered search against this instruction by requiring participants to change their physical position relative to the desk. Both groups prioritized search in the instructed location, but this effect was stronger in young adults than in older adults. In contrast, age did not influence viewer-centered search habits: the two groups showed similar attentional preference for the visual field where the target was most often found before. Aging disrupted goal-guided but not habitual attention. Product, work, and home design for people of all ages--but especially for older individuals--should take into account the strong viewer-centered nature of habitual attention.

  4. Achievable Rate Estimation of IEEE 802.11ad Visual Big-Data Uplink Access in Cloud-Enabled Surveillance Applications.

    PubMed

    Kim, Joongheon; Kim, Jong-Kook

    2016-01-01

    This paper addresses computation procedures for estimating the impact of interference in 60 GHz IEEE 802.11ad uplink access, in order to construct a visual big-data database from randomly deployed surveillance camera sensing devices. The large-scale visual information acquired from the surveillance camera devices will be used to organize the big-data database; this estimation is therefore essential for constructing a centralized, cloud-enabled surveillance database. The performance estimation study captures the interference imposed on target cloud access points by the multiple interference components generated by 60 GHz wireless transmissions from nearby surveillance camera devices to their associated cloud access points. Under this uplink interference scenario, the interference impact on the main wireless transmission, from a target surveillance camera device to its associated target cloud access point, is measured and estimated for a number of settings, taking into account 60 GHz radiation characteristics and antenna radiation pattern models.
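
    The final step of such an estimation is typically a mapping from the signal, interference and noise powers to an achievable rate. The snippet below shows only that generic Shannon-capacity step over the 2.16 GHz IEEE 802.11ad channel bandwidth; the link powers are invented, and the paper's propagation and antenna-pattern models are not reproduced.

```python
import math

def achievable_rate_bps(signal_mw, interference_mw, noise_mw, bandwidth_hz):
    """Shannon-capacity estimate of the achievable uplink rate from the received
    signal power, the summed interference power and the noise power."""
    sinr = signal_mw / (sum(interference_mw) + noise_mw)
    return bandwidth_hz * math.log2(1.0 + sinr)

# Hypothetical 60 GHz uplink: one desired link, three interferers, 2.16 GHz channel.
rate = achievable_rate_bps(signal_mw=5e-7,
                           interference_mw=[3e-8, 1e-8, 5e-9],
                           noise_mw=4e-8,
                           bandwidth_hz=2.16e9)
print(f"{rate / 1e9:.2f} Gbit/s")   # ~6 Gbit/s for these invented numbers
```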

  5. Fourier-based automatic alignment for improved Visual Cryptography schemes.

    PubMed

    Machizaud, Jacques; Chavel, Pierre; Fournel, Thierry

    2011-11-07

    In Visual Cryptography, several images called "shadow images", which separately contain no information, are overlapped to reveal a shared secret message. We develop a method to digitally register a printed shadow image acquired by a camera with a purely digital shadow image stored in memory. Using Fourier techniques derived from Fourier optics concepts, the idea is to enhance and exploit the quasi-periodicity of the shadow images, which are composed of a random distribution of black-and-white patterns on a periodic sampling grid. The advantage is to speed up the security check or the access time to the message, in particular for small pixel sizes or large numbers of pixels. Furthermore, the interest of visual cryptography can be increased by embedding the initial message in two shadow images that do not have identical mathematical supports, making manual registration impractical. Experimental results demonstrate the successful operation of the method, including the possibility of projecting the result directly onto the printed shadow image.
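
    Fourier-based registration of two images is commonly done by phase correlation. The sketch below is a textbook phase-correlation translation estimate on a synthetic binary share, not the authors' full pipeline (which additionally exploits the quasi-periodicity of the printed shares and copes with camera acquisition).

```python
import numpy as np

def phase_correlation_shift(reference, moving):
    """Estimate the integer (row, col) translation between two same-sized images
    from the peak of the inverse FFT of the normalized cross-power spectrum."""
    F1, F2 = np.fft.fft2(reference), np.fft.fft2(moving)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12          # keep phase only
    corr = np.real(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
share = rng.integers(0, 2, size=(128, 128)).astype(float)   # synthetic binary share
shifted = np.roll(share, shift=(7, -12), axis=(0, 1))        # simulated misalignment
print(phase_correlation_shift(share, shifted))               # (-7, 12): realigns `shifted`
```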

  6. Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.

    PubMed

    Souto, David; Kerzel, Dirk

    2013-02-06

    Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular combination of rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects whose rotational and translational motion was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. When congruent and incongruent stimuli were compared with a condition that had no rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.

  7. Neuropathies of the optic nerve and visual evoked potentials with special reference to color vision and differential light threshold measured with the computer perimeter OCTOPUS.

    PubMed

    Wildberger, H

    1984-10-31

    The contrast evoked potentials (VEPs) to different check sizes were recorded in about 200 cases of discrete optic neuropathies (ON) of different origin. The differential light threshold (DLT) was tested with the computer perimeter OCTOPUS. Saturated and desaturated tests were applied to evaluate the degree of acquired color vision deficiency. Delayed VEP responses are not confined to optic neuritis (RBN) alone, and the latency times obtained in other ON are confluent with these. The delay may be due to demyelination, to an increasing dominance of paramacular VEP subcomponents, or to an increasing dominance of the upper half-field responses. Recording with smaller check sizes has the advantage that discrete dysfunctions in the visual field (VF) center are more easily detected: the correlation between amplitudes and visual acuity is best in strabismic amblyopias, less pronounced in maculopathies of the retina, and weak in ON. The absence or reduction of amplitudes to smaller check sizes, however, is an important indication of a disorder in the VF center of ON in an early or recovered stage. Acquired color vision defects of the tritan-like type are more typical of discrete ON, whereas the red/green type is reserved for more severe ON. The DLT of the VF center is reduced to varying extents, significant or not, in discrete optic neuropathies, and the correlation between DLT and visual acuity is weak. A careful numerical analysis is needed in types of discrete ON where the central DLT lies within normal statistical limits: a side difference in DLT between the affected and the normal fellow eye is always present. Evaluation of visual fatigue effects and of the relative sensitivity loss of the VF center and VF periphery may provide further diagnostic information.

  8. Moon Color Visualizations

    NASA Image and Video Library

    1996-01-29

    These color visualizations of the Moon were obtained by NASA's Galileo spacecraft as it left the Earth after completing its first Earth gravity assist. The images were acquired Dec. 8-9, 1990. http://photojournal.jpl.nasa.gov/catalog/PIA00075

  9. Social information signaling by neurons in primate striatum.

    PubMed

    Klein, Jeffrey T; Platt, Michael L

    2013-04-22

    Social decisions depend on reliable information about others. Consequently, social primates are motivated to acquire information about the identity, social status, and reproductive quality of others. Neurophysiological and neuroimaging studies implicate the striatum in the motivational control of behavior. Neuroimaging studies specifically implicate the ventromedial striatum in signaling motivational aspects of social interaction. Despite this evidence, precisely how striatal neurons encode social information remains unknown. Therefore, we probed the activity of single striatal neurons in monkeys choosing between visual social information at the potential expense of fluid reward. We show for the first time that a population of neurons located primarily in medial striatum selectively signals social information. Surprisingly, representation of social information was unrelated to simultaneously expressed social preferences. A largely nonoverlapping population of neurons that was not restricted to the medial striatum signaled information about fluid reward. Our findings demonstrate that information about social context and nutritive reward are maintained largely independently in striatum, even when both influence decisions to execute a single action. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Informatics in radiology (infoRAD): navigating the fifth dimension: innovative interface for multidimensional multimodality image navigation.

    PubMed

    Rosset, Antoine; Spadola, Luca; Pysher, Lance; Ratib, Osman

    2006-01-01

    The display and interpretation of images obtained by combining three-dimensional data acquired with two different modalities (eg, positron emission tomography and computed tomography) in the same subject require complex software tools that allow the user to adjust the image parameters. With the current fast imaging systems, it is possible to acquire dynamic images of the beating heart, which add a fourth dimension of visual information: the temporal dimension. Moreover, images acquired at different points during the transit of a contrast agent or during different functional phases add a fifth dimension: functional data. To facilitate real-time image navigation in the resultant large multidimensional image data sets, the authors developed a Digital Imaging and Communications in Medicine-compliant software program. The open-source software, called OsiriX, allows the user to navigate through multidimensional image series while adjusting the blending of images from different modalities, image contrast and intensity, and the rate of cine display of dynamic images. The software is available for free download at http://homepage.mac.com/rossetantoine/osirix. (c) RSNA, 2006.
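
    As a hedged illustration of the kind of interactive fusion display such viewers expose (OsiriX itself is not written in Python; the window/level values and synthetic data below are invented), blending two registered modalities reduces to a window/level mapping followed by an alpha blend:

```python
import numpy as np

def window_level(image, window, level):
    """Map raw intensities to [0, 1] display values for a given window/level setting."""
    lo, hi = level - window / 2.0, level + window / 2.0
    return np.clip((image - lo) / (hi - lo), 0.0, 1.0)

def blend(ct, pet, alpha, ct_window=400.0, ct_level=40.0):
    """Simple fusion display: (1 - alpha) * windowed CT + alpha * normalized PET."""
    ct_disp = window_level(ct, ct_window, ct_level)
    pet_disp = pet / (pet.max() + 1e-9)
    return (1.0 - alpha) * ct_disp + alpha * pet_disp

rng = np.random.default_rng(2)
ct = rng.normal(40.0, 100.0, size=(64, 64))      # synthetic CT slice (HU-like values)
pet = np.abs(rng.normal(size=(64, 64)))          # synthetic PET uptake map
print(blend(ct, pet, alpha=0.4).shape)           # (64, 64), displayable as one fused image
```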

  11. Gaps in patient care practices to prevent hospital-acquired delirium.

    PubMed

    Alagiakrishnan, Kannayiram; Marrie, Thomas; Rolfson, Darryl; Coke, William; Camicioli, Richard; Duggan, D'Arcy; Launhardt, Bonnie; Fisher, Bruce; Gordon, Debbie; Hervas-Malo, Marilou; Magee, Bernice; Wiens, Cheryl

    2009-10-01

    To evaluate the current patient care practices that address the predisposing and precipitating factors contributing to the prevention of hospital-acquired delirium in the elderly. Prospective cohort (observational) study. Patients 65 years of age and older who were admitted to medical teaching units at the University of Alberta Hospital in Edmonton over a period of 7 months and who were at risk of delirium. Medical teaching units at the University of Alberta. Demographic data and information on predisposing factors for hospital-acquired delirium were obtained for all patients. Documented clinical practices that likely prevent common precipitants of delirium were also recorded. Of the 132 patients enrolled, 20 (15.2%) developed hospital-acquired delirium. At the time of admission several predisposing factors were not documented (eg, possible cognitive impairment 16 [12%], visual impairment 52 [39.4%], and functional status of activities of daily living 99 [75.0%]). Recorded precipitating factors included catheter use, screening for dehydration, and medications. Catheters were used in 35 (26.5%) patients, and fluid intake-and-output charting assessed dehydration in 57 (43.2%) patients. At the time of admission there was no documentation of hearing status in 69 (52.3%) patients and aspiration risk in 104 (78.8%) patients. After admission, reorientation measures were documented in only 16 (12.1%) patients. Although all patients had brief mental status evaluations performed once daily, this was not noted to occur twice daily (which would provide important information about fluctuation of mental status) and there was no formal attention span testing. In this study, hospital-acquired delirium was also associated with increased mortality (P < .004), increased length of stay (P < .007), and increased institutionalization (P < .027). Gaps were noted in patient care practices that might contribute to hospital-acquired delirium and also in measures to identify the development of delirium at an earlier stage. Effort should be made to educate health professionals to identify the predisposing and precipitating factors, and to screen for delirium. This might improve the prevention of delirium.

  12. Development of Object Permanence in Visually Impaired Infants.

    ERIC Educational Resources Information Center

    Rogers, S. J.; Puchalski, C. B.

    1988-01-01

    Development of object permanence skills was examined longitudinally in 20 visually impaired infants (ages 4-25 months). Order of skill acquisition and span of time required to master skills paralleled that of sighted infants, but the visually impaired subjects were 8-12 months older than sighted counterparts when similar skills were acquired.…

  13. A Multimodal Neural Network Recruited by Expertise with Musical Notation

    ERIC Educational Resources Information Center

    Wong, Yetta Kwailing; Gauthier, Isabel

    2010-01-01

    Prior neuroimaging work on visual perceptual expertise has focused on changes in the visual system, ignoring possible effects of acquiring expert visual skills in nonvisual areas. We investigated expertise for reading musical notation, a skill likely to be associated with multimodal abilities. We compared brain activity in music-reading experts…

  14. Multisensory emotion perception in congenitally, early, and late deaf CI users

    PubMed Central

    Nava, Elena; Villwock, Agnes K.; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences. PMID:29023525

  15. Multisensory emotion perception in congenitally, early, and late deaf CI users.

    PubMed

    Fengler, Ineke; Nava, Elena; Villwock, Agnes K; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.

  16. Evidence for Non-Opponent Coding of Colour Information in Human Visual Cortex: Selective Loss of “Green” Sensitivity in a Subject with Damaged Ventral Occipito-Temporal Cortex

    PubMed Central

    Rauscher, Franziska G.; Plant, Gordon T.; James-Galton, Merle; Barbur, John L.

    2011-01-01

    Damage to ventral occipito-temporal extrastriate visual cortex leads to the syndrome of prosopagnosia often with coexisting cerebral achromatopsia. A patient with this syndrome resulting in a left upper homonymous quadrantanopia, prosopagnosia, and incomplete achromatopsia is described. Chromatic sensitivity was assessed at a number of locations in the intact visual field using a dynamic luminance contrast masking technique that isolates the use of colour signals. In normal subjects chromatic detection thresholds form an elliptical contour when plotted in the Commission Internationale d’Eclairage, (x-y), chromaticity diagram. Because the extraction of colour signals in early visual processing involves opponent mechanisms, subjects with Daltonism (congenital red/green loss of sensitivity) show symmetric increase in thresholds towards the long wavelength (“red”) and middle wavelength (“green”) regions of the spectrum locus. This is also the case with acquired loss of chromatic sensitivity as a result of retinal or optic nerve disease. Our patient’s results were an exception to this rule. Whilst his chromatic sensitivity in the central region of the visual field was reduced symmetrically for both “red/green” and “yellow/blue” directions in colour space, the subject’s lower left quadrant showed a marked asymmetry in “red/green” thresholds with the greatest loss of sensitivity towards the “green” region of the spectrum locus. This spatially localized asymmetric loss of “green” but not “red” sensitivity has not been reported previously in human vision. Such loss is consistent with selective damage of neural substrates in the visual cortex that process colour information, but are spectrally non-opponent. PMID:27956924

  17. How Ants Use Vision When Homing Backward.

    PubMed

    Schwarz, Sebastian; Mangan, Michael; Zeil, Jochen; Webb, Barbara; Wystrach, Antoine

    2017-02-06

    Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3-5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories. VIDEO ABSTRACT. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  18. Reduction of susceptibility-induced signal losses in multi-gradient-echo images: application to improved visualization of the subthalamic nucleus.

    PubMed

    Volz, Steffen; Hattingen, Elke; Preibisch, Christine; Gasser, Thomas; Deichmann, Ralf

    2009-05-01

    T2-weighted gradient echo (GE) images yield good contrast of iron-rich structures like the subthalamic nuclei due to microscopic susceptibility induced field gradients, providing landmarks for the exact placement of deep brain stimulation electrodes in Parkinson's disease treatment. An additional advantage is the low radio frequency (RF) exposure of GE sequences. However, T2-weighted images are also sensitive to macroscopic field inhomogeneities, resulting in signal losses, in particular in orbitofrontal and temporal brain areas, limiting anatomical information from these areas. In this work, an image correction method for multi-echo GE data based on evaluation of phase information for field gradient mapping is presented and tested in vivo on a 3 Tesla whole-body MR scanner. In a first step, theoretical signal losses are calculated from the gradient maps and a pixelwise image intensity correction is performed. In a second step, intensity corrected images acquired at different echo times TE are combined using optimized weighting factors: in areas not affected by macroscopic field inhomogeneities, data acquired at long TE are weighted more strongly to achieve the contrast required. For large field gradients, data acquired at short TE are favored to avoid signal losses. When compared to the original data sets acquired at different TE and the respective intensity corrected data sets, the resulting combined data sets feature reduced signal losses in areas with major field gradients, while intensity profiles and a contrast-to-noise ratio (CNR) analysis between subthalamic nucleus, red nucleus and the surrounding white matter demonstrate good contrast in deep brain areas.
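
    The echo-combination step described above lends itself to a short illustration. The sketch below is not the authors' implementation; the weighting function, slice thickness and array names are assumptions. It corrects each echo for the theoretical signal loss predicted from a field-gradient map and then combines the echoes with TE-dependent weights that favour long echo times where the macroscopic gradient is small and short echo times where it is large.

```python
import numpy as np

def combine_multi_echo(echoes, tes, grad_map, grad_cutoff=50e-6):
    """Hypothetical sketch: intensity-correct and combine multi-echo GE magnitude images.

    echoes   : list of 2D arrays, one magnitude image per echo time
    tes      : echo times in seconds
    grad_map : macroscopic through-slice field gradient map (T/m), derived from phase data
    """
    combined = np.zeros_like(echoes[0], dtype=float)
    weight_sum = np.zeros_like(combined)
    gamma = 2 * np.pi * 42.58e6          # proton gyromagnetic ratio (rad/s/T)
    dz = 2e-3                            # assumed slice thickness (m)
    for img, te in zip(echoes, tes):
        # theoretical signal loss for a linear through-voxel gradient (sinc model)
        loss = np.abs(np.sinc(gamma * grad_map * dz * te / (2 * np.pi)))
        corrected = img / np.clip(loss, 0.05, None)      # pixelwise intensity correction
        # assumed weighting: favour long TE where gradients are small, short TE where large
        w = te * np.exp(-((grad_map / grad_cutoff) ** 2) * te / max(tes))
        combined += w * corrected
        weight_sum += w
    return combined / np.clip(weight_sum, 1e-12, None)
```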

  19. Development of a Disaster Information Visualization Dashboard: A Case Study of Three Typhoons in Taiwan in 2016

    NASA Astrophysics Data System (ADS)

    Su, Wen-Ray; Tsai, Yuan-Fan; Huang, Kuei-Chin; Hsieh, Ching-En

    2017-04-01

    To facilitate disaster response and enhance the effectiveness of disaster prevention and relief, people and emergency response personnel should be able to rapidly acquire and understand information when disasters occur. However, in existing disaster platforms information is typically presented in text tables, static charts, and maps with points. These formats do not make it easy for users to understand the overall situation. Therefore, this study converts data into human-readable charts by using data visualization techniques, and builds a disaster information dashboard that is concise, attractive and flexible. This information dashboard integrates temporally and spatially correlated data, disaster statistics according to category and county, lists of disasters, and any other relevant information. The graphs are animated and interactive. The dashboard allows users to filter the data according to their needs and thus to assimilate the information more rapidly. In this study, we applied the information dashboard to the analysis of landslides during three typhoon events in 2016: Typhoon Nepartak, Typhoon Meranti and Typhoon Megi. According to the statistical results in the dashboard, the order of frequency of the disaster categories in all three events combined was rock fall, roadbed loss, slope slump, road blockage and debris flow. Disasters occurred mainly in the areas that received the most rainfall. Typhoons Nepartak and Meranti mainly affected Taitung, and Typhoon Megi mainly affected Kaohsiung. The towns Xiulin, Fengbin, Fenglin and Guangfu in Hualian County were all issued with debris flow warnings in all three typhoon events. The disaster information dashboard developed in this study allows the user to rapidly assess the overall disaster situation. It clearly and concisely reveals interactions between time, space and disaster type, and also provides comprehensive details about the disaster. The dashboard provides a foundation for future disaster visualization, since it can combine and present real-time information of various types; as such it will strengthen decision making in disaster prevention management.

  20. Visual and Motor Recovery After "Cognitive Therapeutic Exercises" in Cortical Blindness: A Case Study.

    PubMed

    De Patre, Daniele; Van de Winckel, Ann; Panté, Franca; Rizzello, Carla; Zernitz, Marina; Mansour, Mariam; Zordan, Lara; Zeffiro, Thomas A; OʼConnor, Erin E; Bisson, Teresa; Lupi, Andrea; Perfetti, Carlo

    2017-07-01

    Spontaneous visual recovery is rare after cortical blindness. While visual rehabilitation may improve performance, no visual therapy has been widely adopted, as clinical outcomes are variable and rarely translate into improvements in activities of daily living (ADLs). We explored the potential value of a novel rehabilitation approach, "cognitive therapeutic exercises," for cortical blindness. The subject of this case study was a 48-year-old woman with cortical blindness and tetraplegia after cardiac arrest. Prior to the intervention, she was dependent in ADLs and poorly distinguished shapes and colors after 19 months of standard visual and motor rehabilitation. Computed tomographic images soon after symptom onset demonstrated acute infarcts in both occipital cortices. The subject underwent 8 months of intensive rehabilitation with "cognitive therapeutic exercises" consisting of discrimination exercises correlating sensory and visual information. Visual fields increased; object recognition improved; it became possible to watch television; voluntary arm movements improved in accuracy and smoothness; walking improved; and ADL independence and self-reliance increased. Subtraction of neuroimaging acquired before and after rehabilitation showed focal increases in glucose metabolism bilaterally in the occipital poles. This study demonstrates the feasibility of "cognitive therapeutic exercises" in an individual with cortical blindness, who experienced impressive visual and sensorimotor recovery, with marked ADL improvement, more than 2 years after ischemic cortical damage. Video Abstract available for additional insights from the authors (see Video, Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A173).

  1. The role of visual deprivation and experience on the performance of sensory substitution devices.

    PubMed

    Stronks, H Christiaan; Nau, Amy C; Ibbotson, Michael R; Barnes, Nick

    2015-10-22

    It is commonly accepted that the blind can partially compensate for their loss of vision by developing enhanced abilities with their remaining senses. This visual compensation may be related to the fact that blind people rely on their other senses in everyday life. Many studies have indeed shown that experience plays an important role in visual compensation. Numerous neuroimaging studies have shown that the visual cortices of the blind are recruited by other functional brain areas and can become responsive to tactile or auditory input instead. These cross-modal plastic changes are more pronounced in the early blind compared to late blind individuals. The functional consequences of cross-modal plasticity on visual compensation in the blind are debated, as are the influences of various etiologies of vision loss (i.e., blindness acquired early or late in life). Distinguishing between the influences of experience and visual deprivation on compensation is especially relevant for rehabilitation of the blind with sensory substitution devices. The BrainPort artificial vision device and The vOICe are assistive devices for the blind that redirect visual information to another intact sensory system. Establishing how experience and different etiologies of vision loss affect the performance of these devices may help to improve existing rehabilitation strategies, formulate effective selection criteria and develop prognostic measures. In this review we will discuss studies that investigated the influence of training and visual deprivation on the performance of various sensory substitution approaches. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Internal structure visualization of flow and flame by process tomography and PLIF data fusion

    NASA Astrophysics Data System (ADS)

    Liu, J.; Liu, Shi; Sun, S.; Pan, X.; Schlaberg, I. H. I.

    2018-02-01

    To address the increasing demands on pollution control and energy saving, the study of low-emission and high-efficiency burners has been emphasized worldwide. Swirl-induced environmental burners (EV-burners) have notable features aligned with these requirements. In this study, an EV burner is investigated by both an ECT system and an OH-PLIF system. The aim is to detect the structure of a flame and obtain more information about the combustion process in an EV burner. 3D ECT sensitivity maps are generated for the measurement and OH-PLIF images are acquired in the same combustion zone as for the ECT measurements. The experimental images of a flame by ECT are in good agreement with the OH radical distribution pictures captured by OH-PLIF, providing mutual verification of the visualization method.

  3. Visualizing Safeguards: Software for Conceptualizing and Communicating Safeguards Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallucci, N.

    2015-07-12

    The nuclear programs of states are complex and varied, comprising a wide range of fuel cycles and facilities. Also varied are the types and terms of states’ safeguards agreements with the IAEA, each placing different limits on the inspectorate’s access to these facilities. Such nuances make it difficult to draw policy significance from the ground-level nuclear activities of states, or to attribute ground-level outcomes to the implementation of specific policies or initiatives. While acquiring a firm understanding of these relationships is critical to evaluating and formulating effective policy, doing so requires collecting and synthesizing large bodies of information. Maintaining a comprehensive working knowledge of the facilities comprising even a single state’s nuclear program poses a challenge, yet marrying this information with relevant safeguards and verification information is more challenging still. To facilitate this task, Brookhaven National Laboratory has developed a means of capturing the development, operation, and safeguards history of all the facilities comprising a state’s nuclear program in a single graphic. The resulting visualization offers a useful reference tool to policymakers and analysts alike, providing a chronology of states’ nuclear development and an easily digestible history of verification activities across their fuel cycles.

  4. CLICK: The new USGS center for LIDAR information coordination and knowledge

    USGS Publications Warehouse

    Stoker, Jason M.; Greenlee, Susan K.; Gesch, Dean B.; Menig, Jordan C.

    2006-01-01

    Elevation data is rapidly becoming an important tool for the visualization and analysis of geographic information. The creation and display of three-dimensional models representing bare earth, vegetation, and structures have become major requirements for geographic research in the past few years. Light Detection and Ranging (lidar) has been increasingly accepted as an effective and accurate technology for acquiring high-resolution elevation data for bare earth, vegetation, and structures. Lidar is an active remote sensing system that records the distance, or range, of a laser fired from an airborne or spaceborne platform such as an airplane, helicopter or satellite to objects or features on the Earth’s surface. By converting lidar data into bare ground topography and vegetation or structural morphologic information, extremely accurate, high-resolution elevation models can be derived to visualize and quantitatively represent scenes in three dimensions. In addition to high-resolution digital elevation models (Evans et al., 2001), other lidar-derived products include quantitative estimates of vegetative features such as canopy height, canopy closure, and biomass (Lefsky et al., 2002), and models of urban areas such as building footprints and three-dimensional city models (Maas, 2001).
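
    As a toy illustration of the range-to-elevation conversion described above (flat-Earth geometry, nadir-relative scan angle, no platform-attitude correction; the function name and numbers are hypothetical):

```python
import math

def lidar_return_elevation(sensor_alt_m, range_m, scan_angle_deg):
    """Toy sketch of basic lidar geometry: convert a measured laser range into
    the elevation of the reflecting surface, assuming a flat Earth and a scan
    angle measured from nadir (no attitude correction)."""
    return sensor_alt_m - range_m * math.cos(math.radians(scan_angle_deg))

# e.g. a 1,510 m return at a 10 degree scan angle from 1,800 m altitude
print(lidar_return_elevation(1800.0, 1510.0, 10.0))  # ~313 m surface elevation
```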

  5. Acquired prior knowledge modulates audiovisual integration.

    PubMed

    Van Wanrooij, Marc M; Bremen, Peter; John Van Opstal, A

    2010-05-01

    Orienting responses to audiovisual events in the environment can benefit markedly by the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials would repeatedly favour aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. Results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
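
    The trial-by-trial build-up of an alignment expectation can be sketched with a simple beta-binomial update. This is a minimal illustration of the idea, not the authors' model of prior probability estimation.

```python
def update_alignment_prior(aligned_history, a0=1.0, b0=1.0):
    """Minimal beta-binomial sketch (not the authors' model): the expected
    probability that the next audiovisual event is spatially aligned, updated
    from the outcomes of previous trials (True = aligned, False = disparate)."""
    a = a0 + sum(aligned_history)
    b = b0 + len(aligned_history) - sum(aligned_history)
    return a / (a + b)   # posterior mean of the alignment probability

# example: a block of mostly aligned trials pushes the expectation toward 1
print(update_alignment_prior([True, True, True, False, True]))  # ~0.71
```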

  6. Gaze control for an active camera system by modeling human pursuit eye movements

    NASA Astrophysics Data System (ADS)

    Toelg, Sebastian

    1992-11-01

    The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that evaluates the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention on regions where the object is expected. The system proved to behave in a stable manner under real-time conditions in complex natural environments and manages general object motion. In addition it exhibits several interesting abilities well-known from psychophysics like: catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.
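
    A heavily simplified planar sketch of the control idea follows. It assumes the tracked object dominates the flow field; the thresholds, gains and function name are assumptions, not the published algorithm.

```python
import numpy as np

def pursuit_command(flow, gain=0.8, saccade_thresh=0.15, flow_tol=0.5):
    """Hypothetical sketch: segment the moving object as the pixels whose optical
    flow is close to the dominant (median) flow, then either smoothly track it
    (velocity pathway) or, if its centroid is far off-centre, command a
    saccade-like jump (position pathway).

    flow : (H, W, 2) optical flow field in normalized image coordinates
    returns a (dx, dy, mode) camera command
    """
    h, w, _ = flow.shape
    dominant = np.median(flow.reshape(-1, 2), axis=0)
    mask = np.linalg.norm(flow - dominant, axis=2) < flow_tol
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return 0.0, 0.0, "smooth"
    cx, cy = xs.mean() / w - 0.5, ys.mean() / h - 0.5   # centroid offset from image centre
    if np.hypot(cx, cy) > saccade_thresh:
        return cx, cy, "saccade"                        # re-centre the object abruptly
    return gain * dominant[0], gain * dominant[1], "smooth"
```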

  7. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects.

    PubMed

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

    Images containing visual objects can be successfully categorized using single-trial electroencephalography (EEG) measured while subjects view the images. Previous studies have shown that task-related information contained in event-related potential (ERP) components could discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and then Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from ERP components. Firstly, we compared the classification results using features from single ERP components, and identified that the N1 component achieved the highest classification accuracies. Secondly, we discriminated four categories of objects by combining features from multiple ERP components, and showed that combination of ERP components improved four-category classification accuracies by utilizing the complementarity of discriminative information in ERP components. These findings confirmed that four categories of object images could be discriminated with single-trial EEG and could direct us to select effective EEG features for classifying visual objects.
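
    A minimal sketch of this kind of pipeline is given below, using scikit-learn's Fisher LDA on features averaged within assumed ERP component windows; the window latencies, array shapes and sampling rate are illustrative, not those of the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def erp_component_features(epochs, sfreq,
                           windows=((80, 130), (140, 200), (200, 260), (260, 330))):
    """Hypothetical sketch: average each epoch within ERP-component windows
    (ms, P1/N1/P2a/P2b-like latencies) and concatenate channels x windows."""
    feats = []
    for lo, hi in windows:
        i0, i1 = int(lo * sfreq / 1000), int(hi * sfreq / 1000)
        feats.append(epochs[:, :, i0:i1].mean(axis=2))   # (n_trials, n_channels)
    return np.concatenate(feats, axis=1)

# epochs: (n_trials, n_channels, n_samples) single-trial EEG; labels: 4 object categories
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 32, 500))             # placeholder data
labels = rng.integers(0, 4, size=200)
X = erp_component_features(epochs, sfreq=1000)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5)
print(scores.mean())
```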

  8. Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors

    PubMed Central

    Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar

    2015-01-01

    This work presents some methods to create local maps and to estimate the position of a mobile robot, using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system on it. Every omnidirectional image acquired by the robot is described only with one global appearance descriptor, based on the Radon transform. In the work presented in this paper, two different possibilities have been considered. In the first one, we assume the existence of a map previously built composed of omnidirectional images that have been captured from previously-known positions. The purpose in this case consists of estimating the nearest position of the map to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second one, we assume that we have a model of the environment composed of omnidirectional images, but with no information about the location of where the images were acquired. The purpose in this case consists of building a local map and estimating the position of the robot within this map. Both methods are tested with different databases (including virtual and real images) taking into consideration the changes of the position of different objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and the robustness of both methods. PMID:26501289
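
    The first method (matching the current image against a map of images captured at known positions) can be sketched as follows; the descriptor details are assumptions based on the abstract, using scikit-image's Radon transform rather than the authors' code.

```python
import numpy as np
from skimage.transform import radon, resize

def radon_descriptor(image, n_angles=90, size=64):
    """Hypothetical sketch of a global-appearance descriptor: downsample the
    (panoramic) image and stack its normalized Radon projections over a set of angles."""
    img = resize(image, (size, size), anti_aliasing=True)
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = radon(img, theta=theta, circle=False)
    return (sino / (np.linalg.norm(sino) + 1e-12)).ravel()

def nearest_map_position(query_desc, map_descs):
    """Return the index of the stored map image whose descriptor is closest,
    i.e. the map position assumed nearest to the robot's current position."""
    dists = [np.linalg.norm(query_desc - d) for d in map_descs]
    return int(np.argmin(dists))
```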

  9. Attitude and position estimation on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Ali, Khaled S.; Vanelli, C. Anthony; Biesiadecki, Jeffrey J.; Maimone, Mark W.; Yang Cheng, A.; San Martin, Miguel; Alexander, James W.

    2005-01-01

    NASA/JPL's Mars Exploration Rovers acquire their attitude upon command and autonomously propagate their attitude and position. The rovers use accelerometers and images of the sun to acquire attitude, autonomously searching the sky for the sun with a pointable camera. To propagate the attitude and position the rovers use either accelerometer and gyro readings or gyro readings and wheel odometry, depending on the nature of the movement ground operators are commanding. Where necessary, visual odometry is performed on images to fine-tune the position updates, particularly in high-slip environments. The capability also exists for visual odometry attitude updates. This paper describes the techniques used by the rovers to acquire and maintain attitude and position knowledge, the accuracy which is obtainable, and lessons learned after more than one year in operation.
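
    A planar, highly simplified sketch of the dead-reckoned propagation described above (the flight software fuses full 3D attitude from accelerometers, gyros, sun imaging and visual odometry; this only illustrates the gyro-plus-wheel-odometry idea, and all names are hypothetical):

```python
import numpy as np

def propagate_pose(x, y, heading, gyro_rates, odo_steps, dt):
    """Hypothetical 2D sketch: integrate a gyro yaw rate for heading and wheel
    odometry for distance travelled to propagate the rover pose between updates."""
    for rate, step in zip(gyro_rates, odo_steps):
        heading += rate * dt                 # gyro propagation of heading
        x += step * np.cos(heading)          # wheel-odometry propagation of position
        y += step * np.sin(heading)
    return x, y, heading
```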

  10. The man who mistook his neuropsychologist for a popstar: when configural processing fails in acquired prosopagnosia

    PubMed Central

    Jansari, Ashok; Miller, Scott; Pearce, Laura; Cobb, Stephanie; Sagiv, Noam; Williams, Adrian L.; Tree, Jeremy J.; Hanley, J. Richard

    2015-01-01

    We report the case of an individual with acquired prosopagnosia who experiences extreme difficulties in recognizing familiar faces in everyday life despite excellent object recognition skills. Formal testing indicates that he is also severely impaired at remembering pre-experimentally unfamiliar faces and that he takes an extremely long time to identify famous faces and to match unfamiliar faces. Nevertheless, he performs as accurately and quickly as controls at identifying inverted familiar and unfamiliar faces and can recognize famous faces from their external features. He also performs as accurately as controls at recognizing famous faces when fracturing conceals the configural information in the face. He shows evidence of impaired global processing but normal local processing of Navon figures. This case appears to reflect the clearest example yet of an acquired prosopagnosic patient whose familiar face recognition deficit is caused by a severe configural processing deficit in the absence of any problems in featural processing. These preserved featural skills together with apparently intact visual imagery for faces allow him to identify a surprisingly large number of famous faces when unlimited time is available. The theoretical implications of this pattern of performance for understanding the nature of acquired prosopagnosia are discussed. PMID:26236212

  11. Desktop Cloud Visualization: the new technology to remote access 3D interactive applications in the Cloud.

    PubMed

    Torterolo, Livia; Ruffino, Francesco

    2012-01-01

    In the proposed demonstration we will present DCV (Desktop Cloud Visualization): a unique technology that allows users to remotely access 2D and 3D interactive applications over a standard network. This allows geographically dispersed doctors to work collaboratively, to acquire anatomical or pathological images, and to visualize them for further investigation.

  12. Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.

    PubMed

    Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J

    2016-10-24

    In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
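
    The two-mechanism account can be sketched as a small simulation: a dual-rate state-space learner driven by the proprioceptive prediction error, plus a visual correction that cancels part of the residual error on vision trials without itself being learned. Parameter values are illustrative, not the paper's fits.

```python
import numpy as np

def dual_rate_with_visual_correction(perturbation, has_vision,
                                     a_f=0.92, b_f=0.03, a_s=0.996, b_s=0.004, g_v=0.8):
    """Sketch of the two-mechanism idea: fast (xf) and slow (xs) adaptive states
    learn only from the proprioceptive prediction error, while a visual-feedback
    correction reduces the residual error on vision trials without being learned."""
    xf = xs = 0.0
    output = []
    for p, vision in zip(perturbation, has_vision):
        x = xf + xs                            # adapted (learned) state
        residual = p - x                       # error before any voluntary correction
        correction = g_v * residual if vision else 0.0
        output.append(x + correction)          # observed performance on this stride/trial
        e = p - x                              # learning driven by proprioceptive error only
        xf = a_f * xf + b_f * e
        xs = a_s * xs + b_s * e
    return np.array(output)

# removing vision mid-block: performance drops back to the purely adapted level
perturb = [1.0] * 200
vision = [True] * 100 + [False] * 100
print(dual_rate_with_visual_correction(perturb, vision)[[99, 100]])
```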

  13. Proportional Reasoning and the Visually Impaired

    ERIC Educational Resources Information Center

    Hilton, Geoff; Hilton, Annette; Dole, Shelley L.; Goos, Merrilyn; O'Brien, Mia

    2012-01-01

    Proportional reasoning is an important aspect of formal thinking that is acquired during the developmental years that approximate the middle years of schooling. Students who fail to acquire sound proportional reasoning often experience difficulties in subjects that require quantitative thinking, such as science, technology, engineering, and…

  14. Visualization of bioelectric phenomena.

    PubMed

    Palmer, T C; Simpson, E V; Kavanagh, K M; Smith, W M

    1992-01-01

    Biomedical investigators are currently able to acquire and analyze physiological and anatomical data from three-dimensional structures in the body. Often, multiple kinds of data can be recorded simultaneously. The usefulness of this information, either for exploratory viewing or for presentation to others, is limited by the lack of techniques to display it in intuitive, accessible formats. Unfortunately, the complexity of scientific visualization techniques and the inflexibility of commercial packages deter investigators from using sophisticated visualization methods that could provide them added insight into the mechanisms of the phenomena under study. Also, the sheer volume of such data is a problem. High-performance computing resources are often required for storage and processing, in addition to visualization. This chapter describes a novel, language-based interface that allows scientists with basic programming skills to classify and render multivariate volumetric data with a modest investment in software training. The interface facilitates data exploration by enabling experimentation with various algorithms to compute opacity and color from volumetric data. The value of the system is demonstrated using data from cardiac mapping studies, in which multiple electrodes are placed in and on the heart to measure the cardiac electrical activity intrinsic to the heart and its response to external stimulation.
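
    The flavour of such a language-based interface can be sketched with user-supplied expressions that map multivariate voxel data to colour and opacity; this is a minimal illustration, not the chapter's actual system, and all variable names are invented.

```python
import numpy as np

def classify_volume(volumes, color_expr, opacity_expr):
    """Hypothetical sketch of an expression-based classifier: the user supplies
    small expressions that map multivariate voxel data (a dict of 3D arrays)
    to RGB colour and opacity for volume rendering."""
    env = {"np": np, **volumes}
    rgb = np.stack(eval(color_expr, {"__builtins__": {}}, env), axis=-1)
    alpha = eval(opacity_expr, {"__builtins__": {}}, env)
    return np.clip(rgb, 0, 1), np.clip(alpha, 0, 1)

# example: colour by normalized potential, make early-activating voxels more opaque
v = {"potential": np.random.rand(32, 32, 32), "activation": np.random.rand(32, 32, 32)}
rgb, alpha = classify_volume(
    v,
    color_expr="(potential, 0 * potential, 1 - potential)",
    opacity_expr="1 - activation",
)
```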

  15. Auditory-visual stimulus pairing enhances perceptual learning in a songbird.

    PubMed

    Hultsch; Schleuss; Todt

    1999-07-01

    In many oscine birds, song learning is affected by social variables, for example the behaviour of a tutor. This implies that both auditory and visual perceptual systems should be involved in the acquisition process. To examine whether and how particular visual stimuli can affect song acquisition, we tested the impact of a tutoring design in which the presentation of auditory stimuli (i.e. species-specific master songs) was paired with a well-defined nonauditory stimulus (i.e. stroboscope light flashes: Strobe regime). The subjects were male hand-reared nightingales, Luscinia megarhynchos. For controls, males were exposed to tutoring without a light stimulus (Control regime). The males' singing recorded 9 months later showed that the Strobe regime had enhanced the acquisition of song patterns. During this treatment birds had acquired more songs than during the Control regime; the observed increase in repertoire size was from 20 to 30% in most cases. Furthermore, the copy quality of imitations acquired during the Strobe regime was better than that of imitations developed from the Control regime, and this was due to a significant increase in the number of 'perfect' song copies. We conclude that these effects were mediated by an intrinsic component (e.g. attention or arousal) which specifically responded to the Strobe regime. Our findings also show that mechanisms of song learning are well prepared to process information from cross-modal perception. Thus, more detailed enquiries into stimulus complexes that are usually referred to as social variables are promising. Copyright 1999 The Association for the Study of Animal Behaviour.

  16. Automatic transperineal ultrasound probe positioning based on CT scan for image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Camps, S. M.; Verhaegen, F.; Paiva Fonesca, G.; de With, P. H. N.; Fontanarosa, D.

    2017-03-01

    Image interpretation is crucial during ultrasound image acquisition. A skilled operator is typically needed to verify that the correct anatomical structures are all visualized with sufficient quality. The need for this operator is one of the major reasons why ultrasound is presently not widely used in radiotherapy workflows. To solve this issue, we introduce an algorithm that uses anatomical information derived from a CT scan to automatically provide the operator with a patient-specific ultrasound probe setup. The first application we investigated, for its relevance to radiotherapy, is 4D transperineal ultrasound image acquisition for prostate cancer patients. As an initial test, the algorithm was applied to a CIRS multi-modality pelvic phantom. Probe setups were calculated in order to allow visualization of the prostate and adjacent edges of bladder and rectum, as clinically required. Five of the proposed setups were reproduced using a precision robotic arm and ultrasound volumes were acquired. A gel-filled probe cover was used to ensure proper acoustic coupling, while taking into account possible tilted positions of the probe with respect to the flat phantom surface. Visual inspection of the acquired volumes revealed that clinical requirements were fulfilled. Preliminary quantitative evaluation was also performed. The mean absolute distance (MAD) was calculated between actual anatomical structure positions and positions predicted by the CT-based algorithm. This resulted in a MAD of (2.8±0.4) mm for prostate, (2.5±0.6) mm for bladder and (2.8±0.6) mm for rectum. These results show that no significant systematic errors due to e.g. probe misplacement were introduced.
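
    The mean absolute distance (MAD) figure of merit reported above is straightforward to compute; a minimal sketch with hypothetical point sets in mm:

```python
import numpy as np

def mean_absolute_distance(predicted, actual):
    """Mean and standard deviation of the absolute 3D distance between structure
    positions predicted by a CT-based algorithm and positions found in the
    acquired ultrasound volumes; inputs are (N, 3) arrays in mm (sketch only)."""
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    d = np.linalg.norm(predicted - actual, axis=1)
    return d.mean(), d.std()

# e.g. a handful of hypothetical prostate surface points
mad, sd = mean_absolute_distance([[0, 0, 0], [10, 0, 0]], [[2, 1, 2], [12, 1, 1]])
```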

  17. What are they up to? The role of sensory evidence and prior knowledge in action understanding.

    PubMed

    Chambon, Valerian; Domenech, Philippe; Pacherie, Elisabeth; Koechlin, Etienne; Baraduc, Pierre; Farrer, Chlöé

    2011-02-18

    Explaining or predicting the behaviour of our conspecifics requires the ability to infer the intentions that motivate it. Such inferences are assumed to rely on two types of information: (1) the sensory information conveyed by movement kinematics and (2) the observer's prior expectations--acquired from past experience or derived from prior knowledge. However, the respective contribution of these two sources of information is still controversial. This controversy stems in part from the fact that "intention" is an umbrella term that may embrace various sub-types each being assigned different scopes and targets. We hypothesized that variations in the scope and target of intentions may account for variations in the contribution of visual kinematics and prior knowledge to the intention inference process. To test this hypothesis, we conducted four behavioural experiments in which participants were instructed to identify different types of intention: basic intentions (i.e. simple goal of a motor act), superordinate intentions (i.e. general goal of a sequence of motor acts), or social intentions (i.e. intentions accomplished in a context of reciprocal interaction). For each of the above-mentioned intentions, we varied (1) the amount of visual information available from the action scene and (2) participant's prior expectations concerning the intention that was more likely to be accomplished. First, we showed that intentional judgments depend on a consistent interaction between visual information and participant's prior expectations. Moreover, we demonstrated that this interaction varied according to the type of intention to be inferred, with participant's priors rather than perceptual evidence exerting a greater effect on the inference of social and superordinate intentions. The results are discussed by appealing to the specific properties of each type of intention considered and further interpreted in the light of a hierarchical model of action representation.

  18. Distributed and opposing effects of incidental learning in the human brain.

    PubMed

    Hall, Michelle G; Naughtin, Claire K; Mattingley, Jason B; Dux, Paul E

    2018-06-01

    Incidental learning affords a behavioural advantage when sensory information matches regularities that have previously been encountered. Previous studies have taken a focused approach by probing the involvement of specific candidate brain regions underlying incidentally acquired memory representations, as well as expectation effects on early sensory representations. Here, we investigated the broader extent of the brain's sensitivity to violations and fulfilments of expectations, using an incidental learning paradigm in which the contingencies between target locations and target identities were manipulated without participants' overt knowledge. Multivariate analysis of functional magnetic resonance imaging data was applied to compare the consistency of neural activity for visual events that the contingency manipulation rendered likely versus unlikely. We observed widespread sensitivity to expectations across frontal, temporal, occipital, and sub-cortical areas. These activation clusters showed distinct response profiles, such that some regions displayed more reliable activation patterns under fulfilled expectations, whereas others showed more reliable patterns when expectations were violated. These findings reveal that expectations affect multiple stages of information processing during visual decision making, rather than early sensory processing stages alone. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Interactions between visual and semantic processing during object recognition revealed by modulatory effects of age of acquisition.

    PubMed

    Urooj, Uzma; Cornelissen, Piers L; Simpson, Michael I G; Wheat, Katherine L; Woods, Will; Barca, Laura; Ellis, Andrew W

    2014-02-15

    The age of acquisition (AoA) of objects and their names is a powerful determinant of processing speed in adulthood, with early-acquired objects being recognized and named faster than late-acquired objects. Previous research using fMRI (Ellis et al., 2006. Traces of vocabulary acquisition in the brain: evidence from covert object naming. NeuroImage 33, 958-968) found that AoA modulated the strength of BOLD responses in both occipital and left anterior temporal cortex during object naming. We used magnetoencephalography (MEG) to explore in more detail the nature of the influence of AoA on activity in those two regions. Covert object naming recruited a network within the left hemisphere that is familiar from previous research, including visual, left occipito-temporal, anterior temporal and inferior frontal regions. Region of interest (ROI) analyses found that occipital cortex generated a rapid evoked response (~75-200 ms at 0-40 Hz) that peaked at 95 ms but was not modulated by AoA. That response was followed by a complex of later occipital responses that extended from ~300 to 850 ms and were stronger to early- than late-acquired items from ~325 to 675 ms at 10-20 Hz in the induced rather than the evoked component. Left anterior temporal cortex showed an evoked response that occurred significantly later than the first occipital response (~100-400 ms at 0-10 Hz with a peak at 191 ms) and was stronger to early- than late-acquired items from ~100 to 300 ms at 2-12 Hz. A later anterior temporal response from ~550 to 1050 ms at 5-20 Hz was not modulated by AoA. The results indicate that the initial analysis of object forms in visual cortex is not influenced by AoA. A fast-forward sweep of activation from occipital to left anterior temporal cortex then results in stronger activation of semantic representations for early- than late-acquired objects. Top-down re-activation of occipital cortex by semantic representations is then greater for early- than late-acquired objects, resulting in delayed modulation of the visual response. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. A visual approach to providing prognostic information to parents of children with retinoblastoma.

    PubMed

    Panton, Rachel L; Downie, Robert; Truong, Tran; Mackeen, Leslie; Kabene, Stefane; Yi, Qi-Long; Chan, Helen S L; Gallie, Brenda L

    2009-03-01

    Parents must rapidly assimilate complex information when a child is diagnosed with cancer. Education correlates with the ability to process and use medical information. Graphic tools aid reasoning and communicate complex ideas with precision and efficiency. We developed a graphic tool, DePICT (Disease-specific electronic Patient Illustrated Clinical Timeline), to visually display entire retinoblastoma treatment courses from real-time clinical data. We report retrospective evaluation of the effectiveness of DePICT to communicate risk and complexity of treatment to parents. We assembled DePICT graphics from multiple children on cards representing each stage of intraocular retinoblastoma. Forty-four parents completed a 14-item questionnaire to evaluate the understanding of retinoblastoma treatment and outcomes acquired from DePICT. As a proposed tool for informed consent, DePICT effectively communicated knowledge of complex medical treatment and risks, regardless of the education level. We identified multiple potential factors affecting parent comprehension of treatment complexity and risk. These include language proficiency (p=0.005) and age-related experience, as younger parents had higher education (p=0.021) but lower comprehension scores (p=0.011), regardless of first language. Provision of information at diagnosis concerning long-term treatment complexity helps parents of children with cancer. DePICT effectively transfers knowledge of treatments, risks, and prognosis in a manner that offsets parental educational disadvantages.

  1. The cost of making an eye movement: A direct link between visual working memory and saccade execution.

    PubMed

    Schut, Martijn J; Van der Stoep, Nathan; Postma, Albert; Van der Stigchel, Stefan

    2017-06-01

    To facilitate visual continuity across eye movements, the visual system must presaccadically acquire information about the future foveal image. Previous studies have indicated that visual working memory (VWM) affects saccade execution. However, the reverse relation, the effect of saccade execution on VWM load, is less clear. To investigate the causal link between saccade execution and VWM, we combined a VWM task and a saccade task. Participants were instructed to remember one, two, or three shapes and performed either a No Saccade-, a Single Saccade- or a Dual (corrective) Saccade-task. The results indicate that items stored in VWM are reported less accurately if a single saccade- or a dual saccade-task is performed in addition to retaining items in VWM. Importantly, the loss of response accuracy for items retained in VWM by performing a saccade was similar to committing an extra item to VWM. In a second experiment, we observed no cost of executing a saccade for auditory working memory performance, indicating that executing a saccade exclusively taxes the VWM system. Our results suggest that the visual system presaccadically stores the upcoming retinal image, which has a similar VWM load as committing one extra item to memory and interferes with stored VWM content. After the saccade, the visual system can retrieve this item from VWM to evaluate saccade accuracy. Our results support the idea that VWM is a system which is directly linked to saccade execution and promotes visual continuity across saccades.

  2. Sociocultural Knowledge and Visual Re(-)Presentations of Black Masculinity and Community: Reading "The Wire" for Critical Multicultural Teacher Education

    ERIC Educational Resources Information Center

    Brown, Keffrelyn D.; Kraehe, Amelia

    2011-01-01

    In this article we consider the implications of using popular visual media as a pedagogic tool for helping teachers acquire critical sociocultural knowledge to work more effectively with students of color, particularly Black males. Drawing from a textual analysis (McKee 2001, 2003; Rose 2001) conducted in the critical visual studies tradition…

  3. Image denoising based on noise detection

    NASA Astrophysics Data System (ADS)

    Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen

    2018-03-01

    Because noise points cause any denoising operation to alter the original information of non-noise pixels, a noise detection algorithm based on fractional calculus is proposed for denoising in this paper. First, the image is convolved to obtain directional gradient masks. The mean gray level is then calculated to obtain gradient detection maps, and a logical product of these maps yields the noise position image. Comparisons of visual quality and evaluation parameters after processing show that the detection-based denoising algorithm outperforms traditional methods in both subjective and objective terms.
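
    A minimal sketch of the detect-then-denoise idea follows (ordinary integer-order gradient masks rather than the paper's fractional-calculus operators; the threshold assumes 8-bit grey levels, and the function name is hypothetical):

```python
import numpy as np
from scipy.ndimage import convolve, median_filter

def detect_and_denoise(img, thresh=60):
    """Hypothetical sketch: directional gradient masks flag likely impulse-noise
    pixels (high gradient in every direction); only those pixels are replaced by
    the local median, so non-noise pixels keep their original values."""
    img = img.astype(float)
    kernels = [np.array(k, dtype=float) for k in (
        [[-1, 0, 1]], [[-1], [0], [1]],                      # horizontal, vertical
        [[-1, 0, 0], [0, 0, 0], [0, 0, 1]],                  # diagonals
        [[0, 0, -1], [0, 0, 0], [1, 0, 0]],
    )]
    grads = np.stack([np.abs(convolve(img, k)) for k in kernels])
    noise_mask = grads.min(axis=0) > thresh
    out = img.copy()
    out[noise_mask] = median_filter(img, size=3)[noise_mask]
    return out, noise_mask
```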

  4. Robotics and Virtual Reality for Cultural Heritage Digitization and Fruition

    NASA Astrophysics Data System (ADS)

    Calisi, D.; Cottefoglie, F.; D'Agostini, L.; Giannone, F.; Nenci, F.; Salonia, P.; Zaratti, M.; Ziparo, V. A.

    2017-05-01

    In this paper we present our novel approach for acquiring and managing digital models of archaeological sites, and the visualization techniques used to showcase them. In particular, we will demonstrate two technologies: our robotic system for digitization of archaeological sites (DigiRo), the result of over three years of effort by a group of cultural heritage experts, computer scientists and roboticists, and our cloud-based archaeological information system (ARIS). Finally, we describe the viewers we developed to inspect and navigate the 3D models: a viewer for the web (ROVINA Web Viewer) and an immersive viewer for Virtual Reality (ROVINA VR Viewer).

  5. Audio-visual speech intelligibility benefits with bilateral cochlear implants when talker location varies.

    PubMed

    van Hoesel, Richard J M

    2015-04-01

    One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase that was spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode, and 3 dB for audition-alone. Comparison of bilateral performance for audio-visual and audition-alone showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations and indicates greater everyday speech benefits and improved cost-benefit than estimated to date.

  6. An evaluation of data-driven motion estimation in comparison to the usage of external-surrogates in cardiac SPECT imaging

    PubMed Central

    Mukherjee, Joyeeta Mitra; Hutton, Brian F; Johnson, Karen L; Pretorius, P Hendrik; King, Michael A

    2014-01-01

    Motion estimation methods in single photon emission computed tomography (SPECT) can be classified into methods which depend on just the emission data (data-driven) and those that use some other source of information, such as an external surrogate. The surrogate-based methods estimate the motion exhibited externally, which may not correlate exactly with the movement of organs inside the body. The accuracy of data-driven strategies, on the other hand, is affected by the type and timing of motion occurrence during acquisition, the source distribution, and various degrading factors such as attenuation, scatter, and system spatial resolution. The goal of this paper is to investigate the performance of two data-driven motion estimation schemes based on the rigid-body registration of projections of motion-transformed source distributions to the acquired projection data for cardiac SPECT studies. Six intensity-based registration metrics are also compared to an external surrogate-based method. In the data-driven schemes, a partially reconstructed heart is used as the initial source distribution. The partially-reconstructed heart has inaccuracies due to limited-angle artifacts resulting from using only a part of the SPECT projections acquired while the patient maintained the same pose. The performance of different cost functions in quantifying consistency with the SPECT projection data in the data-driven schemes was compared for clinically realistic patient motion occurring as discrete pose changes, one or two times during acquisition. The six intensity-based metrics studied were mean-squared difference (MSD), mutual information (MI), normalized mutual information (NMI), pattern intensity (PI), normalized cross-correlation (NCC) and entropy of the difference (EDI). Quantitative and qualitative analysis of the performance is reported using Monte-Carlo simulations of a realistic heart phantom including degradation factors such as attenuation, scatter and system spatial resolution. Further, the visual appearance of motion-corrected images using data-driven motion estimates was compared to that obtained using the external motion-tracking system in patient studies. Pattern intensity and normalized mutual information cost functions were observed to have the best performance in terms of lowest average position error and stability with degradation of image quality of the partial reconstruction in simulations. In all patients, the visual quality of PI-based estimation was either significantly better than or comparable to NMI-based estimation. Best visual quality was obtained with PI-based estimation in 1 of the 5 patient studies, and with external-surrogate based correction in 3 out of 5 patients. In the remaining patient study there was little motion and all methods yielded similar visual image quality. PMID:24107647
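
    Three of the intensity-based metrics compared in the study can be sketched directly; these are generic definitions evaluated between an acquired projection and a reprojection of the motion-transformed source estimate, not the authors' implementation.

```python
import numpy as np

def msd(a, b):
    """Mean-squared difference between two projection images."""
    return float(np.mean((a - b) ** 2))

def ncc(a, b):
    """Normalized cross-correlation between two projection images."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def nmi(a, b, bins=32):
    """Normalized mutual information, NMI = (H(A) + H(B)) / H(A, B),
    estimated from a joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))   # Shannon entropy
    return float((h(px) + h(py)) / h(p))
```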

  7. Perceptual learning modifies untrained pursuit eye movements.

    PubMed

    Szpiro, Sarit F A; Spering, Miriam; Carrasco, Marisa

    2014-07-07

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. © 2014 ARVO.

  8. Perceptual learning modifies untrained pursuit eye movements

    PubMed Central

    Szpiro, Sarit F. A.; Spering, Miriam; Carrasco, Marisa

    2014-01-01

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. PMID:25002412

  9. Resolving human object recognition in space and time

    PubMed Central

    Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude

    2014-01-01

    A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. Here, we acquired human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) responses to 92 object images. Multivariate pattern classification applied to MEG revealed the time course of object processing: whereas individual images were discriminated by visual representations early, ordinate and superordinate category levels emerged relatively later. Using representational similarity analysis, we combine human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and later MEG responses and inferior temporal (IT) cortex. We identified transient and persistent neural activities during object processing, with sources in V1 and IT. Finally, human MEG signals were correlated to single-unit responses in monkey IT. Together, our findings provide an integrated space- and time-resolved view of human object categorization during the first few hundred milliseconds of vision. PMID:24464044
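
    The MEG-fMRI fusion by representational similarity analysis can be sketched as follows; the array names and the choice of Spearman correlation on condensed dissimilarity matrices are assumptions consistent with common RSA practice, not the paper's exact pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix for (n_conditions, n_features)
    patterns, returned as the condensed vector of correlation distances."""
    return pdist(patterns, metric="correlation")

def meg_fmri_fusion(meg_patterns_by_time, fmri_patterns):
    """Sketch of MEG-fMRI fusion: correlate the fMRI RDM of a region (e.g. V1 or IT)
    with the MEG RDM at every time point, yielding a fusion time course."""
    fmri_rdm = rdm(fmri_patterns)
    return np.array([spearmanr(rdm(m), fmri_rdm)[0] for m in meg_patterns_by_time])
```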

  10. Compression-based integral curve data reuse framework for flow visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Fan; Bi, Chongke; Guo, Hanqi

    Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives, including high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reuse framework could acquire tens of times acceleration in the resource-limited environment compared to on-the-fly particle tracing, and keep controllable information loss. Moreover, our method could provide fast integral curve retrieval for more complex data, such as unstructured mesh data.
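
    One simple instance of a digitized, error-bounded curve representation is grid quantization plus delta encoding; the sketch below is illustrative only and much cruder than the hierarchical hybrid scheme described above.

```python
import numpy as np

def compress_curve(points, cell=0.01):
    """Hypothetical sketch: quantize an integral curve of shape (N, 3) onto a grid
    with spacing `cell`, delta-encode successive grid cells and drop repeated
    cells; the reconstruction error is bounded by the grid spacing."""
    q = np.round(np.asarray(points, dtype=float) / cell).astype(np.int32)
    deltas = np.vstack([q[:1], np.diff(q, axis=0)])
    keep = np.any(deltas != 0, axis=1)
    keep[0] = True                      # always keep the absolute start cell
    return deltas[keep], cell

def decompress_curve(deltas, cell):
    """Decompression is a cumulative sum of the stored deltas."""
    return np.cumsum(deltas, axis=0) * cell
```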

  11. Good Prospects: Ecological and Social Perspectives on Conforming, Creating, and Caring in Conversation

    ERIC Educational Resources Information Center

    Hodges, Bert H.

    2007-01-01

    Ecological approaches (e.g. [Gibson, J.J., 1979. "The Ecological Approach to Visual Perception." Houghton-Mifflin, Boston]) to psychology and language are selectively reviewed, focusing on social learning. Is social learning (e.g., acquiring language) a matter of conformity [Tomasello, M., 2006. "Acquiring linguistic…

  12. Strategy of Surgical Resection for Glioma Based on Intraoperative Functional Mapping and Monitoring

    PubMed Central

    TAMURA, Manabu; MURAGAKI, Yoshihiro; SAITO, Taiichi; MARUYAMA, Takashi; NITTA, Masayuki; TSUZUKI, Shunsuke; ISEKI, Hiroshi; OKADA, Yoshikazu

    2015-01-01

    A growing number of papers have pointed out the relationship between aggressive resection of gliomas and survival prognosis. For maximum resection, the current concept of surgical decision-making is “information-guided surgery” using multimodal intraoperative information. With this, anatomical information from intraoperative magnetic resonance imaging (MRI) and navigation, functional information from brain mapping and monitoring, and histopathological information must all be taken into account in the new perspective for innovative minimally invasive surgical treatment of glioma. Intraoperative neurofunctional information such as neurophysiological functional monitoring plays the most important part in the process of acquiring objective visual data during tumor removal and integrating these findings as digitized data for intraoperative surgical decision-making. Moreover, the analysis of qualitative data and threshold-setting for quantitative data raise difficult issues in the interpretation and processing of each data type, such as determination of motor evoked potential (MEP) decline, underestimation in tractography, and judgments of patient response for neurofunctional mapping and monitoring during awake craniotomy. Neurofunctional diagnosis of false-positives in these situations may affect the extent of resection, while false-negatives influence intra- and postoperative complication rates. Additionally, even though the various intraoperative visualized data from multiple sources contribute significantly to the reliability of surgical decisions when the information is integrated and provided, it is not uncommon for individual pieces of information to convey opposing suggestions. Such conflicting pieces of information facilitate higher-order decision-making that is dependent on the policies of the facility and the priorities of the patient, as well as the availability of the histopathological characteristics from resected tissue. PMID:26185825

  13. Sighted and visually impaired students’ perspectives of illustrations, diagrams and drawings in school science

    PubMed Central

    McDonald, Celia; Rodrigues, Susan

    2016-01-01

    Background: In this paper we report on the views of students with and without visual impairments on the use of illustrations, diagrams and drawings (IDD) in science lessons. Method: Our findings are based on data gathered through a brief questionnaire completed by a convenience sample of students prior to trialling new resource material. The questionnaire sought to understand the students’ views about using IDD in science lessons. The classes involved in the study included one class from a primary school, five classes from a secondary school and one class from a school for visually impaired students. Results: Approximately 20% of the participants thought that the diagrams were boring and just under half (48%) of the total sample (regardless of whether they were sighted or visually impaired) did not think diagrams were easy to use. Only 14% of the participants felt that repeated encounters with the same diagrams made the diagrams easy to understand. Unlike sighted students who can ‘flit’ across diagrams, a visually impaired student may only see or touch a small part of the diagram at a time, so for them ‘flitting’ could result in loss of orientation within the diagram. Conclusions: Treating sighted and visually impaired pupils equally is different to treating them identically. Sighted students incidentally learn how to interpret visual information from a young age. Students who acquire sight loss need to learn the different rules associated with reading tactile diagrams or large print, and those who are congenitally blind do not have visual memories to rely upon. PMID:27918598

  14. Is eye damage caused by stereoscopic displays?

    NASA Astrophysics Data System (ADS)

    Mayer, Udo; Neumann, Markus D.; Kubbat, Wolfgang; Landau, Kurt

    2000-05-01

    A normally developing child will achieve emmetropia in youth and maintain it; in this process, the cornea, lens, and axial length of the eye grow in an astonishingly coordinated way. Research in recent years has shown that this coordinated growth process is a visually controlled closed loop. The mechanism has been studied particularly in animals. It was found that the growth of the axial length of the eyeball is controlled by image-focus information from the retina. It was also shown that this visually guided growth-control mechanism can be maladjusted in ways that result in ametropia. It has thus been demonstrated that short-sightedness, for example, is not only caused by heredity but can also be acquired under certain visual conditions. These conditions are shown to be similar to the conditions of viewing stereoscopic displays, where the normal accommodation-convergence coupling is broken. An evaluation is given of the potential for eye damage from viewing stereoscopic displays, and different viewing methods for stereoscopic displays are compared in this respect. Moreover, guidance is given on how the environment and display conditions should be set, and which users should be chosen, to minimize the risk of eye damage.

  15. Mapping white-matter functional organization at rest and during naturalistic visual perception.

    PubMed

    Marussich, Lauren; Lu, Kun-Han; Wen, Haiguang; Liu, Zhongming

    2017-02-01

    Despite the wide applications of functional magnetic resonance imaging (fMRI) to mapping brain activation and connectivity in cortical gray matter, it has rarely been utilized to study white-matter functions. In this study, we investigated the spatiotemporal characteristics of fMRI data within the white matter acquired from humans both in the resting state and while watching a naturalistic movie. By using independent component analysis and hierarchical clustering, resting-state fMRI data in the white matter were de-noised and decomposed into spatially independent components, which were further assembled into hierarchically organized axonal fiber bundles. Interestingly, such components were partly reorganized during natural vision. Relative to resting state, the visual task specifically induced a stronger degree of temporal coherence within the optic radiations, as well as significant correlations between the optic radiations and multiple cortical visual networks. Therefore, fMRI contains rich functional information about the activity and connectivity within white matter at rest and during tasks, challenging the conventional practice of taking white-matter signals as noise or artifacts. Copyright © 2016 Elsevier Inc. All rights reserved.
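
    As a rough illustration of the analysis style described above (independent component analysis followed by hierarchical clustering of the components), the Python sketch below decomposes simulated voxel time series and groups the resulting components by the similarity of their time courses. The data, dimensions, and clustering choices are invented for illustration and are not the authors' pipeline.

        # Illustrative sketch (not the authors' pipeline): decompose simulated
        # white-matter voxel time series with spatial ICA, then group the
        # resulting components into hierarchically organized clusters.
        import numpy as np
        from sklearn.decomposition import FastICA
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(0)
        n_timepoints, n_voxels, n_components = 200, 500, 10

        # Simulated resting-state data: a few latent sources mixed into voxels.
        sources = rng.standard_normal((n_timepoints, n_components))
        mixing = rng.standard_normal((n_components, n_voxels))
        data = sources @ mixing + 0.5 * rng.standard_normal((n_timepoints, n_voxels))

        # Spatial ICA: treat voxels as observations and time points as features.
        ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
        spatial_maps = ica.fit_transform(data.T)          # (voxels, components)
        component_timecourses = ica.mixing_               # (timepoints, components)

        # Hierarchical clustering of components by time-course similarity.
        dist = pdist(component_timecourses.T, metric="correlation")
        tree = linkage(dist, method="average")
        labels = fcluster(tree, t=3, criterion="maxclust")
        print("Component cluster labels:", labels)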

  16. Virtual Guidance Ultrasound: A Tool to Obtain Diagnostic Ultrasound for Remote Environments

    NASA Technical Reports Server (NTRS)

    Caine, Timothy L.; Martin, David S.; Matz, Timothy; Lee, Stuart M. C.; Stenger, Michael B.; Platts, Steven H.

    2012-01-01

    Astronauts currently acquire ultrasound images on the International Space Station with the assistance of real-time remote guidance from an ultrasound expert in Mission Control. Remote guidance will not be feasible when significant communication delays exist during exploration missions beyond low-Earth orbit. For example, there may be as much as a 20-minute delay in communications between the Earth and Mars. Virtual guidance, a pre-recorded audio-visual tutorial viewed in real time, is a viable modality for minimally trained scanners to obtain diagnostically adequate images of clinically relevant anatomical structures in an autonomous manner. METHODS: Inexperienced ultrasound operators were recruited to perform carotid artery (n = 10) and ophthalmic (n = 9) ultrasound examinations using virtual guidance as their only instructional tool. In the carotid group, each untrained operator acquired two-dimensional, pulsed, and color Doppler images of the carotid artery. In the ophthalmic group, operators acquired representative images of the anterior chamber of the eye, retina, optic nerve, and nerve sheath. Ultrasound image quality was evaluated by independent imaging experts. RESULTS: Eight of the 10 carotid studies were judged to be diagnostically adequate. With one exception, the quality of all the ophthalmic images was adequate to excellent. CONCLUSION: Diagnostically adequate carotid and ophthalmic ultrasound examinations can be obtained by untrained operators with instruction only from an audio/video tutorial viewed in real time while scanning. This form of quick-response guidance, which can be developed for other ultrasound examinations, represents an opportunity to acquire important medical and scientific information for NASA flight surgeons and researchers when trained medical personnel are not present. Further, virtual guidance will allow untrained personnel to autonomously obtain important medical information in remote locations on Earth where communication is difficult or absent.

  17. Enhanced visualization of peripheral retinal vasculature with wavefront sensorless adaptive optics OCT angiography in diabetic patients

    PubMed Central

    Polans, James; Cunefare, David; Cole, Eli; Keller, Brenton; Mettu, Priyatham S.; Cousins, Scott W.; Allingham, Michael J.; Izatt, Joseph A.; Farsiu, Sina

    2017-01-01

    Optical coherence tomography angiography (OCTA) is a promising technique for non-invasive visualization of vessel networks in the human eye. We debut a system capable of acquiring wide field-of-view (>70°) OCT angiograms without mosaicking. Additionally, we report on enhancing the visualization of peripheral microvasculature using wavefront sensorless adaptive optics (WSAO). We employed a fast WSAO algorithm that enabled wavefront correction in <2 seconds by iterating the mirror shape at the speed of OCT B-scans rather than volumes. Also, we contrasted ~7° field-of-view OCTA angiograms acquired in the periphery with and without WSAO correction. On average, WSAO improved the sharpness of microvasculature by 65% in healthy and 38% in diseased eyes. Preliminary observations demonstrated that the location of 7° images could be identified directly from the wide field-of-view angiogram. A pilot study on a normal subject and patients with diabetic retinopathy showed the impact of utilizing WSAO for OCTA when visualizing peripheral vasculature pathologies. PMID:28059209
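
    The wavefront-sensorless strategy described above (iterating the mirror shape against an image-quality metric rather than a wavefront sensor) can be caricatured with a coordinate-wise search, as in the toy sketch below. The aberration model, sharpness metric, and search grid are invented stand-ins and do not reproduce the authors' B-scan-rate algorithm.

        # Toy sketch of wavefront-sensorless optimization (not the authors' algorithm):
        # adjust deformable-mirror mode coefficients one at a time and keep the value
        # that maximizes an image-sharpness metric computed from each new B-scan.
        import numpy as np

        rng = np.random.default_rng(1)
        true_aberration = rng.uniform(-1, 1, size=5)       # hypothetical modal aberration

        def acquire_bscan_sharpness(mirror_coeffs):
            """Stand-in for acquiring a B-scan and scoring it: sharpness peaks
            when the mirror cancels the aberration."""
            residual = true_aberration + mirror_coeffs
            return np.exp(-np.sum(residual ** 2)) + 0.01 * rng.standard_normal()

        coeffs = np.zeros(5)
        for mode in range(coeffs.size):
            candidates = np.linspace(-1.5, 1.5, 11)        # trial amplitudes for this mode
            scores = [acquire_bscan_sharpness(np.where(np.arange(5) == mode, amp, coeffs))
                      for amp in candidates]
            coeffs[mode] = candidates[int(np.argmax(scores))]

        print("Recovered correction:", np.round(coeffs, 2))
        print("True aberration     :", np.round(-true_aberration, 2))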

  18. Effects of Digitization and JPEG Compression on Land Cover Classification Using Astronaut-Acquired Orbital Photographs

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Webb, Edward L.; Evangelista, Arlene

    2000-01-01

    Studies that utilize astronaut-acquired orbital photographs for visual or digital classification require high-quality data to ensure accuracy. The majority of images available must be digitized from film and electronically transferred to scientific users. This study examined the effect of scanning spatial resolution (1200, 2400 pixels per inch [21.2 and 10.6 microns/pixel]), scanning density range option (Auto, Full) and compression ratio (non-lossy [TIFF], and lossy JPEG 10:1, 46:1, 83:1) on digital classification results of an orbital photograph from the NASA - Johnson Space Center archive. Qualitative results suggested that 1200 ppi was acceptable for visual interpretive uses for major land cover types. Moreover, Auto scanning density range was superior to Full density range. Quantitative assessment of the processing steps indicated that, while 2400 ppi scanning spatial resolution resulted in more classified polygons as well as a substantially greater proportion of polygons < 0.2 ha, overall agreement between 1200 ppi and 2400 ppi was quite high. JPEG compression up to approximately 46:1 also did not appear to have a major impact on quantitative classification characteristics. We conclude that both 1200 and 2400 ppi scanning resolutions are acceptable options for this level of land cover classification, as well as a compression ratio at or below approximately 46:1. Auto range density should always be used during scanning because it acquires more of the information from the film. The particular combination of scanning spatial resolution and compression level will require a case-by-case decision and will depend upon memory capabilities, analytical objectives and the spatial properties of the objects in the image.

  19. Vision, Voice, and Intertribal Metanarrative: The American Indian Visual-Rhetorical Tradition and Leslie Marmon Silko's Almanac of the Dead

    ERIC Educational Resources Information Center

    Roppolo, Kimberly

    2007-01-01

    American Indian cultures tend to be right hemispheric because of the ways in which they acquire knowledge. Over the thousands of years that American Indian peoples have lived in this hemisphere, strong visual rhetorics were developed, because of this tendency to engage in visual thinking and the socioeconomic need to communicate with others who…

  20. Imaging deep skeletal muscle structure using a high-sensitivity ultrathin side-viewing optical coherence tomography needle probe

    PubMed Central

    Yang, Xiaojie; Lorenser, Dirk; McLaughlin, Robert A.; Kirk, Rodney W.; Edmond, Matthew; Simpson, M. Cather; Grounds, Miranda D.; Sampson, David D.

    2013-01-01

    We have developed an extremely miniaturized optical coherence tomography (OCT) needle probe (outer diameter 310 µm) with high sensitivity (108 dB) to enable minimally invasive imaging of cellular structure deep within skeletal muscle. Three-dimensional volumetric images were acquired from ex vivo mouse tissue, examining both healthy and pathological dystrophic muscle. Individual myofibers were visualized as striations in the images. Degradation of cellular structure in necrotic regions was seen as a loss of these striations. Tendon and connective tissue were also visualized. The observed structures were validated against co-registered hematoxylin and eosin (H&E) histology sections. These images of internal cellular structure of skeletal muscle acquired with an OCT needle probe demonstrate the potential of this technique to visualize structure at the microscopic level deep in biological tissue in situ. PMID:24466482

  1. Age-related morphological changes of the dermal matrix in human skin documented in vivo by multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Hequn; Shyr, Thomas; Fevola, Michael J.; Cula, Gabriela Oana; Stamatas, Georgios N.

    2018-03-01

    Two-photon fluorescence (TPF) and second harmonic generation (SHG) microscopy provide direct visualization of the skin dermal fibers in vivo. A typical method for analyzing TPF/SHG images involves averaging the image intensity and therefore disregarding the spatial distribution information. The goal of this study is to develop an algorithm to document age-related effects of the dermal matrix. TPF and SHG images were acquired from the upper inner arm, volar forearm, and cheek of female volunteers of two age groups: 20 to 30 and 60 to 80 years of age. The acquired images were analyzed for parameters relating to collagen and elastin fiber features, such as orientation and density. Both collagen and elastin fibers showed higher anisotropy in fiber orientation for the older group. The greatest difference in elastin fiber anisotropy between the two groups was found for the upper inner arm site. Elastin fiber density increased with age, whereas collagen fiber density decreased with age. The proposed analysis considers the spatial information inherent to the TPF and SHG images and provides additional insights into how the dermal fiber structure is affected by the aging process.
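
    One simple way to quantify fiber orientation anisotropy of the kind analyzed above is a circular-statistics summary of image-gradient orientations, sketched below on synthetic images. This is only an illustrative measure under stated assumptions; it is not necessarily the algorithm used in the study.

        # Hedged sketch: summarize image-gradient orientations with a circular
        # resultant length. Values near 1 indicate strongly aligned "fibers".
        import numpy as np

        def orientation_anisotropy(image):
            gy, gx = np.gradient(image.astype(float))
            magnitude = np.hypot(gx, gy)
            theta = np.arctan2(gy, gx)                 # gradient orientation per pixel
            # Orientations are axial (theta and theta+pi equivalent), so double them.
            weights = magnitude.ravel()
            z = np.sum(weights * np.exp(2j * theta.ravel())) / (np.sum(weights) + 1e-12)
            return np.abs(z)                           # 0 = isotropic, 1 = fully aligned

        # Synthetic test: horizontal stripes (aligned fibers) vs. random noise.
        y = np.arange(128)[:, None]
        aligned = np.sin(2 * np.pi * y / 8.0) * np.ones((128, 128))
        random_img = np.random.default_rng(2).standard_normal((128, 128))
        print("aligned fibers :", round(orientation_anisotropy(aligned), 3))
        print("random texture :", round(orientation_anisotropy(random_img), 3))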

  2. Ex vivo detection of macrophages in atherosclerotic plaques using intravascular ultrasonic-photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Quang Bui, Nhat; Hlaing, Kyu Kyu; Lee, Yong Wook; Kang, Hyun Wook; Oh, Junghwan

    2017-01-01

    Macrophages are excellent imaging targets for detecting atherosclerotic plaques as they are involved in all the developmental stages of atherosclerosis. However, no imaging technique is currently capable of visualizing macrophages inside blood vessel walls. The current study develops an intravascular ultrasonic-photoacoustic (IVUP) imaging system combined with indocyanine green (ICG) as a contrast agent to provide morphological and compositional information about the targeted samples. Both tissue-mimicking vessel phantoms and atherosclerotic plaque-mimicking porcine arterial tissues are used to demonstrate the feasibility of mapping macrophages labeled with ICG by endoscopically applying the proposed hybrid technique. A delay pulse triggering technique is able to sequentially acquire photoacoustic (PA) and ultrasound (US) signals from a single scan without using any external devices. The acquired PA and US signals are used to reconstruct 2D cross-sectional and 3D volumetric images of the entire tissue with the ICG-loaded macrophages injected. Due to high imaging contrast and sensitivity, the IVUP imaging vividly reveals structural information and detects the spatial distribution of the ICG-labeled macrophages inside the samples. ICG-assisted IVUP imaging can be a feasible imaging modality for the endoscopic detection of atherosclerotic plaques.

  3. Impaired holistic processing of unfamiliar individual faces in acquired prosopagnosia.

    PubMed

    Ramon, Meike; Busigny, Thomas; Rossion, Bruno

    2010-03-01

    Prosopagnosia is an impairment at individualizing faces that classically follows brain damage. Several studies have reported observations supporting an impairment of holistic/configural face processing in acquired prosopagnosia. However, this issue may require more compelling evidence as the cases reported were generally patients suffering from integrative visual agnosia, and the sensitivity of the paradigms used to measure holistic/configural face processing in normal individuals remains unclear. Here we tested a well-characterized case of acquired prosopagnosia (PS) with no object recognition impairment, in five behavioral experiments (whole/part and composite face paradigms with unfamiliar faces). In all experiments, for normal observers we found that processing of a given facial feature was affected by the location and identity of the other features in a whole face configuration. In contrast, the patient's results over these experiments indicate that she encodes local facial information independently of the other features embedded in the whole facial context. These observations and a survey of the literature indicate that abnormal holistic processing of the individual face may be a characteristic hallmark of prosopagnosia following brain damage, perhaps with various degrees of severity. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  4. The Effect of Visual Experience on Perceived Haptic Verticality When Tilted in the Roll Plane

    PubMed Central

    Cuturi, Luigi F.; Gori, Monica

    2017-01-01

    The orientation of the body in space can influence the perception of verticality, sometimes leading to biases consistent with priors peaked at the most common head and body orientation, that is, upright. In this study, we investigate haptic perception of verticality in sighted individuals and in early and late blind adults when tilted counterclockwise in the roll plane. Participants were asked to perform a stimulus orientation discrimination task with their body tilted 90° relative to gravity toward their left ear side. Stimuli were presented using a motorized haptic bar. In order to test whether different reference frames relative to the head influenced the perception of verticality, we varied the position of the stimulus along the body's longitudinal axis. Depending on the stimulus position, sighted participants tended to show biases away from or toward their body tilt. Visually impaired individuals instead showed a different pattern of verticality estimates: a bias toward head and body tilt (i.e., the Aubert effect) was observed in late blind individuals, whereas, interestingly, no strong biases were observed in early blind individuals. Overall, these results suggest that visual sensory information is fundamental in shaping the haptic readout of proprioceptive and vestibular information about body orientation relative to gravity. The acquisition of an idiotropic vector signaling the upright might take place through vision during development. Regarding early blind individuals, independent spatial navigation experience, likely enhanced by echolocation behavior, might play a role in such acquisition. In the case of participants with late-onset blindness, early experience of vision might lead them to anchor their visually acquired priors to the haptic modality without disambiguation between head and body references, as observed in sighted individuals (Fraser et al., 2015). With our study, we aimed to investigate haptic perception of the direction of gravity at unusual body tilts when vision is absent due to visual impairment. Our findings thus shed light on the influence of proprioceptive/vestibular sensory information on haptically perceived verticality in blind individuals, showing how this phenomenon is affected by visual experience. PMID:29270109

  5. Research Trend Visualization by MeSH Terms from PubMed.

    PubMed

    Yang, Heyoung; Lee, Hyuck Jai

    2018-05-30

    Motivation: PubMed is a primary source of biomedical information, comprising a search tool and the biomedical literature from MEDLINE (the US National Library of Medicine's premier bibliographic database), life science journals, and online books. Complementary tools to PubMed have been developed to help users search for literature and acquire knowledge. However, these tools are insufficient to overcome the users' difficulties arising from the proliferation of biomedical literature, and a new method is needed for searching the knowledge in the biomedical field. Methods: A new method is proposed in this study for visualizing recent research trends based on the documents retrieved for a search query given by the user. Medical Subject Headings (MeSH) are used as the primary analytical element: MeSH terms are extracted from the literature and the correlations between them are calculated. A MeSH network, called MeSH Net, is generated as the final result based on the Pathfinder Network algorithm. Results: A case study for the verification of the proposed method was carried out on a research area defined by the search query (immunotherapy and cancer and "tumor microenvironment"). The MeSH Net generated by the method is in good agreement with the actual research activities in that area. Conclusion: A prototype application generating MeSH Net was developed. The application, which could be used as a "guide map for travelers", allows users to quickly and easily grasp research trends. The combination of PubMed and MeSH Net is expected to be an effective complementary system for researchers in the biomedical field who experience difficulties with search and information analysis.
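
    The general idea of turning co-assigned MeSH terms into a pruned term network can be sketched as below. The records are invented for illustration, and a simple weight threshold stands in for the Pathfinder Network scaling step used in the paper.

        # Minimal sketch: build a MeSH co-occurrence network from retrieved
        # records and keep only the stronger links (threshold in place of the
        # Pathfinder Network algorithm; records are hypothetical).
        import itertools
        import networkx as nx

        records_mesh = [
            {"Immunotherapy", "Neoplasms", "Tumor Microenvironment"},
            {"Immunotherapy", "T-Lymphocytes", "Tumor Microenvironment"},
            {"Neoplasms", "Tumor Microenvironment", "Macrophages"},
            {"Immunotherapy", "Neoplasms", "T-Lymphocytes"},
        ]

        G = nx.Graph()
        for mesh_terms in records_mesh:
            for a, b in itertools.combinations(sorted(mesh_terms), 2):
                if G.has_edge(a, b):
                    G[a][b]["weight"] += 1
                else:
                    G.add_edge(a, b, weight=1)

        pruned = nx.Graph()
        pruned.add_edges_from((u, v, d) for u, v, d in G.edges(data=True)
                              if d["weight"] >= 2)
        for u, v, d in pruned.edges(data=True):
            print(f"{u} -- {v} (weight {d['weight']})")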

  6. Online Kidney Position Verification Using Non-Contrast Radiographs on a Linear Accelerator with on Board KV X-Ray Imaging Capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willis, David J.; Kron, Tomas; Hubbard, Patricia

    2009-01-01

    The kidneys are dose-limiting organs in abdominal radiotherapy. Kilovoltage (kV) radiographs can be acquired using on-board imager (OBI)-equipped linear accelerators with better soft tissue contrast and lower radiation doses than conventional portal imaging. A feasibility study was conducted to test the suitability of anterior-posterior (AP) non-contrast kV radiographs acquired at treatment time for online kidney position verification. Anthropomorphic phantoms were used to evaluate image quality and radiation dose. Institutional Review Board approval was given for a pilot study that enrolled 5 adults and 5 children. Customized digitally reconstructed radiographs (DRRs) were generated to provide a priori information on kidney shape and position. Radiotherapy treatment staff performed online evaluation of kidney visibility on OBI radiographs. Kidney dose measured in a pediatric anthropomorphic phantom was 0.1 cGy for kV imaging and 1.7 cGy for MV imaging. Kidneys were rated as well visualized in 60% of patients (90% confidence interval, 34-81%). The likelihood of visualization appears to be influenced by the relative AP separation of the abdomen and kidneys, the axial profile of the kidneys, and their relative contrast with surrounding structures. Online verification of kidney position using AP non-contrast kV radiographs on an OBI-equipped linear accelerator appears feasible for patients with suitable abdominal anatomy. Kidney position information provided is limited to 2-dimensional 'snapshots,' but this is adequate in some clinical situations and potentially advantageous in respiratory-correlated treatments. Successful clinical implementation requires customized partial DRRs, appropriate imaging parameters, and credentialing of treatment staff.

  7. LC-IM-TOF Instrument Control & Data Visualization Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-05-12

    Liquid Chromatography-Ion Mobility-Time-of-Flight Instrument Control and Data Visualization software is designed to control instrument voltages for the ion mobility drift tube. It collects and stores information from the Agilent TOF instrument and analyses/displays the ion intensity information acquired. The software interface can be split into 3 categories -- Instrument Settings/Controls, Data Acquisition, and Viewer. The Instrument Settings/Controls prepares the instrument for Data Acquisition. The Viewer contains common objects that are used by Instrument Settings/Controls and Data Acquisition. Intensity information is collected in 1-nanosecond bins and separated by TOF pulses, called scans. A collection of scans is stored side by side, making up an accumulation. In order for the computer to keep up with the stream of data, 30-50 accumulations are commonly summed into a single frame, and a collection of frames makes up an experiment. The Viewer software then takes the experiment and presents the data in several possible ways: each frame can be viewed in TOF bins or m/z (mass-to-charge ratio), and the experiment can be viewed frame by frame, by merging several frames, or by viewing the peak chromatogram. The user can zoom into the data, export data, and/or animate frames. Additional features include calibration of the data and even post-processing of multiplexed data.
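
    The bin/scan/accumulation/frame hierarchy described above can be mimicked with a small array sketch; the sizes below are invented and the data are simulated, so this only illustrates the bookkeeping, not the actual file format.

        # Rough sketch of the data hierarchy: 1-ns intensity bins form a scan
        # (one TOF pulse), scans sit side by side in an accumulation, and
        # summing many accumulations yields a frame.
        import numpy as np

        rng = np.random.default_rng(3)
        n_bins = 1000            # 1-ns TOF bins per scan
        n_scans = 64             # TOF pulses per accumulation (drift-time axis)
        n_accumulations = 40     # accumulations summed into one frame

        # Simulated raw stream: counts per (accumulation, scan, bin).
        raw = rng.poisson(lam=0.2, size=(n_accumulations, n_scans, n_bins))

        frame = raw.sum(axis=0)                  # sum accumulations -> one frame
        drift_profile = frame.sum(axis=1)        # total intensity per scan (drift time)
        mass_spectrum = frame.sum(axis=0)        # total intensity per TOF bin

        print("frame shape:", frame.shape)
        print("most intense scan:", int(np.argmax(drift_profile)))
        print("most intense TOF bin:", int(np.argmax(mass_spectrum)))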

  8. Visualizing Mars Using Virtual Reality: A State of the Art Mapping Technique Used on Mars Pathfinder

    NASA Technical Reports Server (NTRS)

    Stoker, C.; Zbinden, E.; Blackmon, T.; Nguyen, L.

    1999-01-01

    We describe an interactive terrain visualization system which rapidly generates and interactively displays photorealistic three-dimensional (3-D) models produced from stereo images. This product, first demonstrated on Mars Pathfinder, is interactive, 3-D, and can be viewed in an immersive display which qualifies it for the name Virtual Reality (VR). The use of this technology on Mars Pathfinder was the first use of VR for geologic analysis. A primary benefit of using VR to display geologic information is that it provides an improved perception of depth and spatial layout of the remote site. The VR aspect of the display allows an operator to move freely in the environment, unconstrained by the physical limitations of the perspective from which the data were acquired. Virtual Reality offers a way to archive and retrieve information in a way that is intuitively obvious. Combining VR models with stereo display systems can give the user a sense of presence at the remote location. The capability to interactively perform measurements from within the VR model offers unprecedented ease in performing operations that are normally time consuming and difficult using other techniques. Thus, Virtual Reality can be a powerful cartographic tool. Additional information is contained in the original extended abstract.

  9. High throughput phenotyping of tomato spotted wilt disease in peanuts using unmanned aerial systems and multispectral imaging

    USDA-ARS?s Scientific Manuscript database

    The amount of visible and near infrared light reflected by plants varies depending on their health. In this study, multispectral images were acquired by quadcopter for detecting tomato spotted wilt virus amongst twenty genetic varieties of peanuts. The plants were visually assessed to acquire ground ...

  10. Decoding the direction of imagined visual motion using 7 T ultra-high field fMRI

    PubMed Central

    Emmerling, Thomas C.; Zimmermann, Jan; Sorger, Bettina; Frost, Martin A.; Goebel, Rainer

    2016-01-01

    There is a long-standing debate about the neurocognitive implementation of mental imagery. One form of mental imagery is the imagery of visual motion, which is of interest due to its naturalistic and dynamic character. However, so far only the mere occurrence rather than the specific content of motion imagery was shown to be detectable. In the current study, the application of multi-voxel pattern analysis to high-resolution functional data of 12 subjects acquired with ultra-high field 7 T functional magnetic resonance imaging allowed us to show that imagery of visual motion can indeed activate the earliest levels of the visual hierarchy, but the extent thereof varies highly between subjects. Our approach enabled classification not only of complex imagery, but also of its actual contents, in that the direction of imagined motion out of four options was successfully identified in two thirds of the subjects and with accuracies of up to 91.3% in individual subjects. A searchlight analysis confirmed the local origin of decodable information in striate and extra-striate cortex. These high-accuracy findings not only shed new light on a central question in vision science on the constituents of mental imagery, but also show for the first time that the specific sub-categorical content of visual motion imagery is reliably decodable from brain imaging data on a single-subject level. PMID:26481673
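
    The core classification setup behind such a decoding analysis (four-way classification of voxel patterns with cross-validation) can be sketched as follows on simulated data; the feature construction, classifier, and signal strength are illustrative assumptions, not the study's pipeline.

        # Hedged MVPA sketch: classify imagined motion direction (4 classes)
        # from simulated voxel patterns with cross-validated accuracy.
        import numpy as np
        from sklearn.model_selection import cross_val_score, StratifiedKFold
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(4)
        n_trials_per_direction, n_voxels = 40, 300
        directions = np.repeat(np.arange(4), n_trials_per_direction)   # 4 motion directions

        # Simulated voxel patterns: weak direction-specific signal plus noise.
        prototypes = rng.standard_normal((4, n_voxels))
        X = prototypes[directions] * 0.3 + rng.standard_normal((directions.size, n_voxels))

        clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        scores = cross_val_score(clf, X, directions, cv=cv)
        print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.25)")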

  11. Summary of NASA Langley's pilot scan behavior research

    NASA Technical Reports Server (NTRS)

    Spady, A. A., Jr.; Harris, R. L., Sr.

    1983-01-01

    The present investigation is concerned with the information acquired in a series of basic studies designed to obtain an understanding of the pilot's scanning behavior. In the studies, use was made of an oculometer system which operates by shining a beam of collimated infrared light at the subject's eyes. A number of oculometer software modifications have been made to make the oculometer user-friendly and versatile. Scanning is found to be a subconscious conditioned activity. The conditioned activity of scanning is different for each pilot. There are also slight variations between test runs for the same conditions for the same pilot. This indicates that scanning is situation dependent. Attention is given to the rate of information transfer, the possibility that scanning can be disrupted, the visual approach look-point, and workload sensitive measures.

  12. Expansion of the visual angle of a car rear-view image via an image mosaic algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng

    2015-05-01

    The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and traffic safety areas. However, studies by both domestic and foreign researchers have been based on a single image-capture device used while reversing, so a blind area still remained for drivers. Even when multiple cameras were used to expand the visual angle of the car's rear-view image, the blind area remained because the different source images were not mosaicked together. To acquire an expanded visual angle of a car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car under three light conditions (sunny outdoors, cloudy outdoors, and an underground garage) to capture rear-view heterologous images of the car. These rear-view heterologous images were then rapidly registered through the scale invariant feature transform (SIFT) algorithm. Combined with the random sample consensus (RANSAC) algorithm, the two heterologous images were finally mosaicked using the linear weighted gradated in-and-out fusion algorithm, and a seamless, visual-angle-expanded rear-view image was acquired. The four-index test results showed that the algorithms can mosaic rear-view images well in the underground garage condition, where the average rate of correct matching was the lowest among the three conditions. Compared to the mean value method (MVM) and the segmental fusion method (SFM), the presented rear-view image mosaic algorithm had the best information preservation, the shortest computation time, and the most complete preservation of image detail features; it also performed better in real time and provided more comprehensive image details, with the most complete preservation of the source images among the three algorithms. The method introduced in this paper provides the basis for research on expanding the visual angle of a car rear-view image in all-weather conditions.
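
    A rough OpenCV sketch of this kind of mosaicking chain is given below: SIFT matching, a RANSAC-estimated homography, and a simple linear cross-fade in the overlap standing in for the gradated in-and-out fusion of the paper. The file names are placeholders and the blending weights are illustrative.

        # Hedged sketch of feature-based image mosaicking with OpenCV.
        import cv2
        import numpy as np

        left = cv2.imread("rear_left.jpg")      # placeholder input images
        right = cv2.imread("rear_right.jpg")

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(left, None)
        kp2, des2 = sift.detectAndCompute(right, None)

        # Ratio-test matching, then RANSAC to estimate the homography right -> left.
        matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
        good = []
        for pair in matches:
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])
        src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Warp the right image into the left image's frame on a wider canvas.
        h, w = left.shape[:2]
        canvas_w = 2 * w
        warped = cv2.warpPerspective(right, H, (canvas_w, h)).astype(np.float32)
        base = np.zeros_like(warped)
        base[:, :w] = left.astype(np.float32)

        # Linear cross-fade where both images have content; copy elsewhere.
        mask_base = base.sum(axis=2) > 0
        mask_warp = warped.sum(axis=2) > 0
        overlap = mask_base & mask_warp
        alpha = np.tile(np.linspace(1.0, 0.0, canvas_w), (h, 1))   # weight for base image
        mosaic = np.where(mask_base[..., None], base, warped)
        mosaic[overlap] = (alpha[overlap, None] * base[overlap]
                           + (1.0 - alpha[overlap, None]) * warped[overlap])
        cv2.imwrite("rear_mosaic.jpg", np.clip(mosaic, 0, 255).astype(np.uint8))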

  13. Integration of spectral domain optical coherence tomography with microperimetry generates unique datasets for the simultaneous identification of visual function and retinal structure in ophthalmological applications

    NASA Astrophysics Data System (ADS)

    Koulen, Peter; Gallimore, Gary; Vincent, Ryan D.; Sabates, Nelson R.; Sabates, Felix N.

    2011-06-01

    Conventional perimeters are used routinely in various eye disease states to evaluate the central visual field and to quantitatively map sensitivity. However, standard automated perimetry proves difficult for retina and specifically macular disease due to the need for central and steady fixation. Advances in instrumentation have led to microperimetry, which incorporates eye tracking for placement of macular sensitivity values onto an image of the macular fundus thus enabling a precise functional and anatomical mapping of the central visual field. Functional sensitivity of the retina can be compared with the observed structural parameters that are acquired with high-resolution spectral domain optical coherence tomography and by integration of scanning laser ophthalmoscope-driven imaging. Findings of the present study generate a basis for age-matched comparison of sensitivity values in patients with macular pathology. Microperimetry registered with detailed structural data performed before and after intervention treatments provides valuable information about macular function, disease progression and treatment success. This approach also allows for the detection of disease or treatment related changes in retinal sensitivity when visual acuity is not affected and can drive the decision making process in choosing different treatment regimens and guiding visual rehabilitation. This has immediate relevance for applications in central retinal vein occlusion, central serous choroidopathy, age-related macular degeneration, familial macular dystrophy and several other forms of retina related visual disability.

  14. Detecting fractal power-law long-range dependence in pre-sliced cooked pork ham surface intensity patterns using Detrended Fluctuation Analysis.

    PubMed

    Valous, Nektarios A; Drakakis, Konstantinos; Sun, Da-Wen

    2010-10-01

    The visual texture of pork ham slices reveals information about the different qualities and perceived image heterogeneity, which is encapsulated as spatial variations in geometry and spectral characteristics. Detrended Fluctuation Analysis (DFA) detects long-range correlations in nonstationary spatial sequences via a self-similarity scaling exponent alpha. In the current work, the aim is to investigate the usefulness of alpha, using different colour channels (R, G, B, L*, a*, b*, H, S, V, and Grey), as a quantitative descriptor of visual texture in sliced ham surface patterns, for the detection of long-range correlations in unidimensional spatial series of greyscale intensity pixel values at 0, 30, 45, 60, and 90 degree rotations. Images were acquired from three qualities of pre-sliced pork ham typically consumed in Ireland (200 slices per quality). Results indicated that the DFA approach can be used to characterize and quantify the textural appearance of the three ham qualities, for different image orientations, with a global scaling exponent. The spatial series extracted from the ham images display long-range dependence, indicating an average behaviour around 1/f-noise. Results indicate that alpha has a universal character in quantifying the visual texture of ham surface intensity patterns, with no considerable crossovers that alter the behaviour of the fluctuations. Fractal correlation properties can thus be a useful metric for capturing information embedded in the visual texture of hams. Copyright (c) 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
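
    A minimal DFA sketch is shown below for a 1-D intensity series; the scales, detrending order, and test signals are illustrative choices, not those of the paper.

        # Minimal DFA sketch: estimate the scaling exponent alpha from the slope
        # of log F(s) versus log s after linear detrending in each window.
        import numpy as np

        def dfa_alpha(series, scales=(8, 16, 32, 64, 128)):
            profile = np.cumsum(series - np.mean(series))      # integrated series
            fluctuations = []
            for s in scales:
                n_segments = len(profile) // s
                rms = []
                for i in range(n_segments):
                    segment = profile[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, segment, 1), t)   # linear detrend
                    rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
                fluctuations.append(np.mean(rms))
            alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
            return alpha

        rng = np.random.default_rng(5)
        white_noise = rng.standard_normal(4096)
        integrated = np.cumsum(white_noise)                    # strongly correlated series
        print("alpha (uncorrelated):", round(dfa_alpha(white_noise), 2))   # ~0.5
        print("alpha (integrated)  :", round(dfa_alpha(integrated), 2))    # ~1.5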

  15. Urban photogrammetric data base for multi-purpose cadastral-based information systems: the Riyadh city case

    NASA Astrophysics Data System (ADS)

    Al-garni, Abdullah M.

    Urban information systems are economic resources that can benefit decision makers in the planning, development, and management of urban projects and resources. In this research, a conceptual model-based prototype Urban Geographic Information System (UGIS) is developed. The base maps used in developing the system and acquiring visual attributes are obtained from aerial photographs. The system is a multi-purpose parcel-based one that can serve many urban applications such as public utilities, health centres, schools, population estimation, road engineering and maintenance, and many others. A modern region in the capital city of Saudi Arabia is used for the study. The developed model is operational for one urban application (population estimation) and is tested for that particular application. The results showed that the system has a satisfactory accuracy and that it may well be promising for other similar urban applications in countries with similar demographic and social characteristics.

  16. Gaps in patient care practices to prevent hospital-acquired delirium

    PubMed Central

    Alagiakrishnan, Kannayiram; Marrie, Thomas; Rolfson, Darryl; Coke, William; Camicioli, Richard; Duggan, D’Arcy; Launhardt, Bonnie; Fisher, Bruce; Gordon, Debbie; Hervas-Malo, Marilou; Magee, Bernice; Wiens, Cheryl

    2009-01-01

    ABSTRACT OBJECTIVE To evaluate the current patient care practices that address the predisposing and precipitating factors contributing to the prevention of hospital-acquired delirium in the elderly. DESIGN Prospective cohort (observational) study. PARTICIPANTS Patients 65 years of age and older who were admitted to medical teaching units at the University of Alberta Hospital in Edmonton over a period of 7 months and who were at risk of delirium. SETTING Medical teaching units at the University of Alberta. MAIN OUTCOME MEASURES Demographic data and information on predisposing factors for hospital-acquired delirium were obtained for all patients. Documented clinical practices that likely prevent common precipitants of delirium were also recorded. RESULTS Of the 132 patients enrolled, 20 (15.2%) developed hospital-acquired delirium. At the time of admission several predisposing factors were not documented (eg, possible cognitive impairment 16 [12%], visual impairment 52 [39.4%], and functional status of activities of daily living 99 [75.0%]). Recorded precipitating factors included catheter use, screening for dehydration, and medications. Catheters were used in 35 (26.5%) patients, and fluid intake-and-output charting assessed dehydration in 57 (43.2%) patients. At the time of admission there was no documentation of hearing status in 69 (52.3%) patients and aspiration risk in 104 (78.8%) patients. After admission, reorientation measures were documented in only 16 (12.1%) patients. Although all patients had brief mental status evaluations performed once daily, this was not noted to occur twice daily (which would provide important information about fluctuation of mental status) and there was no formal attention span testing. In this study, hospital-acquired delirium was also associated with increased mortality (P < .004), increased length of stay (P < .007), and increased institutionalization (P < .027). CONCLUSION Gaps were noted in patient care practices that might contribute to hospital-acquired delirium and also in measures to identify the development of delirium at an earlier stage. Effort should be made to educate health professionals to identify the predisposing and precipitating factors, and to screen for delirium. This might improve the prevention of delirium. PMID:19826141

  17. Fast interactive exploration of 4D MRI flow data

    NASA Astrophysics Data System (ADS)

    Hennemuth, A.; Friman, O.; Schumann, C.; Bock, J.; Drexl, J.; Huellebrand, M.; Markl, M.; Peitgen, H.-O.

    2011-03-01

    1- or 2-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart diseases. Recent progress in rapid MRI has made it possible to acquire volumetric, 3-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories or possible paths into specific regions. The anatomical and functional information allows for advanced hemodynamic analysis in different application areas like stroke risk assessment, congenital and acquired heart disease, aneurysms or abdominal collaterals and cranial blood flow. The complexity of the 4D MRI flow datasets and the flow related image analysis tasks makes the development of fast comprehensive data exploration software for advanced flow analysis a challenging task. Most existing tools address only individual aspects of the analysis pipeline such as pre-processing, quantification or visualization, or are difficult to use for clinicians. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast intuitive interaction and visualization methods. The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays and flow curves offer a detailed insight into local hemodynamics. The application of the analysis pipeline is shown for 6 cases from clinical practice, illustrating the usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and even inexperienced users achieve good results within reasonable processing times.
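
    The particle-tracing idea behind the path-line views can be caricatured with a toy 2-D steady flow and forward Euler integration, as sketched below; the actual software integrates time-resolved 3-D PC-MRI velocity fields, which is not reproduced here.

        # Simple path-line sketch: integrate seed points through a toy velocity field.
        import numpy as np

        def velocity(p):
            """Toy swirling flow: rotation about the origin plus a slow outward drift."""
            x, y = p
            return np.array([-y + 0.05 * x, x + 0.05 * y])

        def trace(seed, dt=0.01, n_steps=500):
            path = [np.asarray(seed, dtype=float)]
            for _ in range(n_steps):
                path.append(path[-1] + dt * velocity(path[-1]))   # forward Euler step
            return np.array(path)

        seeds = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.5)]
        for s in seeds:
            path = trace(s)
            print(f"seed {s} -> end point {np.round(path[-1], 2)}")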

  18. North Dakota Visual Arts Content Standards.

    ERIC Educational Resources Information Center

    Shaw-Elgin, Linda; Kurkowski, Bob; Jackson, Jane; Syvertson, Karen; Whitney, Linda; Riehl, Lori

    The standards in this document are based on previous North Dakota standards, national standards, and standards from other states. The purpose of these standards is to provide a framework from which teachers in North Dakota can design their visual arts curriculum. The expectations for the knowledge and skills that students should acquire are…

  19. Three-dimensional venous visualization with phase-lag computed tomography angiography for reconstructive microsurgery.

    PubMed

    Sakakibara, Shunsuke; Onishi, Hiroyuki; Hashikawa, Kazunobu; Akashi, Masaya; Sakakibara, Akiko; Nomura, Tadashi; Terashi, Hiroto

    2015-05-01

    Most free flap reconstruction complications involve vascular compromise. Evaluation of vascular anatomy provides considerable information that can potentially minimize these complications. Previous reports have shown that contrast-enhanced computed tomography is effective for understanding three-dimensional arterial anatomy. However, most vascular complications result from venous thromboses, making imaging of venous anatomy highly desirable. The phase-lag computed tomography angiography (pl-CTA) technique involves 64-channel (virtually, 128-channel) multidetector CT and is used to acquire arterial images using conventional CTA. Venous images are three-dimensionally reconstructed using a subtraction technique involving combined venous phase and arterial phase images, using a computer workstation. This technique was used to examine 48 patients (12 lower leg reconstructions, 34 head and neck reconstructions, and 2 upper extremity reconstructions) without complications. The pl-CTA technique can be used for three-dimensional visualization of peripheral veins measuring approximately 1 mm in diameter. The pl-CTA information was especially helpful for secondary free flap reconstructions in the head and neck region after malignant tumor recurrence. In such cases, radical dissection of the neck was performed as part of the first operation, and many vessels, including veins, were resected and used in the first free-tissue transfer. The pl-CTA images also allowed visualization of varicose changes in the lower leg region and helped us avoid selecting those vessels for anastomosis. Thus, the pl-CTA-derived venous anatomy information was useful for exact evaluations during the planning of free-tissue transfers. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
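
    The subtraction step at the heart of this technique (venous phase minus arterial phase, assuming co-registered volumes) can be sketched conceptually on simulated arrays; the numbers below are invented stand-ins, not CT data.

        # Conceptual sketch: suppress arterial-phase enhancement by subtraction.
        import numpy as np

        rng = np.random.default_rng(6)
        shape = (64, 64, 64)
        background = rng.normal(40, 5, shape)                 # soft tissue (HU-like values)

        arterial_mask = np.zeros(shape, dtype=bool)
        venous_mask = np.zeros(shape, dtype=bool)
        arterial_mask[30:34, :, 20:24] = True                 # toy "artery"
        venous_mask[30:34, :, 40:44] = True                   # toy "vein"

        arterial_phase = background + 300 * arterial_mask
        venous_phase = background + 200 * arterial_mask + 250 * venous_mask

        # Subtraction keeps structures that enhance only in the venous phase.
        venogram = np.clip(venous_phase - arterial_phase, 0, None)
        print("mean subtracted value inside vein  :", venogram[venous_mask].mean().round(1))
        print("mean subtracted value inside artery:", venogram[arterial_mask].mean().round(1))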

  20. Development of a mobile emergency patient information and imaging communication system based on CDMA-1X EVDO

    NASA Astrophysics Data System (ADS)

    Yang, Keon Ho; Jung, Haijo; Kang, Won-Suk; Jang, Bong Mun; Kim, Joong Il; Han, Dong Hoon; Yoo, Sun-Kook; Yoo, Hyung-Sik; Kim, Hee-Joung

    2006-03-01

    The wireless mobile service with a high bit rate using CDMA-1X EVDO is now widely used in Korea, and mobile devices are increasingly being used alongside conventional communication mechanisms. We have developed a web-based mobile system that communicates patient information and images over CDMA-1X EVDO for emergency diagnosis. It is composed of a mobile web application system using Microsoft Windows Server 2003 and Internet Information Services, and a mobile web PACS, used as a database managing patient information and images, developed with Microsoft Access 2003. The wireless mobile emergency patient information and imaging communication system was developed using Microsoft Visual Studio .NET, and a JPEG 2000 ActiveX control for the PDA phone was developed using Microsoft Embedded Visual C++. CDMA-1X EVDO is used for connections between the mobile web server and the PDA phone. This system allows fast access to the patient information database, storing both medical images and patient information, anytime and anywhere. In particular, images were compressed into the JPEG 2000 format and transmitted from a mobile web PACS inside the hospital to a radiologist using a PDA phone located outside the hospital. The system also shows radiological images as well as physiological signal data, including blood pressure, vital signs and so on, in the web browser of the PDA phone so that radiologists can diagnose more effectively. Good results were obtained using an RW-6100 PDA phone in the university hospital system of the Sinchon Severance Hospital in Korea.

  1. Localized direction selective responses in the dendrites of visual interneurons of the fly

    PubMed Central

    2010-01-01

    Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has so far received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983

  2. Development of a Web-Based Visualization Platform for Climate Research Using Google Earth

    NASA Technical Reports Server (NTRS)

    Sun, Xiaojuan; Shen, Suhung; Leptoukh, Gregory G.; Wang, Panxing; Di, Liping; Lu, Mingyue

    2011-01-01

    Recently, it has become easier to access climate data from satellites, ground measurements, and models from various data centers. However, searching, accessing, and processing heterogeneous data from different sources are very time-consuming tasks, and there is a lack of a comprehensive visual platform to acquire distributed and heterogeneous scientific data and to render processed images from a single access point for climate studies. This paper documents the design and implementation of a web-based visual, interoperable, and scalable platform that is able to access climatological fields from models, satellites, and ground stations from a number of data sources, using Google Earth (GE) as a common graphical interface. The development is based on the TCP/IP protocol and various open data-sharing technologies, such as OPeNDAP, GDS, Web Processing Service (WPS), and Web Mapping Service (WMS). The visualization capability of integrating various measurements into GE dramatically extends the awareness and visibility of scientific results. Using the embedded geographic information in GE, the designed system improves our understanding of the relationships of different elements in a four-dimensional domain. The system enables easy and convenient synergistic research on a virtual platform for professionals and the general public, greatly advancing global data sharing and scientific research collaboration.

  3. Visual brain activity patterns classification with simultaneous EEG-fMRI: A multimodal approach.

    PubMed

    Ahmad, Rana Fayyaz; Malik, Aamir Saeed; Kamel, Nidal; Reza, Faruque; Amin, Hafeez Ullah; Hussain, Muhammad

    2017-01-01

    Classification of visual information from brain activity data is a challenging task. Many studies reported in the literature are based on brain activity patterns measured with either fMRI or EEG/MEG alone. EEG and fMRI are considered complementary neuroimaging modalities in terms of their temporal and spatial resolution for mapping brain activity, so simultaneous EEG-fMRI is attractive for obtaining high spatial and temporal resolution at the same time. In this article, we propose a new method based on simultaneous EEG-fMRI data and a machine learning approach to classify visual brain activity patterns. We acquired EEG-fMRI data simultaneously from ten healthy human participants while showing them visual stimuli. A data fusion approach is used to merge the EEG and fMRI data, and a machine learning classifier is used for classification. Results showed that superior classification performance was achieved with simultaneous EEG-fMRI data compared with EEG or fMRI data alone. This shows that the multimodal approach improved classification accuracy compared with other approaches reported in the literature. The proposed simultaneous EEG-fMRI approach for classifying brain activity patterns can be helpful for predicting or fully decoding brain activity patterns.
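
    A feature-level fusion of the kind described above can be sketched as concatenating per-trial EEG and fMRI feature vectors before cross-validating a classifier; the data, feature sizes, and classifier below are simulated, illustrative assumptions rather than the authors' exact pipeline.

        # Hedged sketch of multimodal feature fusion for classification.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(7)
        n_trials = 120
        labels = rng.integers(0, 2, n_trials)                 # two visual stimulus classes

        # Simulated features: each modality carries a weak class-related signal.
        eeg = rng.standard_normal((n_trials, 60)) + 0.4 * labels[:, None]
        fmri = rng.standard_normal((n_trials, 200)) + 0.4 * labels[:, None]
        fused = np.hstack([eeg, fmri])

        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))
        for name, X in [("EEG only", eeg), ("fMRI only", fmri), ("EEG+fMRI", fused)]:
            acc = cross_val_score(clf, X, labels, cv=5).mean()
            print(f"{name:9s} accuracy: {acc:.2f}")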

  4. Online Multi-Modal Robust Non-Negative Dictionary Learning for Visual Tracking

    PubMed Central

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality. PMID:25961715
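
    The non-negativity-preserving multiplicative updates mentioned above have the same form as standard non-negative matrix factorization updates, sketched below for a single modality with a squared loss. The full OMRNDL method is online, multi-modal, and robustified via M-estimation, none of which is reproduced in this toy version.

        # Sketch of the multiplicative-update core (batch, single modality):
        # both factors stay non-negative at every iteration.
        import numpy as np

        rng = np.random.default_rng(8)
        n_features, n_samples, n_atoms = 100, 50, 8

        X = np.abs(rng.standard_normal((n_features, n_samples)))     # non-negative data
        D = np.abs(rng.standard_normal((n_features, n_atoms)))       # dictionary (templates)
        H = np.abs(rng.standard_normal((n_atoms, n_samples)))        # coefficients

        eps = 1e-9
        for _ in range(200):
            H *= (D.T @ X) / (D.T @ D @ H + eps)
            D *= (X @ H.T) / (D @ H @ H.T + eps)

        reconstruction_error = np.linalg.norm(X - D @ H) / np.linalg.norm(X)
        print(f"relative reconstruction error: {reconstruction_error:.3f}")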

  5. Online multi-modal robust non-negative dictionary learning for visual tracking.

    PubMed

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality.

  6. Active and passive spatial learning in human navigation: acquisition of graph knowledge.

    PubMed

    Chrastil, Elizabeth R; Warren, William H

    2015-07-01

    It is known that active exploration of a new environment leads to better spatial learning than does passive visual exposure. We ask whether specific components of active learning differentially contribute to particular forms of spatial knowledge (the exploration-specific learning hypothesis). Previously, we found that idiothetic information during walking is the primary active contributor to metric survey knowledge (Chrastil & Warren, 2013). In this study, we test the contributions of 3 components to topological graph and route knowledge: visual information, idiothetic information, and cognitive decision making. Four groups of participants learned the locations of 8 objects in a virtual hedge maze by (a) walking or (b) watching a video, crossed with (1) either making decisions about their path or (2) being guided through the maze. Route and graph knowledge were assessed by walking in the maze corridors from a starting object to the remembered location of a test object, with frequent detours. Decision making during exploration significantly contributed to subsequent route finding in the walking condition, whereas idiothetic information did not. Participants took novel routes and the metrically shortest routes on the majority of both direct and barrier trials, indicating that labeled graph knowledge, not merely route knowledge, was acquired. We conclude that, consistent with the exploration-specific learning hypothesis, decision making is the primary component of active learning for the acquisition of topological graph knowledge, whereas idiothetic information is the primary component for metric survey knowledge. (c) 2015 APA, all rights reserved.

  7. PIML: the Pathogen Information Markup Language.

    PubMed

    He, Yongqun; Vines, Richard R; Wattam, Alice R; Abramochkin, Georgiy V; Dickerman, Allan W; Eckart, J Dana; Sobral, Bruno W S

    2005-01-01

    A vast amount of information about human, animal and plant pathogens has been acquired, stored and displayed in varied formats through different resources, both electronically and otherwise. However, there is no community standard format for organizing this information, nor agreement on machine-readable format(s) for data exchange, which hampers interoperation efforts across information systems harboring such infectious disease data. The Pathogen Information Markup Language (PIML) is a free, open, XML-based format for representing pathogen information. XSLT-based visual presentations of valid PIML documents were developed and can be accessed through the PathInfo website or as part of the interoperable web services federation known as ToolBus/PathPort. Currently, detailed PIML documents are available for 21 pathogens deemed of high priority with regard to public health and national biological defense. A dynamic query system allows simple queries as well as comparisons among these pathogens. Continuing efforts are being made to involve other groups in supporting PIML and to develop more PIML documents. All PIML-related information is accessible from http://www.vbi.vt.edu/pathport/pathinfo/

  8. Head-Up Auditory Displays for Traffic Collision Avoidance System Advisories: A Preliminary Investigation

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.

  9. Azimuthal phase retardation microscope for visualizing actin filaments of biological cells

    NASA Astrophysics Data System (ADS)

    Shin, In Hee; Shin, Sang-Mo

    2011-09-01

    We developed a new theory-based azimuthal phase retardation microscope to visualize distributions of actin filaments in biological cells without labeling them with exogenous dyes, fluorescent labels, or stains. The azimuthal phase retardation microscope visualizes distributions of actin filaments by measuring the intensity variations of each pixel of a charge-coupled device camera while rotating a single linear polarizer. Azimuthal phase retardation δ between two fixed principal axes was obtained by calculating the rotation angles of the polarizer at the intensity minima from the acquired intensity data. We acquired azimuthal phase retardation distributions of human breast cancer cells (MDA MB 231) with our microscope and compared them with the fluorescence image of actin filaments obtained by a commercial fluorescence microscope. We also observed the movement of human umbilical cord blood-derived mesenchymal stem cells by measuring azimuthal phase retardation distributions.
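    The per-pixel processing described above lends itself to a short numerical sketch. The Python fragment below is illustrative only; it is not the authors' code, and the sinusoidal intensity model and synthetic test data are assumptions. Each pixel's intensity across polarizer angles is fit to a 2-theta sinusoid, and the polarizer angle of minimum intensity is read off from the fitted coefficients.

    import numpy as np

    def polarizer_minimum_angle(images, angles_deg):
        """Assumed model (not the authors' code): I(theta) ~ a + b*cos(2*theta) + c*sin(2*theta).
        images: (K, H, W) stack acquired at polarizer angles angles_deg (K,).
        Returns an (H, W) map of the polarizer angle (degrees) of minimum intensity."""
        theta = np.deg2rad(angles_deg)
        A = np.stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)], axis=1)
        K, H, W = images.shape
        coeffs, *_ = np.linalg.lstsq(A, images.reshape(K, -1), rcond=None)  # a, b, c per pixel
        b, c = coeffs[1], coeffs[2]
        theta_min = 0.5 * np.arctan2(c, b) + np.pi / 2   # minimum of a + R*cos(2*theta - phi)
        return np.rad2deg(theta_min).reshape(H, W) % 180.0

    if __name__ == "__main__":
        angles = np.linspace(0, 175, 36)                 # synthetic acquisition, 5-degree steps
        truth = np.full((4, 4), 30.0)                    # "true" minimum angle in degrees
        stack = 1.0 - np.cos(2 * np.deg2rad(angles[:, None, None] - truth))
        print(np.allclose(polarizer_minimum_angle(stack, angles), truth))  # True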

  10. 3D visualization of Thoraco-Lumbar Spinal Lesions in German Shepherd Dog

    NASA Astrophysics Data System (ADS)

    Azpiroz, J.; Krafft, J.; Cadena, M.; Rodríguez, A. O.

    2006-09-01

    Computed tomography (CT) has been found to be an excellent imaging modality due to its sensitivity to characterize the morphology of the spine in dogs. This technique is considered to be particularly helpful for diagnosing spinal cord atrophy and spinal stenosis. The three-dimensional visualization of organs and bones can significantly improve the diagnosis of certain diseases in dogs. CT images of a German shepherd dog's spinal cord were acquired to generate image stacks, which were then digitally processed and arranged into a volume image. All images were acquired using standard clinical protocols on a clinical CT scanner. The three-dimensional visualization allowed us to observe anatomical structures that otherwise are not possible to observe with two-dimensional images. The combination of an imaging modality like CT together with image processing techniques can be a powerful tool for the diagnosis of a number of animal diseases.

  11. The Strategic Organization of Skill

    NASA Technical Reports Server (NTRS)

    Roberts, Ralph

    1996-01-01

    Eye-movement software was developed in addition to several studies that focused on expert-novice differences in the acquisition and organization of skill. These studies focused on how increasingly complex strategies utilize and incorporate visual look-ahead to calibrate action. Software for collecting, calibrating, and scoring eye-movements was refined and updated. Some new algorithms were developed for analyzing corneal-reflection eye movement data that detect the location of saccadic eye movements in space and time. Two full-scale studies were carried out which examined how experts use foveal and peripheral vision to acquire information about upcoming environmental circumstances in order to plan future action(s) accordingly.
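    As a concrete illustration of the kind of algorithm alluded to, the sketch below implements a generic velocity-threshold saccade detector in Python. The report does not specify its algorithms, so the threshold, sampling rate, and synthetic gaze trace here are assumptions rather than a reconstruction of the refined software.

    import numpy as np

    def detect_saccades(x, y, t, vel_thresh=30.0):
        """Generic velocity-threshold detector (assumed parameters).
        x, y: gaze position in degrees; t: time in seconds.
        Returns a list of (onset_time, offset_time, end_x, end_y) tuples."""
        vx, vy = np.gradient(x, t), np.gradient(y, t)
        fast = np.hypot(vx, vy) > vel_thresh          # samples exceeding the velocity threshold
        saccades, i = [], 0
        while i < len(fast):
            if fast[i]:
                j = i
                while j + 1 < len(fast) and fast[j + 1]:
                    j += 1
                saccades.append((t[i], t[j], x[j], y[j]))
                i = j + 1
            else:
                i += 1
        return saccades

    if __name__ == "__main__":
        t = np.arange(0, 1.0, 0.002)                  # hypothetical 500 Hz eye tracker
        x = np.where(t < 0.5, 0.0, 10.0) + 0.01 * np.random.default_rng(5).normal(size=t.size)
        y = np.zeros_like(t)
        print(detect_saccades(x, y, t))               # one saccade detected around t = 0.5 s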

  12. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    PubMed

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  13. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training

    PubMed Central

    Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566

  14. Visualizing microvascular flow variation in OCTA using variable interscan time analysis (VISTA) (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Moult, Eric M.; Ploner, Stefan A.; Choi, WooJhon; Lee, ByungKun; Husvogt, Lennart A.; Lu, Chen D.; Novais, Eduardo; Cole, Emily D.; Potsaid, Benjamin M.; Duker, Jay S.; Hornegger, Joachim; Meier, Andreas K.; Waheed, Nadia K.; Fujimoto, James G.

    2017-02-01

    OCT angiography (OCTA) has recently garnered immense interest in clinical ophthalmology, permitting ocular vasculature to be viewed in exquisite detail, in vivo, and without the injection of exogenous dyes. However, commercial OCTA systems provide little information about actual erythrocyte speeds; instead, OCTA is typically used to visualize the presence and/or absence of vasculature. This is an important limitation because in many ocular diseases, including diabetic retinopathy (DR) and age-related macular degeneration (AMD), alterations in blood flow, but not necessarily only the presence or absence of vasculature, are thought to be important in understanding pathogenesis. To address this limitation, we have developed an algorithm, variable interscan time analysis (VISTA), which is capable of resolving different erythrocyte speeds. VISTA works by acquiring >2 repeated B-scans, and then computing multiple OCTA signals corresponding to different effective interscan times. The OCTA signals corresponding to different effective interscan times contain independent information about erythrocyte speed. In this study we provide a theoretical overview of VISTA, and investigate the utility of VISTA in studying blood flow alterations in ocular disease. OCTA-VISTA images of eyes with choroidal neovascularization, geographic atrophy, and diabetic retinopathy are presented.
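    A minimal numerical sketch of the underlying idea follows; it is not the authors' VISTA implementation, and the amplitude-decorrelation metric, repeat count, and synthetic data are assumptions. Given N repeated B-scans separated by an interscan time dt, OCTA-like images for effective interscan times k*dt are formed from scan pairs k acquisitions apart; comparing these images across k is what carries the erythrocyte-speed information.

    import numpy as np

    def octa_decorrelation(bscans, k):
        """bscans: (N, H, W) repeated amplitude B-scans; k: pair separation (effective time k*dt).
        Returns the mean amplitude decorrelation over all pairs k scans apart (assumed metric)."""
        pairs = [(bscans[i], bscans[i + k]) for i in range(bscans.shape[0] - k)]
        decorr = [1.0 - (2 * a * b) / (a**2 + b**2 + 1e-12) for a, b in pairs]
        return np.mean(decorr, axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        static = rng.uniform(0.5, 1.0, size=(64, 64))          # static tissue amplitude
        scans = np.stack([static.copy() for _ in range(5)])    # five repeated B-scans
        scans[:, 20:30, 20:30] += rng.uniform(0.0, 0.5, size=(5, 10, 10))  # "flow" patch varies
        d_short = octa_decorrelation(scans, k=1)               # short effective interscan time
        d_long = octa_decorrelation(scans, k=2)                # longer effective interscan time
        print(d_short[25, 25] > d_short[5, 5])                 # flow decorrelates more than static tissue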

  15. From basic to applied research to improve outcomes for individuals who require augmentative and alternative communication: potential contributions of eye tracking research methods.

    PubMed

    Light, Janice; McNaughton, David

    2014-06-01

    In order to improve outcomes for individuals who require AAC, there is an urgent need for research across the full spectrum--from basic research to investigate fundamental language and communication processes, to applied clinical research to test applications of this new knowledge in the real world. To date, there has been a notable lack of basic research in the AAC field to investigate the underlying cognitive, sensory perceptual, linguistic, and motor processes of individuals with complex communication needs. Eye tracking research technology provides a promising method for researchers to investigate some of the visual cognitive processes that underlie interaction via AAC. The eye tracking research technology automatically records the latency, duration, and sequence of visual fixations, providing key information on what elements attract the individual's attention (and which ones do not), for how long, and in what sequence. As illustrated by the papers in this special issue, this information can be used to improve the design of AAC systems, assessments, and interventions to better meet the needs of individuals with developmental and acquired disabilities who require AAC (e.g., individuals with autism spectrum disorders, Down syndrome, intellectual disabilities of unknown origin, aphasia).

  16. Thermal feature extraction of servers in a datacenter using thermal image registration

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
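    The standardization step can be pictured with a short, purely illustrative Python sketch; the region boxes, z-score normalization, and feature choices below are assumptions rather than the paper's exact procedure. Once thermal and visual images are registered, each server's temperature distribution is normalized so that simple hot-spot features no longer depend on camera-dependent absolute intensity levels.

    import numpy as np

    def standardize_region(thermal, box):
        """thermal: (H, W) temperature map; box: (row0, row1, col0, col1) server region (hypothetical)."""
        r0, r1, c0, c1 = box
        region = thermal[r0:r1, c0:c1].astype(float)
        return (region - region.mean()) / (region.std() + 1e-9)

    def hotspot_features(std_region, z_thresh=2.0):
        """Simple features computed on the standardized distribution (illustrative choices)."""
        return {"hot_fraction": float((std_region > z_thresh).mean()),
                "peak_z": float(std_region.max())}

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        frame = 25 + rng.normal(0, 0.5, size=(120, 160))   # ambient temperatures, deg C
        frame[40:60, 50:70] += 8.0                          # a hot server inlet
        print(hotspot_features(standardize_region(frame, (30, 70, 40, 80))))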

  17. PeptideDepot: flexible relational database for visual analysis of quantitative proteomic data and integration of existing protein information.

    PubMed

    Yu, Kebing; Salomon, Arthur R

    2009-12-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through MS/MS. Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to various experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our high throughput autonomous proteomic pipeline used in the automated acquisition and post-acquisition analysis of proteomic data.

  18. Influence of eddy current, Maxwell and gradient field corrections on 3D flow visualization of 3D CINE PC-MRI data.

    PubMed

    Lorenz, Ramona; Bock, Jelena; Snyder, Jeff; Korvink, Jan G; Jung, Bernd A; Markl, Michael

    2014-07-01

    The measurement of velocities based on phase contrast MRI can be subject to different phase offset errors which can affect the accuracy of velocity data. The purpose of this study was to determine the impact of these inaccuracies and to evaluate different correction strategies on three-dimensional visualization. Phase contrast MRI was performed on a 3 T system (Siemens Trio) for in vitro (curved/straight tube models; venc: 0.3 m/s) and in vivo (aorta/intracranial vasculature; venc: 1.5/0.4 m/s) data. For comparison of the impact of different magnetic field gradient designs, in vitro data was additionally acquired on a wide bore 1.5 T system (Siemens Espree). Different correction methods were applied to correct for eddy currents, Maxwell terms, and gradient field inhomogeneities. The application of phase offset correction methods led to an improvement of three-dimensional particle trace visualization and count. The most pronounced differences were found for in vivo/in vitro data (68%/82% more particle traces) acquired with a low venc (0.3 m/s/0.4 m/s, respectively). In vivo data acquired with high venc (1.5 m/s) showed noticeable but only minor improvement. This study suggests that the correction of phase offset errors can be important for a more reliable visualization of particle traces but is strongly dependent on the velocity sensitivity, object geometry, and gradient coil design. Copyright © 2013 Wiley Periodicals, Inc.
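    For orientation, one widely used first-order correction of this kind can be sketched as follows; it illustrates only an eddy-current-type background phase offset, not the Maxwell-term or gradient-field corrections evaluated in the study, and the plane model and synthetic data are assumptions. A low-order polynomial is fit to the velocity of static tissue and subtracted from the whole velocity map.

    import numpy as np

    def plane_fit_correction(vel, static_mask):
        """vel: (H, W) velocity map; static_mask: boolean mask of static tissue.
        Fits offset = c0 + c1*x + c2*y on static tissue and subtracts it (assumed first-order model)."""
        H, W = vel.shape
        yy, xx = np.mgrid[0:H, 0:W]
        A = np.column_stack([np.ones(static_mask.sum()), xx[static_mask], yy[static_mask]])
        coef, *_ = np.linalg.lstsq(A, vel[static_mask], rcond=None)
        return vel - (coef[0] + coef[1] * xx + coef[2] * yy)

    if __name__ == "__main__":
        H, W = 64, 64
        yy, xx = np.mgrid[0:H, 0:W]
        true_offset = 0.002 * xx - 0.001 * yy + 0.01        # slowly varying phase offset (m/s)
        flow = np.zeros((H, W)); flow[28:36, 28:36] = 0.25  # a small vessel
        corrected = plane_fit_correction(flow + true_offset, flow == 0)
        print(abs(corrected[32, 32] - 0.25) < 1e-6, abs(corrected[5, 5]) < 1e-6)  # True True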

  19. Influence of Eddy Current, Maxwell and Gradient Field Corrections on 3D Flow Visualization of 3D CINE PC-MRI Data

    PubMed Central

    Lorenz, R.; Bock, J.; Snyder, J.; Korvink, J.G.; Jung, B.A.; Markl, M.

    2013-01-01

    Purpose The measurement of velocities based on PC-MRI can be subject to different phase offset errors which can affect the accuracy of velocity data. The purpose of this study was to determine the impact of these inaccuracies and to evaluate different correction strategies on 3D visualization. Methods PC-MRI was performed on a 3 T system (Siemens Trio) for in vitro (curved/straight tube models; venc: 0.3 m/s) and in vivo (aorta/intracranial vasculature; venc: 1.5/0.4 m/s) data. For comparison of the impact of different magnetic field gradient designs, in vitro data was additionally acquired on a wide bore 1.5 T system (Siemens Espree). Different correction methods were applied to correct for eddy currents, Maxwell terms and gradient field inhomogeneities. Results The application of phase offset correction methods led to an improvement of 3D particle trace visualization and count. The most pronounced differences were found for in vivo/in vitro data (68%/82% more particle traces) acquired with a low venc (0.3 m/s/0.4 m/s, respectively). In vivo data acquired with high venc (1.5 m/s) showed noticeable but only minor improvement. Conclusion This study suggests that the correction of phase offset errors can be important for a more reliable visualization of particle traces but is strongly dependent on the velocity sensitivity, object geometry, and gradient coil design. PMID:24006013

  20. Discriminative stimuli that control instrumental tobacco-seeking by human smokers also command selective attention.

    PubMed

    Hogarth, Lee; Dickinson, Anthony; Duka, Theodora

    2003-08-01

    Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, the skin conductance response to the S+ and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli that signal that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and to elicit instrumental tobacco-seeking behaviour.

  1. Big Data and Deep data in scanning and electron microscopies: functionality from multidimensional data sets

    DOE PAGES

    Belianinov, Alex; Vasudevan, Rama K; Strelcov, Evgheni; ...

    2015-05-13

    The development of electron and scanning probe microscopies in the second half of the twentieth century has produced spectacular images of the internal structure and composition of matter at nanometer, molecular, and atomic resolution. Largely, this progress was enabled by computer-assisted methods of microscope operation, data acquisition and analysis. The progress in imaging technologies in the beginning of the twenty-first century has opened the proverbial floodgates of high-veracity information on structure and functionality. High resolution imaging now allows information on atomic positions with picometer precision, allowing for quantitative measurements of individual bond lengths and angles. Functional imaging often leads to multidimensional data sets containing partial or full information on properties of interest, acquired as a function of multiple parameters (time, temperature, or other external stimuli). Here, we review several recent applications of big and deep data analysis methods to visualize, compress, and translate this multidimensional imaging data into physically and chemically relevant information.

  2. Big Data and Deep data in scanning and electron microscopies: functionality from multidimensional data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belianinov, Alex; Vasudevan, Rama K; Strelcov, Evgheni

    The development of electron and scanning probe microscopies in the second half of the twentieth century has produced spectacular images of the internal structure and composition of matter at nanometer, molecular, and atomic resolution. Largely, this progress was enabled by computer-assisted methods of microscope operation, data acquisition and analysis. The progress in imaging technologies in the beginning of the twenty-first century has opened the proverbial floodgates of high-veracity information on structure and functionality. High resolution imaging now allows information on atomic positions with picometer precision, allowing for quantitative measurements of individual bond lengths and angles. Functional imaging often leads to multidimensional data sets containing partial or full information on properties of interest, acquired as a function of multiple parameters (time, temperature, or other external stimuli). Here, we review several recent applications of big and deep data analysis methods to visualize, compress, and translate this multidimensional imaging data into physically and chemically relevant information.

  3. Gross Motor Skill Acquisition in Adolescents with Down Syndrome

    ERIC Educational Resources Information Center

    Meegan, Sarah; Maraj, Brian K. V.; Weeks, Daniel; Chua, Romeo

    2006-01-01

    The purpose of this study was to assess whether verbal-motor performance deficits exhibited by individuals with Down syndrome limited their ability to acquire gross motor skills when given visual and verbal instruction together and then transferred to either a visual or verbal instructional mode to reproduce the movement. Nine individuals with…

  4. Age-of-Acquisition Effects in Visual Word Recognition: Evidence from Expert Vocabularies

    ERIC Educational Resources Information Center

    Stadthagen-Gonzalez, Hans; Bowers, Jeffrey S.; Damian, Markus F.

    2004-01-01

    Three experiments assessed the contributions of age-of-acquisition (AoA) and frequency to visual word recognition. Three databases were created from electronic journals in chemistry, psychology and geology in order to identify technical words that are extremely frequent in each discipline but acquired late in life. In Experiment 1, psychologists…

  5. Acquisition of Visual Perceptual Skills from Worked Examples: Learning to Interpret Electrocardiograms (ECGs)

    ERIC Educational Resources Information Center

    van den Berge, Kees; van Gog, Tamara; Mamede, Silvia; Schmidt, Henk G.; van Saase, Jan L. C. M.; Rikers, Remy M. J. P.

    2013-01-01

    Research has shown that for acquiring problem-solving skills, instruction consisting of studying worked examples is more effective and efficient for novice learners than instruction consisting of problem-solving. This study investigated whether worked examples would also be a useful instructional format for the acquisition of visual perceptual…

  6. Acquisition of Chinese characters: the effects of character properties and individual differences among second language learners

    PubMed Central

    Kuo, Li-Jen; Kim, Tae-Jin; Yang, Xinyuan; Li, Huiwen; Liu, Yan; Wang, Haixia; Hyun Park, Jeong; Li, Ying

    2015-01-01

    In light of the dramatic growth of Chinese learners worldwide and a need for cross-linguistic research on Chinese literacy development, this study drew upon theories of visual complexity effect (Su and Samuels, 2010) and dual-coding processing (Sadoski and Paivio, 2013) and investigated (a) the effects of character properties (i.e., visual complexity and radical presence) on character acquisition and (b) the relationship between individual learner differences in radical awareness and character acquisition. Participants included adolescent English-speaking beginning learners of Chinese in the U.S. Following Kuo et al. (2014), a novel character acquisition task was used to investigate the process of acquiring the meaning of new characters. Results showed that (a) characters with radicals and with less visual complexity were easier to acquire than characters without radicals and with greater visual complexity; and (b) individual differences in radical awareness were associated with the acquisition of all types of characters, but the association was more pronounced with the acquisition of characters with radicals. Theoretical and practical implications of the findings were discussed. PMID:26379562

  7. The forensic validity of visual analytics

    NASA Astrophysics Data System (ADS)

    Erbacher, Robert F.

    2008-01-01

    The wider use of visualization and visual analytics in wide-ranging fields has led to the need for visual analytics capabilities to be legally admissible, especially when applied to digital forensics. This brings the need to consider legal implications when performing visual analytics, an issue not traditionally examined in visualization and visual analytics techniques and research. While digital data is generally admissible under the Federal Rules of Evidence [10][21], a comprehensive validation of the digital evidence is considered prudent. A comprehensive validation requires validation of the digital data under rules for authentication, hearsay, best evidence rule, and privilege. Additional issues with digital data arise when exploring digital data related to admissibility and the validity of what information was examined, to what extent, and whether the analysis process was sufficiently covered by a search warrant. For instance, a search warrant generally covers very narrow requirements as to what law enforcement is allowed to examine and acquire during an investigation. When searching a hard drive for child pornography, how admissible is evidence of an unrelated crime, e.g., drug dealing? This is further complicated by the concept of "in plain view": when performing an analysis of a hard drive, what would be considered "in plain view"? The purpose of this paper is to discuss the issues of digital forensics and the related issues as they apply to visual analytics and identify how visual analytics techniques fit into the digital forensics analysis process, how visual analytics techniques can improve the legal admissibility of digital data, and identify what research is needed to further improve this process. The goal of this paper is to open up consideration of legal ramifications among the visualization community; the author is not a lawyer and the discussions are not meant to be inclusive of all differences in laws between states and countries.

  8. Pre-clinical evaluation of a nanoparticle-based blood-pool contrast agent for MR imaging of the placenta.

    PubMed

    Ghaghada, Ketan B; Starosolski, Zbigniew A; Bhayana, Saakshi; Stupin, Igor; Patel, Chandreshkumar V; Bhavane, Rohan C; Gao, Haijun; Bednov, Andrey; Yallampalli, Chandrasekhar; Belfort, Michael; George, Verghese; Annapragada, Ananth V

    2017-09-01

    Non-invasive 3D imaging that enables clear visualization of placental margins is of interest in the accurate diagnosis of placental pathologies. This study investigated if contrast-enhanced MRI performed using a liposomal gadolinium blood-pool contrast agent (liposomal-Gd) enables clear visualization of the placental margins and the placental-myometrial interface (retroplacental space). Non-contrast MRI and contrast-enhanced MRI using a clinically approved conventional contrast agent were used as comparators. Studies were performed in pregnant rats under an approved protocol. MRI was performed at 1T using a permanent magnet small animal scanner. Pre-contrast and post-liposomal-Gd contrast images were acquired using T1-weighted and T2-weighted sequences. Dynamic Contrast enhanced MRI (DCE-MRI) was performed using gadoterate meglumine (Gd-DOTA, Dotarem ® ). Visualization of the retroplacental clear space, a marker of normal placentation, was judged by a trained radiologist. Signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were calculated for both single and averaged acquisitions. Images were reviewed by a radiologist and scored for the visualization of placental features. Contrast-enhanced CT (CE-CT) imaging using a liposomal CT agent was performed for confirmation of the MR findings. Transplacental transport of liposomal-Gd was evaluated by post-mortem elemental analysis of tissues. Ex-vivo studies in perfused human placentae from normal, GDM, and IUGR pregnancies evaluated the transport of liposomal agent across the human placental barrier. Post-contrast T1w images acquired with liposomal-Gd demonstrated significantly higher SNR (p = 0.0002) in the placenta compared to pre-contrast images (28.0 ± 4.7 vs. 6.9 ± 1.8). No significant differences (p = 0.39) were noted between SNR in pre-contrast and post-contrast liposomal-Gd images of the amniotic fluid, indicating absence of transplacental passage of the agent. The placental margins were significantly (p < 0.001) better visualized on post-contrast liposomal-Gd images. DCE-MRI with the conventional Gd agent demonstrated retrograde opacification of the placenta from fetal edge to the myometrium, consistent with the anatomy of the rat placenta. However, no consistent and reproducible visualization of the retroplacental space was demonstrated on the conventional Gd-enhanced images. The retroplacental space was only visualized on post-contrast T1w images acquired using the liposomal agent (SNR = 15.5 ± 3.4) as a sharply defined, hypo-enhanced interface. The retroplacental space was also visible as a similar hypo-enhancing interface on CE-CT images acquired using a liposomal CT contrast agent. Tissue analysis demonstrated undetectably low transplacental permeation of liposomal-Gd, and was confirmed by lack of permeation through a perfused human placental model. Contrast-enhanced T1w-MRI performed using liposomal-Gd enabled clear visualization of placental margins and delineation of the retroplacental space from the rest of the placenta; the space is undetectable on non-contrast imaging and on post-contrast T1w images acquired using a conventional, clinically approved Gd chelate contrast agent. Copyright © 2017 Elsevier Ltd. All rights reserved.
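    The SNR and CNR figures quoted above follow the usual region-of-interest definitions; the short Python sketch below shows those generic formulas only, and the study's exact ROI placement and averaging scheme are not reproduced (the synthetic values are assumptions).

    import numpy as np

    def snr(signal_roi, noise_roi):
        """Commonly used definition: mean signal divided by the standard deviation of a noise ROI."""
        return float(signal_roi.mean() / noise_roi.std())

    def cnr(roi_a, roi_b, noise_roi):
        """Commonly used definition: absolute mean difference between two tissues over the noise SD."""
        return float(abs(roi_a.mean() - roi_b.mean()) / noise_roi.std())

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        placenta = rng.normal(28.0, 2.0, size=(50, 50))      # hypothetical post-contrast intensities
        myometrium = rng.normal(18.0, 2.0, size=(50, 50))
        background = rng.normal(0.0, 2.0, size=(50, 50))
        print(round(snr(placenta, background), 1), round(cnr(placenta, myometrium, background), 1))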

  9. HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition.

    PubMed

    Lagorce, Xavier; Orchard, Garrick; Galluppi, Francesco; Shi, Bertram E; Benosman, Ryad B

    2017-07-01

    This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy.
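    The time-surface notion at the core of this architecture can be sketched compactly. The fragment below is a simplified illustration in the spirit of the description, not the authors' reference implementation: the radius, time constant, and single-polarity treatment are assumptions, and the hierarchical layers and feature clustering of the full model are omitted. For each incoming event, the most recent event times in a local neighborhood are exponentially decayed relative to the current time.

    import numpy as np

    class TimeSurface:
        """Simplified single-polarity time-surface (illustrative parameters, not the paper's code)."""
        def __init__(self, width, height, radius=2, tau=50e-3):
            self.last_t = np.full((height, width), -np.inf)   # last event time per pixel
            self.radius, self.tau = radius, tau

        def update(self, x, y, t):
            """Record an event at (x, y, t) and return its (2R+1)x(2R+1) time-surface patch."""
            self.last_t[y, x] = t
            r = self.radius
            patch = np.full((2 * r + 1, 2 * r + 1), -np.inf)
            y0, y1 = max(0, y - r), min(self.last_t.shape[0], y + r + 1)
            x0, x1 = max(0, x - r), min(self.last_t.shape[1], x + r + 1)
            patch[y0 - y + r:y1 - y + r, x0 - x + r:x1 - x + r] = self.last_t[y0:y1, x0:x1]
            return np.exp((patch - t) / self.tau)             # 1 at the event, decaying toward 0

    if __name__ == "__main__":
        ts = TimeSurface(width=32, height=32)
        for k, (x, y) in enumerate([(10, 10), (11, 10), (12, 10)]):
            surface = ts.update(x, y, t=k * 1e-3)             # a small rightward-moving edge
        print(surface.shape, float(surface[2, 2]))            # (5, 5) 1.0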

  10. MATISSE a web-based tool to access, visualize and analyze high resolution minor bodies observation

    NASA Astrophysics Data System (ADS)

    Zinzi, Angelo; Capria, Maria Teresa; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo

    2016-07-01

    In recent years, planetary exploration missions have acquired data from minor bodies (i.e., dwarf planets, asteroids, and comets) at a level of detail never reached before. Since these objects often present very irregular shapes (as in the case of the comet 67P Churyumov-Gerasimenko, target of the ESA Rosetta mission), "classical" two-dimensional projections of observations are difficult to understand. With the aim of providing the scientific community a tool to access, visualize and analyze data in a new way, ASI Science Data Center started to develop MATISSE (Multi-purposed Advanced Tool for the Instruments for the Solar System Exploration - http://tools.asdc.asi.it/matisse.jsp) in late 2012. This tool allows 3D web-based visualization of data acquired by planetary exploration missions: the output can either be the straightforward projection of the selected observation over the shape model of the target body or the visualization of a high-order product (average/mosaic, difference, ratio, RGB) computed directly online with MATISSE. Standard outputs of the tool also comprise downloadable files to be used with GIS software (GeoTIFF and ENVI formats) and very high-resolution 3D files to be viewed by means of the free software Paraview. During this period, the first and most frequent use of the tool has been the visualization of data acquired by the VIRTIS-M instrument onboard Rosetta observing the comet 67P. The success of this task, well represented by the number of published works that used images made with MATISSE, confirmed the need for a different approach to correctly visualize data coming from irregularly shaped bodies. In the near future the datasets available to MATISSE will be extended, starting with the addition of VIR-Dawn observations of both Vesta and Ceres, and also using standard protocols to access data stored in external repositories, such as NASA ODE and Planetary VO.

  11. Acquired color vision deficiency.

    PubMed

    Simunovic, Matthew P

    2016-01-01

    Acquired color vision deficiency occurs as the result of ocular, neurologic, or systemic disease. A wide array of conditions may affect color vision, ranging from diseases of the ocular media through to pathology of the visual cortex. Traditionally, acquired color vision deficiency is considered a separate entity from congenital color vision deficiency, although emerging clinical and molecular genetic data would suggest a degree of overlap. We review the pathophysiology of acquired color vision deficiency, the data on its prevalence, theories for the preponderance of acquired S-mechanism (or tritan) deficiency, and discuss tests of color vision. We also briefly review the types of color vision deficiencies encountered in ocular disease, with an emphasis placed on larger or more detailed clinical investigations. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. What Are They Up To? The Role of Sensory Evidence and Prior Knowledge in Action Understanding

    PubMed Central

    Chambon, Valerian; Domenech, Philippe; Pacherie, Elisabeth; Koechlin, Etienne; Baraduc, Pierre; Farrer, Chlöé

    2011-01-01

    Explaining or predicting the behaviour of our conspecifics requires the ability to infer the intentions that motivate it. Such inferences are assumed to rely on two types of information: (1) the sensory information conveyed by movement kinematics and (2) the observer's prior expectations – acquired from past experience or derived from prior knowledge. However, the respective contribution of these two sources of information is still controversial. This controversy stems in part from the fact that “intention” is an umbrella term that may embrace various sub-types each being assigned different scopes and targets. We hypothesized that variations in the scope and target of intentions may account for variations in the contribution of visual kinematics and prior knowledge to the intention inference process. To test this hypothesis, we conducted four behavioural experiments in which participants were instructed to identify different types of intention: basic intentions (i.e. simple goal of a motor act), superordinate intentions (i.e. general goal of a sequence of motor acts), or social intentions (i.e. intentions accomplished in a context of reciprocal interaction). For each of the above-mentioned intentions, we varied (1) the amount of visual information available from the action scene and (2) participant's prior expectations concerning the intention that was more likely to be accomplished. First, we showed that intentional judgments depend on a consistent interaction between visual information and participant's prior expectations. Moreover, we demonstrated that this interaction varied according to the type of intention to be inferred, with participant's priors rather than perceptual evidence exerting a greater effect on the inference of social and superordinate intentions. The results are discussed by appealing to the specific properties of each type of intention considered and further interpreted in the light of a hierarchical model of action representation. PMID:21364992

  13. Slow Scan Telemedicine

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Originally developed under contract for NASA by Ball Bros. Research Corporation for acquiring visual information from lunar and planetary spacecraft, system uses standard closed circuit camera connected to a device called a scan converter, which slows the stream of images to match an audio circuit, such as a telephone line. Transmitted to its destination, the image is reconverted by another scan converter and displayed on a monitor. In addition to assist scans, technique allows transmission of x-rays, nuclear scans, ultrasonic imagery, thermograms, electrocardiograms or live views of patient. Also allows conferencing and consultation among medical centers, general practitioners, specialists and disease control centers. Commercialized by Colorado Video, Inc., major employment is in business and industry for teleconferencing, cable TV news, transmission of scientific/engineering data, security, information retrieval, insurance claim adjustment, instructional programs, and remote viewing of advertising layouts, real estate, construction sites or products.

  14. A Survey of Colormaps in Visualization

    PubMed Central

    Zhou, Liang; Hansen, Charles D.

    2016-01-01

    Colormaps are a vital method for users to gain insights into data in a visualization. With a good choice of colormaps, users are able to acquire information in the data more effectively and efficiently. In this survey, we attempt to provide readers with a comprehensive review of colormap generation techniques and provide readers a taxonomy which is helpful for finding appropriate techniques to use for their data and applications. Specifically, we first briefly introduce the basics of color spaces including color appearance models. In the core of our paper, we survey colormap generation techniques, including the latest advances in the field by grouping these techniques into four classes: procedural methods, user-study based methods, rule-based methods, and data-driven methods; we also include a section on methods that are beyond pure data comprehension purposes. We then classify colormapping techniques into a taxonomy for readers to quickly identify the appropriate techniques they might use. Furthermore, a representative set of visualization techniques that explicitly discuss the use of colormaps is reviewed and classified based on the nature of the data in these applications. Our paper is also intended to be a reference of colormap choices for readers when they are faced with similar data and/or tasks. PMID:26513793
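    As a minimal, generic example of the procedural class in this taxonomy (not drawn from the survey itself), a colormap can be generated by interpolating between a few control colours and then applied to normalized data; interpolating in a perceptually uniform space such as CIELAB is usually preferable, but plain RGB interpolation keeps the sketch short.

    import numpy as np
    from matplotlib.colors import LinearSegmentedColormap, Normalize

    # Hypothetical control colours for a simple diverging map (blue -> white -> red).
    cmap = LinearSegmentedColormap.from_list("demo_diverging", ["#313695", "#ffffff", "#a50026"], N=256)

    if __name__ == "__main__":
        data = np.random.default_rng(4).normal(size=(8, 8))
        rgba = cmap(Normalize(vmin=-3, vmax=3)(data))   # (8, 8, 4) array of RGBA values
        print(rgba.shape)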

  15. Semantic-based surveillance video retrieval.

    PubMed

    Hu, Weiming; Xie, Dan; Fu, Zhouyu; Zeng, Wenrong; Maybank, Steve

    2007-04-01

    Visual surveillance produces large amounts of video data. Effective indexing and retrieval from surveillance video databases are very important. Although there are many ways to represent the content of video clips in current video retrieval algorithms, there still exists a semantic gap between users and retrieval systems. Visual surveillance systems supply a platform for investigating semantic-based video retrieval. In this paper, a semantic-based video retrieval framework for visual surveillance is proposed. A cluster-based tracking algorithm is developed to acquire motion trajectories. The trajectories are then clustered hierarchically using the spatial and temporal information, to learn activity models. A hierarchical structure of semantic indexing and retrieval of object activities, where each individual activity automatically inherits all the semantic descriptions of the activity model to which it belongs, is proposed for accessing video clips and individual objects at the semantic level. The proposed retrieval framework supports various queries including queries by keywords, multiple object queries, and queries by sketch. For multiple object queries, succession and simultaneity restrictions, together with depth and breadth first orders, are considered. For sketch-based queries, a method for matching trajectories drawn by users to spatial trajectories is proposed. The effectiveness and efficiency of our framework are tested in a crowded traffic scene.
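    The trajectory-clustering stage can be pictured with a deliberately simplified sketch; the features and linkage criterion below are assumptions, and the paper's use of combined spatial and temporal information is richer than this. Trajectories are resampled to a fixed number of points and grouped by agglomerative clustering on the flattened coordinates.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def resample(traj, n=16):
        """traj: (T, 2) x/y points; returns (n, 2) points resampled by index (illustrative)."""
        idx = np.linspace(0, len(traj) - 1, n)
        return np.column_stack([np.interp(idx, np.arange(len(traj)), traj[:, d]) for d in range(2)])

    def cluster_trajectories(trajs, n_clusters=2):
        feats = np.stack([resample(t).ravel() for t in trajs])
        Z = linkage(feats, method="average", metric="euclidean")   # hierarchical clustering
        return fcluster(Z, t=n_clusters, criterion="maxclust")

    if __name__ == "__main__":
        t = np.linspace(0, 1, 30)[:, None]
        left_right = [np.hstack([t * 100, 50 + 2 * np.random.randn(30, 1)]) for _ in range(3)]
        top_bottom = [np.hstack([50 + 2 * np.random.randn(30, 1), t * 100]) for _ in range(3)]
        print(cluster_trajectories(left_right + top_bottom))       # e.g. [1 1 1 2 2 2]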

  16. Rolling ball sifting algorithm for the augmented visual inspection of carotid bruit auscultation

    NASA Astrophysics Data System (ADS)

    Huang, Adam; Lee, Chung-Wei; Liu, Hon-Man

    2016-07-01

    Carotid bruits are systolic sounds associated with turbulent blood flow through atherosclerotic stenosis in the neck. They are audible intermittent high-frequency (above 200 Hz) sounds mixed with background noise and transmitted low-frequency (below 100 Hz) heart sounds that wax and wane periodically. It is a nontrivial task to extract both bruits and heart sounds with high fidelity for further computer-aided auscultation and diagnosis. In this paper we propose a rolling ball sifting algorithm that is capable of filtering signals with a sharper frequency selectivity mechanism in the time domain. By rolling two balls (one above and one below the signal) of a suitable radius, the balls are large enough to roll over bruits and yet small enough to ride on heart sound waveforms. The high-frequency bruits can then be extracted according to a tangibility criterion by using the local extrema touched by the balls. Similarly, the low-frequency heart sounds can be acquired with a larger radius. By visualizing the periodicity information of both the extracted heart sounds and bruits, the proposed visual inspection method can potentially improve carotid bruit diagnosis accuracy.
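    A very rough numerical analogue of the envelope idea can be written in a few lines; it is not the authors' algorithm (the true ball geometry and the tangibility criterion are replaced by flat-window grayscale opening and closing, and the synthetic signal is an assumption). A window wider than one bruit cycle but much shorter than one heart-sound cycle yields lower and upper envelopes whose midline tracks the slow component, leaving the high-frequency bruit content in the residual.

    import numpy as np
    from scipy.ndimage import grey_closing, grey_opening

    def sift(signal, window):
        """Flat-window stand-in for the rolling balls: returns (slow component, high-frequency residual)."""
        lower = grey_opening(signal, size=window)     # "ball" rolled below the signal
        upper = grey_closing(signal, size=window)     # "ball" rolled above the signal
        slow = 0.5 * (lower + upper)
        return slow, signal - slow

    if __name__ == "__main__":
        fs = 4000
        t = np.arange(0, 1.0, 1 / fs)
        heart = np.sin(2 * np.pi * 40 * t)                        # low-frequency component
        bruit_on = (t % 0.5) < 0.15                               # intermittent bruit bursts
        s = heart + 0.2 * np.sin(2 * np.pi * 300 * t) * bruit_on
        slow, fast = sift(s, window=21)
        print(np.abs(fast)[bruit_on].mean() > np.abs(fast)[~bruit_on].mean())  # True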

  17. Rapid fusion of 2D X-ray fluoroscopy with 3D multislice CT for image-guided electrophysiology procedures

    NASA Astrophysics Data System (ADS)

    Zagorchev, Lyubomir; Manzke, Robert; Cury, Ricardo; Reddy, Vivek Y.; Chan, Raymond C.

    2007-03-01

    Interventional cardiac electrophysiology (EP) procedures are typically performed under X-ray fluoroscopy for visualizing catheters and EP devices relative to other highly-attenuating structures such as the thoracic spine and ribs. These projections do not however contain information about soft-tissue anatomy and there is a recognized need for fusion of conventional fluoroscopy with pre-operatively acquired cardiac multislice computed tomography (MSCT) volumes. Rapid 2D-3D integration in this application would allow for real-time visualization of all catheters present within the thorax in relation to the cardiovascular anatomy visible in MSCT. We present a method for rapid fusion of 2D X-ray fluoroscopy with 3D MSCT that can facilitate EP mapping and interventional procedures by reducing the need for intra-operative contrast injections to visualize heart chambers and specialized systems to track catheters within the cardiovascular anatomy. We use hardware-accelerated ray-casting to compute digitally reconstructed radiographs (DRRs) from the MSCT volume and iteratively optimize the rigid-body pose of the volumetric data to maximize the similarity between the MSCT-derived DRR and the intra-operative X-ray projection data.
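    The iterative DRR-based registration loop can be illustrated with a toy example; the sketch below is not the authors' implementation (it assumes a parallel-beam DRR formed by simple summation, a restricted 3-parameter pose, and Powell optimisation instead of hardware-accelerated ray casting over a full rigid-body pose). The CT pose is adjusted until its simulated radiograph best matches the measured projection.

    import numpy as np
    from scipy.ndimage import rotate, shift
    from scipy.optimize import minimize

    def drr(volume, angle, ty, tx):
        """Parallel-beam 'radiograph': rotate/shift the volume in-plane, then sum along axis 0."""
        posed = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        posed = shift(posed, (0.0, ty, tx), order=1)
        return posed.sum(axis=0)

    def ncc(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def register(volume, xray, x0=(0.0, 0.0, 0.0)):
        """Maximise DRR/X-ray similarity over (angle, ty, tx)."""
        return minimize(lambda p: -ncc(drr(volume, *p), xray), x0, method="Powell").x

    if __name__ == "__main__":
        vol = np.zeros((24, 48, 48)); vol[8:16, 14:30, 20:26] = 1.0   # a simple attenuating block
        target = drr(vol, angle=7.0, ty=3.0, tx=-2.0)                  # "intra-operative" projection
        print(np.round(register(vol, target), 1))                      # approximately [ 7.  3. -2.]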

  18. Using Anatomic Magnetic Resonance Image Information to Enhance Visualization and Interpretation of Functional Images: A Comparison of Methods Applied to Clinical Arterial Spin Labeling Images

    PubMed Central

    Dai, Weiying; Soman, Salil; Hackney, David B.; Wong, Eric T.; Robson, Philip M.; Alsop, David C.

    2017-01-01

    Functional imaging provides hemodynamic and metabolic information and is increasingly being incorporated into clinical diagnostic and research studies. Typically functional images have reduced signal-to-noise ratio and spatial resolution compared to other non-functional cross sectional images obtained as part of a routine clinical protocol. We hypothesized that enhancing visualization and interpretation of functional images with anatomic information could provide preferable quality and superior diagnostic value. In this work, we implemented five methods (frequency addition, frequency multiplication, wavelet transform, non-subsampled contourlet transform and intensity-hue-saturation) and a newly proposed ShArpening by Local Similarity with Anatomic images (SALSA) method to enhance the visualization of functional images, while preserving the original functional contrast and quantitative signal intensity characteristics over larger spatial scales. Arterial spin labeling blood flow MR images of the brain were visualization-enhanced using anatomic images with multiple contrasts. The algorithms were validated on a numerical phantom, and their performance on images of brain tumor patients was assessed by quantitative metrics and neuroradiologist subjective ratings. The frequency multiplication method had the lowest residual error for preserving the original functional image contrast at larger spatial scales (55%–98% of the other methods with simulated data and 64%–86% with experimental data). It was also significantly more highly graded by the radiologists (p<0.005 for clear brain anatomy around the tumor). Compared to other methods, SALSA provided 11%–133% higher similarity with ground truth images in the simulation and showed only slightly lower neuroradiologist grading scores. Most of these monochrome methods do not require any prior knowledge about the functional and anatomic image characteristics, except the acquired resolution. Hence, automatic implementation on clinical images should be readily feasible. PMID:27723582
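    As background for the frequency-domain methods named above, a generic sketch of the idea is given below; it is not one of the paper's exact formulations, and the cutoff, scaling, and synthetic images are assumptions. The functional image contributes the low spatial frequencies, which carry its quantitative contrast, while the co-registered anatomic image contributes the high-frequency detail.

    import numpy as np

    def frequency_fusion(functional, anatomic, cutoff=0.08):
        """functional, anatomic: co-registered 2D images; cutoff in cycles/sample (Nyquist = 0.5)."""
        H, W = functional.shape
        fy = np.fft.fftfreq(H)[:, None]
        fx = np.fft.fftfreq(W)[None, :]
        lowpass = (np.sqrt(fy**2 + fx**2) <= cutoff).astype(float)
        scale = functional.mean() / (anatomic.mean() + 1e-12)     # crude amplitude matching (assumption)
        fused = np.fft.fft2(functional) * lowpass + np.fft.fft2(anatomic) * (1 - lowpass) * scale
        return np.real(np.fft.ifft2(fused))

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        anat = rng.normal(100, 20, size=(128, 128))               # stand-in high-resolution anatomy
        func = np.zeros((128, 128)); func[32:96, 32:96] = 60.0    # smooth perfusion-like image
        enhanced = frequency_fusion(func, anat)
        print(abs(enhanced.mean() - func.mean()) < 1.0)           # large-scale signal is preserved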

  19. What Could You Really Learn on Your Own?: Understanding the Epistemic Limitations of Knowledge Acquisition

    PubMed Central

    Lockhart, Kristi L.; Goddu, Mariel K.; Smith, Eric D.; Keil, Frank C.

    2015-01-01

    Three studies explored the abilities of 205 children (5–11 years) and 74 adults (18–72 years) to distinguish directly vs. indirectly acquired information in a scenario where an individual grew up in isolation from human culture. Directly acquired information is knowledge acquired through first-hand experience. Indirectly acquired information is knowledge that requires input from others. All children distinguished directly from indirectly acquired knowledge (Studies 1–3), even when the indirectly acquired knowledge was highly familiar (Study 2). All children also distinguished difficult-to-acquire direct knowledge from simple-to-acquire direct knowledge (Study 3). The major developmental change was the increasing ability to completely rule out indirect knowledge as possible for an isolated individual to acquire. PMID:26660001

  20. Emergence of Joint Attention through Bootstrap Learning based on the Mechanisms of Visual Attention and Learning with Self-evaluation

    NASA Astrophysics Data System (ADS)

    Nagai, Yukie; Hosoda, Koh; Morita, Akio; Asada, Minoru

    This study examines how human infants acquire the ability of joint attention through interactions with their caregivers from a viewpoint of cognitive developmental robotics. In this paper, a mechanism by which a robot acquires sensorimotor coordination for joint attention through bootstrap learning is described. Bootstrap learning is a process by which a learner acquires higher capabilities through interactions with its environment based on embedded lower capabilities, even if the learner does not receive any external evaluation and the environment is not controlled. The proposed mechanism for bootstrap learning of joint attention consists of the robot's embedded mechanisms: visual attention and learning with self-evaluation. The former is to find and attend to a salient object in the field of the robot's view, and the latter is to evaluate the success of visual attention, not joint attention, and then to learn the sensorimotor coordination. Since the object which the robot looks at based on visual attention does not always correspond to the object which the caregiver is looking at in an environment including multiple objects, the robot may encounter incorrect learning situations for joint attention as well as correct ones. However, the robot is expected to statistically discard the learning data from the incorrect situations as outliers, because their correlation between sensor input and motor output is weaker than that of the correct ones, and consequently to acquire appropriate sensorimotor coordination for joint attention even if the caregiver does not provide any task evaluation to the robot. The experimental results show the validity of the proposed mechanism. It is suggested that the proposed mechanism could explain the developmental mechanism of infants' joint attention, because the learning process of the robot's joint attention can be regarded as equivalent to the developmental process in infants.

  1. An objective electrophysiological marker of face individualisation impairment in acquired prosopagnosia with fast periodic visual stimulation.

    PubMed

    Liu-Shuang, Joan; Torfs, Katrien; Rossion, Bruno

    2016-03-01

    One of the most striking pieces of evidence for a specialised face processing system in humans is acquired prosopagnosia, i.e. the inability to individualise faces following brain damage. However, a sensitive and objective non-behavioural marker for this deficit is difficult to provide with standard event-related potentials (ERPs), such as the well-known face-related N170 component reported and investigated in-depth by our late distinguished colleague Shlomo Bentin. Here we demonstrate that fast periodic visual stimulation (FPVS) in electrophysiology can quantify face individualisation impairment in acquired prosopagnosia. In Experiment 1 (Liu-Shuang et al., 2014), identical faces were presented at a rate of 5.88 Hz (i.e., ≈ 6 images/s, SOA=170 ms, 1 fixation per image), with different faces appearing every 5th face (5.88 Hz/5=1.18 Hz). Responses of interest were identified at these predetermined frequencies (i.e., objectively) in the EEG frequency-domain data. A well-studied case of acquired prosopagnosia (PS) and a group of age- and gender-matched controls completed only 4 × 1-min stimulation sequences, with an orthogonal fixation cross task. Contrary to controls, PS did not show face individualisation responses at 1.18 Hz, in line with her prosopagnosia. However, her response at 5.88 Hz, reflecting general visual processing, was within the normal range. In Experiment 2 (Rossion et al., 2015), we presented natural (i.e., unsegmented) images of objects at 5.88 Hz, with face images shown every 5th image (1.18 Hz). In accordance with her preserved ability to categorise a face as a face, and despite extensive brain lesions potentially affecting the overall EEG signal-to-noise ratio, PS showed 1.18 Hz face-selective responses within the normal range. Collectively, these findings show that fast periodic visual stimulation provides objective and sensitive electrophysiological markers of preserved and impaired face processing abilities in the neuropsychological population. Copyright © 2015 Elsevier Ltd. All rights reserved.
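    The frequency-domain quantification that makes these responses objective can be sketched generically; the snippet below is not the study's analysis pipeline (electrode choice, epoching, and the SNR definition over neighbouring bins are assumptions). The EEG spectrum is evaluated exactly at the stimulation frequencies, here 5.88 Hz for the base response and 1.18 Hz for the face-individualisation response.

    import numpy as np

    def fpvs_snr(eeg, fs, target_hz, n_neighbors=10):
        """Amplitude at target_hz divided by the mean amplitude of nearby noise bins (assumed definition)."""
        amp = np.abs(np.fft.rfft(eeg)) / len(eeg)
        freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
        k = int(np.argmin(np.abs(freqs - target_hz)))
        neighbors = np.r_[amp[k - n_neighbors:k - 1], amp[k + 2:k + n_neighbors + 1]]
        return float(amp[k] / (neighbors.mean() + 1e-12))

    if __name__ == "__main__":
        fs, dur = 512, 50.0                                        # 50 s so both rates fall on exact bins
        t = np.arange(0, dur, 1 / fs)
        eeg = (0.3 * np.sin(2 * np.pi * 5.88 * t)                  # general visual response (base rate)
               + 0.1 * np.sin(2 * np.pi * 1.18 * t)                # individualisation response (oddball rate)
               + np.random.default_rng(3).normal(0, 0.5, t.size))  # background EEG noise
        print(round(fpvs_snr(eeg, fs, 5.88), 1), round(fpvs_snr(eeg, fs, 1.18), 1))  # both well above 1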

  2. Reading faces: investigating the use of a novel face-based orthography in acquired alexia.

    PubMed

    Moore, Michelle W; Brendel, Paul C; Fiez, Julie A

    2014-02-01

    Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic "FaceFont" orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a "linguistic bridge" into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Reading faces: Investigating the use of a novel face-based orthography in acquired alexia

    PubMed Central

    Moore, Michelle W.; Brendel, Paul C.; Fiez, Julie A.

    2014-01-01

    Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic “FaceFont” orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a “linguistic bridge” into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. PMID:24463310

  4. Comparison of clinician-predicted to measured low vision outcomes.

    PubMed

    Chan, Tiffany L; Goldstein, Judith E; Massof, Robert W

    2013-08-01

    To compare low-vision rehabilitation (LVR) clinicians' predictions of the probability of success of LVR with patients' self-reported outcomes after provision of usual outpatient LVR services and to determine if patients' traits influence clinician ratings. The Activity Inventory (AI), a self-report visual function questionnaire, was administered pre- and post-LVR to 316 low-vision patients served by 28 LVR centers that participated in a collaborative observational study. The physical component of the Short Form-36, Geriatric Depression Scale, and Telephone Interview for Cognitive Status were also administered pre-LVR to measure physical capability, depression, and cognitive status. After patient evaluation, 38 LVR clinicians estimated the probability of outcome success (POS) using their own criteria. The POS ratings and change in functional ability were used to assess the effects of patients' baseline traits on predicted outcomes. A regression analysis with a hierarchical random-effects model showed no relationship between LVR physician POS estimates and AI-based outcomes. In another analysis, kappa statistics were calculated to determine the probability of agreement between POS and AI-based outcomes for different outcome criteria. Across all comparisons, none of the kappa values were significantly different from 0, which indicates that the rate of agreement is equivalent to chance. In an exploratory analysis, hierarchical mixed-effects regression models show that POS ratings are associated with information about the patient's cognitive functioning and the combination of visual acuity and functional ability, as opposed to visual acuity or functional ability alone. Clinicians' predictions of LVR outcomes seem to be influenced by knowledge of patients' cognitive functioning and the combination of visual acuity and functional ability, information clinicians acquire from the patient's history and examination. However, clinicians' predictions do not agree with observed changes in functional ability from the patient's perspective; they are no better than chance.
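
    Because the agreement analysis above rests on kappa statistics compared against chance, a small worked example may help. The sketch below computes Cohen's kappa for two binary classifications (clinician-predicted success versus measured improvement); the formula is standard, but the data and the binarisation are purely illustrative and are not taken from the study.

        import numpy as np

        def cohens_kappa(a, b):
            # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
            a, b = np.asarray(a, bool), np.asarray(b, bool)
            p_o = np.mean(a == b)
            p_e = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
            return (p_o - p_e) / (1 - p_e)

        # Illustrative only: predicted success (POS above a cut-off) vs. observed improvement
        predicted = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
        observed  = [0, 1, 1, 1, 0, 0, 0, 1, 1, 0]
        print(cohens_kappa(predicted, observed))   # 0.0 here: agreement no better than chance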

  5. Visual Literacy and Cultural Production: Examining Black Masculinity through Participatory Community Engagement

    ERIC Educational Resources Information Center

    White, Theresa Renee

    2012-01-01

    This paper highlights the results of a project, in which a group of students, who were enrolled in an African American film criticism course at a large university in Southern California, participated in a community engagement project that incorporated visual and media literacy skills acquired in the classroom setting. The parameters of the project…

  6. Children's Understanding of Globes as a Model of the Earth: A Problem of Contextualizing

    ERIC Educational Resources Information Center

    Ehrlen, Karin

    2008-01-01

    Visual representations play an important role in science teaching. The way in which visual representations may help children to acquire scientific concepts is a crucial test in the debate between constructivist and socio-cultural oriented researchers. In this paper, the question is addressed as a problem of how to contextualize conceptions and…

  7. The Visual Representation and Acquisition of Driving Knowledge for Autonomous Vehicle

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxia; Jiang, Qing; Li, Ping; Song, LiangTu; Wang, Rujing; Yu, Biao; Mei, Tao

    2017-09-01

    In this paper, the driving knowledge base of an autonomous vehicle is designed. Based on the driving knowledge modeling system, the driving knowledge of the autonomous vehicle is visually acquired, managed, stored, and maintained, which is of vital significance for creating a development platform for the intelligent decision-making and automated-driving expert systems of autonomous vehicles.

  8. Toward the Development of a Self-Management Intervention to Promote Pro-Social Behaviors for Students with Visual Impairment

    ERIC Educational Resources Information Center

    Ivy, Sarah E.; Lather, Amanda B.; Hatton, Deborah D.; Wehby, Joseph H.

    2016-01-01

    Students with visual impairment (VI) lack access to the same models and reinforcers as students with sight. Consequentially, behaviors that children with sight acquire through observation must be explicitly taught to children with VI. In addition, children with VI have difficulty maintaining such behaviors. Therefore, interventions that promote…

  9. Optimal visual-haptic integration with articulated tools.

    PubMed

    Takahashi, Chie; Watt, Simon J

    2017-05-01

    When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye), and therefore expressed in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world (seeing and feeling the same thing) and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.
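
    The maximum-likelihood prediction referred to above has a simple closed form: each cue is weighted by its reliability (inverse variance), and the combined estimate is predicted to be more precise than either cue alone. The sketch below states that rule with generic variable names; the numbers are illustrative and not taken from the study.

        def ml_integration(est_v, var_v, est_h, var_h):
            # Reliability-weighted (maximum-likelihood) cue combination:
            # weights are inverse variances; combined variance = var_v*var_h/(var_v+var_h).
            w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)
            w_h = 1.0 - w_v
            combined = w_v * est_v + w_h * est_h
            combined_var = (var_v * var_h) / (var_v + var_h)
            return combined, combined_var

        # Illustrative: vision slightly more reliable than haptics for a ~50 mm object
        print(ml_integration(est_v=50.0, var_v=4.0, est_h=52.0, var_h=9.0))
        # -> (~50.6, ~2.77): the combined variance is below both single-cue variances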

  10. A Context-Aware Method for Authentically Simulating Outdoors Shadows for Mobile Augmented Reality.

    PubMed

    Barreira, Joao; Bessa, Maximino; Barbosa, Luis; Magalhaes, Luis

    2018-03-01

    Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user location and time of day, with the relative rotational differences estimated from a gyroscope, compass and accelerometer. The results illustrated that our method can generate visually credible AR scenes with consistent shadows rendered from recovered illumination.
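
    The Sun-position step described above (position computed from user location and time of day) can be approximated with the standard declination and hour-angle formulas. The sketch below is a first-order approximation assuming local solar time and ignoring the equation of time and atmospheric refraction; it is not the paper's implementation, and the example inputs are arbitrary.

        import math

        def sun_elevation_azimuth(lat_deg, day_of_year, solar_hour):
            # Approximate solar elevation and azimuth (degrees) from latitude,
            # day of year, and local solar time.
            lat = math.radians(lat_deg)
            decl = math.radians(23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year))))
            hour_angle = math.radians(15.0 * (solar_hour - 12.0))
            sin_el = (math.sin(lat) * math.sin(decl)
                      + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
            el = math.asin(sin_el)
            cos_az = (math.sin(decl) - math.sin(el) * math.sin(lat)) / (math.cos(el) * math.cos(lat))
            az = math.acos(max(-1.0, min(1.0, cos_az)))   # 0 = north, measured clockwise
            if hour_angle > 0:                            # afternoon: Sun is west of south
                az = 2 * math.pi - az
            return math.degrees(el), math.degrees(az)

        # Illustrative: mid-June, mid-afternoon, at about 41 degrees north
        print(sun_elevation_azimuth(lat_deg=41.0, day_of_year=172, solar_hour=15.0))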

  11. A temporal comparison of forest cover using digital earth science data and visualization techniques

    USGS Publications Warehouse

    Jones, John W.

    1993-01-01

    Increased demands on forest resources and the recognition of old-growth forests as critical habitats and purifiers of the atmosphere have stimulated attention to forest harvest practices in the United States and worldwide. Visualization technology provides a means by which a history of forestry activities may be documented and presented to the public and decision makers. In this project, Landsat Multispectral Scanner and Thematic Mapper images, acquired July 7, 1981, and July 8, 1991, respectively, were georeferenced, resampled, enhanced, and draped over U.S. Geological Survey 30-meter digital elevation models. These data then were used to create perspective views of portions of Mt. Hood Forest, Oregon. The "fly-by" animation (produced by rapidly displaying a sequence of these perspective views) conveys the forest cover change resulting from forest harvest activities over the 10-year period. This project shows the value of combining satellite data with base cartographic data and earth science information for use in public education and decision-making processes.

  12. What puts the how in where? Tool use and the divided visual streams hypothesis.

    PubMed

    Frey, Scott H

    2007-04-01

    An influential theory suggests that the dorsal (occipito-parietal) visual stream computes representations of objects for purposes of guiding actions (determining 'how') independently of ventral (occipito-temporal) stream processes supporting object recognition and semantic processing (determining 'what'). Yet, the ability of the dorsal stream alone to account for one of the most common forms of human action, tool use, is limited. While experience-dependent modifications to existing dorsal stream representations may explain simple tool use behaviors (e.g., using sticks to extend reach) found among a variety of species, skillful use of manipulable artifacts (e.g., cups, hammers, pencils) requires in addition access to semantic representations of objects' functions and uses. Functional neuroimaging suggests that this latter information is represented in a left-lateralized network of temporal, frontal and parietal areas. I submit that the well-established dominance of the human left hemisphere in the representation of familiar skills stems from the ability for this acquired knowledge to influence the organization of actions within the dorsal pathway.

  13. The Limits of Shape Recognition following Late Emergence from Blindness.

    PubMed

    McKyton, Ayelet; Ben-Zion, Itay; Doron, Ravid; Zohary, Ehud

    2015-09-21

    Visual object recognition develops during the first years of life. But what if one is deprived of vision during early post-natal development? Shape information is extracted using both low-level cues (e.g., intensity- or color-based contours) and more complex algorithms that are largely based on inference assumptions (e.g., illumination is from above, objects are often partially occluded). Previous studies, testing visual acuity using a 2D shape-identification task (Lea symbols), indicate that contour-based shape recognition can improve with visual experience, even after years of visual deprivation from birth. We hypothesized that this may generalize to other low-level cues (shape, size, and color), but not to mid-level functions (e.g., 3D shape from shading) that might require prior visual knowledge. To that end, we studied a unique group of subjects in Ethiopia that suffered from an early manifestation of dense bilateral cataracts and were surgically treated only years later. Our results suggest that the newly sighted rapidly acquire the ability to recognize an odd element within an array, on the basis of color, size, or shape differences. However, they are generally unable to find the odd shape on the basis of illusory contours, shading, or occlusion relationships. Little recovery of these mid-level functions is seen within 1 year post-operation. We find that visual performance using low-level cues is relatively robust to prolonged deprivation from birth. However, the use of pictorial depth cues to infer 3D structure from the 2D retinal image is highly susceptible to early and prolonged visual deprivation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. “To see or not to see: that is the question.” The “Protection-Against-Schizophrenia” (PaSZ) model: evidence from congenital blindness and visuo-cognitive aberrations

    PubMed Central

    Landgraf, Steffen; Osterheider, Michael

    2013-01-01

    The causes of schizophrenia are still unknown. For the last 100 years, though, both “absent” and “perfect” vision have been associated with a lower risk for schizophrenia. Hence, vision itself and aberrations in visual functioning may be fundamental to the development and etiological explanations of the disorder. In this paper, we present the “Protection-Against-Schizophrenia” (PaSZ) model, which grades the risk for developing schizophrenia as a function of an individual's visual capacity. We review two vision perspectives: (1) “Absent” vision or how congenital blindness contributes to PaSZ and (2) “perfect” vision or how aberrations in visual functioning are associated with psychosis. First, we illustrate that, although congenitally blind and sighted individuals acquire similar world representations, blind individuals compensate for behavioral shortcomings through neurofunctional and multisensory reorganization. These reorganizations may indicate etiological explanations for their PaSZ. Second, we demonstrate that visuo-cognitive impairments are fundamental for the development of schizophrenia. Deteriorated visual information acquisition and processing contribute to higher-order cognitive dysfunctions and subsequently to schizophrenic symptoms. Finally, we provide different specific therapeutic recommendations for individuals who suffer from visual impairments (who never developed “normal” vision) and individuals who suffer from visual deterioration (who previously had “normal” visual skills). Rather than categorizing individuals as “normal” and “mentally disordered,” the PaSZ model uses a continuous scale to represent psychiatrically relevant human behavior. This not only provides a scientific basis for more fine-grained diagnostic assessments, earlier detection, and more appropriate therapeutic assignments, but it also outlines a trajectory for unraveling the causes of abnormal psychotic human self- and world-perception. PMID:23847557

  15. Five-dimensional ultrasound system for soft tissue visualization.

    PubMed

    Deshmukh, Nishikant P; Caban, Jesus J; Taylor, Russell H; Hager, Gregory D; Boctor, Emad M

    2015-12-01

    A five-dimensional ultrasound (US) system is proposed as a real-time pipeline involving fusion of 3D B-mode data with the 3D ultrasound elastography (USE) data as well as visualization of these fused data and a real-time update capability over time for each consecutive scan. 3D B-mode data assist in visualizing the anatomy of the target organ, and 3D elastography data adds strain information. We investigate the feasibility of such a system and show that an end-to-end real-time system, from acquisition to visualization, can be developed. We present a system that consists of (a) a real-time 3D elastography algorithm based on a normalized cross-correlation (NCC) computation on a GPU; (b) real-time 3D B-mode acquisition and network transfer; (c) scan conversion of 3D elastography and B-mode volumes (if acquired by 4D wobbler probe); and (d) visualization software that fuses, visualizes, and updates 3D B-mode and 3D elastography data in real time. We achieved a speed improvement of 4.45-fold for the threaded version of the NCC-based 3D USE versus the non-threaded version. The maximum speed was 79 volumes/s for 3D scan conversion. In a phantom, we validated the dimensions of a 2.2-cm-diameter sphere scan-converted to B-mode volume. Also, we validated the 5D US system visualization transfer function and detected 1- and 2-cm spherical objects (phantom lesion). Finally, we applied the system to a phantom consisting of three lesions to delineate the lesions from the surrounding background regions of the phantom. A 5D US system is achievable with real-time performance. We can distinguish between hard and soft areas in a phantom using the transfer functions.
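
    The elastography stage above is built on normalized cross-correlation (NCC) between signals acquired before and after compression. The sketch below is a didactic 1D CPU version under simple assumptions: for each window of the pre-compression line, the lag in the post-compression line with the highest NCC is taken as the local axial displacement (strain would be its gradient). Window size, search range and the synthetic data are placeholders, and the GPU implementation of the paper is not reproduced.

        import numpy as np

        def ncc(a, b):
            # Normalized cross-correlation of two equal-length windows.
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom > 0 else 0.0

        def displacement_profile(pre, post, win=64, max_lag=10, step=32):
            # Estimate axial displacement (in samples) along a pre/post signal pair.
            shifts = []
            for start in range(0, len(pre) - win - max_lag, step):
                ref = pre[start:start + win]
                scores = [ncc(ref, post[start + lag:start + lag + win])
                          for lag in range(max_lag + 1)]
                shifts.append(int(np.argmax(scores)))   # lag with the highest NCC
            return np.array(shifts)

        # Synthetic example: the "post" line is the "pre" line delayed by 3 samples
        pre = np.random.randn(2000)
        post = np.roll(pre, 3)
        print(displacement_profile(pre, post)[:5])      # ~3 everywhere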

  16. Cerebrospinal fluid leakage in intracranial hypotension syndrome: usefulness of indirect findings in radionuclide cisternography for detection and treatment monitoring.

    PubMed

    Morioka, Tomoaki; Aoki, Takatoshi; Tomoda, Yoshinori; Takahashi, Hiroyuki; Kakeda, Shingo; Takeshita, Iwao; Ohno, Masato; Korogi, Yukunori

    2008-03-01

    To evaluate indirect findings of cerebrospinal fluid (CSF) leakage on radionuclide cisternography and their changes after treatment. This study was approved by the hospital's institutional review board and informed consent was obtained before each examination. A total of 67 patients who were clinically suspected of spontaneous intracranial hypotension (SIH) syndrome underwent radionuclide cisternography, and 27 patients who had direct findings of CSF leakage on radionuclide cisternography were selected for this evaluation. They were 16 males and 11 females, aged between 26 and 58 years. Sequential images of radionuclide cisternography were acquired at 1, 3, 5, and 24 hours after injection. We assessed the presence or absence of 4 indirect findings; early visualization of bladder activity, no visualization of activity over the brain convexities, rapid disappearance of spinal activity, and abnormal visualization of the root sleeves. Changes of the direct and indirect findings after treatment were also evaluated in 14 patients who underwent epidural blood patch treatment. Early visualization of bladder activity was found in all 27 patients. Seven of 27 (25.9%) patients showed no activity over the brain convexities. Rapid disappearance of spinal activity and abnormal root sleeve visualization were present in 2 (7.4%) and 5 (18.5%) patients, respectively. After epidural blood patch, both direct CSF leakage findings and indirect findings of early visualization of bladder activity had disappeared or improved in 12 of 14 patients (85.7%). The other indirect findings also disappeared after treatment in all cases. Indirect findings of radionuclide cisternography, especially early visualization of bladder activity, may be useful in the diagnosis and posttreatment follow-up of CSF leakage.

  17. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    PubMed

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
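
    A key appeal of binary local features in the ATC setting described above is that, once the encoded descriptors reach the central controller, matching reduces to Hamming distances on packed bit strings. The sketch below shows that matching step only, with random placeholder descriptors rather than real BRISK/ORB outputs; the intra-/inter-frame coding scheme proposed in the paper is not reproduced.

        import numpy as np

        def hamming_match(desc_a, desc_b):
            # For each descriptor in desc_a (uint8-packed rows), return the index of
            # the nearest descriptor in desc_b under Hamming distance, and that distance.
            xor = desc_a[:, None, :] ^ desc_b[None, :, :]
            dist = np.unpackbits(xor, axis=2).sum(axis=2)
            return dist.argmin(axis=1), dist.min(axis=1)

        # Placeholder 256-bit descriptors (32 bytes each) for two "frames"
        rng = np.random.default_rng(0)
        frame_a = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)
        frame_b = frame_a.copy()
        frame_b[0, 0] ^= 0b00000111          # flip 3 bits in the first descriptor
        idx, dist = hamming_match(frame_a, frame_b)
        print(idx, dist)                     # first match at distance 3, the rest at 0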

  18. Coding Local and Global Binary Visual Features Extracted From Video Sequences

    NASA Astrophysics Data System (ADS)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks, while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW) model. Several applications, including for example visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget, while attaining a target level of efficiency. In this paper we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the Compress-Then-Analyze (CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: homography estimation and content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth limited scenarios.

  19. Automating X-ray Fluorescence Analysis for Rapid Astrobiology Surveys.

    PubMed

    Thompson, David R; Flannery, David T; Lanka, Ravi; Allwood, Abigail C; Bue, Brian D; Clark, Benton C; Elam, W Timothy; Estlin, Tara A; Hodyss, Robert P; Hurowitz, Joel A; Liu, Yang; Wade, Lawrence A

    2015-11-01

    A new generation of planetary rover instruments, such as PIXL (Planetary Instrument for X-ray Lithochemistry) and SHERLOC (Scanning Habitable Environments with Raman Luminescence for Organics and Chemicals) selected for the Mars 2020 mission rover payload, aim to map mineralogical and elemental composition in situ at microscopic scales. These instruments will produce large spectral cubes with thousands of channels acquired over thousands of spatial locations, a large potential science yield limited mainly by the time required to acquire a measurement after placement. A secondary bottleneck also faces mission planners after downlink; analysts must interpret the complex data products quickly to inform tactical planning for the next command cycle. This study demonstrates operational approaches to overcome these bottlenecks by specialized early-stage science data processing. Onboard, simple real-time systems can perform a basic compositional assessment, recognizing specific features of interest and optimizing sensor integration time to characterize anomalies. On the ground, statistically motivated visualization can make raw uncalibrated data products more interpretable for tactical decision making. Techniques such as manifold dimensionality reduction can help operators comprehend large databases at a glance, identifying trends and anomalies in data. These onboard and ground-side analyses can complement a quantitative interpretation. We evaluate system performance for the case study of PIXL, an X-ray fluorescence spectrometer. Experiments on three representative samples demonstrate improved methods for onboard and ground-side automation and illustrate new astrobiological science capabilities unavailable in previous planetary instruments. Key words: dimensionality reduction, planetary science, visualization.
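
    As a stand-in for the manifold dimensionality reduction mentioned above, the sketch below uses plain PCA (via SVD) to collapse a spectral cube of shape (rows, cols, channels) into a three-component false-colour overview image. PCA is a deliberately simpler substitute chosen for brevity, and the cube here is random placeholder data rather than PIXL measurements.

        import numpy as np

        def pca_false_colour(cube, n_components=3):
            # Project a (rows, cols, channels) spectral cube onto its leading principal
            # components and rescale each component image to [0, 1] for display.
            rows, cols, channels = cube.shape
            x = cube.reshape(-1, channels).astype(float)
            x -= x.mean(axis=0)                               # centre each channel
            _, _, vt = np.linalg.svd(x, full_matrices=False)  # rows of vt = components
            scores = x @ vt[:n_components].T
            scores = (scores - scores.min(0)) / (scores.max(0) - scores.min(0) + 1e-12)
            return scores.reshape(rows, cols, n_components)

        # Hypothetical 64 x 64 map with 1024 spectral channels
        cube = np.random.rand(64, 64, 1024)
        rgb = pca_false_colour(cube)
        print(rgb.shape)                                      # (64, 64, 3)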

  20. Automatic thermographic scanning with the creation of 3D panoramic views of buildings

    NASA Astrophysics Data System (ADS)

    Ferrarini, G.; Cadelano, G.; Bortolin, A.

    2016-05-01

    Infrared thermography is widely applied to building inspection, enabling the identification of thermal anomalies due to hidden structures, air leakage, and moisture. One of the main advantages of this technique is that a temperature map of a surface can be acquired rapidly. However, because of the relatively low resolution of thermal cameras and the need to scan surfaces with different orientations, multiple images must be taken during a building survey. In this work, a device based on quantitative infrared thermography, called aIRview, was used during building surveys to automatically acquire thermograms with a camera mounted on a robotized pan-tilt unit. The goal is to perform a first, rapid survey of the building that provides useful information for subsequent quantitative thermal investigations. For each data acquisition, the instrument covers a rotational field of view of 360° around the vertical axis and up to 180° around the horizontal one. The acquired images are processed to create a full equirectangular projection of the environment. These images have then been integrated into a web visualization tool compatible with web panorama viewers such as Google Street View, creating a webpage that allows a three-dimensional virtual visit of the building. The thermographic data are embedded with the visual imaging and with other sensor data, facilitating the understanding of the physical phenomena underlying the temperature distribution.
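
    The core geometric step of the panorama assembly described above is mapping a pan/tilt viewing direction onto an equirectangular canvas. The sketch below shows only that mapping, under the assumption of an ideal pan-tilt geometry; image warping, blending and the web-viewer integration are omitted, and the function and its parameters are illustrative rather than part of the aIRview software.

        def equirectangular_pixel(pan_deg, tilt_deg, pano_width, pano_height):
            # Map a viewing direction (pan in [0, 360), tilt in [-90, 90]) to the
            # corresponding pixel of an equirectangular panorama: longitude maps
            # linearly to x, latitude linearly to y (tilt +90 -> top row).
            x = (pan_deg % 360.0) / 360.0 * pano_width
            y = (90.0 - tilt_deg) / 180.0 * pano_height
            return int(x) % pano_width, min(int(y), pano_height - 1)

        # Illustrative: centre of a thermogram taken at pan 135°, tilt +30°,
        # placed on an 8192 x 4096 panorama canvas
        print(equirectangular_pixel(135.0, 30.0, 8192, 4096))   # -> (3072, 1365)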

  1. Contrast enhancement of subcutaneous blood vessel images by means of visible and near-infrared hyper-spectral imaging

    NASA Astrophysics Data System (ADS)

    Katrašnik, Jaka; Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2009-02-01

    Visualization of subcutaneous veins is very difficult with the naked eye, but important for diagnosis of medical conditions and different medical procedures such as catheter insertion and blood withdrawal. Moreover, recent studies showed that the images of subcutaneous veins could be used for biometric identification. The majority of methods used for enhancing the contrast between the subcutaneous veins and surrounding tissue are based on simple imaging systems utilizing CMOS or CCD cameras with LED illumination capable of acquiring images from the near infrared spectral region, usually near 900 nm. However, such simplified imaging methods cannot exploit the full potential of the spectral information. In this paper, a new highly versatile method for enhancing the contrast of subcutaneous veins based on state-of-the-art high-resolution hyper-spectral imaging system utilizing the spectral region from 550 to 1700 nm is presented. First, a detailed analysis of the contrast between the subcutaneous veins and the surrounding tissue as a function of wavelength, for several different positions on the human arm, was performed in order to extract the spectral regions with the highest contrast. The highest contrast images were acquired at 1100 nm, however, combining the individual images from the extracted spectral regions by the proposed contrast enhancement method resulted in a single image with up to ten-fold better contrast. Therefore, the proposed method has proved to be a useful tool for visualization of subcutaneous veins.
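
    One simple way to combine band images into a single higher-contrast image, in the spirit of the method described above, is to weight each band by the contrast it provides between a vein region and the surrounding tissue. The sketch below uses Michelson contrast between two user-supplied masks and a contrast-weighted average; this is a generic fusion scheme for illustration, not the authors' specific enhancement algorithm, and the synthetic data are placeholders.

        import numpy as np

        def contrast_weighted_fusion(bands, vein_mask, background_mask):
            # bands: (n_bands, H, W) stack of co-registered spectral images.
            # Weight each band by its Michelson contrast between the vein and
            # background regions, then return the weighted average image.
            weights = []
            for band in bands:
                vein = band[vein_mask].mean()
                bg = band[background_mask].mean()
                weights.append(abs(bg - vein) / (bg + vein + 1e-12))
            weights = np.asarray(weights)
            weights /= weights.sum()
            return np.tensordot(weights, bands, axes=1)

        # Synthetic example: 3 bands, a dark "vein" stripe with band-dependent contrast
        h, w = 64, 64
        vein_mask = np.zeros((h, w), bool)
        vein_mask[:, 30:34] = True
        background_mask = ~vein_mask
        bands = np.stack([np.ones((h, w)) - depth * vein_mask
                          for depth in (0.1, 0.5, 0.3)])
        fused = contrast_weighted_fusion(bands, vein_mask, background_mask)
        print(fused[0, 32], fused[0, 0])   # vein pixel is darker than background pixel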

  2. The advanced magnetovision system for Smart application

    NASA Astrophysics Data System (ADS)

    Kaleta, Jerzy; Wiewiórski, Przemyslaw; Lewandowski, Daniel

    2010-04-01

    An original method, together with measurement devices and a software tool, is proposed for examining magneto-mechanical phenomena in a wide range of SMART applications. In many high-end constructions it is necessary to examine mechanical and magnetic properties simultaneously. The technological processes used to fabricate modern materials (for example cutting, premagnetisation and prestress) and advanced concepts for using SMART structures call for a next-generation system for optimising electric and magnetic field distributions. An original fast scanner with multisensor probes and a static resolution of more than a million points has been constructed to measure all components of the magnetic field intensity vector H and to visualise them in a form acceptable to end users. The scanner can also acquire electric potentials on a surface in order to work with magneto-piezo devices. Advanced electronic subsystems have been applied for processing the results in the Magscaner Vision System, and the corresponding software, Maglab, has also been evaluated. The Dipole Contour Method (DCM) is provided to model different states of coupled magnetic and electric materials and to explain the experimental data visually. Dedicated software interoperates with industrial parametric CAD systems. The measurement technique consists of acquiring a cloud of points, similarly to tomography, followed by 3D visualisation. The ongoing verification of the capabilities of the 3D digitiser will enable the inspection of cylindrical SMART actuators and miniature pellets designed for oscillation dampers in various constructions, for example in the vehicle industry.

  3. Visualising large hierarchies with Flextree

    NASA Astrophysics Data System (ADS)

    Song, Hongzhi; Curran, Edwin P.; Sterritt, Roy

    2003-05-01

    One of the main tasks in Information Visualisation research is creating visual tools to facilitate human understanding of large and complex information spaces. Hierarchies, being a good mechanism for organising such information, are ubiquitous. Although much research effort has been spent on finding useful representations for hierarchies, visualising large hierarchies is still a difficult topic. One of the difficulties is how to show both structure and node content information in one view. Another is how to achieve multiple foci in a focus+context visualisation. This paper describes a novel hierarchy visualisation technique called FlexTree to address these problems. It contains some important features that have not been exploited so far. In this visualisation, a profile or contour unique to the hierarchy being visualised can be gained in a histogram-like layout. A normalised view of a common attribute of all nodes can be acquired, and selection of this attribute is controllable by the user. Multiple foci are consistently accessible within a global context through interaction. Furthermore, it can handle a large hierarchy that contains several thousand nodes in a PC environment. In addition, results from an informal evaluation are also presented.

  4. Compiled visualization with IPI method for analysing of liquid liquid mixing process

    NASA Astrophysics Data System (ADS)

    Jasikova, Darina; Kotek, Michal; Kysela, Bohus; Sulc, Radek; Kopecky, Vaclav

    2018-06-01

    The article investigates a liquid-liquid mixing process using visualization techniques and the IPI method. The characteristics of the size distribution and the evolution of the disintegration of the two liquid phases were studied. A methodology is proposed for the visualization and image analysis of data acquired during the initial phase of the mixing process. The IPI method was then used for a subsequent detailed study of the disintegrated droplets. The article describes the advantages of using the appropriate method, presents the limits of each method, and compares them.

  5. Falcon: A Temporal Visual Analysis System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A.

    2016-09-05

    Flexible visual exploration of long, high-resolution time series from multiple sensor streams is a challenge in several domains. Falcon is a visual analytics approach that helps researchers acquire a deep understanding of patterns in log and imagery data. Falcon allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations with multiple levels of detail. These capabilities are applicable to the analysis of any quantitative time series.

  6. Experimenter's Laboratory for Visualized Interactive Science

    NASA Technical Reports Server (NTRS)

    Hansen, Elaine R.; Rodier, Daniel R.; Klemp, Marjorie K.

    1994-01-01

    ELVIS (Experimenter's Laboratory for Visualized Interactive Science) is an interactive visualization environment that enables scientists, students, and educators to visualize and analyze large, complex, and diverse sets of scientific data. It accomplishes this by presenting the data sets as 2-D, 3-D, color, stereo, and graphic images with movable and multiple light sources combined with displays of solid-surface, contours, wire-frame, and transparency. By simultaneously rendering diverse data sets acquired from multiple sources, formats, and resolutions and by interacting with the data through an intuitive, direct-manipulation interface, ELVIS provides an interactive and responsive environment for exploratory data analysis.

  7. Anticipatory Attentional Suppression of Visual Features Indexed by Oscillatory Alpha-Band Power Increases: A High-Density Electrical Mapping Study

    PubMed Central

    Snyder, Adam C.; Foxe, John J.

    2010-01-01

    Retinotopically specific increases in alpha-band (~10 Hz) oscillatory power have been strongly implicated in the suppression of processing for irrelevant parts of the visual field during the deployment of visuospatial attention. Here, we asked whether this alpha suppression mechanism also plays a role in the nonspatial anticipatory biasing of feature-based attention. Visual word cues informed subjects what the task-relevant feature of an upcoming visual stimulus (S2) was, while high-density electroencephalographic recordings were acquired. We examined anticipatory oscillatory activity in the Cue-to-S2 interval (~2 s). Subjects were cued on a trial-by-trial basis to attend to either the color or direction of motion of an upcoming dot field array, and to respond when they detected that a subset of the dots differed from the majority along the target feature dimension. We used the features of color and motion, expressly because they have well known, spatially separated cortical processing areas, to distinguish shifts in alpha power over areas processing each feature. Alpha power from dorsal regions increased when motion was the irrelevant feature (i.e., color was cued), and alpha power from ventral regions increased when color was irrelevant. Thus, alpha-suppression mechanisms appear to operate during feature-based selection in much the same manner as has been shown for space-based attention. PMID:20237273
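
    Anticipatory alpha-band power of the kind analysed above is commonly estimated by integrating a power spectral density over roughly 8-14 Hz. The sketch below does this for a single channel with Welch's method; channel/ROI selection, baselining and the retinotopic analysis of the study are not reproduced, and the simulated epoch is a placeholder.

        import numpy as np
        from scipy.signal import welch

        def alpha_power(eeg, fs, band=(8.0, 14.0)):
            # Integrated power in the alpha band for one EEG channel (Welch PSD).
            freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # 2-s segments
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return psd[mask].sum() * (freqs[1] - freqs[0])

        # Placeholder 2-s cue-to-S2 epoch at 500 Hz containing a 10 Hz oscillation
        fs = 500
        t = np.arange(0, 2.0, 1.0 / fs)
        eeg = np.random.randn(t.size) + 2.0 * np.sin(2 * np.pi * 10.0 * t)
        print(alpha_power(eeg, fs))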

  8. Phantom experiments to improve parathyroid lesion detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Kenneth J.; Tronco, Gene G.; Tomas, Maria B.

    2007-12-15

    This investigation tested the hypothesis that visual analysis of iteratively reconstructed tomograms by ordered subset expectation maximization (OSEM) provides the highest accuracy for localizing parathyroid lesions using (99m)Tc-sestamibi SPECT data. From an Institutional Review Board approved retrospective review of 531 patients evaluated for parathyroid localization, image characteristics were determined for 85 (99m)Tc-sestamibi SPECT studies originally read as equivocal (EQ). Seventy-two plexiglas phantoms using cylindrical simulated lesions were acquired for a clinically realistic range of counts (mean simulated lesion counts of 75±50 counts/pixel) and target-to-background (T:B) ratios (range=2.0 to 8.0) to determine an optimal filter for OSEM. Two experienced nuclear physicians graded simulated lesions, blinded to whether chambers contained radioactivity or plain water, and two observers used the same scale to read all phantom and clinical SPECT studies, blinded to pathology findings and clinical information. For phantom data and all clinical data, T:B analyses were not statistically different for OSEM versus FB, but visual readings were significantly more accurate than T:B (88±6% versus 68±6%, p=0.001) for OSEM processing, and OSEM was significantly more accurate than FB for visual readings (88±6% versus 58±6%, p<0.0001). These data suggest that visual analysis of iteratively reconstructed MIBI tomograms should be incorporated into imaging protocols performed to localize parathyroid lesions.
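
    OSEM, referred to above, is the ordered-subset variant of the maximum-likelihood expectation-maximization (MLEM) update. The sketch below implements the plain MLEM iteration for a generic system matrix to make the update rule concrete; OSEM applies the same update to subsets of the projection rows. This is a didactic toy problem, not the clinical reconstruction used in the study.

        import numpy as np

        def mlem(system_matrix, projections, n_iter=20):
            # Maximum-likelihood EM reconstruction:
            #   x <- x / (A^T 1) * A^T ( y / (A x) )
            # OSEM performs the same update using ordered subsets of the rows of A.
            a = np.asarray(system_matrix, float)
            y = np.asarray(projections, float)
            x = np.ones(a.shape[1])             # uniform initial image
            sens = a.sum(axis=0)                # sensitivity image A^T 1
            for _ in range(n_iter):
                forward = a @ x
                ratio = y / np.maximum(forward, 1e-12)
                x = x / np.maximum(sens, 1e-12) * (a.T @ ratio)
            return x

        # Tiny illustrative problem: 4 detector bins viewing 3 image pixels
        a = np.array([[1.0, 0.5, 0.0],
                      [0.0, 1.0, 0.5],
                      [0.5, 0.0, 1.0],
                      [0.3, 0.3, 0.3]])
        true_x = np.array([2.0, 0.0, 1.0])
        print(mlem(a, a @ true_x))              # approaches [2, 0, 1]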

  9. Autonomous Diagnostic Imaging Performed by Untrained Operators using Augmented Reality as a Form of "Just-in-Time" Training

    NASA Technical Reports Server (NTRS)

    Martin, D. S.; Wang, L.; Laurie, S. S.; Lee, S. M. C.; Fleischer, A. C.; Gibson, C. R.; Stenger, M. B.

    2017-01-01

    We will address the Human Factors and Performance Team, "Risk of performance errors due to training deficiencies," by improving the JIT training materials for ultrasound and OCT imaging by providing advanced guidance in a detailed, timely, and user-friendly manner. Specifically, we will (1) develop an audio-visual tutorial using AR that guides non-experts through an abdominal trauma ultrasound protocol; (2) develop an audio-visual tutorial using AR to guide an untrained operator through the acquisition of OCT images; (3) evaluate the quality of abdominal ultrasound and OCT images acquired by untrained operators using AR guidance compared to images acquired using traditional JIT techniques (laptop-based training conducted before image acquisition); and (4) compare the time required to complete imaging studies using AR tutorials with images acquired using current JIT practices to identify areas for time efficiency improvements. Two groups of subjects will be recruited to participate in this study. Operator-subjects, without previous experience in ultrasound or OCT, will be asked to perform both procedures using either the JIT training with AR technology or the traditional JIT training via laptop. Images acquired by inexperienced operator-subjects will be scored by experts in that imaging modality for diagnostic and research quality; experts will be blinded to the form of JIT used to acquire the images. Operator-subjects also will be asked to submit feedback on the training modules used during the scans, which will be used to improve future training modules. Scanned-subjects will be a small group of individuals from whom all images will be acquired.

  10. Autonomous Diagnostic Imaging Performed by Untrained Operator Using Augmented Reality as a Form of "Just-in-Time" Training

    NASA Technical Reports Server (NTRS)

    Martin, David S.; Wang, Lui; Laurie, Steven S.; Lee, Stuart M. C.; Stenger, Michael B.

    2017-01-01

    We will address the Human Factors and Performance Team, "Risk of performance errors due to training deficiencies," by improving the JIT training materials for ultrasound and OCT imaging by providing advanced guidance in a detailed, timely, and user-friendly manner. Specifically, we will (1) develop an audio-visual tutorial using AR that guides non-experts through an abdominal trauma ultrasound protocol; (2) develop an audio-visual tutorial using AR to guide an untrained operator through the acquisition of OCT images; (3) evaluate the quality of abdominal ultrasound and OCT images acquired by untrained operators using AR guidance compared to images acquired using traditional JIT techniques (laptop-based training conducted before image acquisition); and (4) compare the time required to complete imaging studies using AR tutorials with images acquired using current JIT practices to identify areas for time efficiency improvements. Two groups of subjects will be recruited to participate in this study. Operator-subjects, without previous experience in ultrasound or OCT, will be asked to perform both procedures using either the JIT training with AR technology or the traditional JIT training via laptop. Images acquired by inexperienced operator-subjects will be scored by experts in that imaging modality for diagnostic and research quality; experts will be blinded to the form of JIT used to acquire the images. Operator-subjects also will be asked to submit feedback on the training modules used during the scans, which will be used to improve future training modules. Scanned-subjects will be a small group of individuals from whom all images will be acquired.

  11. Mapping with Small UAS: A Point Cloud Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Toth, Charles; Jozkow, Grzegorz; Grejner-Brzezinska, Dorota

    2015-12-01

    Interest in using inexpensive Unmanned Aerial System (UAS) technology for topographic mapping has recently significantly increased. Small UAS platforms equipped with consumer grade cameras can easily acquire high-resolution aerial imagery allowing for dense point cloud generation, followed by surface model creation and orthophoto production. In contrast to conventional airborne mapping systems, UAS has limited ground coverage due to low flying height and limited flying time, yet it offers an attractive alternative to high performance airborne systems, as the cost of the sensors and platform, and the flight logistics, is relatively low. In addition, UAS is better suited for small area data acquisitions and to acquire data in difficult to access areas, such as urban canyons or densely built-up environments. The main question with respect to the use of UAS is whether the inexpensive consumer sensors installed in UAS platforms can provide the geospatial data quality comparable to that provided by conventional systems. This study aims at the performance evaluation of the current practice of UAS-based topographic mapping by reviewing the practical aspects of sensor configuration, georeferencing and point cloud generation, including comparisons between sensor types and processing tools. The main objective is to provide accuracy characterization and practical information for selecting and using UAS solutions in general mapping applications. The analysis is based on statistical evaluation as well as visual examination of experimental data acquired by a Bergen octocopter with three different image sensor configurations, including a GoPro HERO3+ Black Edition, a Nikon D800 DSLR and a Velodyne HDL-32. In addition, georeferencing data of varying quality were acquired and evaluated. The optical imagery was processed by using three commercial point cloud generation tools. Comparing point clouds created by active and passive sensors by using different quality sensors, and finally, by different commercial software tools, provides essential information for the performance validation of UAS technology.
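
    A common way to quantify the point-cloud accuracy discussed above is to measure, for every point of the evaluated cloud, the distance to its nearest neighbour in a reference cloud. The sketch below does this with a k-d tree and reports the RMSE and mean distance; registration, outlier handling and the study's specific check points are not reproduced, and the clouds are synthetic placeholders.

        import numpy as np
        from scipy.spatial import cKDTree

        def cloud_to_cloud_error(evaluated, reference):
            # RMSE and mean of nearest-neighbour distances from each evaluated point
            # to the reference point cloud (both arrays of shape (N, 3)).
            tree = cKDTree(reference)
            dist, _ = tree.query(evaluated, k=1)
            return float(np.sqrt(np.mean(dist ** 2))), float(dist.mean())

        # Synthetic example: a reference cloud vs. the same cloud with ~2 cm of noise
        rng = np.random.default_rng(1)
        reference = rng.uniform(0, 10, size=(5000, 3))
        evaluated = reference + rng.normal(0, 0.02, size=reference.shape)
        print(cloud_to_cloud_error(evaluated, reference))   # both a few centimetres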

  12. Perceptions of Older Veterans with Visual Impairments Regarding Computer Access Training and Quality of Life

    ERIC Educational Resources Information Center

    DuBosque, Richard Stanborough

    2013-01-01

    The widespread integration of the computer into the mainstream of daily life presents a challenge to various sectors of society, and the incorporation of this technology into the realm of the older individual with visual impairments is a relatively uncharted field of study. This study was undertaken to acquire the perceptions of the impact of the…

  13. 30 CFR 280.72 - What procedure will MMS follow to disclose acquired data and information to a contractor for...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... acquired data and information to a contractor for reproduction, processing, and interpretation? 280.72... information to an independent contractor or agent for reproduction, processing, and interpretation. (b) The... Protections § 280.72 What procedure will MMS follow to disclose acquired data and information to a contractor...

  14. Industry and Academic Consortium for Computer Based Subsurface Geology Laboratory

    NASA Astrophysics Data System (ADS)

    Brown, A. L.; Nunn, J. A.; Sears, S. O.

    2008-12-01

    Twenty-two licenses for Petrel Software acquired through a grant from Schlumberger are being used to redesign the laboratory portion of Subsurface Geology at Louisiana State University. The course redesign is a cooperative effort between LSU's Geology and Geophysics and Petroleum Engineering Departments and Schlumberger's Technical Training Division. In spring 2008, two laboratory sections were taught with 22 students in each section. The class contained geology majors, petroleum engineering majors, and geology graduate students. Limited enrollments and 3-hour labs make it possible to incorporate hands-on visualization, animation, manipulation of data and images, and access to geological data available online. 24/7 access to the laboratory and step-by-step instructions for Petrel exercises strongly promoted peer instruction and individual learning. Goals of the course redesign include: enhancing visualization of earth materials; strengthening students' ability to acquire, manage, and interpret multifaceted geological information; fostering critical thinking and the scientific method; improving student communication skills; providing cross-training between geologists and engineers; and increasing the quantity, quality, and diversity of students pursuing Earth Science and Petroleum Engineering careers. IT resources available in the laboratory provide students with sophisticated visualization tools, allowing them to switch between 2-D and 3-D reconstructions more seamlessly, and enabling them to manipulate larger integrated data sets, thus permitting more time for critical thinking and hypothesis testing. IT resources also enable faculty and students to simultaneously work with the software to visually interrogate a 3D data set and immediately test hypotheses formulated in class. Preliminary evaluation of class results indicates that students found MS-Windows-based Petrel easy to learn. By the end of the semester, students were able to not only map horizons and faults using seismic and well data but also compute volumetrics. Exam results indicated that while students could complete sophisticated exercises using the software, their understanding of key concepts such as conservation of volume in a palinspastic reconstruction or association of structures with a particular stress regime was limited. Future classes will incorporate more paper-and-pencil exercises to illustrate basic concepts. The equipment, software, and exercises developed will be used in additional upper-level undergraduate and graduate classes.

  15. Discrimination of Complex Human Behavior by Pigeons (Columba livia) and Humans

    PubMed Central

    Qadri, Muhammad A. J.; Sayde, Justin M.; Cook, Robert G.

    2014-01-01

    The cognitive and neural mechanisms for recognizing and categorizing behavior are not well understood in non-human animals. In the current experiments, pigeons and humans learned to categorize two non-repeating, complex human behaviors (“martial arts” vs. “Indian dance”). Using multiple video exemplars of a digital human model, pigeons discriminated these behaviors in a go/no-go task and humans in a choice task. Experiment 1 found that pigeons already experienced with discriminating the locomotive actions of digital animals acquired the discrimination more rapidly when action information was available than when only pose information was available. Experiments 2 and 3 found this same dynamic superiority effect with naïve pigeons and human participants. Both species used the same combination of immediately available static pose information and more slowly perceived dynamic action cues to discriminate the behavioral categories. Theories based on generalized visual mechanisms, as opposed to embodied, species-specific action networks, offer a parsimonious account of how these different animals recognize behavior across and within species. PMID:25379777

  16. Information-Theoretic Assessment of Sampled Imaging Systems

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Alter-Gartenberg, Rachel; Park, Stephen K.; Rahman, Zia-ur

    1999-01-01

    By rigorously extending modern communication theory to the assessment of sampled imaging systems, we develop the formulations that are required to optimize the performance of these systems within the critical constraints of image gathering, data transmission, and image display. The goal of this optimization is to produce images with the best possible visual quality for the wide range of statistical properties of the radiance field of natural scenes that one normally encounters. Extensive computational results are presented to assess the performance of sampled imaging systems in terms of information rate, theoretical minimum data rate, and fidelity. Comparisons of this assessment with perceptual and measurable performance demonstrate that (1) the information rate that a sampled imaging system conveys from the captured radiance field to the observer is closely correlated with the fidelity, sharpness and clarity with which the observed images can be restored and (2) the associated theoretical minimum data rate is closely correlated with the lowest data rate with which the acquired signal can be encoded for efficient transmission.

  17. eFarm: A Tool for Better Observing Agricultural Land Systems

    PubMed Central

    Yu, Qiangyi; Shi, Yun; Tang, Huajun; Yang, Peng; Xie, Ankun; Liu, Bin; Wu, Wenbin

    2017-01-01

    Currently, observations of an agricultural land system (ALS) largely depend on remotely sensed images and focus on its biophysical features. While social surveys capture the socioeconomic features, that information is inadequately integrated with the biophysical features of an ALS, and applications are limited by the cost and effort of carrying out detailed, comparable social surveys over large spatial extents. In this paper, we introduce a smartphone-based app, called eFarm: a crowdsourcing and human-sensing tool for collecting geotagged ALS information at the land-parcel level, based on high-resolution remotely sensed images. We illustrate its main functionalities, including map visualization, data management, and data sensing. Results of the trial test suggest the system works well. We believe the tool is able to acquire integrated human–land information that is broadly covered and timely updated, thus presenting great potential for improving the sensing, mapping, and modeling of ALS studies. PMID:28245554

  18. Comparison of onboard low-field magnetic resonance imaging versus onboard computed tomography for anatomy visualization in radiotherapy.

    PubMed

    Noel, Camille E; Parikh, Parag J; Spencer, Christopher R; Green, Olga L; Hu, Yanle; Mutic, Sasa; Olsen, Jeffrey R

    2015-01-01

    Onboard magnetic resonance imaging (OB-MRI) for daily localization and adaptive radiotherapy has been under development by several groups. However, no clinical studies have evaluated whether OB-MRI improves visualization of the target and organs at risk (OARs) compared to standard onboard computed tomography (OB-CT). This study compared visualization of patient anatomy on images acquired on the MRI-(60)Co ViewRay system to those acquired with OB-CT. Fourteen patients enrolled on a protocol approved by the Institutional Review Board (IRB) and undergoing image-guided radiotherapy for cancer in the thorax (n = 2), pelvis (n = 6), abdomen (n = 3) or head and neck (n = 3) were imaged with OB-MRI and OB-CT. For each of the 14 patients, the OB-MRI and OB-CT datasets were displayed side-by-side and independently reviewed by three radiation oncologists. Each physician was asked to evaluate which dataset offered better visualization of the target and OARs. A quantitative contouring study was performed on two abdominal patients to assess if OB-MRI could offer improved inter-observer segmentation agreement for adaptive planning. In total 221 OARs and 10 targets were compared for visualization on OB-MRI and OB-CT by each of the three physicians. The majority of physicians (two or more) evaluated visualization on MRI as better for 71% of structures, worse for 10% of structures, and equivalent for 14% of structures. 5% of structures were not visible on either. Physicians agreed unanimously for 74% and in majority for > 99% of structures. Targets were better visualized on MRI in 4/10 cases, and never on OB-CT. Low-field MR provides better anatomic visualization of many radiotherapy targets and most OARs as compared to OB-CT. Further studies with OB-MRI should be pursued.

  19. Visual social network analysis: effective approach to model complex human social, behaviour & culture.

    PubMed

    Ahram, Tareq Z; Karwowski, Waldemar

    2012-01-01

    The advent and adoption of internet-based social networking have significantly altered our daily lives. The educational community has taken notice of the positive aspects of social networking, such as the creation of blogs and the support of groups of system designers going through the same challenges and difficulties. This paper introduces a social networking framework for collaborative education, design, and modeling of the next generation of smarter products and services. Human behaviour modeling in social networking applications aims to ensure that human considerations for learners and designers have a prominent place in the integrated design and development of sustainable, smarter products throughout the total system lifecycle. Social networks blend self-directed learning with prescribed, existing information. The self-directed element creates interest within a learner, and the ability to access existing information facilitates its transfer and the eventual retention of the knowledge acquired.

  20. The role of line junctions in object recognition: The case of reading musical notation.

    PubMed

    Wong, Yetta Kwailing; Wong, Alan C-N

    2018-04-30

    Previous work has shown that line junctions are informative features for visual perception of objects, letters, and words. However, the sources of such sensitivity and their generalizability to other object categories are largely unclear. We addressed these questions by studying perceptual expertise in reading musical notation, a domain in which individuals with different levels of expertise are readily available. We observed that removing line junctions created by the contact between musical notes and staff lines selectively impaired recognition performance in experts and intermediate readers, but not in novices. The degree of performance impairment was predicted by individual fluency in reading musical notation. Our findings suggest that line junctions provide diagnostic information about object identity across various categories, including musical notation. However, human sensitivity to line junctions does not readily transfer from familiar to unfamiliar object categories, and has to be acquired through perceptual experience with the specific objects.

  1. Predicting the Sunspot Cycle

    NASA Technical Reports Server (NTRS)

    Hathaway, David H.

    2009-01-01

    The 11-year sunspot cycle was discovered by an amateur astronomer in 1844. Visual and photographic observations of sunspots have been made by both amateurs and professionals over the last 400 years. These observations provide key statistical information about the sunspot cycle that allows for predictions of future activity. However, sunspots and the sunspot cycle are magnetic in nature. For the last 100 years, these magnetic measurements have been acquired and used exclusively by professional astronomers to gain new information about the nature of the solar activity cycle. Recently, magnetic dynamo models have evolved to the stage where they can assimilate past data and provide predictions. With the advent of the Internet and open data policies, amateurs now have equal access to the same data used by professionals and equal opportunities to contribute (but, alas, without pay). This talk will describe some of the more useful prediction techniques and reveal what they say about the intensity of the upcoming sunspot cycle.

  2. The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex

    PubMed Central

    Leibo, Joel Z.; Liao, Qianli; Anselmi, Fabio; Poggio, Tomaso

    2015-01-01

    Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects is only transferable to new objects that share properties with the old, then the recognition system’s optimal organization must be one containing specialized modules for different object classes. Our analysis starts from a premise we call the invariance hypothesis: that the computational goal of the ventral stream is to compute an invariant-to-transformations and discriminative signature for recognition. The key condition enabling approximate transfer of invariance without sacrificing discriminability turns out to be that the learned and novel objects transform similarly. This implies that the optimal recognition system must contain subsystems trained only with data from similarly-transforming objects and suggests a novel interpretation of domain-specific regions like the fusiform face area (FFA). Furthermore, we can define an index of transformation-compatibility, computable from videos, that can be combined with information about the statistics of natural vision to yield predictions for which object categories ought to have domain-specific regions in agreement with the available data. The result is a unifying account linking the large literature on view-based recognition with the wealth of experimental evidence concerning domain-specific regions. PMID:26496457

  3. The Effect of Radiation on Selected Photographic Film

    NASA Technical Reports Server (NTRS)

    Slater, Richard; Kinard, John; Firsov, Ivan

    2000-01-01

    We conducted this film test to evaluate several manufacturers' photographic films for their ability to acquire imagery on the International Space Station. We selected 25 motion picture, photographic slide, and negative films from three different film manufacturers. We based this selection on the fact that their films ranked highest in other similar film tests, and on their general acceptance by the international community. This test differed from previous tests because the entire evaluation process leading up to the final selection was based on information derived after the original flight film was scanned to a digital file. Previously conducted tests were evaluated entirely on the basis of 8 x 10 prints produced from the film either directly or through the internegative process. This new evaluation procedure provided accurate quantitative data on granularity and contrast from the digital data. This test did not attempt to determine which film was best visually, a judgment that is too often based on personal preference. However, the test results did group the films into good, marginal, and unacceptable categories. We developed, and included in this report, a template containing quantitative, graphical, and visual information for each film. These templates should be sufficient for comparing the different films tested and subsequently selecting a film or films to be used for experiments and general documentation on the International Space Station.

  4. Acquisition and visualization techniques for narrow spectral color imaging.

    PubMed

    Neumann, László; García, Rafael; Basa, János; Hegedüs, Ramón

    2013-06-01

    This paper introduces a new approach in narrow-band imaging (NBI). Existing NBI techniques generate images by selecting discrete bands over the full visible spectrum or an even wider spectral range. In contrast, here we perform the sampling with filters covering a tight spectral window. This image acquisition method, named narrow spectral imaging, can be particularly useful when optical information is only available within a narrow spectral window, such as in the case of deep-water transmittance, which constitutes the principal motivation of this work. In this study we demonstrate the potential of the proposed photographic technique on nonunderwater scenes recorded under controlled conditions. To this end three multilayer narrow bandpass filters were employed, which transmit at 440, 456, and 470 nm bluish wavelengths, respectively. Since the differences among the images captured in such a narrow spectral window can be extremely small, both image acquisition and visualization require a novel approach. First, high-bit-depth images were acquired with multilayer narrow-band filters either placed in front of the illumination or mounted on the camera lens. Second, a color-mapping method is proposed, using which the input data can be transformed onto the entire display color gamut with a continuous and perceptually nearly uniform mapping, while ensuring optimally high information content for human perception.

  5. PeptideDepot: Flexible Relational Database for Visual Analysis of Quantitative Proteomic Data and Integration of Existing Protein Information

    PubMed Central

    Yu, Kebing; Salomon, Arthur R.

    2010-01-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through tandem mass spectrometry (MS/MS). Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to a variety of experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our High Throughput Autonomous Proteomic Pipeline (HTAPP) used in the automated acquisition and post-acquisition analysis of proteomic data. PMID:19834895

  6. A new integrated dual time-point amyloid PET/MRI data analysis method.

    PubMed

    Cecchin, Diego; Barthel, Henryk; Poggiali, Davide; Cagnin, Annachiara; Tiepolt, Solveig; Zucchetta, Pietro; Turco, Paolo; Gallo, Paolo; Frigo, Anna Chiara; Sabri, Osama; Bui, Franco

    2017-11-01

    In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration, particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (18F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative "dual time-point" indexes, were obtained. Subjects classified visually as amyloid-positive showed sparse tracer uptake in the primary sensory, motor and visual areas, in accordance with the isocortical stage of the topographic distribution of amyloid plaques (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas known to be sites of early amyloid plaque deposition; this probably represented early accumulation (Braak stage A) typical of normal ageing. There was a strong correlation between age and the indexes of the new dual time-point amyloid imaging method in amyloid-negative patients. The method can be considered a valuable tool in both routine clinical practice and the research setting, as it will standardize data regarding amyloid deposition. It could potentially also be used to identify early amyloid plaque deposition in younger subjects in whom treatment could theoretically be more effective.
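
    As a hedged illustration of the geometric transfer matrix step mentioned above, the sketch below estimates regional spill-over coefficients by blurring region masks with an assumed Gaussian point-spread function and then solves a small linear system for the corrected regional means; the function name, the Gaussian PSF model and the FWHM parameter are assumptions, not the paper's implementation.

      # A minimal sketch of geometric-transfer-matrix (GTM) partial volume correction,
      # under stated assumptions (Gaussian PSF, boolean region masks from a parcellation):
      # region masks are blurred with the PSF, w[i, j] is the mean spill-over of region j
      # inside region i, and the corrected regional means solve the resulting linear system.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def gtm_pvc(pet, region_masks, fwhm_vox):
          """pet: 3-D array; region_masks: list of boolean 3-D arrays; fwhm_vox: PSF FWHM in voxels."""
          sigma = fwhm_vox / 2.355                      # FWHM -> Gaussian sigma
          n = len(region_masks)
          smoothed = [gaussian_filter(m.astype(float), sigma) for m in region_masks]
          W = np.zeros((n, n))
          observed = np.zeros(n)
          for i, mask_i in enumerate(region_masks):
              observed[i] = pet[mask_i].mean()          # observed (PV-affected) regional mean
              for j in range(n):
                  W[i, j] = smoothed[j][mask_i].mean()  # spill-over of region j into region i
          return np.linalg.solve(W, observed)           # PV-corrected regional means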

  7. How do they make it look so easy? The expert orienteer's cognitive advantage.

    PubMed

    Eccles, David W; Arsal, Guler

    2015-01-01

    Expertise in sport can appear so extraordinary that it is difficult to imagine how "normal" individuals may achieve it. However, in this review, we show that experts in the sport of orienteering, which requires on-foot navigation using map and compass through wild terrain, can make the difficult look easy because they have developed a cognitive advantage. Specifically, they have acquired knowledge of cognitive and behavioural strategies that allow them to circumvent natural limitations on attention. Cognitive strategies include avoiding peaks of demand on attention by distributing the processing of map information over time and reducing the need to attend to the map by simplifying the navigation required to complete a race. Behavioural strategies include reducing the visual search required of the map by physically arranging and rearranging the map display during races. It is concluded that expertise in orienteering can be partly attributed to the circumvention of natural limitations on attention achieved via the employment of acquired cognitive and behavioural strategies. Thus, superior performance in sport may not be the possession of only a privileged few; it may be available to all aspiring athletes.

  8. A software tool for automatic classification and segmentation of 2D/3D medical images

    NASA Astrophysics Data System (ADS)

    Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur

    2013-02-01

    Modern medical diagnosis utilizes techniques for visualization of human internal organs (CT, MRI) or of their metabolism (PET). However, evaluation of the acquired images by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform, e.g., attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aimed at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes, combined with various classification, visualization and segmentation tools. Examples of MaZda applications in medical studies are also provided.

  9. 77 FR 323 - Agency Information Collection (Application in Acquiring Specially Adapted Housing or Special Home...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-04

    ... (Application in Acquiring Specially Adapted Housing or Special Home Adaptation Grant) Activity Under OMB Review....'' SUPPLEMENTARY INFORMATION: Title: Application in Acquiring Specially Adapted Housing or Special Home Adaptation... assistance in acquiring specially adapted housing or the special home adaptation grant. VA will use the data...

  10. Reading direction and the central perceptual span in Urdu and English.

    PubMed

    Paterson, Kevin B; McGowan, Victoria A; White, Sarah J; Malik, Sameen; Abedipour, Lily; Jordan, Timothy R

    2014-01-01

    Normal reading relies on the reader making a series of saccadic eye movements along lines of text, separated by brief fixational pauses during which visual information is acquired from a region of text. In English and other alphabetic languages read from left to right, the region from which useful information is acquired during each fixational pause is generally reported to extend further to the right of each fixation than to the left. However, the asymmetry of the perceptual span for alphabetic languages read in the opposite direction (i.e., from right to left) has received much less attention. Accordingly, in order to more fully investigate the asymmetry in the perceptual span for these languages, the present research assessed the influence of reading direction on the perceptual span for bilingual readers of Urdu and English. Text in Urdu and English was presented either entirely as normal or in a gaze-contingent moving-window paradigm in which a region of text was displayed as normal at the reader's point of fixation and text outside this region was obscured. The windows of normal text extended symmetrically 0.5° of visual angle to the left and right of fixation, or asymmetrically by increasing the size of each window to 1.5° or 2.5° to either the left or right of fixation. When participants read English, performance for the window conditions was superior when windows extended to the right. However, when reading Urdu, performance was superior when windows extended to the left, and was essentially the reverse of that observed for English. These findings provide a novel indication that the perceptual span is modified by the language being read to produce an asymmetry in the direction of reading and show for the first time that such an asymmetry occurs for reading Urdu.

  11. Automatic acquisition of motion trajectories: tracking hockey players

    NASA Astrophysics Data System (ADS)

    Okuma, Kenji; Little, James J.; Lowe, David

    2003-12-01

    Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include games of sports, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as trajectories, querying them by analyzing the descriptive information in the data, and predicting the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on the visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More accurately, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision, such as fast and unpredictable player motions and rapid camera motions, make our challenge worth tackling. To the best of our knowledge, no automatic video annotation systems for hockey have been developed in the past. Although there are many obstacles to overcome, our efforts and accomplishments will hopefully establish the infrastructure of an automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.

  12. Three-Year-Old Photographers: Educational and Parental Mediation as a Basis for Visual Literacy via Digital Photography in Early Childhood

    ERIC Educational Resources Information Center

    Friedman, Arielle

    2016-01-01

    The study examines two years of an educational program for children aged three to four, based on the use of digital cameras. It assesses the program's effects on the children and adults involved in the project, and explores how they help the youngsters acquire visual literacy. Operating under the assumption that formal curricula usually…

  13. Anaphora Resolution and Reanalysis during L2 Sentence Processing: Evidence from the Visual World Paradigm

    ERIC Educational Resources Information Center

    Cunnings, Ian; Fotiadou, Georgia; Tsimpli, Ianthi

    2017-01-01

    In a visual world paradigm study, we manipulated gender congruence between a subject pronoun and two antecedents to investigate whether second language (L2) learners with a null subject first language (L1) acquire and process overt subject pronouns in a nonnull subject L2 in a nativelike way. We also investigated whether L2 speakers revise an…

  14. Remote sensing with simulated unmanned aircraft imagery for precision agriculture applications

    USGS Publications Warehouse

    Hunt, E. Raymond; Daughtry, Craig S.T.; Mirsky, Steven B.; Hively, W. Dean

    2014-01-01

    An important application of unmanned aircraft systems (UAS) may be remote sensing for precision agriculture, because of their ability to acquire images with very small pixel sizes from low-altitude flights. The objective of this study was to compare information obtained from two different pixel sizes, one of about a meter (the size of a small vegetation plot) and one of about a millimeter. Cereal rye (Secale cereale) was planted at the Beltsville Agricultural Research Center as a winter cover crop with fall and spring fertilizer applications, which produced differences in biomass and leaf chlorophyll content. UAS imagery was simulated by placing a Fuji IS-Pro UVIR digital camera at 3-m height looking nadir. An external UV-IR cut filter was used to acquire true-color images; an external red cut filter was used to obtain color-infrared-like images with bands at near-infrared, green, and blue wavelengths. Plot-scale Green Normalized Difference Vegetation Index (GNDVI) was correlated with dry aboveground biomass (r = 0.58), whereas the Triangular Greenness Index (TGI) was not correlated with chlorophyll content. We used the SamplePoint program to select 100 pixels systematically; we visually identified the cover type and acquired the digital numbers. The number of rye pixels in each image was better correlated with biomass (r = 0.73), and the average TGI from only leaf pixels was negatively correlated with chlorophyll content (r = -0.72). Thus, better information for crop requirements may be obtained using very small pixel sizes, but new algorithms based on computer vision are needed for analysis. It may not be necessary to geospatially register large numbers of photographs with very small pixel sizes. Instead, images could be analyzed as single plots along field transects.
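
    For reference, the two indices named above can be computed per pixel as in the hedged sketch below; GNDVI follows its standard definition, while the TGI band-centre wavelengths are one common parameterization and may not match the camera bands used in the study.

      # A hedged sketch of the two indices, computed per pixel from reflectances or digital
      # numbers. GNDVI follows its standard definition; the TGI band-centre wavelengths
      # (670, 550, 480 nm) are one common parameterization and are an assumption here.
      import numpy as np

      def gndvi(nir, green):
          """Green Normalized Difference Vegetation Index."""
          return (nir - green) / (nir + green + 1e-12)

      def tgi(red, green, blue, lr=670.0, lg=550.0, lb=480.0):
          """Triangular Greenness Index, one common band-centre parameterization."""
          return -0.5 * ((lr - lb) * (red - green) - (lr - lg) * (red - blue))

      # Usage with synthetic bands:
      nir, red, green, blue = np.random.rand(4, 128, 128) * 0.5 + 0.25
      print(gndvi(nir, green).mean(), tgi(red, green, blue).mean())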

  15. Application of advanced computing techniques to the analysis and display of space science measurements

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Lapolla, M. V.; Horblit, B.

    1995-01-01

    A prototype system has been developed to aid the experimental space scientist in the display and analysis of spaceborne data acquired from direct measurement sensors in orbit. We explored the implementation of a rule-based environment for semi-automatic generation of visualizations that assist the domain scientist in exploring their data. The goal has been to enable rapid generation of visualizations that enhance the scientist's ability to thoroughly mine the data. Transferring the task of visualization generation from the human programmer to the computer produced a rapid prototyping environment for visualizations. The visualization and analysis environment has been tested against a set of data obtained from the Hot Plasma Composition Experiment on the AMPTE/CCE satellite, creating new visualizations that provided new insight into the data.

  16. Multimodal Neuroimaging in Schizophrenia: Description and Dissemination.

    PubMed

    Aine, C J; Bockholt, H J; Bustillo, J R; Cañive, J M; Caprihan, A; Gasparovic, C; Hanlon, F M; Houck, J M; Jung, R E; Lauriello, J; Liu, J; Mayer, A R; Perrone-Bizzozero, N I; Posse, S; Stephen, J M; Turner, J A; Clark, V P; Calhoun, Vince D

    2017-10-01

    In this paper we describe an open-access collection of multimodal neuroimaging data in schizophrenia for release to the community. Data were acquired from approximately 100 patients with schizophrenia and 100 age-matched controls during rest as well as during several task activation paradigms targeting a hierarchy of cognitive constructs. Neuroimaging data include structural MRI, functional MRI, diffusion MRI, MR spectroscopic imaging, and magnetoencephalography. For three of the hypothesis-driven projects, task activation paradigms were acquired on subsets of the ~200 volunteers and examined a range of sensory and cognitive processes (e.g., auditory sensory gating, auditory/visual multisensory integration, visual transverse patterning). Neuropsychological data were also acquired, and genetic material was collected via saliva samples from most of the participants and has been typed for both genome-wide polymorphism data and genome-wide methylation data. Some results are also presented from the individual studies as well as from our data-driven multimodal analyses (e.g., multimodal examinations of network structure and network dynamics and multitask fMRI data analysis across projects). All data will be released through the Mind Research Network's Collaborative Informatics and Neuroimaging Suite (COINS).

  17. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739

  19. Virtual guidance as a tool to obtain diagnostic ultrasound for spaceflight and remote environments.

    PubMed

    Martin, David S; Caine, Timothy L; Matz, Timothy; Lee, Stuart M C; Stenger, Michael B; Sargsyan, Ashot E; Platts, Steven H

    2012-10-01

    With missions planned to travel greater distances from Earth at ranges that make real-time two-way communication impractical, astronauts will be required to perform autonomous medical diagnostic procedures during future exploration missions. Virtual guidance is a form of just-in-time training developed to allow novice ultrasound operators to acquire diagnostically adequate images of clinically relevant anatomical structures using a prerecorded audio/visual tutorial viewed in real time. Individuals without previous experience in ultrasound were recruited to perform carotid artery (N = 10) and ophthalmic (N = 9) ultrasound examinations using virtual guidance as their only training tool. In the carotid group, each untrained operator acquired two-dimensional, pulsed and color Doppler images of the carotid artery. In the ophthalmic group, operators acquired representative images of the anterior chamber of the eye, retina, optic nerve, and nerve sheath. Ultrasound image quality was evaluated by independent imaging experts. Of the studies, 8 of the 10 carotid examinations and 17 of the 18 ophthalmic images (2 images collected per study) were judged to be diagnostically adequate. The quality of all but one of the ophthalmic images ranged from adequate to excellent. Diagnostically adequate carotid and ophthalmic ultrasound examinations can be obtained by previously untrained operators with assistance from only an audio/video tutorial viewed in real time while scanning. This form of just-in-time training, which can be applied to other examinations, represents an opportunity to acquire important information for NASA flight surgeons and researchers when trained medical personnel are not available or when remote guidance is impractical.

  20. Silk wrapping of nuptial gifts as visual signal for female attraction in a crepuscular spider

    NASA Astrophysics Data System (ADS)

    Trillo, Mariana C.; Melo-González, Valentina; Albo, Maria J.

    2014-02-01

    An extensive diversity of nuptial gifts is known in invertebrates, but prey wrapped in silk is a unique type of gift present in only a few insects and spiders. Females of gift-giving spider species prefer males offering a gift, accepting more and longer matings than when males offer no gift. Silk wrapping of the gift is not essential to obtain a mating, but it appears to increase the chance of mating, evidencing a particularly intriguing function of this trait. Consequently, like other secondary sexual traits, silk wrapping may be an important trait under sexual selection if it is used by females as a signal providing information on male quality. We aimed to understand whether the white color of wrapped gifts is used as a visual signal during courtship in the spider Paratrechalea ornata. We studied whether a patch of white paint on the males' chelicerae is attractive to females by exposing females to males with their chelicerae painted white, without paint, or with the sternum painted white (paint control). Females contacted males with white chelicerae more often, and those males obtained higher mating success than other males. Thereafter, we explored whether silk wrapping is a condition-dependent trait and drives female visual attraction. We exposed good- and poor-condition males, carrying a prey, to female silk. Males in poor condition added less silk to the prey than males in good condition, indicating that gift wrapping is an indicator of male quality and may be used by females to acquire information about a potential mate.

  1. Alpha shape theory for 3D visualization and volumetric measurement of brain tumor progression using magnetic resonance images.

    PubMed

    Hamoud Al-Tamimi, Mohammed Sabbih; Sulong, Ghazali; Shuaib, Ibrahim Lutfi

    2015-07-01

    Resection of brain tumors is a delicate task in surgery due to its direct influence on the patients' survival rate. Determining the extent of tumor resection requires accurate estimation and comparison of tumor volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI). The active contour segmentation technique is used to segment brain tumors on pre-operative MR images using self-developed software. Tumor volume is acquired from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of tumor volume. The accuracy of the method is validated by comparing the volume estimated with the proposed method against the gold standard. Segmentation by the active contour technique is found to be capable of detecting brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of the tumor tissue and its surroundings. The results demonstrate that alpha shape theory is superior to other existing standard methods for precise volumetric measurement of tumors.
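
    As an illustration of how a volume can be obtained from contour points via alpha shapes, the minimal sketch below (not the authors' implementation) sums the volumes of Delaunay tetrahedra whose circumradius does not exceed a chosen alpha; the point cloud and the alpha value are assumed inputs.

      # A simplified alpha-complex volume estimate: keep Delaunay tetrahedra with
      # circumradius <= alpha and sum their volumes. Points would come from the
      # segmented tumor contours; here a synthetic cloud stands in for them.
      import numpy as np
      from scipy.spatial import Delaunay

      def tetra_circumradius(a, b, c, d):
          A = 2.0 * np.array([b - a, c - a, d - a])
          rhs = np.array([b @ b - a @ a, c @ c - a @ a, d @ d - a @ a])
          return np.linalg.norm(np.linalg.solve(A, rhs) - a)

      def tetra_volume(a, b, c, d):
          return abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0

      def alpha_shape_volume(points, alpha):
          volume = 0.0
          for simplex in Delaunay(points).simplices:
              a, b, c, d = points[simplex]
              try:
                  if tetra_circumradius(a, b, c, d) <= alpha:
                      volume += tetra_volume(a, b, c, d)
              except np.linalg.LinAlgError:
                  continue                    # degenerate (flat) tetrahedron, skip
          return volume

      # Usage with a synthetic stand-in for points sampled from segmented contours:
      points = np.random.default_rng(0).uniform(-10, 10, size=(500, 3))
      print("estimated volume:", alpha_shape_volume(points, alpha=4.0))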

  2. Increased phase synchronization during continuous face integration measured simultaneously with EEG and fMRI.

    PubMed

    Kottlow, Mara; Jann, Kay; Dierks, Thomas; Koenig, Thomas

    2012-08-01

    Gamma zero-lag phase synchronization has been measured in the animal brain during visual binding. Human scalp EEG studies have used a phase-locking factor (trial-to-trial phase-shift consistency) or gamma amplitude to measure binding, but have not so far analyzed common-phase signals. This study introduces a method to identify networks oscillating with near zero-lag phase synchronization in human subjects. We presented unpredictably moving face parts (NOFACE) which - during some periods - produced a complete schematic face (FACE). The amount of zero-lag phase synchronization was measured using global field synchronization (GFS). GFS provides global information on the amount of instantaneous coincidences in specific frequencies throughout the brain. Gamma GFS was increased during the FACE condition. To localize the underlying areas, we correlated gamma GFS with simultaneously recorded BOLD responses. Positive correlates comprised the bilateral middle fusiform gyrus and the left precuneus. These areas may form a network transiently synchronized during face integration, including face-specific as well as binding-specific regions and regions for visual processing in general. Thus, the amount of zero-lag phase synchronization between remote regions of the human visual system can be measured with simultaneously acquired EEG/fMRI.
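
    The GFS measure can be sketched as follows, under the assumption of one published formulation in which, at each frequency, the FFT values of all channels are treated as points in the complex plane and GFS is derived from the eigenvalues of their 2 x 2 real/imaginary covariance; the normalization and the array layout below are assumptions, not necessarily the exact computation used in the study.

      # A hedged sketch of global field synchronization (GFS). The (l1 - l2)/(l1 + l2)
      # normalization and the (n_channels, n_samples) layout of `eeg` are assumptions.
      import numpy as np

      def global_field_synchronization(eeg, fs):
          spectra = np.fft.rfft(eeg, axis=1)                  # (n_channels, n_freqs)
          freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
          gfs = np.zeros(len(freqs))
          for k in range(len(freqs)):
              pts = np.stack([spectra[:, k].real, spectra[:, k].imag])   # (2, n_channels)
              l2, l1 = np.sort(np.linalg.eigvalsh(np.cov(pts)))
              gfs[k] = (l1 - l2) / (l1 + l2 + 1e-12)          # ~1 = common phase, ~0 = none
          return freqs, gfs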

  3. In-line monitoring of pellet coating thickness growth by means of visual imaging.

    PubMed

    Oman Kadunc, Nika; Sibanc, Rok; Dreu, Rok; Likar, Boštjan; Tomaževič, Dejan

    2014-08-15

    Coating thickness is the most important attribute of coated pharmaceutical pellets, as it directly affects the release profile and stability of the drug. Quality control of the coating process of pharmaceutical pellets is thus of utmost importance for assuring the desired end-product characteristics. A visual imaging technique is presented and examined as a process analytical technology (PAT) tool for noninvasive, continuous, in-line and real-time monitoring of the coating thickness of pharmaceutical pellets during the coating process. Images of pellets were acquired during the coating process through an observation window of a Wurster coating apparatus. Image analysis methods were developed for fast and accurate determination of the pellets' coating thickness during a coating process. The accuracy of the results for pellet coating thickness growth obtained in real time was evaluated through comparison with an off-line reference method, and good agreement was found. Information about the inter-pellet coating uniformity was gained from further statistical analysis of the measured pellet size distributions. Accuracy and performance analysis of the proposed method showed that visual imaging is feasible as a PAT tool for in-line and real-time monitoring of the coating process of pharmaceutical pellets.

  4. Stimulus-related activity during conditional associations in monkey perirhinal cortex neurons depends on upcoming reward outcome.

    PubMed

    Ohyama, Kaoru; Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Shidara, Munetaka; Sato, Chikara

    2012-11-28

    Acquiring the significance of events based on reward-related information is critical for animals to survive and to conduct social activities. The importance of the perirhinal cortex for reward-related information processing has been suggested. To examine whether or not neurons in this cortex represent reward information flexibly when a visual stimulus indicates either a rewarded or an unrewarded outcome, neuronal activity in the macaque perirhinal cortex was examined using a conditional-association cued-reward task. The task design allowed us to study how the neuronal responses depended on the animal's prediction of whether it would or would not be rewarded. Two visual stimuli, a color stimulus as Cue1 followed by a pattern stimulus as Cue2, were presented sequentially. Each pattern stimulus was conditionally associated with both rewarded and unrewarded outcomes, depending on the preceding color stimulus. We found activity that depended on the two reward conditions during presentation of Cue2, the pattern stimulus. This response appeared after the response that depended on the image identity of Cue2. A response delineating a specific cue sequence also appeared between the responses dependent on the identity of Cue2 and on the reward conditions. Thus, when Cue1 sets the context for whether or not Cue2 indicates a reward, this region represents the meaning of Cue2, i.e., the reward condition, independent of the identity of Cue2. These results suggest that neurons in the perirhinal cortex do more than associate a single stimulus with a reward to achieve flexible representations of reward information.

  5. Automated determination of wakefulness and sleep in rats based on non-invasively acquired measures of movement and respiratory activity

    PubMed Central

    Zeng, Tao; Mott, Christopher; Mollicone, Daniel; Sanford, Larry D.

    2012-01-01

    The current standard for monitoring sleep in rats requires labor-intensive surgical procedures and the implantation of chronic electrodes, which have the potential to affect behavior and sleep. With the goal of developing a non-invasive method to determine sleep and wakefulness, we constructed a non-contact monitoring system to measure movement and respiratory activity using signals acquired with pulse Doppler radar and from digitized video analysis. A set of 23 frequency- and time-domain features was derived from these signals and calculated in 10 s epochs. Based on these features, a classification method for automated scoring of wakefulness, non-rapid eye movement sleep (NREM) and REM in rats was developed using a support vector machine (SVM). We then assessed the utility of the automated scoring system in discriminating wakefulness and sleep by comparing the results to standard scoring of wakefulness and sleep based on concurrently recorded EEG and EMG. Agreement between SVM automated scoring based on selected features and visual scores based on EEG and EMG was approximately 91% for wakefulness, 84% for NREM and 70% for REM. The results indicate that automated scoring based on non-invasively acquired movement and respiratory activity will be useful for studies requiring discrimination of wakefulness and sleep. However, additional information or signals will be needed to improve discrimination of NREM and REM episodes within sleep. PMID:22178621
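
    A minimal sketch of such an SVM-based epoch classifier is given below, assuming a feature matrix with 23 columns per 10 s epoch and class labels taken from EEG/EMG scoring; the synthetic data and SVM settings are illustrative only, not the authors' choices.

      # A minimal sketch: an RBF-kernel SVM on standardized epoch features, evaluated
      # with cross-validation. X and y are synthetic stand-ins for the 23 features per
      # epoch and the EEG/EMG-derived labels (0 = wake, 1 = NREM, 2 = REM).
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(600, 23))       # stand-in for the 23 epoch features
      y = rng.integers(0, 3, size=600)     # stand-in for EEG/EMG-derived labels

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      print("cross-validated agreement:", cross_val_score(clf, X, y, cv=5).mean())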

  6. Imaging retinal degeneration in mice by combining Fourier domain optical coherence tomography and fluorescent scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Hossein-Javaheri, Nima; Molday, Laurie L.; Xu, Jing; Molday, Robert S.; Sarunic, Marinko V.

    2009-02-01

    Visualization of the internal structures of the retina is critical for clinical diagnosis and monitoring of pathology as well as for medical research investigating the root causes of retinal degeneration. Optical Coherence Tomography (OCT) is emerging as the preferred technique for non-contact, sub-surface, depth-resolved imaging of the retina. The high resolution cross-sectional images acquired in vivo by OCT can be compared to histology to visually delineate the retinal layers. The significant sensitivity increase obtained through the use of Fourier domain (FD) detection with OCT has recently been exploited to facilitate high speed scanning for volumetric reconstruction of the retina in software. The images acquired by OCT are purely structural, relying on refractive index differences in the tissue for contrast, and do not provide information on the molecular content of the sample. We have constructed an FDOCT prototype and combined it with a fluorescent Scanning Laser Ophthalmoscope (fSLO) to permit real-time alignment of the field of view on the retina. The alignment of the FDOCT system to the specimen is crucial for the registration of measurements taken throughout longitudinal studies. In addition, fluorescence detection has been integrated with the SLO to enable the en face localization of a molecular contrast signal, which is important for retinal angiography and for detection of autofluorescence associated with some forms of retinal degeneration; for example, autofluorescent lipofuscin accumulations are associated with Stargardt's macular dystrophy. The integrated FD OCT/fSLO system was investigated for imaging the retina of mice in vivo.

  7. Influence of image registration on ADC images computed from free-breathing diffusion MRIs of the abdomen

    NASA Astrophysics Data System (ADS)

    Guyader, Jean-Marie; Bernardin, Livia; Douglas, Naomi H. M.; Poot, Dirk H. J.; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    The apparent diffusion coefficient (ADC) is an imaging biomarker providing quantitative information on the diffusion of water in biological tissues. This measurement could be of relevance in oncology drug development, but it suffers from a lack of reliability. ADC images are computed by applying a voxelwise exponential fitting to multiple diffusion-weighted MR images (DW-MRIs) acquired with different diffusion gradients. In the abdomen, respiratory motion induces misalignments in the datasets, creating visible artefacts and inducing errors in the ADC maps. We propose a multistep post-acquisition motion compensation pipeline based on 3D non-rigid registrations. It corrects for motion within each image and brings all DW-MRIs to a common image space. The method is evaluated on 10 datasets of free-breathing abdominal DW-MRIs acquired from healthy volunteers. Regions of interest (ROIs) are segmented in the right part of the abdomen and measurements are compared in the three following cases: no image processing, Gaussian blurring of the raw DW-MRIs and registration. Results show that both blurring and registration improve the visual quality of ADC images, but compared to blurring, registration yields visually sharper images. Measurement uncertainty is reduced both by registration and blurring. For homogeneous ROIs, blurring and registration result in similar median ADCs, which are lower than without processing. In a ROI at the interface between liver and kidney, registration and blurring yield different median ADCs, suggesting that uncorrected motion introduces a bias. Our work indicates that averaging procedures on the scanner should be avoided, as they remove the opportunity to perform motion correction.
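
    The voxelwise exponential fit behind the ADC maps can be sketched as a log-linear least-squares fit of S(b) = S0 * exp(-b * ADC); the array layout assumed below (a registered 4-D diffusion-weighted stack with a matching b-value list) is an illustration, not the study's pipeline.

      # A minimal sketch of the voxelwise mono-exponential ADC fit, done as a log-linear
      # least-squares fit across b-values for every voxel at once.
      import numpy as np

      def fit_adc(dwis, bvals):
          """dwis: (x, y, z, n_b) registered DW-MRIs; bvals: matching b-values in s/mm^2."""
          nvox = np.prod(dwis.shape[:3])
          logS = np.log(np.clip(dwis.reshape(nvox, -1), 1e-6, None))        # (nvox, n_b)
          A = np.stack([np.ones(len(bvals)), -np.asarray(bvals, float)], axis=1)
          coef, *_ = np.linalg.lstsq(A, logS.T, rcond=None)                 # rows: ln(S0), ADC
          return coef[1].reshape(dwis.shape[:3])                            # ADC in mm^2/s

      # Check on synthetic data with a known ADC of 1.2e-3 mm^2/s:
      bvals = np.array([0, 200, 500, 800])
      dwis = 1000.0 * np.exp(-1.2e-3 * bvals) * np.ones((4, 4, 4, 1))
      print(fit_adc(dwis, bvals).mean())    # ~1.2e-3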

  8. Simulation of the hyperspectral data from multispectral data using Python programming language

    NASA Astrophysics Data System (ADS)

    Tiwari, Varun; Kumar, Vinay; Pandey, Kamal; Ranade, Rigved; Agarwal, Shefali

    2016-04-01

    Multispectral remote sensing (MRS) sensors have proved their potential in acquiring and retrieving information on Land Use Land Cover (LULC) features over the past few decades. These MRS sensors generally acquire data in a limited number of broad spectral bands, typically 3 to 10. The limited number of bands and the broad spectral bandwidth of MRS sensors become a limitation in detailed LULC studies, as they cannot distinguish spectrally similar LULC features. In contrast, the detailed information available in hyperspectral (HRS) data is spectrally overdetermined and able to distinguish spectrally similar materials on the earth's surface. At present, however, the availability of HRS sensors is limited, because the requirement for sensitive detectors and large storage capacity makes acquisition and processing cumbersome and expensive. There is therefore a need to utilize available MRS data for detailed LULC studies. The spectral reconstruction approach is one technique for simulating hyperspectral data from available multispectral data. In the present study, the spectral reconstruction approach is used to simulate hyperspectral data from EO-1 ALI multispectral data. The technique is implemented in the Python programming language, which is open source and supports advanced image processing libraries and utilities. In all, 70 bands were simulated and validated using visual interpretation, statistical measures and a classification approach.
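
    One simplified way to realize such a simulation is per-pixel spectral interpolation, sketched below; the spectral reconstruction approach actually used in the study may differ, and the EO-1 ALI band centres and the 70 target band positions assumed here are approximate.

      # A simplified sketch of band simulation by per-pixel spectral interpolation:
      # multispectral values at their band centres are interpolated with a monotone
      # cubic (PCHIP) curve and resampled at 70 narrow, regularly spaced band centres.
      import numpy as np
      from scipy.interpolate import PchipInterpolator

      ali_centres = np.array([443, 482, 565, 660, 790, 867, 1250, 1650, 2215])  # nm, approx.
      target_centres = np.linspace(430, 2300, 70)                               # 70 simulated bands

      def simulate_hyperspectral(ms_cube, src=ali_centres, dst=target_centres):
          """ms_cube: (rows, cols, n_ms_bands) reflectance -> (rows, cols, n_sim_bands)."""
          rows, cols, _ = ms_cube.shape
          flat = ms_cube.reshape(-1, len(src))
          interp = PchipInterpolator(src, flat, axis=1, extrapolate=True)
          return interp(dst).reshape(rows, cols, len(dst))

      # Usage:
      hs = simulate_hyperspectral(np.random.rand(64, 64, 9))
      print(hs.shape)    # (64, 64, 70)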

  9. Landsat 8 Multispectral and Pansharpened Imagery Processing on the Study of Civil Engineering Issues

    NASA Astrophysics Data System (ADS)

    Lazaridou, M. A.; Karagianni, A. Ch.

    2016-06-01

    Scientific and professional interests of civil engineering mainly include structures, hydraulics, geotechnical engineering, the environment, and transportation issues. Topics in this context may concern urban environment issues, urban planning, hydrological modelling, the study of hazards, and road construction. Land cover information contributes significantly to the study of the above subjects. Land cover information can be acquired effectively by visual image interpretation of satellite imagery, or after applying enhancement routines, and also by imagery classification. The Landsat Data Continuity Mission (LDCM - Landsat 8) is the latest satellite in the Landsat series, launched in February 2013. Landsat 8 medium-spatial-resolution multispectral imagery is of particular interest for extracting land cover because of its fine spectral resolution, 12-bit radiometric quantization, the capability of merging the 15-meter panchromatic band with the 30-meter multispectral imagery, and the free data policy. In this paper, Landsat 8 multispectral and panchromatic imagery of the surroundings of a lake in north-western Greece is used. Land cover information is extracted using suitable digital image processing software. The rich spectral content of the multispectral image is combined with the high spatial resolution of the panchromatic image by applying image fusion (pansharpening), thereby facilitating visual image interpretation to delineate land cover. Further processing concerns supervised image classification. Classification of the pansharpened image preceded classification of the multispectral image. Corresponding comparative considerations are also presented.
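
    Pansharpening itself can be illustrated with a simple Brovey-style ratio transform, as in the hedged sketch below; the fusion tool in the image-processing software used in the study may implement a different algorithm, and the three-band input is only an example.

      # A hedged sketch of one simple pansharpening scheme (Brovey-style ratio transform):
      # each multispectral band is scaled by the ratio of the panchromatic band to the
      # mean multispectral intensity on the same (fine) grid.
      import numpy as np

      def brovey_pansharpen(ms_bands, pan):
          """ms_bands: (n_bands, rows, cols) resampled to the pan grid; pan: (rows, cols)."""
          intensity = ms_bands.mean(axis=0) + 1e-12
          return ms_bands * (pan / intensity)[None, :, :]

      # Usage with synthetic data (30 m bands upsampled to the 15 m grid):
      sharp = brovey_pansharpen(np.random.rand(3, 256, 256), np.random.rand(256, 256))
      print(sharp.shape)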

  10. Dynamic Reconfiguration of the Supplementary Motor Area Network during Imagined Music Performance

    PubMed Central

    Tanaka, Shoji; Kirino, Eiji

    2017-01-01

    The supplementary motor area (SMA) has been shown to be the center for motor planning and is active during music listening and performance. However, limited data exist on the role of the SMA in music. Music performance requires complex information processing in auditory, visual, spatial, emotional, and motor domains, and this information is integrated for the performance. We hypothesized that the SMA is engaged in multimodal integration of information, distributed across several regions of the brain to prepare for ongoing music performance. To test this hypothesis, functional networks involving the SMA were extracted from functional magnetic resonance imaging (fMRI) data that were acquired from musicians during imagined music performance and during the resting state. Compared with the resting condition, imagined music performance increased connectivity of the SMA with widespread regions in the brain including the sensorimotor cortices, parietal cortex, posterior temporal cortex, occipital cortex, and inferior and dorsolateral prefrontal cortex. Increased connectivity of the SMA with the dorsolateral prefrontal cortex suggests that the SMA is under cognitive control, while increased connectivity with the inferior prefrontal cortex suggests the involvement of syntax processing. Increased connectivity with the parietal cortex, posterior temporal cortex, and occipital cortex is likely for the integration of spatial, emotional, and visual information. Finally, increased connectivity with the sensorimotor cortices was potentially involved with the translation of thought planning into motor programs. Therefore, the reconfiguration of the SMA network observed in this study is considered to reflect the multimodal integration required for imagined and actual music performance. We propose that the SMA network construct “the internal representation of music performance” by integrating multimodal information required for the performance. PMID:29311870

  11. Community-Acquired Pneumonia Visualized on CT Scans but Not Chest Radiographs: Pathogens, Severity, and Clinical Outcomes.

    PubMed

    Upchurch, Cameron P; Grijalva, Carlos G; Wunderink, Richard G; Williams, Derek J; Waterer, Grant W; Anderson, Evan J; Zhu, Yuwei; Hart, Eric M; Carroll, Frank; Bramley, Anna M; Jain, Seema; Edwards, Kathryn M; Self, Wesley H

    2018-03-01

    The clinical significance of pneumonia visualized on CT scan in the setting of a normal chest radiograph is uncertain. In a multicenter prospective surveillance study of adults hospitalized with community-acquired pneumonia (CAP), we compared the presenting clinical features, pathogens present, and outcomes of patients with pneumonia visualized on a CT scan but not on a concurrent chest radiograph (CT-only pneumonia) and those with pneumonia visualized on a chest radiograph. All patients underwent chest radiography; the decision to obtain CT imaging was determined by the treating clinicians. Chest radiographs and CT images were interpreted by study-dedicated thoracic radiologists blinded to the clinical data. The study population included 2,251 adults with CAP; 2,185 patients (97%) had pneumonia visualized on chest radiography, whereas 66 patients (3%) had pneumonia visualized on CT scan but not on concurrent chest radiography. Overall, these patients with CT-only pneumonia had a clinical profile similar to those with pneumonia visualized on chest radiography, including comorbidities, vital signs, hospital length of stay, prevalence of viral (30% vs 26%) and bacterial (12% vs 14%) pathogens, ICU admission (23% vs 21%), use of mechanical ventilation (6% vs 5%), septic shock (5% vs 4%), and in-hospital mortality (0% vs 2%). Adults hospitalized with CAP who had radiological evidence of pneumonia on CT scan but not on concurrent chest radiograph had pathogens, disease severity, and outcomes similar to patients who had signs of pneumonia on chest radiography. These findings support using the same management principles for patients with CT-only pneumonia and those with pneumonia seen on chest radiography.

  12. Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging.

    PubMed

    Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam

    2017-10-01

    A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.

  13. Real-Time View Correction for Mobile Devices.

    PubMed

    Schops, Thomas; Oswald, Martin R; Speciale, Pablo; Yang, Shuoran; Pollefeys, Marc

    2017-11-01

    We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.

  14. Extracting three-dimensional orientation and tractography of myofibers using optical coherence tomography

    PubMed Central

    Gan, Yu; Fleming, Christine P.

    2013-01-01

    Abnormal changes in the orientation of myofibers are associated with various cardiac diseases such as arrhythmia, irregular contraction, and cardiomyopathy. To extract fiber information, we present a method for quantifying fiber orientation and reconstructing three-dimensional tractography of myofibers using optical coherence tomography (OCT). A gradient-based algorithm was developed to quantify fiber orientation in three dimensions, and a particle filtering technique was employed to track myofibers. Prior to image processing, three-dimensional image data sets were acquired from all cardiac chambers and the ventricular septum of swine hearts using an OCT system without optical clearing. The algorithm was validated through a rotation test and comparison with manual measurements. The experimental results demonstrate that we are able to visualize three-dimensional fiber tractography in myocardial tissue. PMID:24156071
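
    A gradient-based orientation estimate of this kind can be sketched with a 3-D structure tensor, as below; this is a generic sketch under assumed smoothing scales and volume layout, not necessarily the authors' exact algorithm.

      # A generic sketch of gradient-based orientation estimation via the 3-D structure
      # tensor: the local fiber direction is the eigenvector with the smallest eigenvalue,
      # i.e. the direction of least intensity variation.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def fiber_orientation(volume, sigma_grad=1.0, sigma_tensor=3.0):
          grads = np.gradient(gaussian_filter(volume, sigma_grad))     # [gz, gy, gx]
          J = np.empty(volume.shape + (3, 3))                          # structure tensor per voxel
          for i in range(3):
              for j in range(3):
                  J[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma_tensor)
          evals, evecs = np.linalg.eigh(J)                             # ascending eigenvalues
          return evecs[..., :, 0]                                      # unit fiber direction

      # Usage:
      print(fiber_orientation(np.random.rand(32, 64, 64)).shape)       # (32, 64, 64, 3)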

  15. Interrelations in the Development of Primary School Learners' Creative Imagination and Creative Activity When Depicting a Portrait in Visual Art Lessons

    ERIC Educational Resources Information Center

    Šlahova, Aleksandra; Volonte, Ilze; Cacka, Maris

    2017-01-01

    Creative imagination is a psychic process of creating a new original image, idea or art work based on the acquired knowledge, skills, and abilities as well as on the experience of creative activity. The best of all primary school learners' creative imagination develops at the lessons of visual art, aimed at teaching them to understand what is…

  16. Acquired color vision and visual field defects in patients with ocular hypertension and early glaucoma.

    PubMed

    Papaconstantinou, Dimitris; Georgalas, Ilias; Kalantzis, George; Karmiris, Efthimios; Koutsandrea, Chrysanthi; Diagourtas, Andreas; Ladas, Ioannis; Georgopoulos, Gerasimos

    2009-01-01

    To study acquired color vision and visual field defects in patients with ocular hypertension (OH) and early glaucoma. In a prospective study, we evaluated 99 eyes of 56 patients with OH without visual field defects and no hereditary color deficiencies, followed up for 4 to 6 years (mean = 4.7 +/- 0.6 years). Color vision defects were studied using a special computer program for the Farnsworth-Munsell 100-hue test, and visual field tests were performed with the Humphrey analyzer using program 30-2. Both tests were repeated every six months. In fifty-six eyes, glaucomatous defects were observed during the follow-up period. There was a statistically significant difference in total error score (TES) between eyes that eventually developed glaucoma (157.89 +/- 31.79) and OH eyes (75.51 +/- 31.57) at the first examination (t value 12.816, p < 0.001). At the same time, visual field indices were within normal limits in both groups. In the glaucomatous eyes, the earliest statistically significant change in TES was identified at the first year of follow-up and was -20.62 +/- 2.75 (t value 9.08, p < 0.001), while in OH eyes it was -2.11 +/- 4.36 (t value 1.1, p = 0.276). Pearson's coefficient was high in all examinations and showed a direct correlation between TES and both mean deviation and corrected pattern standard deviation in both groups. Quantitative analysis of color vision defects provides the possibility of follow-up and can prove a useful means for detecting early glaucomatous changes in patients with normal visual fields.

  17. Torsional ARC Effectively Expands the Visual Field in Hemianopia

    PubMed Central

    Satgunam, PremNandhini; Peli, Eli

    2012-01-01

    Purpose Exotropia in congenital homonymous hemianopia has been reported to provide field expansion that is more useful when accompanied by harmonious anomalous retinal correspondence (HARC). Torsional strabismus with HARC provides a similar functional advantage. In a subject with hemianopia demonstrating a field expansion consistent with torsion, we documented torsional strabismus and torsional HARC. Methods Monocular visual fields under binocular fixation conditions were plotted using a custom dichoptic visual field perimeter (DVF). The DVF was also modified to measure perceived visual directions under dissociated and associated conditions across the central 50° diameter field. The field expansion and retinal correspondence of a subject with torsional strabismus (along with exotropia and right hypertropia) and congenital homonymous hemianopia were compared to those of another exotropic subject with acquired homonymous hemianopia without torsion and to a control subject with minimal phoria. Torsional rotations of the eyes were calculated from fundus photographs and perimetry. Results Torsional ARC documented in the subject with congenital homonymous hemianopia provided a functional binocular field expansion of up to 18°. Normal retinal correspondence was mapped for the full 50° visual field in the control subject and for the seeing field of the subject with acquired homonymous hemianopia, limiting the functional field expansion benefit. Conclusions Torsional strabismus with ARC, when occurring with homonymous hemianopia, provides useful field expansion in the lower and upper fields. Dichoptic perimetry permits documentation of ocular alignment (lateral, vertical and torsional) and perceived visual direction under binocular and monocular viewing conditions. Evaluating patients with congenital or early strabismus for HARC is useful when considering surgical correction, particularly in the presence of congenital homonymous hemianopia. PMID:22885782

  18. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.

  19. Relationship between Internet use and general belief in a just world among Chinese retirees.

    PubMed

    Zhang, Zhen; Zhang, Jianxin; Zhu, Tingshao

    2013-07-01

    As an emerging medium for acquiring information, the Internet might affect how users, including older adults, view or think about the world around them. Using data from a survey of retirees aged 50 years and above (N=12,309) in China, the present study examined the relationship between Internet use for acquiring information about the world and general belief in a just world (GBJW). The results indicated that Internet use primarily for obtaining news information was negatively related to GBJW. Specifically, Internet users had lower levels of GBJW than nonusers; the more time retirees spent visiting Web sites to acquire news information, the less likely they were to believe that the world is just. In addition, compared with retirees who had acquired information about the world through other means (including books, newspapers or magazines, radio and television, and direct communication with other people), those who had acquired information primarily using the Internet showed lower levels of GBJW. The significance and limitations of the current study are discussed.

  20. Making memories: the development of long-term visual knowledge in children with visual agnosia.

    PubMed

    Metitieri, Tiziana; Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo

    2013-01-01

    There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment.

  1. Making Memories: The Development of Long-Term Visual Knowledge in Children with Visual Agnosia

    PubMed Central

    Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo

    2013-01-01

    There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment. PMID:24319599

  2. Cognitive task analysis of network analysts and managers for network situational awareness

    NASA Astrophysics Data System (ADS)

    Erbacher, Robert F.; Frincke, Deborah A.; Wong, Pak Chung; Moody, Sarah; Fink, Glenn

    2010-01-01

    The goal of our project is to create a set of next-generation cyber situational-awareness capabilities with applications to other domains in the long term. The situational-awareness capabilities being developed focus on novel visualization techniques as well as data analysis techniques designed to improve the comprehensibility of the visualizations. The objective is to improve the decision-making process to enable decision makers to choose better actions. To this end, we put extensive effort into ensuring we had feedback from network analysts and managers and understanding what their needs truly are. This paper discusses the cognitive task analysis methodology we followed to acquire feedback from the analysts. This paper also provides the details we acquired from the analysts on their processes, goals, concerns, etc. A final result we describe is the generation of a task-flow diagram.

  3. [Mucolipidosis type IV in a patient with Mapuche ancestry].

    PubMed

    Hernández Ch, Marta; Méndez C, José Ignacio; Concha G, María José; Huete L, Isidro; González B, Sergio; Durán S, Gloria P

    2008-07-01

    We report a 7-year-old girl with Mapuche ancestry, diagnosed with cerebral palsy since infancy and on active rehabilitation. She acquired motor and cognitive skills at 3 years of age. At 5 years of age, a slow neurological deterioration started, associated with visual impairment. Optic atrophy was added to the typical neurological findings of ataxic cerebral palsy and the diagnosis was re-considered. Neuroimaging showed slow and progressive atrophy of intracerebral structures, and ultramicroscopy revealed intracytoplasmic inclusions in conjunctiva and skin, compatible with mucolipidosis type IV (ML-IV). ML-IV must be included in the differential diagnosis of cerebral palsy associated with loss of acquired skills and progressive visual impairment. Electron microscopy of skin or conjunctiva is a useful diagnostic test. Suspicion of ML-IV must not be restricted to the Ashkenazi Jewish population.

  4. Retinotopic mapping with Spin Echo BOLD at 7 Tesla

    PubMed Central

    Olman, Cheryl A.; Van de Moortele, Pierre-Francois; Schumacher, Jennifer F.; Guy, Joe; Uğurbil, Kâmil; Yacoub, Essa

    2010-01-01

    For blood oxygenation level-dependent (BOLD) functional MRI experiments, contrast-to-noise ratio (CNR) increases with increasing field strength for both gradient echo (GE) and spin echo (SE) BOLD techniques. However, susceptibility artifacts and non-uniform coil sensitivity profiles complicate large field-of-view fMRI experiments (e.g., experiments covering multiple visual areas instead of focusing on a single cortical region). Here, we use SE BOLD to acquire retinotopic mapping data in early visual areas, testing the feasibility of SE BOLD experiments spanning multiple cortical areas at 7 Tesla. We also use a recently developed method for normalizing signal intensity in T1-weighted anatomical images to enable automated segmentation of the cortical gray matter for scans acquired at 7T with either surface or volume coils. We find that the CNR of the 7T GE data (average single-voxel, single-scan stimulus coherence: 0.41) is almost twice that of the 3T GE BOLD data (average coherence: 0.25), with the CNR of the SE BOLD data (average coherence: 0.23) comparable to that of the 3T GE data. Repeated measurements in individual subjects find that maps acquired with 1.8 mm resolution at 3T and 7T with GE BOLD and at 7T with SE BOLD show no systematic differences in either the area or the boundary locations for V1, V2 and V3, demonstrating the feasibility of high-resolution SE BOLD experiments with good sensitivity throughout multiple visual areas. PMID:20656431
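
    The per-voxel "stimulus coherence" figures quoted above can be illustrated with a small computation: the Fourier amplitude at the stimulus frequency divided by the root of the summed squared amplitudes across frequencies. The sketch below uses a synthetic time series and assumed run parameters, not the paper's data or code.

```python
# Illustrative computation (not from the paper) of single-voxel stimulus
# coherence as used in phase-encoded retinotopy. Values are synthetic.
import numpy as np

def stimulus_coherence(timeseries, stim_cycles):
    """Coherence of a voxel time series at the stimulus frequency."""
    spectrum = np.fft.rfft(timeseries - timeseries.mean())
    amplitudes = np.abs(spectrum)
    return amplitudes[stim_cycles] / np.sqrt(np.sum(amplitudes ** 2))

n_vols, cycles = 128, 8                       # e.g. 8 stimulus cycles per run
t = np.arange(n_vols)
signal = np.sin(2 * np.pi * cycles * t / n_vols)
noise = np.random.default_rng(1).normal(0, 1.5, n_vols)
print(f"coherence ~ {stimulus_coherence(signal + noise, cycles):.2f}")
```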

  5. Digital management and regulatory submission of medical images from clinical trials: role and benefits of the core laboratory

    NASA Astrophysics Data System (ADS)

    Robbins, William L.; Conklin, James J.

    1995-10-01

    Medical images (angiography, CT, MRI, nuclear medicine, ultrasound, x ray) play an increasingly important role in the clinical development and regulatory review process for pharmaceuticals and medical devices. Since medical images are increasingly acquired and archived digitally, or are readily digitized from film, they can be visualized, processed and analyzed in a variety of ways using digital image processing and display technology. Moreover, with image-based data management and data visualization tools, medical images can be electronically organized and submitted to the U.S. Food and Drug Administration (FDA) for review. The collection, processing, analysis, archival, and submission of medical images in a digital format versus an analog (film-based) format presents both challenges and opportunities for the clinical and regulatory information management specialist. The medical imaging 'core laboratory' is an important resource for clinical trials and regulatory submissions involving medical imaging data. Use of digital imaging technology within a core laboratory can increase efficiency and decrease overall costs in the image data management and regulatory review process.

  6. [Virtual otoscopy--technique, indications and initial experiences with multislice spiral CT].

    PubMed

    Klingebiel, R; Bauknecht, H C; Lehmann, R; Rogalla, P; Werbs, M; Behrbohm, H; Kaschke, O

    2000-11-01

    We report the standardized postprocessing of high-resolution CT data acquired by incremental CT and multi-slice CT in patients with suspected middle ear disorders to generate three-dimensional endoluminal views known as virtual otoscopy. Subsequent to the definition of a postprocessing protocol, standardized endoluminal views of the middle ear were generated according to their otological relevance. The HRCT data sets of 26 ENT patients were transferred to a workstation and postprocessed to 52 virtual otoscopies. Generation of predefined endoluminal views from the HRCT data sets was possible in all patients. Virtual endoscopic views added meaningful information to the primary cross-sectional data in patients suffering from ossicular pathology, having contraindications for invasive tympanic endoscopy or being assessed for surgery of the tympanic cavity. Multi-slice CT improved the visualization of subtle anatomic details such as the stapes suprastructure and reduced the scanning time. Virtual endoscopy allows for the non-invasive endoluminal visualization of various tympanic lesions. Use of the multi-slice CT technique reduces the scanning time and improves image quality in terms of detail resolution.
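
    A rough sketch of the kind of surface-extraction step that underlies endoluminal renderings such as virtual otoscopy is given below; it runs scikit-image's marching cubes on a synthetic volume with an illustrative threshold, and is not the standardized protocol described in the paper.

```python
# A rough sketch, not the clinical workflow: extracting a surface from a
# synthetic HRCT-like volume with marching cubes, the kind of step that
# underlies endoluminal "virtual otoscopy" renderings. Threshold is illustrative.
import numpy as np
from skimage import measure

# Synthetic volume standing in for a temporal-bone HRCT (values roughly in HU)
volume = np.full((64, 64, 64), -1000.0)            # air
zz, yy, xx = np.mgrid[:64, :64, :64]
volume[(zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2] = 300.0  # "bone"

# Isosurface at a level between air and bone densities
verts, faces, normals, values = measure.marching_cubes(volume, level=-500.0)
print(f"{len(verts)} vertices, {len(faces)} triangles")
# The resulting mesh could then be rendered with a perspective camera placed
# inside the cavity to obtain endoluminal views.
```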

  7. Handheld real-time volumetric 3-D gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Haefner, Andrew; Barnowski, Ross; Luke, Paul; Amman, Mark; Vetter, Kai

    2017-06-01

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information about the scene which, once acquired, can be reviewed and further analyzed later. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.
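
    A minimal sketch of the core bookkeeping in this kind of scene fusion is shown below: points reconstructed in the imager's local frame are placed into the global scene map using the pose reported by the visual tracker. The rotation, translation, and hotspot values are hypothetical, not HEMI/Kinect outputs.

```python
# A minimal sketch (not the HEMI/Kinect code) of placing a gamma-ray
# reconstruction, expressed in the imager's local frame, into the tracked
# scene frame using the pose (rotation R, translation t) at acquisition time.
import numpy as np

def to_scene_frame(points_local, R, t):
    """Map Nx3 points from the imager frame into the tracked scene frame."""
    return points_local @ R.T + t

# Hypothetical example: a hotspot reconstructed 1.5 m in front of the imager,
# while the tracker reports the imager rotated 90 deg about z and offset by t.
hotspot_local = np.array([[0.0, 0.0, 1.5]])
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([2.0, 0.5, 1.0])
print(to_scene_frame(hotspot_local, R, t))
```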

  8. Analysis of haptic information in the cerebral cortex

    PubMed Central

    2016-01-01

    Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding about how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level. PMID:27440247

  9. People-oriented Information Visualization Design

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyong; Zhang, Bolun

    2018-04-01

    With the rapid progress of science and technology in the 21st century, human society has entered the era of information and big data, lifestyles and aesthetic systems have changed accordingly, and the emerging field of information visualization has become increasingly popular. Information visualization design is the process of turning large volumes of tedious information and data into visual form, so that information can be absorbed quickly and time is saved. As information visualization has developed, information design has also attracted growing attention, and emotional, people-oriented design has become an indispensable part of it. This paper examines information visualization design from the perspective of art and design, through an emotional analysis of information design grounded in the social context of people-oriented experience. It explores and discusses information visualization design on the basis of the three levels of emotional design: the instinct level, the behavior level, and the reflective level.

  10. Simulation of EO-1 Hyperion Data from ALI Multispectral Data Based on the Spectral Reconstruction Approach

    PubMed Central

    Liu, Bo; Zhang, Lifu; Zhang, Xia; Zhang, Bing; Tong, Qingxi

    2009-01-01

    Data simulation is widely used in remote sensing to produce imagery for a new sensor in the design stage, for scale issues of some special applications, or for testing of novel algorithms. Hyperspectral data could provide more abundant information than traditional multispectral data and thus greatly extend the range of remote sensing applications. Unfortunately, hyperspectral data are much more difficult and expensive to acquire and were not available prior to the development of operational hyperspectral instruments, while large amounts of accumulated multispectral data have been collected around the world over the past several decades. Therefore, it is reasonable to examine means of using these multispectral data to simulate or construct hyperspectral data, especially in situations where hyperspectral data are necessary but hard to acquire. Here, a method based on spectral reconstruction is proposed to simulate hyperspectral data (Hyperion data) from multispectral Advanced Land Imager data (ALI data). This method involves extraction of the inherent information of source data and reassignment to newly simulated data. A total of 106 bands of Hyperion data were simulated from ALI data covering the same area. To evaluate this method, we compare the simulated and original Hyperion data by visual interpretation, statistical comparison, and classification. The results generally showed good performance of this method and indicated that most bands were well simulated, and that the information was both well preserved and well presented. This makes it possible to simulate hyperspectral data from multispectral data for testing the performance of algorithms, to extend the use of multispectral data, and to help in the design of a virtual sensor. PMID:22574064
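
    The general idea of spectral reconstruction (learning a mapping from broad multispectral bands to many narrow bands on reference spectra, then applying it per pixel) can be sketched as below. The band counts, synthetic library, and linear least-squares mapping are illustrative assumptions, not the authors' exact algorithm.

```python
# Hedged sketch of simulating narrow-band (hyperspectral-like) values from
# broad-band (multispectral-like) values via a mapping learned on a library
# of reference spectra. Data and band counts are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_lib, n_multi, n_hyper = 200, 9, 106      # library spectra, ALI-like and Hyperion-like band counts

# Reference library: smooth random reflectance spectra and their broad-band versions
hyper_lib = np.cumsum(rng.normal(0, 0.01, (n_lib, n_hyper)), axis=1) + 0.3
band_weights = rng.dirichlet(np.ones(n_hyper), size=n_multi)   # broad-band responses
multi_lib = hyper_lib @ band_weights.T

# Least-squares mapping from multispectral space to hyperspectral space
W, *_ = np.linalg.lstsq(multi_lib, hyper_lib, rcond=None)

# Apply to "image" pixels observed only in the broad bands
pixels_multi = hyper_lib[:5] @ band_weights.T
pixels_hyper_sim = pixels_multi @ W
print(pixels_hyper_sim.shape)              # (5, 106) simulated narrow-band spectra
```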

  11. The contribution of visual and vestibular information to spatial orientation by 6- to 14-month-old infants and adults.

    PubMed

    Bremner, J Gavin; Hatton, Fran; Foster, Kirsty A; Mason, Uschi

    2011-09-01

    Although there is much research on infants' ability to orient in space, little is known regarding the information they use to do so. This research uses a rotating room to evaluate the relative contribution of visual and vestibular information to location of a target following bodily rotation. Adults responded precisely on the basis of visual flow information. Seven-month-olds responded mostly on the basis of visual flow, whereas 9-month-olds responded mostly on the basis of vestibular information, and 12-month-olds responded mostly on the basis of visual information. Unlike adults, infants of all ages showed partial influence by both modalities. Additionally, 7-month-olds were capable of using vestibular information when there was no visual information for movement or stability, and 9-month-olds still relied on vestibular information when visual information was enhanced. These results are discussed in the context of neuroscientific evidence regarding visual-vestibular interaction, and in relation to possible changes in reliance on visual and vestibular information following acquisition of locomotion.

  12. Investigations of the pathogenesis of acquired pendular nystagmus

    NASA Technical Reports Server (NTRS)

    Averbuch-Heller, L.; Zivotofsky, A. Z.; Das, V. E.; DiScenna, A. O.; Leigh, R. J.

    1995-01-01

    We investigated the pathogenesis of acquired pendular nystagmus (APN) in six patients, three of whom had multiple sclerosis. First, we tested the hypothesis that the oscillations of APN are due to a delay in visual feedback secondary, for example, to demyelination of the optic nerves. We manipulated the latency to onset of visually guided eye movements using an electronic technique that induces sinusoidal oscillations in normal subjects. This manipulation did not change the characteristics of the APN, but did superimpose lower-frequency oscillations similar to those induced in normal subjects. These results are consistent with current models for smooth (non-saccadic) eye movements, which predict that prolongation of visual feedback could not account for the high-frequency oscillations that often characterize APN. Secondly, we attempted to determine whether an increase in the gain of the visually-enhanced vestibulo-ocular reflex (VOR), produced by viewing a near target, was accompanied by a commensurate increase in the amplitude of APN. Increases in horizontal or vertical VOR gain during near viewing occurred in four patients, but only two of them showed a parallel increase in APN amplitude. On the other hand, APN amplitude decreased during viewing of the near target in the two patients who showed no change in VOR gain. Taken together, these data suggest that neither delayed visual feedback nor a disorder of central vestibular mechanisms is primarily responsible for APN. More likely, these ocular oscillations are produced by abnormalities of internal feedback circuits, such as the reciprocal connections between brainstem nuclei and cerebellum.

  13. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement.

    PubMed

    Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.
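
    A simplified, hypothetical illustration of an adaptive level-of-detail rule of the kind used in on-the-fly refinement is sketched below: tessellation effort follows the projected screen-space size of each neurite segment. The function, parameters, and thresholds are assumptions for illustration, not NeuroTessMesh code.

```python
# Hypothetical level-of-detail rule: subdivide more finely the segments that
# cover more pixels on screen (thick or nearby branches), and less finely the
# thin or distant ones. Not taken from NeuroTessMesh.
def tessellation_level(segment_radius_um, distance_to_camera_um,
                       pixels_per_um_at_unit_distance=800.0,
                       min_level=1, max_level=16):
    """More subdivision for segments that project to more pixels on screen."""
    projected_pixels = (2.0 * segment_radius_um *
                        pixels_per_um_at_unit_distance / max(distance_to_camera_um, 1e-6))
    level = int(projected_pixels // 4)                 # ~4 pixels per subdivision step
    return max(min_level, min(max_level, level))

# A thick proximal dendrite close to the camera vs. a thin distal branch far away
print(tessellation_level(segment_radius_um=3.0, distance_to_camera_um=100.0))   # high level
print(tessellation_level(segment_radius_um=0.3, distance_to_camera_um=2000.0))  # low level
```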

  14. The influence of selective attention to auditory and visual speech on the integration of audiovisual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2011-01-01

    Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

  15. Detection of Brain Reorganization in Pediatric Multiple Sclerosis Using Functional MRI

    DTIC Science & Technology

    2015-10-01

    accomplish this, we apply comparative assessments of fMRI mappings of language, memory, and motor function, and performance on clinical neurocognitive...community at a target rate of 13 volunteers per quarter period; acquire fMRI data for language, memory, and visual-motor functions (months 3-12). c...consensus fMRI activation maps for language, memory, and visual-motor tasks (months 8-12). f) Subtask 1f. Prepare publication to disseminate our

  16. The climate visualizer: Sense-making through scientific visualization

    NASA Astrophysics Data System (ADS)

    Gordin, Douglas N.; Polman, Joseph L.; Pea, Roy D.

    1994-12-01

    This paper describes the design of a learning environment, called the Climate Visualizer, intended to facilitate scientific sense-making in high school classrooms by providing students the ability to craft, inspect, and annotate scientific visualizations. The theoretical background for our design presents a view of learning as acquiring and critiquing cultural practices and stresses the need for students to appropriate the social and material aspects of practice when learning an area. This is followed by a description of the design of the Climate Visualizer, including detailed accounts of its provision of spatial and temporal context and the quantitative and visual representations it employs. A broader context is then explored by describing its integration into the high school science classroom. This discussion explores how visualizations can promote the creation of scientific theories, especially in conjunction with the Collaboratory Notebook, an embedded environment for creating and critiquing scientific theories and visualizations. Finally, we discuss the design trade-offs we have made in light of our theoretical orientation, and our hopes for further progress.

  17. The essence of student visual-spatial literacy and higher order thinking skills in undergraduate biology.

    PubMed

    Milner-Bolotin, Marina; Nashon, Samson Madera

    2012-02-01

    Science, engineering and mathematics-related disciplines have relied heavily on a researcher's ability to visualize phenomena under study and being able to link and superimpose various abstract and concrete representations including visual, spatial, and temporal. The spatial representations are especially important in all branches of biology (in developmental biology time becomes an important dimension), where 3D and often 4D representations are crucial for understanding the phenomena. By the time biology students get to undergraduate education, they are supposed to have acquired visual-spatial thinking skills, yet it has been documented that very few undergraduates and a small percentage of graduate students have had a chance to develop these skills to a sufficient degree. The current paper discusses the literature that highlights the essence of visual-spatial thinking and the development of visual-spatial literacy, considers the application of the visual-spatial thinking to biology education, and proposes how modern technology can help to promote visual-spatial literacy and higher order thinking among undergraduate students of biology.

  18. Soft Pushing Operation with Dual Compliance Controllers Based on Estimated Torque and Visual Force

    NASA Astrophysics Data System (ADS)

    Muis, Abdul; Ohnishi, Kouhei

    Sensor fusion extends a robot's ability to perform more complex tasks. An interesting application is the pushing operation, in which the robot uses multiple sensors to move an object by pushing it. Generally, a pushing operation consists of "approaching, touching, and pushing"(1). However, most research in this field deals with how the pushed object follows a predefined trajectory, while the consequences of the robot body or tool-tip striking the object are neglected. On collision, the robot's momentum may damage the sensor, the robot's surface, or even the object. For that reason, this paper proposes a soft pushing operation with dual compliance controllers. A compliance controller is a control system with trajectory compensation, so that the commanded motion yields to external force. In this paper, the first compliance controller is driven by the external force estimated by a reaction torque observer(2), which provides contact sensation. The other controller compensates for non-contact sensation. A contact sensation, whether acquired from a force sensor or from a reaction torque observer, is measurable only once the robot has touched the object. Therefore, a non-contact sensation is introduced before the object is touched, realized here with a visual sensor. Instead of using visual information as a command reference, visual information such as depth is treated as a virtual force for the second compliance controller. Having both contact and non-contact sensation, the robot is compliant over a wider range of sensation. This paper considers a heavy mobile manipulator and a heavy object, which have significant momentum at the touching stage. A chopstick is attached to the side of the object to show the effectiveness of the proposed method. Both compliance controllers adjust the mobile manipulator's command reference to provide a soft pushing operation. Finally, experimental results show the validity of the proposed method.
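
    A minimal one-dimensional sketch of the dual-compliance idea described above is given below: one term yields to the contact force estimated by a reaction torque observer, the other to a "virtual force" computed from visual depth before contact. Gains, force models, and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal 1-D sketch, assuming a simple admittance-style compliance law,
# of combining an estimated contact force with a vision-derived virtual
# force. Parameters are illustrative, not the paper's controller.
def virtual_force(depth_m, k_visual=20.0, activation_range=0.5):
    """Repulsive virtual force that grows as the visual depth to the object shrinks."""
    if depth_m >= activation_range:
        return 0.0
    return k_visual * (activation_range - depth_m)

def compliant_reference(x_ref, f_contact, f_visual, c_contact=0.002, c_visual=0.005):
    """Shift the commanded position in proportion to contact and non-contact forces."""
    return x_ref - c_contact * f_contact - c_visual * f_visual

# Approaching phase: no contact force yet, only the visual virtual force acts
print(compliant_reference(x_ref=1.0, f_contact=0.0, f_visual=virtual_force(0.2)))
# Pushing phase: the estimated reaction force softens the commanded motion
print(compliant_reference(x_ref=1.0, f_contact=15.0, f_visual=0.0))
```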

  19. Comparison of Physician-Predicted to Measured Low Vision Outcomes

    PubMed Central

    Chan, Tiffany L.; Goldstein, Judith E.; Massof, Robert W.

    2013-01-01

    Purpose To compare low vision rehabilitation (LVR) physicians’ predictions of the probability of success of LVR to patients’ self-reported outcomes after provision of usual outpatient LVR services; and to determine if patients’ traits influence physician ratings. Methods The Activity Inventory (AI), a self-report visual function questionnaire, was administered pre and post-LVR to 316 low vision patients served by 28 LVR centers that participated in a collaborative observational study. The physical component of the Short Form-36, Geriatric Depression Scale, and Telephone Interview for Cognitive Status were also administered pre-LVR to measure physical capability, depression and cognitive status. Following patient evaluation, 38 LVR physicians estimated the probability of outcome success (POS), using their own criteria. The POS ratings and change in functional ability were used to assess the effects of patients’ baseline traits on predicted outcomes. Results A regression analysis with a hierarchical random effects model showed no relationship between LVR physician POS estimates and AI-based outcomes. In another analysis, Kappa statistics were calculated to determine the probability of agreement between POS and AI-based outcomes for different outcome criteria. Across all comparisons, none of the kappa values were significantly different from 0, which indicates the rate of agreement is equivalent to chance. In an exploratory analysis, hierarchical mixed effects regression models show that POS ratings are associated with information about the patient’s cognitive functioning and the combination of visual acuity and functional ability, as opposed to visual acuity or functional ability alone. Conclusions Physicians’ predictions of LVR outcomes appear to be influenced by knowledge of patients’ cognitive functioning and the combination of visual acuity and functional ability - information physicians acquire from the patient’s history and examination. However, physicians’ predictions do not agree with observed changes in functional ability from the patient’s perspective; they are no better than chance. PMID:23873036
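
    The kappa analysis mentioned above can be illustrated with a short computation of Cohen's kappa on hypothetical binary outcome ratings; a value near zero indicates agreement no better than chance. The data and implementation below are illustrative, not the study's.

```python
# Illustrative sketch (not the study's analysis code) of Cohen's kappa for
# agreement between predicted and measured outcomes.
import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical binary outcomes: 1 = success, 0 = no success
predicted = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
measured  = [0, 1, 1, 1, 0, 0, 1, 1, 0, 0]
print(f"kappa = {cohens_kappa(predicted, measured):.2f}")  # ~0 here: chance-level agreement
```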

  20. The View from the Trees: Nocturnal Bull Ants, Myrmecia midas, Use the Surrounding Panorama While Descending from Trees

    PubMed Central

    Freas, Cody A.; Wystrach, Antione; Narendra, Ajay; Cheng, Ken

    2018-01-01

    Solitary foraging ants commonly use visual cues from their environment for navigation. Foragers are known to store visual scenes from the surrounding panorama for later guidance to known resources and to return successfully back to the nest. Several ant species travel not only on the ground, but also climb trees to locate resources. The navigational information that guides animals back home during their descent, while their body is perpendicular to the ground, is largely unknown. Here, we investigate in a nocturnal ant, Myrmecia midas, whether foragers travelling down a tree use visual information to return home. These ants establish nests at the base of a tree on which they forage and in addition, they also forage on nearby trees. We collected foragers and placed them on the trunk of the nest tree or a foraging tree in multiple compass directions. Regardless of the displacement location, upon release ants immediately moved to the side of the trunk facing the nest during their descent. When ants were released on non-foraging trees near the nest, displaced foragers again travelled around the tree to the side facing the nest. All the displaced foragers reached the correct side of the tree well before reaching the ground. However, when the terrestrial cues around the tree were blocked, foragers were unable to orient correctly, suggesting that the surrounding panorama is critical to successful orientation on the tree. Through analysis of panoramic pictures, we show that views acquired at the base of the foraging tree nest can provide reliable nest-ward orientation up to 1.75 m above the ground. We discuss how animals descending from trees compare their current scene to a memorised scene, and report on the similarities in visually guided behaviour while navigating on the ground and descending from trees. PMID:29422880

  1. The View from the Trees: Nocturnal Bull Ants, Myrmecia midas, Use the Surrounding Panorama While Descending from Trees.

    PubMed

    Freas, Cody A; Wystrach, Antione; Narendra, Ajay; Cheng, Ken

    2018-01-01

    Solitary foraging ants commonly use visual cues from their environment for navigation. Foragers are known to store visual scenes from the surrounding panorama for later guidance to known resources and to return successfully back to the nest. Several ant species travel not only on the ground, but also climb trees to locate resources. The navigational information that guides animals back home during their descent, while their body is perpendicular to the ground, is largely unknown. Here, we investigate in a nocturnal ant, Myrmecia midas, whether foragers travelling down a tree use visual information to return home. These ants establish nests at the base of a tree on which they forage and in addition, they also forage on nearby trees. We collected foragers and placed them on the trunk of the nest tree or a foraging tree in multiple compass directions. Regardless of the displacement location, upon release ants immediately moved to the side of the trunk facing the nest during their descent. When ants were released on non-foraging trees near the nest, displaced foragers again travelled around the tree to the side facing the nest. All the displaced foragers reached the correct side of the tree well before reaching the ground. However, when the terrestrial cues around the tree were blocked, foragers were unable to orient correctly, suggesting that the surrounding panorama is critical to successful orientation on the tree. Through analysis of panoramic pictures, we show that views acquired at the base of the foraging tree nest can provide reliable nest-ward orientation up to 1.75 m above the ground. We discuss how animals descending from trees compare their current scene to a memorised scene, and report on the similarities in visually guided behaviour while navigating on the ground and descending from trees.
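
    One common way to formalize panorama-based homing of this kind is the rotational image difference function: compare the current panoramic view with a memorised view at every azimuthal rotation and head toward the best match. The sketch below uses synthetic images and is an illustration under those assumptions, not the authors' analysis.

```python
# Rough sketch of a rotational image difference function (RIDF) on synthetic
# panoramas: the minimum of the RIDF indicates the heading at which the
# current view best matches the memorised view.
import numpy as np

def rotational_image_difference(current, memory):
    """RMS pixel difference between `current` and `memory` rotated by each azimuth column."""
    n_cols = memory.shape[1]
    return np.array([
        np.sqrt(np.mean((np.roll(memory, shift, axis=1) - current) ** 2))
        for shift in range(n_cols)
    ])

rng = np.random.default_rng(3)
memory_view = rng.random((10, 72))             # 72 columns = 5-degree azimuth bins
current_view = np.roll(memory_view, 12, axis=1) + rng.normal(0, 0.05, (10, 72))

ridf = rotational_image_difference(current_view, memory_view)
best_shift = int(np.argmin(ridf))
print(f"best heading offset ~ {best_shift * 5} degrees")
```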

  2. Social capital, social relationships and adults with acquired visual impairment: a Nigerian perspective.

    PubMed

    Bassey, Emmanuel; Ellison, Caroline; Walker, Ruth

    2018-01-31

    This study investigates the social capital implications of vision loss among working-age adults in Nigeria. The study explores the challenges of acquiring and maintaining social relationships post-vision loss, and investigates the extent to which visual rehabilitation services support social goals. A qualitative study using a phenomenological approach was undertaken. Eight adults (18-59 years) were recruited from disability service organizations in Nigeria. Telephone interviews were recorded and transcribed, and thematic content analysis was used to analyze the data gathered in this study. Three broad themes were developed from participants' accounts of their experiences: (1) changes to relationships with friends and others; (2) finding strength in family relationships; and (3) rehabilitation and the confidence to interact. The findings indicate that the relationship between participants and their family members improved post vision impairment, enhancing bonding social capital. However, participants experienced reduced bridging and linking social capital due to diminished or broken relationships with managers, coworkers, friends, and others in the community. As social connectedness and relationships are highly valued in Nigeria's diverse society, we suggest that adults with visual impairment would significantly benefit from visual rehabilitation services placing greater emphasis on addressing the social goals of participants. Implications for Rehabilitation: Visual impairment in working-age adults can strengthen family relationships (homogenous groups), creating bonding capital that is associated with access to important resources including emotional and moral support, and some financial and material resources. Visual impairment can negatively impact relationships with managers, coworkers, and others in the community (heterogeneous groups), resulting in diminished bridging and linking capital. Visual impairment can reduce access to resources such as an income, social status, and reduces participation in the wider community. Visual Rehabilitation Services could significantly benefit participants by placing greater emphasis on social goals, such as building and maintaining social networks, particularly with diverse (heterogeneous groups), which are valued in Nigeria's diverse cultural climate.

  3. Toxic risks and nutritional benefits of traditional diet on near visual contrast sensitivity and color vision in the Brazilian Amazon.

    PubMed

    Fillion, Myriam; Lemire, Mélanie; Philibert, Aline; Frenette, Benoît; Weiler, Hope Alberta; Deguire, Jason Robert; Guimarães, Jean Remy Davée; Larribe, Fabrice; Barbosa, Fernando; Mergler, Donna

    2013-07-01

    Visual functions are known to be sensitive to toxins such as mercury (Hg) and lead (Pb), while omega-3 fatty acids (FA) and selenium (Se) may be protective. In the Tapajós region of the Brazilian Amazon, all of these elements are present in the local diet. We examined how near visual contrast sensitivity and acquired color vision loss vary with biomarkers of toxic exposures (Hg and Pb) and the nutrients Se and omega-3 FA in riverside communities of the Tapajós. Complete visuo-ocular examinations were performed. Near visual contrast sensitivity and color vision were assessed in 228 participants (≥15 years) without diagnosed age-related cataracts or ocular pathologies and with near visual acuity refracted to at least 20/40. Biomarkers of Hg (hair), Pb (blood), Se (plasma), and the omega-3 FAs eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) in plasma phospholipids were measured. Multiple linear regressions were used to examine the relations between visual outcomes and biomarkers, taking into account age, sex, drinking and smoking. Reduced contrast sensitivity at all spatial frequencies was associated with hair Hg, while %EPA, and to a lesser extent %EPA+DHA, were associated with better visual function. The intermediate spatial frequency of contrast sensitivity (12 cycles/degree) was negatively related to blood Pb and positively associated with plasma Se. Acquired color vision loss increased with hair Hg and decreased with plasma Se and %EPA. These findings suggest that the local diet of riverside communities of the Amazon contains toxic substances that can have deleterious effects on vision as well as nutrients that are beneficial for visual function. Since remediation at the source is a long process, a better knowledge of the nutrient content and health effects of traditional foods would be useful to minimize harmful effects of Hg and Pb exposure.
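
    The regression approach described above can be illustrated with a small ordinary-least-squares sketch on synthetic data: a visual outcome regressed on exposure biomarkers and nutrients while adjusting for covariates. All variable names, coefficients, and data below are made up for illustration.

```python
# Hedged sketch (synthetic data, not the study's) of a multiple linear
# regression of a visual outcome on biomarkers with covariate adjustment.
import numpy as np

rng = np.random.default_rng(7)
n = 228
hair_hg, blood_pb = rng.gamma(2, 3, n), rng.gamma(2, 2, n)
plasma_se, pct_epa = rng.normal(100, 15, n), rng.normal(0.3, 0.1, n)
age, sex = rng.uniform(15, 70, n), rng.integers(0, 2, n)

# Synthetic outcome: worse with Hg/Pb, better with Se/EPA
contrast_sens = (1.8 - 0.02 * hair_hg - 0.01 * blood_pb
                 + 0.004 * plasma_se + 0.5 * pct_epa
                 - 0.005 * age + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), hair_hg, blood_pb, plasma_se, pct_epa, age, sex])
beta, *_ = np.linalg.lstsq(X, contrast_sens, rcond=None)
for name, b in zip(["intercept", "hair_Hg", "blood_Pb", "plasma_Se", "%EPA", "age", "sex"], beta):
    print(f"{name:>9}: {b:+.4f}")
```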

  4. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats

    PubMed Central

    Rosselli, Federica B.; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide

    2015-01-01

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning. PMID:25814936

  5. Processing of proprioceptive and vestibular body signals and self-transcendence in Ashtanga yoga practitioners.

    PubMed

    Fiori, Francesca; David, Nicole; Aglioti, Salvatore M

    2014-01-01

    In the rod and frame test (RFT), participants are asked to set a tilted visual linear marker (i.e., a rod), embedded in a square, to the subjective vertical, irrespective of the surrounding frame. People not influenced by the frame tilt are defined as field-independent, while people biased in their rod verticality perception are field-dependent. Performing RFT requires the integration of proprioceptive, vestibular and visual signals, with the latter accounting for field-dependency. Studies indicate that motor experts in body-related, balance-improving disciplines tend to be field-independent, i.e., better at verticality perception, suggesting that proprioceptive and vestibular expertise acquired by such exercise may weaken the influence of irrelevant visual signals. What remains unknown is whether the effect of body-related expertise in weighting perceptual information might also be mediated by personality traits, in particular those indexing self-focusing abilities. To explore this issue, we tested field-dependency in a class of body experts, namely yoga practitioners, and in non-expert participants. Moreover, we explored any link between performance on the RFT and self-transcendence (ST), a complex personality construct which refers to the tendency to experience spiritual feelings and ideas. As expected, yoga practitioners (i) were more accurate in assessing the rod's verticality on the RFT, and (ii) expressed significantly higher ST. Interestingly, the performance in these two tests was negatively correlated. More specifically, when asked to provide verticality judgments, highly self-transcendent yoga practitioners were significantly less influenced by a misleading visual context. Our results suggest that being highly self-transcendent may enable yoga practitioners to optimize verticality judgment tasks by relying more on internal (vestibular and proprioceptive) signals coming from their own body, rather than on exteroceptive, visual cues.

  6. Eye-gaze independent EEG-based brain-computer interfaces for communication.

    PubMed

    Riccio, A; Mattia, D; Simione, L; Olivetti, M; Cincotti, F

    2012-08-01

    The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users' requirements in a real-life scenario.

  7. Eye-gaze independent EEG-based brain-computer interfaces for communication

    NASA Astrophysics Data System (ADS)

    Riccio, A.; Mattia, D.; Simione, L.; Olivetti, M.; Cincotti, F.

    2012-08-01

    The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users’ requirements in a real-life scenario.

  8. Processing of proprioceptive and vestibular body signals and self-transcendence in Ashtanga yoga practitioners

    PubMed Central

    Fiori, Francesca; David, Nicole; Aglioti, Salvatore M.

    2014-01-01

    In the rod and frame test (RFT), participants are asked to set a tilted visual linear marker (i.e., a rod), embedded in a square, to the subjective vertical, irrespective of the surrounding frame. People not influenced by the frame tilt are defined as field-independent, while people biased in their rod verticality perception are field-dependent. Performing RFT requires the integration of proprioceptive, vestibular and visual signals, with the latter accounting for field-dependency. Studies indicate that motor experts in body-related, balance-improving disciplines tend to be field-independent, i.e., better at verticality perception, suggesting that proprioceptive and vestibular expertise acquired by such exercise may weaken the influence of irrelevant visual signals. What remains unknown is whether the effect of body-related expertise in weighting perceptual information might also be mediated by personality traits, in particular those indexing self-focusing abilities. To explore this issue, we tested field-dependency in a class of body experts, namely yoga practitioners, and in non-expert participants. Moreover, we explored any link between performance on the RFT and self-transcendence (ST), a complex personality construct which refers to the tendency to experience spiritual feelings and ideas. As expected, yoga practitioners (i) were more accurate in assessing the rod's verticality on the RFT, and (ii) expressed significantly higher ST. Interestingly, the performance in these two tests was negatively correlated. More specifically, when asked to provide verticality judgments, highly self-transcendent yoga practitioners were significantly less influenced by a misleading visual context. Our results suggest that being highly self-transcendent may enable yoga practitioners to optimize verticality judgment tasks by relying more on internal (vestibular and proprioceptive) signals coming from their own body, rather than on exteroceptive, visual cues. PMID:25278866

  9. Relationship Between Internet Use and General Belief in a Just World Among Chinese Retirees

    PubMed Central

    Zhang, Jianxin; Zhu, Tingshao

    2013-01-01

    Abstract As an emerging medium for acquiring information, the Internet might affect how users, including older adults, view or think about the world around them. Using data from a survey of retirees aged 50 years and above (N=12,309) in China, the present study examined the relationship between Internet use for acquiring information about the world and general belief in a just world (GBJW). The results indicated that Internet use primarily for obtaining news information was negatively related to GBJW. Specifically, Internet users had lower levels of GBJW than nonusers; the more time retirees spent visiting Web sites to acquire news information, the less likely they were to believe that the world is just. In addition, compared with retirees who had acquired information about the world through other means (including books, newspapers or magazines, radio and television, and direct communication with other people), those who had acquired information primarily using the Internet showed lower levels of GBJW. The significance and limitations of the current study are discussed. PMID:23865811

  10. Long-term adaptation to change in implicit contextual learning.

    PubMed

    Zellin, Martina; von Mühlenen, Adrian; Müller, Hermann J; Conci, Markus

    2014-08-01

    The visual world consists of spatial regularities that are acquired through experience in order to guide attentional orienting. For instance, in visual search, detection of a target is faster when a layout of nontarget items is encountered repeatedly, suggesting that learned contextual associations can guide attention (contextual cuing). However, scene layouts sometimes change, requiring observers to adapt previous memory representations. Here, we investigated the long-term dynamics of contextual adaptation after a permanent change of the target location. We observed fast and reliable learning of initial context-target associations after just three repetitions. However, adaptation of acquired contextual representations to relocated targets was slow and effortful, requiring 3 days of training with overall 80 repetitions. A final test 1 week later revealed equivalent effects of contextual cuing for both target locations, and these were comparable to the effects observed on day 1. That is, observers learned both initial target locations and relocated targets, given extensive training combined with extended periods of consolidation. Thus, while implicit contextual learning efficiently extracts statistical regularities of our environment at first, it is rather insensitive to change in the longer term, especially when subtle changes in context-target associations need to be acquired.

  11. Another Function for Language and its Theoretical Consequences

    NASA Astrophysics Data System (ADS)

    Barahona da Fonseca, Isabel; Barahona da Fonseca, José; Simões da Fonseca, José

    2006-06-01

    Our proposal is that when they exercise the faculty of "parole", subjects use strategies characterized by an internal reconstruction of objects, which acquire a status similar to the imperative belief in the representation of reality as it occurs in visual or auditory perception. The referent of verbal expressions acquires a greater importance for the subject, who uses it according more to rhetorical principles than to logical critical analysis. Consequences concerning psychopathology, namely the phenomenon of hallucination, are explained on that basis.

  12. Total On-line Access Data System (TOADS): Phase II Final Report for the Period August 2002 - August 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuracko, K. L.; Parang, M.; Landguth, D. C.

    2004-09-13

    TOADS (Total On-line Access Data System) is a new generation of real-time monitoring and information management system developed to support unattended environmental monitoring and long-term stewardship of U.S. Department of Energy facilities and sites. TOADS enables project managers, regulators, and stakeholders to view environmental monitoring information in real time over the Internet. Deployment of TOADS at government facilities and sites will reduce the cost of monitoring while increasing confidence and trust in cleanup and long-term stewardship activities. TOADS: Reliably interfaces with and acquires data from a wide variety of external databases, remote systems, and sensors such as contaminant monitors, area monitors, atmospheric condition monitors, visual surveillance systems, intrusion devices, motion detectors, fire/heat detection devices, and gas/vapor detectors; Provides notification and triggers alarms as appropriate; Performs QA/QC on data inputs and logs the status of instruments/devices; Provides a fully functional data management system capable of storing, analyzing, and reporting on data; Provides an easy-to-use Internet-based user interface that provides visualization of the site, data, and events; and Enables the community to monitor local environmental conditions in real time. During this Phase II STTR project, TOADS has been developed and successfully deployed for unattended facility, environmental, and radiological monitoring at a Department of Energy facility.
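
    The data flow summarised above (sensor acquisition, QA/QC screening, alarm triggering and storage) can be illustrated with a minimal sketch; the sensor identifiers, limits and the store/alarm callbacks below are hypothetical and are not part of TOADS itself.

    ```python
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Reading:
        sensor_id: str      # e.g. an area monitor or gas detector (hypothetical IDs)
        value: float
        timestamp: datetime

    # Hypothetical QA/QC limits and alarm thresholds; a real system would load these from configuration.
    QC_LIMITS = {"area_monitor_1": (0.0, 50.0), "gas_detector_3": (0.0, 10.0)}
    ALARM_THRESHOLDS = {"area_monitor_1": 25.0, "gas_detector_3": 5.0}

    def qa_qc(reading: Reading) -> bool:
        """Reject physically implausible values before they enter the data store."""
        lo, hi = QC_LIMITS[reading.sensor_id]
        return lo <= reading.value <= hi

    def process(reading: Reading, store, alarm) -> None:
        """Screen a reading, store it, and trigger an alarm if a threshold is exceeded."""
        if not qa_qc(reading):
            store({"sensor": reading.sensor_id, "status": "instrument_fault"})
            return
        store({"sensor": reading.sensor_id, "value": reading.value,
               "time": reading.timestamp.isoformat()})
        if reading.value > ALARM_THRESHOLDS[reading.sensor_id]:
            alarm(f"{reading.sensor_id} exceeded threshold: {reading.value}")

    # Example usage with stand-in callbacks for the database and notification layers.
    process(Reading("area_monitor_1", 31.2, datetime.now(timezone.utc)),
            store=print, alarm=lambda msg: print("ALARM:", msg))
    ```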

  13. Beyond the visual word form area: the orthography-semantics interface in spelling and reading.

    PubMed

    Purcell, Jeremy J; Shea, Jennifer; Rapp, Brenda

    2014-01-01

    Lexical orthographic information provides the basis for recovering the meanings of words in reading and for generating correct word spellings in writing. Research has provided evidence that an area of the left ventral temporal cortex, a subregion of what is often referred to as the visual word form area (VWFA), plays a significant role specifically in lexical orthographic processing. The current investigation goes beyond this previous work by examining the neurotopography of the interface of lexical orthography with semantics. We apply a novel lesion mapping approach with three individuals with acquired dysgraphia and dyslexia who suffered lesions to left ventral temporal cortex. To map cognitive processes to their neural substrates, this lesion mapping approach applies similar logical constraints to those used in cognitive neuropsychological research. Using this approach, this investigation: (a) identifies a region anterior to the VWFA that is important in the interface of orthographic information with semantics for reading and spelling; (b) determines that, within this orthography-semantics interface region (OSIR), access to orthography from semantics (spelling) is topographically distinct from access to semantics from orthography (reading); (c) provides evidence that, within this region, there is modality-specific access to and from lexical semantics for both spoken and written modalities, in both word production and comprehension. Overall, this study contributes to our understanding of the neural architecture at the lexical orthography-semantic-phonological interface within left ventral temporal cortex.

  14. Information technology principles for management, reporting, and research.

    PubMed

    Gillam, Michael; Rothenhaus, Todd; Smith, Vernon; Kanhouwa, Meera

    2004-11-01

    Information technology holds the promise to enhance the ability of individuals and organizations to manage emergency departments, improve data sharing and reporting, and facilitate research. The Society for Academic Emergency Medicine (SAEM) Consensus Committee has identified nine principles to outline a path of optimal features and designs for current and future information technology systems. The principles roughly summarized include the following: utilize open database standards with clear data dictionaries, provide administrative access to necessary data, appoint and recognize individuals with emergency department informatics expertise, allow automated alert and proper identification for enrollment of cases into research, provide visual and statistical tools and training to analyze data, embed automated configurable alarm functionality for clinical and nonclinical systems, allow multiexport standard and format configurable reporting, strategically acquire mission-critical equipment that is networked and capable of automated feedback regarding functional status and location, and dedicate resources toward informatics research and development. The SAEM Consensus Committee concludes that the diligent application of these principles will enhance emergency department management, reporting, and research and ultimately improve the quality of delivered health care.

  15. Mouse Tumor Biology (MTB): a database of mouse models for human cancer.

    PubMed

    Bult, Carol J; Krupke, Debra M; Begley, Dale A; Richardson, Joel E; Neuhauser, Steven B; Sundberg, John P; Eppig, Janan T

    2015-01-01

    The Mouse Tumor Biology (MTB; http://tumor.informatics.jax.org) database is a unique online compendium of mouse models for human cancer. MTB provides online access to expertly curated information on diverse mouse models for human cancer and interfaces for searching and visualizing data associated with these models. The information in MTB is designed to facilitate the selection of strains for cancer research and is a platform for mining data on tumor development and patterns of metastases. MTB curators acquire data through manual curation of peer-reviewed scientific literature and from direct submissions by researchers. Data in MTB are also obtained from other bioinformatics resources including PathBase, the Gene Expression Omnibus and ArrayExpress. Recent enhancements to MTB improve the association between mouse models and human genes commonly mutated in a variety of cancers as identified in large-scale cancer genomics studies, provide new interfaces for exploring regions of the mouse genome associated with cancer phenotypes and incorporate data and information related to Patient-Derived Xenograft models of human cancers. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. A distributed cloud-based cyberinfrastructure framework for integrated bridge monitoring

    NASA Astrophysics Data System (ADS)

    Jeong, Seongwoon; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.

    2017-04-01

    This paper describes a cloud-based cyberinfrastructure framework for the management of the diverse data involved in bridge monitoring. Bridge monitoring involves various hardware systems, software tools and labor-intensive activities, including, for example, a structural health monitoring (SHM) sensor network, engineering analysis programs and visual inspection. Very often, these monitoring systems, tools and activities are not coordinated, and the collected information is not shared. A well-designed integrated data management framework can support the effective use of the data and, thereby, enhance bridge management and maintenance operations. The cloud-based cyberinfrastructure framework presented herein is designed to manage not only sensor measurement data acquired from the SHM system, but also other relevant information, such as the bridge engineering model and traffic videos, in an integrated manner. For scalability and flexibility, cloud computing services and distributed database systems are employed. The information stored can be accessed through standard web interfaces. For demonstration, the cyberinfrastructure system is implemented for the monitoring of the bridges located along the I-275 Corridor in the state of Michigan.
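
    A minimal sketch of the integrated-storage idea is shown below, with a single-node SQLite stand-in for the distributed, cloud-hosted data store and with hypothetical table and field names; heterogeneous records (sensor samples, inspection notes, video references) share one schema and can be served to a standard web interface.

    ```python
    import json
    import sqlite3

    # Single-node stand-in for the distributed data store described above (names are hypothetical).
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE monitoring (
        bridge_id TEXT, record_type TEXT, timestamp TEXT, payload TEXT)""")

    def ingest(bridge_id: str, record_type: str, timestamp: str, payload: dict) -> None:
        """Store sensor samples, inspection notes, or video references in one common schema."""
        conn.execute("INSERT INTO monitoring VALUES (?, ?, ?, ?)",
                     (bridge_id, record_type, timestamp, json.dumps(payload)))

    def query(bridge_id: str, record_type: str):
        """What a web front end might call to answer a standard HTTP request."""
        rows = conn.execute(
            "SELECT timestamp, payload FROM monitoring WHERE bridge_id=? AND record_type=?",
            (bridge_id, record_type)).fetchall()
        return [{"timestamp": t, **json.loads(p)} for t, p in rows]

    ingest("I275-001", "acceleration", "2017-04-01T12:00:00Z", {"channel": 3, "value_g": 0.012})
    ingest("I275-001", "inspection", "2017-04-02T09:30:00Z", {"note": "hairline crack, girder 2"})
    print(query("I275-001", "acceleration"))
    ```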

  17. Damage Detection for Historical Architectures Based on Tls Intensity Data

    NASA Astrophysics Data System (ADS)

    Li, Q.; Cheng, X.

    2018-04-01

    TLS (Terrestrial Laser Scanner) has long been preferred in the cultural heritage field for 3D documentation of historical sites thanks to its ability to acquire geometric information without any physical contact. Besides the geometric information, most TLS systems also record intensity, which is considered an important measurement of the spectral properties of the scanned surface. Recent studies have shown the potential of using intensity for damage detection. However, the original intensity is affected by the scanning geometry, such as range and incidence angle, and by other factors, making the results less accurate. Therefore, in this paper, we present a method to detect certain damage areas using corrected intensity data. First, two data-driven models are developed to correct the range and incidence-angle effects. The corrected intensity is then used to generate 2D intensity images for classification. After the damage areas are detected, they are re-projected onto the 3D point cloud for better visual representation and further investigation. The experimental results indicate the feasibility and validity of the corrected intensity for damage detection.
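
    A common way to realise a data-driven correction of this kind, assumed here for illustration rather than taken from the paper, is to fit reference curves for the range and incidence-angle effects from calibration scans and then normalise each measurement to a reference geometry.

    ```python
    import numpy as np

    def fit_effect(x, intensity, deg=3):
        """Fit a low-order polynomial describing how intensity varies with one geometric factor."""
        return np.poly1d(np.polyfit(x, intensity, deg))

    def correct_intensity(I, rng, inc_angle, range_model, angle_model,
                          ref_range=10.0, ref_angle=0.0):
        """Normalise raw TLS intensity to a reference range and normal incidence."""
        I_corr = I * range_model(ref_range) / range_model(rng)
        I_corr = I_corr * angle_model(ref_angle) / angle_model(inc_angle)
        return I_corr

    # Synthetic calibration data standing in for scans of a homogeneous reference target.
    rng_cal = np.linspace(2, 50, 100)
    int_range_cal = 60.0 - 0.8 * rng_cal + np.random.normal(0, 0.5, rng_cal.size)   # decreases with range
    ang_cal = np.radians(np.linspace(0, 70, 100))
    int_angle_cal = 50.0 * np.cos(ang_cal) + np.random.normal(0, 0.5, ang_cal.size)  # decreases with angle

    range_model = fit_effect(rng_cal, int_range_cal)
    angle_model = fit_effect(ang_cal, int_angle_cal)

    I_corr = correct_intensity(I=12.0, rng=18.0, inc_angle=np.radians(35.0),
                               range_model=range_model, angle_model=angle_model)
    print(round(float(I_corr), 2))
    ```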

  18. Multimodal fusion of polynomial classifiers for automatic person recognition

    NASA Astrophysics Data System (ADS)

    Broun, Charles C.; Zhang, Xiaozheng

    2001-03-01

    With the prevalence of the information age, privacy and personalization are at the forefront of today's society. As such, biometrics are viewed as essential components of current evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we have demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems. The required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current generation speaker verification systems. The first is the difficulty in acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work with a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions, as well as to improve overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual. In addition, the lip dynamics can aid speech recognition to provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process. A late integration approach, based on a probabilistic model, is employed to combine the two modalities. The system is tested on the XM2VTS database combined with additive white Gaussian noise (AWGN) in the audio domain over a range of signal-to-noise ratios.
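
    The late-integration step described above combines the audio and visual classifier outputs through a probabilistic model; a minimal weighted-sum sketch is shown below. The scores, weights and decision threshold are hypothetical stand-ins, not values from the system described.

    ```python
    import numpy as np

    def late_fusion(audio_scores, visual_scores, w_audio=0.6, w_visual=0.4):
        """
        Combine per-claimant scores from independent audio and visual classifiers.
        Scores are treated as log-likelihood ratios; the weights are hypothetical
        and would normally be tuned on held-out data (e.g. per noise condition).
        """
        return w_audio * np.asarray(audio_scores) + w_visual * np.asarray(visual_scores)

    def verify(fused_scores, threshold=0.0):
        """Accept a claimant when the fused score exceeds the decision threshold."""
        return fused_scores > threshold

    # Example: the visual (lip-motion) channel compensates for a noisy audio channel.
    audio = [-0.2, 1.5]    # degraded by background noise
    visual = [1.1, 1.8]    # lip geometry features scored by the visual classifier
    print(verify(late_fusion(audio, visual)))   # -> [ True  True ]
    ```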

  19. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    PubMed Central

    Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.

    2017-01-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797
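
    The optimal-combination account mentioned above is commonly formalised as maximum-likelihood (reliability-weighted) integration; the sketch below shows that standard formulation as an illustration and is not necessarily the specific model fitted in the study. The estimates and variances in the example are invented.

    ```python
    def optimal_combination(est_a, var_a, est_v, var_v):
        """
        Maximum-likelihood integration of an auditory and a visual estimate:
        each cue is weighted by its reliability (inverse variance), and the
        combined variance is never larger than that of either single cue.
        """
        w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
        w_v = 1 - w_a
        combined = w_a * est_a + w_v * est_v
        combined_var = 1 / (1 / var_a + 1 / var_v)
        return combined, combined_var

    # Degrading the auditory signal (larger variance) shifts weight toward vision,
    # mirroring the larger visual benefit observed when speech cues are removed.
    print(optimal_combination(est_a=0.4, var_a=4.0, est_v=0.7, var_v=1.0))
    ```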

  20. Active retinitis in an infant with postnatally acquired cytomegalovirus infection.

    PubMed

    Piersigilli, F; Catena, G; De Gasperis, M R; Lozzi, S; Auriti, C

    2012-07-01

    Congenital cytomegalovirus (CMV) is frequently associated with active retinitis. In contrast, in the immunocompetent neonate with postnatally acquired CMV infection retinitis is rarely present and usually does not progress. We describe the case of an infant with postnatal CMV infection and active retinitis diagnosed at 20 days of life. Owing to the rapid progression of the retinitis, therapy with intravenous ganciclovir was performed, with prompt regression of the retinitis. Therapy was then continued with oral valganciclovir for one further week. Although very unusual, CMV retinitis has to be taken into consideration in neonates with early postnatally acquired CMV infection, as an early diagnosis and treatment may be crucial to avoid visual impairment.

  1. Nardò Ring, Italy

    NASA Image and Video Library

    2008-04-08

    The Nardò Ring is a striking visual feature from space, and astronauts have photographed it several times. The Ring is a race car test track in Italy. This image was acquired by NASA's Terra satellite on August 17, 2007.

  2. Reading without words or target detection? A re-analysis and replication fMRI study of the Landolt paradigm.

    PubMed

    Heim, Stefan; von Tongeln, Franziska; Hillen, Rebekka; Horbach, Josefine; Radach, Ralph; Günther, Thomas

    2018-06-19

    The Landolt paradigm is a visual scanning task intended to evoke reading-like eye movements in the absence of orthographic or lexical information, thus allowing the dissociation of (sub-)lexical vs. visual processing. To that end, all letters in real word sentences are exchanged for closed Landolt rings, with 0, 1, or 2 open Landolt rings as targets in each Landolt sentence. A preliminary fMRI block-design study (Hillen et al. in Front Hum Neurosci 7:1-14, 2013) demonstrated that the Landolt paradigm has a special neural signature, recruiting the right IPS and SPL as part of the endogenous attention network. However, in that analysis, the brain responses to target detection could not be separated from those involved in processing Landolt stimuli without targets. The present study reports two fMRI experiments testing whether the targets, or the Landolt stimuli per se, led to the right IPS/SPL activation. Experiment 1 was an event-related re-analysis of the Hillen et al. (Front Hum Neurosci 7:1-14, 2013) data. Experiment 2 was a replication study with a new sample and identical procedures. In both experiments, the right IPS/SPL were recruited in the Landolt condition as compared to orthographic stimuli even in the absence of any target in the stimulus, indicating that the properties of the Landolt task itself trigger this right parietal activation. These findings are discussed against the background of behavioural and neuroimaging studies of healthy reading as well as developmental and acquired dyslexia. Consequently, this neuroimaging evidence might encourage the use of the Landolt paradigm also in the context of examining reading disorders, as it taps into the orientation of visual attention during reading-like scanning of stimuli without interfering sub-lexical information.

  3. Fixation Biases towards the Index Finger in Almost-Natural Grasping

    PubMed Central

    Voudouris, Dimitris; Smeets, Jeroen B. J.; Brenner, Eli

    2016-01-01

    We use visual information to guide our grasping movements. When grasping an object with a precision grip, the two digits need to reach two different positions more or less simultaneously, but the eyes can only be directed to one position at a time. Several studies that have examined eye movements in grasping have found that people tend to direct their gaze near where their index finger will contact the object. Here we aimed at better understanding why people do so by asking participants to lift an object off a horizontal surface. They were to grasp the object with a precision grip while movements of their hand, eye and head were recorded. We confirmed that people tend to look closer to positions that a digit needs to reach more accurately. Moreover, we show that where they look as they reach for the object depends on where they were looking before, presumably because they try to minimize the time during which the eyes are moving so fast that no new visual information is acquired. Most importantly, we confirmed that people have a bias to direct gaze towards the index finger’s contact point rather than towards that of the thumb. In our study, this cannot be explained by the index finger contacting the object before the thumb. Instead, it appears to be because the index finger moves to a position that is hidden behind the object that is grasped, probably making this the place at which one is most likely to encounter unexpected problems that would benefit from visual guidance. However, this cannot explain the bias that was found in previous studies, where neither contact point was hidden, so it cannot be the only explanation for the bias. PMID:26766551

  4. Acquired prosopagnosia without word recognition deficits.

    PubMed

    Susilo, Tirta; Wright, Victoria; Tree, Jeremy J; Duchaine, Bradley

    2015-01-01

    It has long been suggested that face recognition relies on specialized mechanisms that are not involved in visual recognition of other object categories, including those that require expert, fine-grained discrimination at the exemplar level such as written words. But according to the recently proposed many-to-many theory of object recognition (MTMT), visual recognition of faces and words are carried out by common mechanisms [Behrmann, M., & Plaut, D. C. ( 2013 ). Distributed circuits, not circumscribed centers, mediate visual recognition. Trends in Cognitive Sciences, 17, 210-219]. MTMT acknowledges that face and word recognition are lateralized, but posits that the mechanisms that predominantly carry out face recognition still contribute to word recognition and vice versa. MTMT makes a key prediction, namely that acquired prosopagnosics should exhibit some measure of word recognition deficits. We tested this prediction by assessing written word recognition in five acquired prosopagnosic patients. Four patients had lesions limited to the right hemisphere while one had bilateral lesions with more pronounced lesions in the right hemisphere. The patients completed a total of seven word recognition tasks: two lexical decision tasks and five reading aloud tasks totalling more than 1200 trials. The performances of the four older patients (3 female, age range 50-64 years) were compared to those of 12 older controls (8 female, age range 56-66 years), while the performances of the younger prosopagnosic (male, 31 years) were compared to those of 14 younger controls (9 female, age range 20-33 years). We analysed all results at the single-patient level using Crawford's t-test. Across seven tasks, four prosopagnosics performed as quickly and accurately as controls. Our results demonstrate that acquired prosopagnosia can exist without word recognition deficits. These findings are inconsistent with a key prediction of MTMT. They instead support the hypothesis that face recognition is carried out by specialized mechanisms that do not contribute to recognition of written words.
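
    The single-case comparisons mentioned above use Crawford's t-test, which compares one patient's score with a small control sample. The sketch below implements the standard Crawford and Howell formula; the accuracy scores in the example are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def crawford_t(case_score: float, control_scores):
        """
        Crawford & Howell's modified t-test for comparing a single case with a small
        control sample: t = (x_case - mean_controls) / (sd_controls * sqrt((n + 1) / n)),
        with n - 1 degrees of freedom. Returns (t, two-tailed p).
        """
        controls = np.asarray(control_scores, dtype=float)
        n = controls.size
        t = (case_score - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
        p = 2 * stats.t.sf(abs(t), df=n - 1)
        return t, p

    # Hypothetical reading-aloud accuracies (%): one prosopagnosic case vs. 12 controls.
    controls = [97, 98, 99, 96, 98, 97, 99, 98, 97, 96, 99, 98]
    print(crawford_t(97.5, controls))   # unremarkable -> performance within the control range
    ```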

  5. Magnetic resonance imaging (MRI) of PEM dehydration and gas manifold flooding during continuous fuel cell operation

    NASA Astrophysics Data System (ADS)

    Minard, Kevin R.; Viswanathan, Vilayanur V.; Majors, Paul D.; Wang, Li-Qiong; Rieke, Peter C.

    Magnetic resonance imaging (MRI) was employed for visualizing water inside a proton exchange membrane (PEM) fuel cell during 11.4 h of continuous operation with a constant load. Two-dimensional images acquired every 128 s revealed the formation of a dehydration front that propagated slowly over the surface of the fuel cell membrane, starting from the gas inlets and progressing toward the gas outlets. After traversing the entire PEM surface, channels in the gas manifold began to flood on the cathode side. To establish a qualitative understanding of these observations, the acquired images were correlated with the current output and the operating characteristics of the fuel cell. Results demonstrate the power of MRI for visualizing changing water distributions during PEM fuel cell operation, and highlight its potential utility for studying the causes of cell failure and/or strategies for water management.

  6. 3D visualization of Thoraco-Lumbar Spinal Lesions in German Shepherd Dog

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azpiroz, J.; Krafft, J.; Cadena, M.

    2006-09-08

    Computed tomography (CT) has been found to be an excellent imaging modality due to its sensitivity in characterizing the morphology of the spine in dogs. This technique is considered to be particularly helpful for diagnosing spinal cord atrophy and spinal stenosis. The three-dimensional visualization of organs and bones can significantly improve the diagnosis of certain diseases in dogs. CT images of a German shepherd dog's spinal cord were acquired, stacked, and digitally processed into a volume image. All images were acquired using standard clinical protocols on a clinical CT scanner. The three-dimensional visualization allowed us to observe anatomical structures that cannot be observed in two-dimensional images. The combination of an imaging modality like CT with image processing techniques can be a powerful tool for the diagnosis of a number of animal diseases.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marenco, S.; Kraut, M.A.; Soher, B.J.

    To ascertain whether local changes in signal intensity seen with functional MRI (fMRI) were related to regional blood flow changes measured with PET, 45 normal male volunteers (ages 31-49) underwent both procedures during rest and during bilateral visual stimulation. A single 4-mm-thick fMRI slice over the calcarine fissure was acquired with a gradient-echo sequence (TR, TE, α = 60, 60, 40°) on a GE Signa 1.5 T scanner. Sixty images were acquired over 366 s. The visual stimulator was turned on and off at intervals of 36 s, with a stimulating frequency of 8 Hz. ROIs were drawn around clusters of pixels with high z-scores, defined as (pixel value − mean over the whole acquisition)/SD. Several ROIs were drawn in each subject. Percent change in signal intensity was calculated as the average intensity of the six "on" images divided by the average of the six "off" images, multiplied by 100.
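
    A minimal sketch of the two per-pixel computations described above (the z-score map and the on/off percent signal intensity) is given below on a synthetic time series; the image size, block ordering and noise level are assumptions.

    ```python
    import numpy as np

    # Synthetic stand-in for the acquisition described above: 60 images of one 64 x 64 slice,
    # in alternating blocks of six "on" and six "off" images (sizes and values are assumptions).
    rng = np.random.default_rng(0)
    series = rng.normal(1000.0, 10.0, size=(60, 64, 64))
    on_blocks = (np.arange(60) // 6) % 2 == 0

    # z-score of each pixel in each image: (pixel value - mean over the whole acquisition) / SD.
    zmap = (series - series.mean(axis=0)) / series.std(axis=0)

    # Percent signal intensity: average of the "on" images over the average of the "off" images, x 100.
    percent = series[on_blocks].mean(axis=0) / series[~on_blocks].mean(axis=0) * 100.0

    print(zmap.shape, round(float(percent.mean()), 2))   # ~100 for this null (no-activation) example
    ```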

  8. Development of a system for acquiring, reconstructing, and visualizing three-dimensional ultrasonic angiograms

    NASA Astrophysics Data System (ADS)

    Edwards, Warren S.; Ritchie, Cameron J.; Kim, Yongmin; Mack, Laurence A.

    1995-04-01

    We have developed a three-dimensional (3D) imaging system using power Doppler (PD) ultrasound (US). This system can be used for visualizing and analyzing the vascular anatomy of parenchymal organs. To create the 3D PD images, we acquired a series of two-dimensional PD images from a commercial US scanner and recorded the position and orientation of each image using a 3D magnetic position sensor. Three-dimensional volumes were reconstructed using specially designed software and then volume rendered for display. We assessed the feasibility and geometric accuracy of our system with various flow phantoms. The system was then tested on a volunteer by scanning a transplanted kidney. The reconstructed volumes of the flow phantom contained less than 1 mm of geometric distortion and the 3D images of the transplanted kidney depicted the segmental, arcuate, and interlobar vessels.

  9. Acquired color vision and visual field defects in patients with ocular hypertension and early glaucoma

    PubMed Central

    Papaconstantinou, Dimitris; Georgalas, Ilias; Kalantzis, George; Karmiris, Efthimios; Koutsandrea, Chrysanthi; Diagourtas, Andreas; Ladas, Ioannis; Georgopoulos, Gerasimos

    2009-01-01

    Purpose: To study acquired color vision and visual field defects in patients with ocular hypertension (OH) and early glaucoma. Methods: In a prospective study we evaluated 99 eyes of 56 patients with OH without visual field defects and no hereditary color deficiencies, followed up for 4 to 6 years (mean = 4.7 ± 0.6 years). Color vision defects were studied using a special computer program for the Farnsworth–Munsell 100 hue test, and visual field tests were performed with the Humphrey analyzer using program 30–2. Both tests were repeated every six months. Results: In fifty-six eyes, glaucomatous defects were observed during the follow-up period. There was a statistically significant difference in total error score (TES) between eyes that eventually developed glaucoma (157.89 ± 31.79) and OH eyes (75.51 ± 31.57) at the first examination (t value 12.816, p < 0.001). At the same time, visual field indices were within normal limits in both groups. In the glaucomatous eyes the earliest statistically significant change in TES was identified at the first year of follow-up and was −20.62 ± 2.75 (t value 9.08, p < 0.001), while in OH eyes it was −2.11 ± 4.36 (t value 1.1, p = 0.276). Pearson's coefficient was high in all examinations and showed a direct correlation between TES and mean deviation and corrected pattern standard deviation in both groups. Conclusion: Quantitative analysis of color vision defects provides the possibility of follow-up and can prove a useful means for detecting early glaucomatous changes in patients with normal visual fields. PMID:19668575

  10. Three-Dimensional Computed Tomography (3–D CT) for Evaluation and Management of Children with Complex Chest Wall Anomalies: Useful Information or Just Pretty Pictures?

    PubMed Central

    Calloway, E. Hollin; Chhotani, Ali N.; Lee, Yueh Z.; Phillips, J. Duncan

    2013-01-01

    Purpose Shaded Surface Display (SSD) technology, with 3-D CT reconstruction, has been reported in a few small series of patients with congenital or acquired chest wall deformities. SSD images are visually attractive and educational, but many institutions are hesitant to utilize them because of cost and image data storage concerns. This study was designed to assess the true value of SSD to the patient, family, and operating surgeon in the evaluation and management of these children. Methods Following IRB approval, we performed a retrospective review of records of 82 patients with chest wall deformities, evaluated with SSD, from 2002 to 2009. SSD usefulness, when compared with routine 2-D CT, was graded on a strict numerical scale from 0 (added no value besides education for the patient/family) to 3 (critical for surgical planning and patient management). Results There were 56 males and 26 females. Median age was 15.3 years (range: 0.6–41.1). Deformities included 56 pectus excavatum, 19 pectus carinatum, and 8 other/mixed deformities. Six patients also had acquired asphyxiating thoracic dystrophy (AATD). Eleven (13%) had previous chest wall reconstructive surgery. In 25 (30%) patients, SSD was useful or critical. Findings underappreciated on 2-D images included: sternal abnormalities (29), rib abnormalities (28), and heterotopic calcifications (7). SSD changed or influenced operation choice (4), clarified bone versus soft tissue (3), helped clarify AATD (3), and aided in rib graft evaluation (2). Point biserial correlation coefficient analysis (Rpb) displayed significance for SSD usefulness in patients with previous chest repair surgery (Rpb=0.48, p≤0.001), AATD (Rpb=0.34, p=0.001), pectus carinatum (Rpb=0.27, p=0.008), and females (Rpb=0.19, p=0.044). Conclusions Shaded Surface Display, when used to evaluate children and young adults with congenital or acquired chest wall deformities, provides useful or critical information for surgical planning and patient management in almost one-third of patients, especially those requiring a second operation, those with acquired asphyxiating thoracic dystrophy or pectus carinatum, and females. PMID:21496531

  11. When is visual information used to control locomotion when descending a kerb?

    PubMed

    Buckley, John G; Timmis, Matthew A; Scally, Andy J; Elliott, David B

    2011-04-18

    Descending kerbs during locomotion involves the regulation of appropriate foot placement before the kerb-edge and foot clearance over it. It also involves the modulation of gait output to ensure the body mass is safely and smoothly lowered to the new level. Previous research has shown that vision is used in such adaptive gait tasks for feedforward planning, with vision from the lower visual field (lvf) used for online updating. The present study determined when lvf information is used to control/update locomotion when stepping from a kerb. Twelve young adults stepped down a kerb during ongoing gait. Force-sensitive resistors (attached to participants' feet), interfaced with a high-speed PDLC 'smart glass' sheet, allowed the lvf to be unpredictably occluded from heel-contact of either the penultimate or the final step before the kerb-edge until contact with the lower level. Analysis focussed on determining changes in foot placement distance before the kerb-edge, clearance over it, and in kinematic measures of the step down. Lvf occlusion from the instant of final step contact had no significant effect on any dependent variable (p > 0.09). Occlusion of the lvf from the instant of penultimate step contact had a significant effect on foot clearance and on several kinematic measures, with findings consistent with participants becoming uncertain regarding the relative horizontal location of the kerb-edge. These findings suggest concurrent feedback of the lower limb, kerb-edge, and/or floor area immediately in front of/below the kerb is not used when stepping from a kerb during ongoing gait. Instead, heel clearance and pre-landing kinematic parameters are determined/planned using lvf information acquired in the penultimate step during the approach to the kerb-edge, with information related to foot placement before the kerb-edge being the most salient.

  12. Initial eye movements during face identification are optimal and similar across cultures

    PubMed Central

    Or, Charles C.-F.; Peterson, Matthew F.; Eckstein, Miguel P.

    2015-01-01

    Culture influences not only human high-level cognitive processes but also low-level perceptual operations. Some perceptual operations, such as initial eye movements to faces, are critical for extraction of information supporting evolutionarily important tasks such as face identification. The extent of cultural effects on these crucial perceptual processes is unknown. Here, we report that the first gaze location for face identification was similar across East Asian and Western Caucasian cultural groups: Both fixated a featureless point between the eyes and the nose, with smaller between-group than within-group differences and with a small horizontal difference across cultures (8% of the interocular distance). We also show that individuals of both cultural groups initially fixated at a slightly higher point on Asian faces than on Caucasian faces. The initial fixations were found to be both fundamental in acquiring the majority of information for face identification and optimal, as accuracy deteriorated when observers held their gaze away from their preferred fixations. An ideal observer that integrated facial information with the human visual system's varying spatial resolution across the visual field showed a similar information distribution across faces of both races and predicted initial human fixations. The model consistently replicated the small vertical difference between human fixations to Asian and Caucasian faces but did not predict the small horizontal leftward bias of Caucasian observers. Together, the results suggest that initial eye movements during face identification may be driven by brain mechanisms aimed at maximizing accuracy, and less influenced by culture. The findings increase our understanding of the interplay between the brain's aims to optimally accomplish basic perceptual functions and to respond to sociocultural influences. PMID:26382003

  13. Construction of a multimodal CT-video chest model

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2014-03-01

    Bronchoscopy enables a number of minimally invasive chest procedures for diseases such as lung cancer and asthma. For example, using the bronchoscope's continuous video stream as a guide, a physician can navigate through the lung airways to examine general airway health, collect tissue samples, or administer a disease treatment. In addition, physicians can now use new image-guided intervention (IGI) systems, which draw upon both three-dimensional (3D) multi-detector computed tomography (MDCT) chest scans and bronchoscopic video, to assist with bronchoscope navigation. Unfortunately, little use is made of the acquired video stream, a potentially invaluable source of information. In addition, little effort has been made to link the bronchoscopic video stream to the detailed anatomical information given by a patient's 3D MDCT chest scan. We propose a method for constructing a multimodal CT-video model of the chest. After automatically computing a patient's 3D MDCT-based airway-tree model, the method next parses the available video data to generate a positional linkage between a sparse set of key video frames and airway path locations. Next, a fusion/mapping of the video's color mucosal information and MDCT-based endoluminal surfaces is performed. This results in the final multimodal CT-video chest model. The data structure constituting the model provides a history of those airway locations visited during bronchoscopy. It also provides for quick visual access to relevant sections of the airway wall by condensing large portions of endoscopic video into representative frames containing important structural and textural information. When examined with a set of interactive visualization tools, the resulting fused data structure provides a rich multimodal data source. We demonstrate the potential of the multimodal model with both phantom and human data.
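
    A minimal sketch of the kind of data structure such a linkage might use is given below: sparse key frames from the video are tied to locations in the MDCT-derived airway tree, giving quick visual access to representative frames per airway location. All class and field names are hypothetical and are not taken from the paper.

    ```python
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class KeyFrameLink:
        """One entry of the video-to-CT linkage (all field names are hypothetical)."""
        frame_index: int            # index into the bronchoscopic video stream
        airway_branch_id: str       # branch in the MDCT-derived airway tree
        depth_mm: float             # position along the branch centreline
        texture_summary: Dict[str, float] = field(default_factory=dict)  # e.g. mean mucosal colour

    @dataclass
    class MultimodalChestModel:
        airway_tree: Dict[str, List[str]]               # branch_id -> child branch ids (from MDCT)
        links: List[KeyFrameLink] = field(default_factory=list)

        def frames_for_branch(self, branch_id: str) -> List[KeyFrameLink]:
            """Quick visual access: representative frames recorded at a given airway location."""
            return [link for link in self.links if link.airway_branch_id == branch_id]

    model = MultimodalChestModel(airway_tree={"trachea": ["RMB", "LMB"], "RMB": [], "LMB": []})
    model.links.append(KeyFrameLink(1520, "RMB", 12.5, {"mean_red": 0.61}))
    print([link.frame_index for link in model.frames_for_branch("RMB")])
    ```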

  14. Integrating thematic web portal capabilities into the NASA Earthdata Web Infrastructure

    NASA Astrophysics Data System (ADS)

    Wong, M. M.; McLaughlin, B. D.; Huang, T.; Baynes, K.

    2015-12-01

    The National Aeronautics and Space Administration (NASA) acquires and distributes an abundance of Earth science data on a daily basis to a diverse user community worldwide. To assist the scientific community and general public in achieving a greater understanding of the interdisciplinary nature of Earth science and of key environmental and climate change topics, the NASA Earthdata web infrastructure is integrating new methods of presenting and providing access to Earth science information, data, research and results. This poster will present the process of integrating thematic web portal capabilities into the NASA Earthdata web infrastructure, with examples from the Sea Level Change Portal. The Sea Level Change Portal will be a source of current NASA research, data and information regarding sea level change. The portal will provide sea level change information through articles, graphics, videos and animations, an interactive tool to view and access sea level change data, and a dashboard showing sea level change indicators. Earthdata is a part of the Earth Observing System Data and Information System (EOSDIS) project. EOSDIS is a key core capability in NASA's Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA's Earth science data from various sources - satellites, aircraft, field measurements, and various other programs. It comprises twelve Distributed Active Archive Centers (DAACs), Science Computing Facilities (SCFs), data discovery and service access clients (Reverb and Earthdata Search), a dataset directory (Global Change Master Directory - GCMD), near real-time data (Land Atmosphere Near real-time Capability for EOS - LANCE), Worldview (an imagery visualization interface), Global Imagery Browse Services, the Earthdata Code Collaborative and a host of other discipline-specific data discovery, data access, data subsetting and visualization tools.

  15. Robustness of remote stress detection from visible spectrum recordings

    NASA Astrophysics Data System (ADS)

    Kaur, Balvinder; Moses, Sophia; Luthra, Megha; Ikonomidou, Vasiliki N.

    2016-05-01

    In our recent work, we have shown that it is possible to extract high fidelity timing information of the cardiac pulse wave from visible spectrum videos, which can then be used as a basis for stress detection. In that approach, we used both heart rate variability (HRV) metrics and the differential pulse transit time (dPTT) as indicators of the presence of stress. One of the main concerns in this analysis is its robustness in the presence of noise, as the remotely acquired signal that we call blood wave (BW) signal is degraded with respect to the signal acquired using contact sensors. In this work, we discuss the robustness of our metrics in the presence of multiplicative noise. Specifically, we study the effects of subtle motion due to respiration and changes in illumination levels due to light flickering on the BW signal, the HRV-driven features, and the dPTT. Our sensitivity study involved both Monte Carlo simulations and experimental data from human facial videos, and indicates that our metrics are robust even under moderate amounts of noise. Generated results will help the remote stress detection community with developing requirements for visual spectrum based stress detection systems.
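
    A Monte Carlo check of robustness to multiplicative noise, in the spirit of the sensitivity study described above, can be sketched as follows; the synthetic blood-wave signal, the respiration and flicker frequencies and depths, and the spectral heart-rate estimator are all assumptions rather than the authors' actual pipeline.

    ```python
    import numpy as np

    fs = 250.0                                   # sampling rate in Hz (assumed)
    t = np.arange(0, 60, 1 / fs)
    hr_hz = 1.2                                  # 72 beats per minute
    clean_bw = 1.0 + 0.05 * np.sin(2 * np.pi * hr_hz * t)   # synthetic blood-wave (BW) signal

    def add_multiplicative_noise(signal, resp_depth=0.03, flicker_depth=0.02, rng=None):
        """Respiration-like (0.25 Hz) motion and light flicker (100 Hz) modelled as multiplicative terms."""
        rng = rng or np.random.default_rng()
        resp = 1 + resp_depth * np.sin(2 * np.pi * 0.25 * t + rng.uniform(0, 2 * np.pi))
        flicker = 1 + flicker_depth * np.sin(2 * np.pi * 100.0 * t + rng.uniform(0, 2 * np.pi))
        return signal * resp * flicker

    def estimated_heart_rate(signal):
        """Spectral peak in the cardiac band (0.7-3 Hz) as a crude stand-in for pulse timing extraction."""
        spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
        freqs = np.fft.rfftfreq(signal.size, 1 / fs)
        band = (freqs > 0.7) & (freqs < 3.0)
        return freqs[band][np.argmax(spectrum[band])]

    rng = np.random.default_rng(1)
    estimates = [estimated_heart_rate(add_multiplicative_noise(clean_bw, rng=rng)) for _ in range(200)]
    print(round(float(np.mean(estimates)) * 60, 1), "bpm")   # stays close to 72 bpm under this noise
    ```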

  16. Development of a Smart Mobile Data Module for Fetal Monitoring in E-Healthcare.

    PubMed

    Houzé de l'Aulnoit, Agathe; Boudet, Samuel; Génin, Michaël; Gautier, Pierre-François; Schiro, Jessica; Houzé de l'Aulnoit, Denis; Beuscart, Régis

    2018-03-23

    The fetal heart rate (FHR) is a marker of fetal well-being in utero (when monitoring maternal and/or fetal pathologies) and during labor. Here, we developed a smart mobile data module for the remote acquisition and transmission (via a Wi-Fi or 4G connection) of FHR recordings, together with a web-based viewer for displaying the FHR datasets on a computer, smartphone or tablet. In order to define the features required by users, we modelled the fetal monitoring procedure (in home and hospital settings) via semi-structured interviews with midwives and obstetricians. Using this information, we developed a mobile data transfer module based on a Raspberry Pi. When connected to a standalone fetal monitor, the module acquires the FHR signal and sends it (via a Wi-Fi or a 3G/4G mobile internet connection) to a secure server within our hospital information system. The archived, digitized signal data are linked to the patient's electronic medical records. An HTML5/JavaScript web viewer converts the digitized FHR data into easily readable and interpretable graphs for viewing on a computer (running Windows, Linux or MacOS) or a mobile device (running Android, iOS or Windows Phone OS). The data can be viewed in real time or offline. The application includes tools required for correct interpretation of the data (signal loss calculation, scale adjustment, and precise measurements of the signal's characteristics). We performed a proof-of-concept case study of the transmission, reception and visualization of FHR data for a pregnant woman at 30 weeks of amenorrhea. She was hospitalized in the pregnancy assessment unit and FHR data were acquired three times a day with a Philips Avalon® FM30 fetal monitor. The prototype (Raspberry Pi) was connected to the fetal monitor's RS232 port. The emission and reception of prerecorded signals were tested; the web server correctly received the signals, and the FHR recording was visualized in real time on a computer, a tablet and smartphones (running Android and iOS) via the web viewer. This process did not perturb the hospital's computer network. There was no data delay or loss during a 60-min test. The web viewer was tested successfully in the various usage situations. The system was as user-friendly as expected, and enabled rapid, secure archiving. We have developed a system for the acquisition, transmission, recording and visualization of FHR data. Healthcare professionals can view the FHR data remotely on their computer, tablet or smartphone. Integration of FHR data into a hospital information system enables optimal, secure, long-term data archiving.
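
    The acquisition-and-forwarding loop described above (RS232 input on the Raspberry Pi, transmission to a secure hospital server) can be sketched as follows using pyserial and requests. The serial port name, baud rate, endpoint URL, payload fields and the one-value-per-line framing are all hypothetical; the monitor's actual output protocol is not reproduced here.

    ```python
    import json
    import time

    import requests          # third-party HTTP client
    import serial            # third-party pyserial, for the RS232 link

    # All identifiers below (port, baud rate, endpoint, payload fields) are hypothetical.
    PORT, BAUD = "/dev/ttyUSB0", 9600
    ENDPOINT = "https://hospital.example.org/api/fhr"     # placeholder server URL

    def acquire_and_forward(patient_id: str) -> None:
        """Read FHR samples line by line from the monitor and POST them to the server."""
        with serial.Serial(PORT, BAUD, timeout=1) as link:
            while True:
                raw = link.readline().decode("ascii", errors="ignore").strip()
                if not raw:
                    continue                               # no sample in this interval
                sample = {"patient_id": patient_id,
                          "timestamp": time.time(),
                          "fhr_bpm": float(raw)}           # assumes one numeric value per line
                try:
                    requests.post(ENDPOINT, data=json.dumps(sample),
                                  headers={"Content-Type": "application/json"}, timeout=5)
                except requests.RequestException:
                    pass   # a real module would buffer samples locally and retry

    if __name__ == "__main__":
        acquire_and_forward("anonymised-patient-001")
    ```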

  17. Profiling Oman education data using data visualization technique

    NASA Astrophysics Data System (ADS)

    Alalawi, Sultan Juma Sultan; Shaharanee, Izwan Nizal Mohd; Jamil, Jastini Mohd

    2016-10-01

    This research presents an innovative data visualization technique for understanding and visualizing the information in Oman's education data generated from the Ministry of Education Oman "Educational Portal". The Ministry of Education in the Sultanate of Oman maintains huge databases containing massive amounts of information. The volume of data in these databases increases yearly as records for many students, teachers and employees are added. The task of discovering and analyzing these vast volumes of data becomes increasingly difficult. Information visualization and data mining offer better ways of dealing with large volumes of information. In this paper, an innovative information visualization technique is developed to visualize the complex multidimensional educational data. Microsoft Excel Dashboard, Visual Basic for Applications (VBA) and Pivot Tables are utilized to visualize the data. Findings from the summarization of the data are presented, and it is argued that information visualization can help related stakeholders become aware of hidden and interesting information in the large amounts of data held in their educational portal.
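
    The pivot-table summarisation step described above was implemented with Excel dashboards, VBA and Pivot Tables; the sketch below shows an analogous summarisation in pandas, purely as an illustration. The column names and values are invented and do not come from the Omani portal.

    ```python
    import pandas as pd

    # Hypothetical extract from an educational portal; column names and values are invented.
    records = pd.DataFrame({
        "governorate": ["Muscat", "Muscat", "Dhofar", "Dhofar", "Al Batinah"],
        "grade":       [10, 11, 10, 11, 10],
        "gender":      ["F", "M", "F", "M", "F"],
        "enrolment":   [1200, 1100, 640, 610, 980],
    })

    # The pivot-table step used in the dashboard, expressed with pandas instead of Excel/VBA.
    summary = records.pivot_table(values="enrolment", index="governorate",
                                  columns="grade", aggfunc="sum", fill_value=0)
    print(summary)
    ```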

  18. Learning where to feed: the use of social information in flower-visiting Pallas' long-tongued bats (Glossophaga soricina).

    PubMed

    Rose, Andreas; Kolar, Miriam; Tschapka, Marco; Knörnschild, Mirjam

    2016-03-01

    Social learning is a widespread phenomenon among vertebrates that influences various patterns of behaviour and is often reported with respect to foraging behaviour. The use of social information by foraging bats has been documented in insectivorous, carnivorous and frugivorous species, but there are few data on whether flower-visiting nectarivorous bats (Phyllostomidae: Glossophaginae) can acquire information about food from other individuals. In this study, we conducted an experiment with a demonstrator-observer paradigm to investigate whether flower-visiting Pallas' long-tongued bats (Glossophaga soricina) are able to socially learn novel flower positions via observation of, or interaction with, knowledgeable conspecifics. The results demonstrate that flower-visiting G. soricina are able to use social information for the location of novel flower positions and can thereby reduce energy-costly search efforts. This social transmission is explainable as a result of local enhancement; learning bats might rely on both visual and echo-acoustical perception and are likely to eavesdrop on auditory cues that are emitted by feeding conspecifics. We additionally tested the spatial memory capacity of former demonstrator bats when retrieving a learned flower position, and the results indicate that flower-visiting bats remember a learned flower position after several weeks.

  19. Stereotactic radiation treatment planning and follow-up studies involving fused multimodality imaging.

    PubMed

    Hamm, Klaus D; Surber, Gunnar; Schmücking, Michael; Wurm, Reinhard E; Aschenbach, Rene; Kleinert, Gabriele; Niesen, A; Baum, Richard P

    2004-11-01

    Innovative new software solutions may enable image fusion to produce the desired data superposition for precise target definition and follow-up studies in radiosurgery/stereotactic radiotherapy in patients with intracranial lesions. The aim is to integrate the anatomical and functional information completely into the radiation treatment planning and to achieve an exact comparison for follow-up examinations. Special conditions and advantages of BrainLAB's fully automatic image fusion system are evaluated and described for this purpose. In 458 patients, the radiation treatment planning and some follow-up studies were performed using an automatic image fusion technique involving the use of different imaging modalities. Each fusion was visually checked and corrected as necessary. The computerized tomography (CT) scans for radiation treatment planning (slice thickness 1.25 mm), as well as stereotactic angiography for arteriovenous malformations, were acquired using head fixation with stereotactic arc or, in the case of stereotactic radiotherapy, with a relocatable stereotactic mask. Different magnetic resonance (MR) imaging sequences (T1, T2, and fluid-attenuated inversion-recovery images) and positron emission tomography (PET) scans were obtained without head fixation. Fusion results and the effects on radiation treatment planning and follow-up studies were analyzed. The precision level of the results of the automatic fusion depended primarily on the image quality, especially the slice thickness and the field homogeneity when using MR images, as well as on patient movement during data acquisition. Fully automated image fusion of different MR, CT, and PET studies was performed for each patient. Only in a few cases was it necessary to correct the fusion manually after visual evaluation. These corrections were minor and did not materially affect treatment planning. High-quality fusion of thin slices of a region of interest with a complete head data set could be performed easily. The target volume for radiation treatment planning could be accurately delineated using multimodal information provided by CT, MR, angiography, and PET studies. The fusion of follow-up image data sets yielded results that could be successfully compared and quantitatively evaluated. Depending on the quality of the originally acquired image, automated image fusion can be a very valuable tool, allowing for fast (approximately 1-2 minutes) and precise fusion of all relevant data sets. Fused multimodality imaging improves the target volume definition for radiation treatment planning. High-quality follow-up image data sets should be acquired for image fusion to provide exactly comparable slices and volumetric results that will contribute to quality control.

  20. Connectionist neuropsychology: uncovering ultimate causes of acquired dyslexia.

    PubMed

    Woollams, Anna M

    2014-01-01

    Acquired dyslexia offers a unique window on to the nature of the cognitive and neural architecture supporting skilled reading. This paper provides an integrative overview of recent empirical and computational work on acquired dyslexia within the context of the primary systems framework as implemented in connectionist neuropsychological models. This view proposes that damage to general visual, phonological or semantic processing abilities are the root causes of different forms of acquired dyslexia. Recent case-series behavioural evidence concerning pure alexia, phonological dyslexia and surface dyslexia that supports this perspective is presented. Lesion simulations of these findings within connectionist models of reading demonstrate the viability of this approach. The commitment of such models to learnt representations allows them to capture key aspects of performance in each type of acquired dyslexia, particularly the associated non-reading deficits, the role of relearning and the influence of individual differences in the premorbid state of the reading system. Identification of these factors not only advances our understanding of acquired dyslexia and the mechanisms of normal reading but they are also relevant to the complex interactions underpinning developmental reading disorders.

  1. Connectionist neuropsychology: uncovering ultimate causes of acquired dyslexia

    PubMed Central

    Woollams, Anna M.

    2014-01-01

    Acquired dyslexia offers a unique window on to the nature of the cognitive and neural architecture supporting skilled reading. This paper provides an integrative overview of recent empirical and computational work on acquired dyslexia within the context of the primary systems framework as implemented in connectionist neuropsychological models. This view proposes that damage to general visual, phonological or semantic processing abilities are the root causes of different forms of acquired dyslexia. Recent case-series behavioural evidence concerning pure alexia, phonological dyslexia and surface dyslexia that supports this perspective is presented. Lesion simulations of these findings within connectionist models of reading demonstrate the viability of this approach. The commitment of such models to learnt representations allows them to capture key aspects of performance in each type of acquired dyslexia, particularly the associated non-reading deficits, the role of relearning and the influence of individual differences in the premorbid state of the reading system. Identification of these factors not only advances our understanding of acquired dyslexia and the mechanisms of normal reading but they are also relevant to the complex interactions underpinning developmental reading disorders. PMID:24324241

  2. Retinal angiography with real-time speckle variance optical coherence tomography.

    PubMed

    Xu, Jing; Han, Sherry; Balaratnasingam, Chandrakumar; Mammo, Zaid; Wong, Kevin S K; Lee, Sieun; Cua, Michelle; Young, Mei; Kirker, Andrew; Albiani, David; Forooghian, Farzin; Mackenzie, Paul; Merkur, Andrew; Yu, Dao-Yi; Sarunic, Marinko V

    2015-10-01

    This report describes a novel, non-invasive and label-free optical imaging technique, speckle variance optical coherence tomography (svOCT), for visualising blood flow within human retinal capillary networks. This imaging system uses a custom-built swept source OCT system operating at a line rate of 100 kHz. Real-time processing and visualisation is implemented on a consumer grade graphics processing unit. To investigate the quality of microvascular detail acquired with this device we compared images of human capillary networks acquired with svOCT and fluorescein angiography. We found that the density of capillary microvasculature acquired with this svOCT device was visibly greater than fluorescein angiography. We also found that this svOCT device had the capacity to generate en face images of distinct capillary networks that are morphologically comparable with previously published histological studies. Finally, we found that this svOCT device has the ability to non-invasively illustrate the common manifestations of diabetic retinopathy and retinal vascular occlusion. The results of this study suggest that graphics processing unit accelerated svOCT has the potential to non-invasively provide useful quantitative information about human retinal capillary networks. Therefore svOCT may have clinical and research applications for the management of retinal microvascular diseases, which are a major cause of visual morbidity worldwide. Published by the BMJ Publishing Group Limited.
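
    The speckle variance computation underlying svOCT takes the inter-frame variance of repeated B-scans acquired at the same location; decorrelation caused by flowing blood yields high variance, while static tissue does not. The sketch below illustrates this on synthetic data; the number of repeats, frame size and Rayleigh-distributed speckle are assumptions, not parameters of the system described above.

    ```python
    import numpy as np

    def speckle_variance(bscans: np.ndarray) -> np.ndarray:
        """
        bscans: array of shape (N, depth, width) holding N repeated OCT B-scans acquired
        at the same location. Flowing blood decorrelates the speckle pattern between
        repeats, so its inter-frame variance is high; static tissue stays near zero.
        """
        return np.var(bscans, axis=0)

    # Synthetic example: static tissue plus a small "vessel" region with frame-to-frame decorrelation.
    rng = np.random.default_rng(0)
    N, depth, width = 4, 128, 256
    frames = np.tile(rng.rayleigh(1.0, (depth, width)), (N, 1, 1))       # identical static speckle
    frames[:, 60:68, 100:110] = rng.rayleigh(1.0, (N, 8, 10))            # decorrelated (flow) region

    sv = speckle_variance(frames)
    print(sv[62, 105] > sv[20, 50])    # True: vessel pixels stand out in the angiogram
    ```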

  3. Visualization of tumor vascular reactivity in response to respiratory challenges by optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Hoon Sup; Lee, Songhyun; Lee, Kiri; Eom, Tae Joong; Kim, Jae G.

    2016-02-01

    We previously reported the potential of using vascular reactivity during respiratory challenges as a marker to predict the response of breast tumor to chemotherapy in a rat model by using a continuous wave near-infrared spectroscopy. However, it cannot visualize how the vascular reactivity from tumor vessels can predict the tumor response to its treatment. In this study, we utilized a spectral domain optical coherence tomography (SD-OCT) system to visualize the vascular reactivity of both tumor and normal vasculature during respiratory challenges in a mouse model. We adapted an intensity-based Doppler variance algorithm to draw angiograms from the ear of a mouse (8-week-old Balb/c nu/nu). Animals were anesthetized using 1.5% isoflurane, and the body temperature was maintained by a heating pad. Inhalational gas was switched from air (10 min) to 100% oxygen (10 min), and a pulse oximeter was used to monitor arterial oxygen saturation and heart rate. OCT angiograms were acquired 5 min after the onset of each gas. The vasoconstriction effect of hyperoxic gas on the vasculature was shown by subtracting an en-face image acquired during 100% oxygen inhalation from the image acquired during air inhalation. The quantitative change in vessel diameter was measured from the en-face OCT images of the individual blood vessels. The percentage of blood vessel diameter reduction varied from 1% to 12% depending on whether the vessel was arterial, capillary, or venous. The vascular reactivity change during breast tumor progression and after chemotherapy will be monitored by OCT angiography.

  4. GPU accelerated optical coherence tomography angiography using strip-based registration (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Heisler, Morgan; Lee, Sieun; Mammo, Zaid; Jian, Yifan; Ju, Myeong Jin; Miao, Dongkai; Raposo, Eric; Wahl, Daniel J.; Merkur, Andrew; Navajas, Eduardo; Balaratnasingam, Chandrakumar; Beg, Mirza Faisal; Sarunic, Marinko V.

    2017-02-01

    High quality visualization of the retinal microvasculature can improve our understanding of the onset and development of retinal vascular diseases, which are a major cause of visual morbidity and are increasing in prevalence. Optical Coherence Tomography Angiography (OCT-A) images are acquired over multiple seconds and are particularly susceptible to motion artifacts, which are more prevalent when imaging patients with pathology whose ability to fixate is limited. The acquisition of multiple OCT-A images sequentially can be performed for the purpose of removing motion artifact and increasing the contrast of the vascular network through averaging. Due to the motion artifacts, a robust registration pipeline is needed before feature preserving image averaging can be performed. In this report, we present a novel method for a GPU-accelerated pipeline for acquisition, processing, segmentation, and registration of multiple, sequentially acquired OCT-A images to correct for the motion artifacts in individual images for the purpose of averaging. High performance computing, blending CPU and GPU, was introduced to accelerate processing in order to provide high quality visualization of the retinal microvasculature and to enable a more accurate quantitative analysis in a clinically useful time frame. Specifically, image discontinuities caused by rapid micro-saccadic movements and image warping due to smoother reflex movements were corrected by strip-wise affine registration estimated using Scale Invariant Feature Transform (SIFT) keypoints and subsequent local similarity-based non-rigid registration. These techniques improve the image quality, increasing the value for clinical diagnosis and increasing the range of patients for whom high quality OCT-A images can be acquired.
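
    The strip-wise affine registration from SIFT keypoints can be sketched with OpenCV as below; this CPU-only version omits the GPU acceleration and the subsequent local non-rigid refinement, and the strip height, Lowe-ratio threshold, RANSAC settings and grayscale uint8 inputs are assumptions.

    ```python
    import cv2
    import numpy as np

    def register_strip(strip, y_offset, reference):
        """
        Estimate an affine transform for one horizontal strip from SIFT keypoint matches
        against the reference frame and warp it into reference coordinates. Returns a
        full-size canvas holding the warped strip; on failure the strip is pasted back unregistered.
        """
        h, w = reference.shape
        fallback = np.zeros((h, w), dtype=strip.dtype)
        fallback[y_offset:y_offset + strip.shape[0]] = strip
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(strip, None)
        kp2, des2 = sift.detectAndCompute(reference, None)
        if des1 is None or des2 is None or len(kp1) < 3:
            return fallback
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
        good = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
        if len(good) < 3:
            return fallback
        # keypoint coordinates are strip-local, so add the vertical offset before estimating the transform
        src = np.float32([(kp1[m.queryIdx].pt[0], kp1[m.queryIdx].pt[1] + y_offset)
                          for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        return fallback if M is None else cv2.warpAffine(fallback, M, (w, h))

    def register_and_average(frames, reference, strip_height=64):
        """Strip-wise registration of sequentially acquired frames, then simple averaging."""
        accum = reference.astype(np.float32)
        for frame in frames:
            warped = np.zeros_like(accum)
            for y in range(0, frame.shape[0], strip_height):
                canvas = register_strip(frame[y:y + strip_height], y, reference)
                warped = np.maximum(warped, canvas.astype(np.float32))   # composite non-zero strip content
            accum += warped
        return (accum / (len(frames) + 1)).astype(np.uint8)

    # Tiny synthetic demo: a reference frame with vessel-like arcs and a shifted copy of it.
    ref = np.zeros((256, 256), dtype=np.uint8)
    for i in range(8):
        cv2.circle(ref, (32 * i + 16, 128), 40, 255, 2)
    moved = np.roll(ref, shift=(3, 5), axis=(0, 1))
    print(register_and_average([moved], ref).shape)
    ```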

  5. Satisfactory rate of post-processing visualization of fetal cerebral axial, sagittal, and coronal planes from three-dimensional volumes acquired in routine second trimester ultrasound practice by sonographers of peripheral centers.

    PubMed

    Rizzo, Giuseppe; Pietrolucci, Maria Elena; Capece, Giuseppe; Cimmino, Ernesto; Colosi, Enrico; Ferrentino, Salvatore; Sica, Carmine; Di Meglio, Aniello; Arduini, Domenico

    2011-08-01

    The aim of this study was to evaluate the feasibility of visualizing central nervous system (CNS) diagnostic planes from three-dimensional (3D) brain volumes obtained in ultrasound facilities with no specific experience in fetal neurosonography. Five sonographers prospectively recorded transabdominal 3D CNS volumes, starting from an axial approach, in 500 consecutive pregnancies at 19-24 weeks of gestation undergoing routine ultrasound examination. Volumes were sent to the referral center (Department of Obstetrics and Gynecology, Università Roma Tor Vergata, Italy), where two independent reviewers with experience in 3D ultrasound assessed their quality in displaying axial, coronal, and sagittal planes. CNS volumes were acquired in 491/500 pregnancies (98.2%). The two reviewers judged the images satisfactory, with visualization rates of 95.1% and 97.14% for axial planes, 73.72% and 87.16% for coronal planes, and 78.41% and 94.29% for sagittal planes, respectively. Agreement between the two reviewers, expressed by Cohen's kappa coefficient, was >0.87 for axial planes, >0.89 for coronal planes, and >0.94 for sagittal planes. A maternal body mass index >30 affected the probability of achieving satisfactory CNS views, whereas previous maternal lower-abdominal surgery did not affect the quality of the reconstructed planes. CNS volumes acquired by 3D ultrasonography in peripheral centers showed a quality high enough to allow a detailed fetal neurosonogram.
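
    For reference, the agreement statistic quoted above can be computed in a few lines; the ratings below are hypothetical placeholders, not the study data.

```python
# Cohen's kappa for two reviewers rating each reconstructed plane as satisfactory (1)
# or not (0). The ratings here are hypothetical placeholders, not the study data.
def cohen_kappa(r1, r2):
    n = len(r1)
    categories = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n                         # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)  # chance agreement
    return (po - pe) / (1 - pe)

reviewer_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
reviewer_b = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
print(round(cohen_kappa(reviewer_a, reviewer_b), 2))
```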

  6. Machine assisted histogram classification

    NASA Astrophysics Data System (ADS)

    Benyó, B.; Gaspar, C.; Somogyi, P.

    2010-04-01

    LHCb is one of the four major experiments under completion at the Large Hadron Collider (LHC). Monitoring the quality of the acquired data is important, because it allows the verification of the detector performance. Anomalies, such as missing values or unexpected distributions, can be indicators of a malfunctioning detector, resulting in poor data quality. Spotting faulty or ageing components can be done either visually, using instruments such as the LHCb Histogram Presenter, or with the help of automated tools. In order to assist detector experts in handling the vast monitoring information resulting from the sheer size of the detector, we propose a graph-based clustering tool combined with a machine-learning algorithm and demonstrate its use by processing histograms representing 2D hitmap events. We prove the concept by detecting ion feedback events in the LHCb experiment's RICH subdetector.

  7. Geologic map of Mars

    USGS Publications Warehouse

    Tanaka, Kenneth L.; Skinner, James A.; Dohm, James M.; Irwin, Rossman P.; Kolb, Eric J.; Fortezzo, Corey M.; Platz, Thomas; Michael, Gregory G.; Hare, Trent M.

    2014-01-01

    This global geologic map of Mars, which records the distribution of geologic units and landforms on the planet's surface through time, is based on unprecedented variety, quality, and quantity of remotely sensed data acquired since the Viking Orbiters. These data have provided morphologic, topographic, spectral, thermophysical, radar sounding, and other observations for integration, analysis, and interpretation in support of geologic mapping. In particular, the precise topographic mapping now available has enabled consistent morphologic portrayal of the surface for global mapping (whereas previously used visual-range image bases were less effective, because they combined morphologic and albedo information and, locally, atmospheric haze). Also, thermal infrared image bases used for this map tended to be less affected by atmospheric haze and thus are reliable for analysis of surface morphology and texture at even higher resolution than the topographic products.

  8. Interfering with theories of sleep and memory: sleep, declarative memory, and associative interference.

    PubMed

    Ellenbogen, Jeffrey M; Hulbert, Justin C; Stickgold, Robert; Dinges, David F; Thompson-Schill, Sharon L

    2006-07-11

    Mounting behavioral evidence in humans supports the claim that sleep leads to improvements in recently acquired, nondeclarative memories. Examples include motor-sequence learning; visual-discrimination learning; and perceptual learning of a synthetic language. In contrast, there are limited human data supporting a benefit of sleep for declarative (hippocampus-mediated) memory in humans (for review, see). This is particularly surprising given that animal models (e.g.,) and neuroimaging studies (e.g.,) predict that sleep facilitates hippocampus-based memory consolidation. We hypothesized that we could unmask the benefits of sleep by challenging the declarative memory system with competing information (interference). This is the first study to demonstrate that sleep protects declarative memories from subsequent associative interference, and it has important implications for understanding the neurobiology of memory consolidation.

  9. Imaging and characterizing cells using tomography

    PubMed Central

    Do, Myan; Isaacson, Samuel A.; McDermott, Gerry; Le Gros, Mark A.; Larabell, Carolyn A.

    2015-01-01

    We can learn much about cell function by imaging and quantifying sub-cellular structures, especially if this is done non-destructively without altering said structures. Soft x-ray tomography (SXT) is a high-resolution imaging technique for visualizing cells and their interior structure in 3D. A tomogram of the cell, reconstructed from a series of 2D projection images, can be easily segmented and analyzed. SXT has a very high specimen throughput compared to other high-resolution structure imaging modalities; for example, tomographic data for reconstructing an entire eukaryotic cell is acquired in a matter of minutes. SXT visualizes cells without the need for chemical fixation, dehydration, or staining of the specimen. As a result, the SXT reconstructions are close representations of cells in their native state. SXT is applicable to most cell types. The deep penetration of soft x-rays allows cells, even mammalian cells, to be imaged without being sectioned. Image contrast in SXT is generated by the differential attenuation of soft x-ray illumination as it passes through the specimen. Accordingly, each voxel in the tomographic reconstruction has a measured linear absorption coefficient (LAC) value. LAC values are quantitative, and each sub-cellular component has a characteristic LAC profile, allowing organelles to be identified and segmented from the milieu of other cell contents. In this chapter, we describe the fundamentals of SXT imaging and how this technique can answer real world questions in the study of the nucleus. We also describe the development of correlative methods for the localization of specific molecules in a SXT reconstruction. The combination of fluorescence and SXT data acquired from the same specimen produces composite 3D images, rich with detailed information on the inner workings of cells. PMID:25602704
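
    The contrast mechanism described above follows the Beer-Lambert law, I = I0·exp(-μx), so each measured projection can be converted to an integrated linear absorption before tomographic reconstruction. The helper below is only an illustration of that relationship, not the SXT reconstruction code.

```python
# Sketch of the SXT contrast principle: transmitted intensity I = I0 * exp(-mu * x),
# so -ln(I / I0) gives the line integral of the linear absorption coefficient (LAC)
# along each ray, which tomographic reconstruction turns into per-voxel LAC values.
import numpy as np

def projected_absorbance(I, I0=1.0):
    """Return -ln(I / I0) for a measured projection (illustrative helper)."""
    return -np.log(np.asarray(I, dtype=float) / I0)

print(projected_absorbance([0.8, 0.5, 0.2]))  # denser/thicker material absorbs more
```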

  10. Columnar Segregation of Magnocellular and Parvocellular Streams in Human Extrastriate Cortex

    PubMed Central

    2017-01-01

    Magnocellular versus parvocellular (M-P) streams are fundamental to the organization of macaque visual cortex. Segregated, paired M-P streams extend from retina through LGN into V1. The M stream extends further into area V5/MT, and parts of V2. However, elsewhere in visual cortex, it remains unclear whether M-P-derived information (1) becomes intermixed or (2) remains segregated in M-P-dominated columns and neurons. Here we tested whether M-P streams exist in extrastriate cortical columns, in 8 human subjects (4 female). We acquired high-resolution fMRI at high field (7T), testing for M- and P-influenced columns within each of four cortical areas (V2, V3, V3A, and V4), based on known functional distinctions in M-P streams in macaque: (1) color versus luminance, (2) binocular disparity, (3) luminance contrast sensitivity, (4) peak spatial frequency, and (5) color/spatial interactions. Additional measurements of resting state activity (eyes closed) tested for segregated functional connections between these columns. We found M- and P-like functions and connections within and between segregated cortical columns in V2, V3, and (in most experiments) area V4. Area V3A was dominated by the M stream, without significant influence from the P stream. These results suggest that M-P streams exist in, and extend through, specific columns in early/middle stages of human extrastriate cortex. SIGNIFICANCE STATEMENT The magnocellular and parvocellular (M-P) streams are fundamental components of primate visual cortical organization. These streams segregate both anatomical and functional properties in parallel, from retina through primary visual cortex. However, in most higher-order cortical sites, it is unknown whether such M-P streams exist and/or what form those streams would take. Moreover, it is unknown whether M-P streams exist in human cortex. Here, fMRI evidence measured at high field (7T) and high resolution revealed segregated M-P streams in four areas of human extrastriate cortex. These results suggest that M-P information is processed in segregated parallel channels throughout much of human visual cortex; the M-P streams are more than a convenient sorting property in earlier stages of the visual system. PMID:28724749

  11. Perceived orientation in physical and virtual environments: changes in perceived orientation as a function of idiothetic information available

    NASA Technical Reports Server (NTRS)

    Lathrop, William B.; Kaiser, Mary K.

    2002-01-01

    Two experiments examined perceived spatial orientation in a small environment as a function of experiencing that environment under three conditions: real-world, desktop-display (DD), and head-mounted display (HMD). Across the three conditions, participants acquired two targets located on a perimeter surrounding them, and attempted to remember the relative locations of the targets. Subsequently, participants were tested on how accurately and consistently they could point in the remembered direction of a previously seen target. Results showed that participants were significantly more consistent in the real-world and HMD conditions than in the DD condition. Further, it is shown that the advantages observed in the HMD and real-world conditions were not simply due to nonspatial response strategies. These results suggest that the additional idiothetic information afforded in the real-world and HMD conditions is useful for orientation purposes in our presented task domain. Our results are relevant to interface design issues concerning tasks that require spatial search, navigation, and visualization.

  12. In-vivo gingival sulcus imaging using full-range, complex-conjugate-free, endoscopic spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Zhang, Kang; Yi, WonJin; Kang, Jin U.

    2012-01-01

    Frequent monitoring of the gingival sulcus will provide valuable information for judging the presence and severity of periodontal disease. Optical coherence tomography, as a high-resolution, high-speed 3D imaging modality, is able to provide information on pocket depth, gum contour, gum texture, and gum recession simultaneously. A handheld, forward-viewing miniature resonant fiber-scanning probe was developed for in-vivo gingival sulcus imaging. The fiber cantilever, driven by magnetic force, vibrates at its resonant frequency. A synchronized linear phase modulation was applied in the reference arm by the galvanometer-driven reference mirror. Full-range, complex-conjugate-free, real-time endoscopic SD-OCT was achieved by accelerating data processing with a graphics processing unit. Preliminary results showed real-time in-vivo imaging at 33 fps with an imaging range of 2 mm (lateral) by 3 mm (depth). The gap between the tooth and the gum was clearly visualized. Further quantitative analysis of the gingival sulcus will be performed on the acquired images.

  13. Cortical plasticity associated with Braille learning.

    PubMed

    Hamilton, R H; Pascual-Leone, A

    1998-05-01

    Blind subjects who learn to read Braille must acquire the ability to extract spatial information from subtle tactile stimuli. In order to accomplish this, neuroplastic changes appear to take place. During Braille learning, the sensorimotor cortical area devoted to the representation of the reading finger enlarges. This enlargement follows a two-step process that can be demonstrated with transcranial magnetic stimulation mapping and suggests initial unmasking of existing connections and eventual establishment of more stable structural changes. In addition, Braille learning appears to be associated with the recruitment of parts of the occipital, formerly `visual', cortex (V1 and V2) for tactile information processing. In blind, proficient Braille readers, the occipital cortex can be shown not only to be associated with tactile Braille reading but also to be critical for reading accuracy. Recent studies suggest the possibility of applying non-invasive neurophysiological techniques to guide and improve functional outcomes of these plastic changes. Such interventions might provide a means of accelerating functional adjustment to blindness.

  14. Analysis of lipid experiments (ALEX): a software framework for analysis of high-resolution shotgun lipidomics data.

    PubMed

    Husen, Peter; Tarasov, Kirill; Katafiasz, Maciej; Sokol, Elena; Vogt, Johannes; Baumgart, Jan; Nitsch, Robert; Ekroos, Kim; Ejsing, Christer S

    2013-01-01

    Global lipidomics analysis across large sample sizes produces high-content datasets that require dedicated software tools supporting lipid identification and quantification, efficient data management and lipidome visualization. Here we present a novel software-based platform for streamlined data processing, management and visualization of shotgun lipidomics data acquired using high-resolution Orbitrap mass spectrometry. The platform features the ALEX framework designed for automated identification and export of lipid species intensity directly from proprietary mass spectral data files, and an auxiliary workflow using database exploration tools for integration of sample information, computation of lipid abundance and lipidome visualization. A key feature of the platform is the organization of lipidomics data in "database table format" which provides the user with an unsurpassed flexibility for rapid lipidome navigation using selected features within the dataset. To demonstrate the efficacy of the platform, we present a comparative neurolipidomics study of cerebellum, hippocampus and somatosensory barrel cortex (S1BF) from wild-type and knockout mice devoid of the putative lipid phosphate phosphatase PRG-1 (plasticity related gene-1). The presented framework is generic, extendable to processing and integration of other lipidomic data structures, can be interfaced with post-processing protocols supporting statistical testing and multivariate analysis, and can serve as an avenue for disseminating lipidomics data within the scientific community. The ALEX software is available at www.msLipidomics.info.

  15. Rapid and visual detection of the main chemical compositions in maize seeds based on Raman hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Yang, Guiyan; Wang, Qingyan; Liu, Chen; Wang, Xiaobin; Fan, Shuxiang; Huang, Wenqian

    2018-07-01

    Rapid and visual detection of the chemical compositions of plant seeds is important but difficult for a traditional seed quality analysis system. In this study, a custom-designed line-scan Raman hyperspectral imaging system was applied for detecting and displaying the main chemical compositions in a heterogeneous maize seed. Raman hyperspectral images collected from the endosperm and embryo of maize seed were acquired and preprocessed by Savitzky-Golay (SG) filter and adaptive iteratively reweighted Penalized Least Squares (airPLS). Three varieties of maize seeds were analyzed, and the characteristics of the spectral and spatial information were extracted from each hyperspectral image. The Raman characteristic peaks, identified at 477, 1443, 1522, 1596 and 1654 cm-1 from 380 to 1800 cm-1 Raman spectra, were related to corn starch, mixture of oil and starch, zeaxanthin, lignin and oil in maize seeds, respectively. Each single-band image corresponding to the characteristic band characterized the spatial distribution of the chemical composition in a seed successfully. The embryo was distinguished from the endosperm by band operation of the single-band images at 477, 1443, and 1596 cm-1 for each variety. Results showed that Raman hyperspectral imaging system could be used for on-line quality control of maize seeds based on the rapid and visual detection of the chemical compositions in maize seeds.
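
    As a rough illustration of the preprocessing and band-image steps described above (and assuming the hyperspectral cube is already loaded as a NumPy array), the sketch below applies Savitzky-Golay smoothing along the spectral axis and extracts the single-band image nearest a characteristic shift; airPLS baseline removal is omitted here.

```python
# Minimal sketch (not the authors' code): Savitzky-Golay smoothing of each pixel's Raman
# spectrum, then extraction of the single-band image nearest a characteristic shift
# (e.g. 477 cm-1 for starch). Baseline removal (airPLS) is omitted for brevity.
import numpy as np
from scipy.signal import savgol_filter

def single_band_image(cube, shifts_cm1, target_cm1, window=11, polyorder=3):
    """cube: (rows, cols, bands) hyperspectral array; shifts_cm1: (bands,) Raman shifts."""
    smoothed = savgol_filter(cube, window_length=window, polyorder=polyorder, axis=-1)
    band = int(np.argmin(np.abs(np.asarray(shifts_cm1) - target_cm1)))
    return smoothed[:, :, band]

# Synthetic stand-in for a real acquisition:
cube = np.random.rand(64, 64, 512)
shifts = np.linspace(380, 1800, 512)
starch_map = single_band_image(cube, shifts, 477)
```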

  16. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications was developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using dynamic programming, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system was implemented on a robotic system and the proposed algorithms were applied. A series of experimental tests was performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  17. A tactile display for international space station (ISS) extravehicular activity (EVA).

    PubMed

    Rochlis, J L; Newman, D J

    2000-06-01

    A tactile display to increase an astronaut's situational awareness during an extravehicular activity (EVA) has been developed and ground tested. The Tactor Locator System (TLS) is a non-intrusive, intuitive display capable of conveying position and velocity information via a vibrotactile stimulus applied to the subject's neck and torso. In the Earth's 1 G environment, perception of position and velocity is determined by the body's individual sensory systems. Under normal sensory conditions, redundant information from these sensory systems provides humans with an accurate sense of their position and motion. However, altered environments, including exposure to weightlessness, can lead to conflicting visual and vestibular cues, resulting in decreased situational awareness. The TLS was designed to provide somatosensory cues to complement the visual system during EVA operations. An EVA task was simulated on a computer graphics workstation with a display of the International Space Station (ISS) and a target astronaut at an unknown location. Subjects were required to move about the ISS and acquire the target astronaut using either an auditory cue at the outset, or the TLS. Subjects used a 6 degree of freedom input device to command translational and rotational motion. The TLS was configured to act as a position aid, providing target direction information to the subject through a localized stimulus. Results show that the TLS decreases reaction time (p = 0.001) and movement time (p = 0.001) for simulated subject (astronaut) motion around the ISS. The TLS is a useful aid in increasing an astronaut's situational awareness, and warrants further testing to explore other uses, tasks and configurations.

  18. Hemispheric asymmetries of a motor memory in a recognition test after learning a movement sequence.

    PubMed

    Leinen, Peter; Panzer, Stefan; Shea, Charles H

    2016-11-01

    Two experiments utilizing a spatial-temporal movement sequence were designed to determine if the memory of the sequence is lateralized in the left or right hemisphere. In Experiment 1, dominant right-handers were randomly assigned to one of two acquisition groups: a left-hand starter and a right-hand starter group. After an acquisition phase, reaction time (RT) was measured in a recognition test by providing the learned sequential pattern in the left or right visual half-field for 150ms. In a retention test and two transfer tests the dominant coordinate system for sequence production was evaluated. In Experiment 2 dominant left-handers and dominant right-handers had to acquire the sequence with their dominant limb. The results of Experiment 1 indicated that RT was significantly shorter when the acquired sequence was provided in the right visual field during the recognition test. The same results occurred in Experiment 2 for dominant right-handers and left-handers. These results indicated a right visual field left hemisphere advantage in the recognition test for the practiced stimulus for dominant left and right-handers, when the task was practiced with the dominant limb. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Using Visual Odometry to Estimate Position and Attitude

    NASA Technical Reports Server (NTRS)

    Maimone, Mark; Cheng, Yang; Matthies, Larry; Schoppers, Marcel; Olson, Clark

    2007-01-01

    A computer program in the guidance system of a mobile robot generates estimates of the position and attitude of the robot, using features of the terrain on which the robot is moving, by processing digitized images acquired by a stereoscopic pair of electronic cameras mounted rigidly on the robot. Developed for use in localizing the Mars Exploration Rover (MER) vehicles on Martian terrain, the program can also be used for similar purposes on terrestrial robots moving in sufficiently visually textured environments: examples include low-flying robotic aircraft and wheeled robots moving on rocky terrain or inside buildings. In simplified terms, the program automatically detects visual features and tracks them across stereoscopic pairs of images acquired by the cameras. The 3D locations of the tracked features are then robustly processed into an estimate of overall vehicle motion. Testing has shown that by use of this software, the error in the estimate of the position of the robot can be limited to no more than 2 percent of the distance traveled, provided that the terrain is sufficiently rich in features. This software has proven extremely useful on the MER vehicles during driving on sandy and highly sloped terrains on Mars.
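
    The motion-estimation core described above can be reduced to a small worked example: given the 3D positions of the same tracked features at two times, the rigid transform follows from the standard SVD (Kabsch) solution. This is an illustrative sketch, not the MER flight software; a robust implementation would wrap it in RANSAC over the feature matches.

```python
# Sketch of the core motion step (not the MER flight code): recover the rigid motion
# (R, t) between two sets of matched 3D feature positions via the SVD/Kabsch solution.
import numpy as np

def rigid_motion(P_prev, P_curr):
    """P_prev, P_curr: (N, 3) matched feature positions in the camera frame at t and t+1."""
    c_prev, c_curr = P_prev.mean(axis=0), P_curr.mean(axis=0)
    H = (P_prev - c_prev).T @ (P_curr - c_curr)        # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_curr - R @ c_prev
    return R, t                                        # P_curr ~ (R @ P_prev.T).T + t
```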

  20. Omission P3 after voluntary action indexes the formation of action-driven prediction.

    PubMed

    Kimura, Motohiro; Takeda, Yuji

    2018-02-01

    When humans frequently experience a certain sensory effect after a certain action, a bidirectional association between neural representations of the action and the sensory effect is rapidly acquired, which enables action-driven prediction of the sensory effect. The present study aimed to test whether or not omission P3, an event-related brain potential (ERP) elicited by the sudden omission of a sensory effect, is sensitive to the formation of action-driven prediction. For this purpose, we examined how omission P3 is affected by the number of possible visual effects. In four separate blocks (1-, 2-, 4-, and 8-stimulus blocks), participants successively pressed a right button at an interval of about 1s. In all blocks, each button press triggered a bar on a display (a bar with square edges, 85%; a bar with round edges, 5%), but occasionally did not (sudden omission of a visual effect, 10%). Participants were required to press a left button when a bar with round edges appeared. In the 1-stimulus block, the orientation of the bar was fixed throughout the block; in the 2-, 4-, and 8-stimulus blocks, the orientation was randomly varied among two, four, and eight possibilities, respectively. Omission P3 in the 1-stimulus block was greater than those in the 2-, 4-, and 8-stimulus blocks; there were no significant differences among the 2-, 4-, and 8-stimulus blocks. This binary pattern nicely fits the limitation in the acquisition of action-effect association; although an association between an action and one visual effect is easily acquired, associations between an action and two or more visual effects cannot be acquired concurrently. Taken together, the present results suggest that omission P3 is highly sensitive to the formation of action-driven prediction. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. 32 CFR 811.8 - Forms prescribed and availability of publications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... FORCE SALES AND SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.8 Forms prescribed and availability of publications. (a) AF Form 833, Visual Information Request, AF Form 1340, Visual Information Support Center Workload Report, DD Form 1995, Visual Information (VI) Production...

  2. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored, and shown to the public. Modern tools like 3D laser scanners are increasingly used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the colors of the scanned objects. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint onto point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are estimated automatically. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
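
    Once the pose and intrinsics have been estimated, the final colorization step described above amounts to projecting the 3D points into the registered image and copying pixel colors to the visible points. The helper below is a simplified illustration with a pinhole model and no occlusion handling, not the VVS registration itself.

```python
# Simplified sketch of color projection after registration (pinhole model, no occlusion
# handling); the photometric VVS pose/intrinsics estimation itself is assumed done.
import numpy as np

def colorize(points, image, K, R, t):
    """points: (N, 3) world coords; K: 3x3 intrinsics; (R, t): world-to-camera pose."""
    cam = (R @ points.T).T + t                     # world -> camera frame
    in_front = cam[:, 2] > 1e-9
    z = np.where(in_front, cam[:, 2], 1.0)         # avoid division by zero behind camera
    proj = (K @ cam.T).T
    u = np.round(proj[:, 0] / z).astype(int)
    v = np.round(proj[:, 1] / z).astype(int)
    h, w = image.shape[:2]
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((points.shape[0], 3), dtype=image.dtype)
    colors[visible] = image[v[visible], u[visible]]
    return colors, visible
```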

  3. A kilohertz approach to Strombolian-style eruptions

    NASA Astrophysics Data System (ADS)

    Taddeucci, Jacopo; Scarlato, Piergiorgio; Del Bello, Elisabetta; Gaudin, Damien

    2015-04-01

    Accessible volcanoes characterized by persistent, relatively mild Strombolian-style explosive activity have historically hosted multidisciplinary studies of eruptions. These studies, focused on geophysical signals preceding, accompanying, and following the eruptions, have provided key insights into the physical processes driving the eruptions. However, the dynamic development of the individual explosions that characterize this style of activity has remained somewhat elusive because of the short timescales involved (on the order of 0.001 seconds). Recent technological advances now allow different data sources to be recorded and synchronized on timescales relevant to these explosions. In the last several years we developed and implemented a field setup that integrates visual and thermal imaging with acoustic and seismic recordings, all synchronized and acquired at sampling rates of 100-10000 Hz. This setup has been developed and used at several active volcanoes. On the one hand, the combination of these different techniques provides unique information on the dynamics and energetics of the explosions, including the parameterization of individual ejection pulses within the explosions, the ejection and emplacement of pyroclasts and their coupling-decoupling with the gas phases, the different stages of development of the eruption jets, and their reflection in the associated acoustic and seismic signals. On the other hand, the information gained provides a foundation for better understanding and interpreting the signals acquired, at lower sampling rates but routinely, by volcano monitoring networks. Perhaps even more important, our approach allows parameterizing differences and commonalities in the explosions from different volcanoes and settings.

  4. A hybrid approach for fusing 4D-MRI temporal information with 3D-CT for the study of lung and lung tumor motion.

    PubMed

    Yang, Y X; Teo, S-K; Van Reeth, E; Tan, C H; Tham, I W K; Poh, C L

    2015-08-01

    Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancements in imaging modalities [e.g., computed tomography (CT) and MRI] have allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating a synthetic 4D-CT dataset for lung cancer patients by combining the continuous three-dimensional (3D) motion captured by 4D-MRI with the high spatial resolution captured by CT, using the authors' proposed approach. A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) with the 3D motion information extracted from the 4D-MRI dataset, creating a synthetic 4D-CT dataset. The study focuses on imaging of the lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT datasets of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors' proposed approach. This hybrid approach achieved a 40% error reduction (based on landmark assessment) over using DIR techniques alone. The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung detail, and is able to show the movement of the lung and lung tumor over multiple breathing cycles.
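
    A simplified sketch of the warping operation that underlies the approach described above: applying a per-voxel displacement field to the static 3D-CT volume. The hybrid DIR/FEM estimation of that field is not reproduced here.

```python
# Simplified sketch (not the authors' hybrid DIR/FEM pipeline): warp a static 3D-CT
# volume with a per-voxel displacement field, the basic operation used to impose
# 4D-MRI-derived motion on the CT anatomy.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(ct, displacement):
    """ct: (Z, Y, X) array; displacement: (3, Z, Y, X) voxel displacements (dz, dy, dx)."""
    grid = np.indices(ct.shape).astype(float)
    sample_at = grid + displacement                # pull-back coordinates per output voxel
    return map_coordinates(ct, sample_at, order=1, mode="nearest")
```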

  5. Precise photorealistic visualization for restoration of historic buildings based on tacheometry data

    NASA Astrophysics Data System (ADS)

    Ragia, Lemonia; Sarri, Froso; Mania, Katerina

    2018-03-01

    This paper puts forward a 3D reconstruction methodology applied to the restoration of historic buildings taking advantage of the speed, range and accuracy of a total geodetic station. The measurements representing geo-referenced points produced an interactive and photorealistic geometric mesh of a monument named `Neoria.' `Neoria' is a Venetian building located by the old harbor at Chania, Crete, Greece. The integration of tacheometry acquisition and computer graphics puts forward a novel integrated software framework for the accurate 3D reconstruction of a historical building. The main technical challenge of this work was the production of a precise 3D mesh based on a sufficient number of tacheometry measurements acquired fast and at low cost, employing a combination of surface reconstruction and processing methods. A fully interactive application based on game engine technologies was developed. The user can visualize and walk through the monument and the area around it as well as photorealistically view it at different times of day and night. Advanced interactive functionalities are offered to the user in relation to identifying restoration areas and visualizing the outcome of such works. The user could visualize the coordinates of the points measured, calculate distances and navigate through the complete 3D mesh of the monument. The geographical data are stored in a database connected with the application. Features referencing and associating the database with the monument are developed. The goal was to utilize a small number of acquired data points and present a fully interactive visualization of a geo-referenced 3D model.

  7. Information Technology Management: DoD Organization Information Assurance Management of Information Technology Goods and Services Acquired Through Interagency Agreements

    DTIC Science & Technology

    2006-02-23

    Department of Defense, Office of Inspector General report D-2006-052 (February 23, 2006): DoD Organization Information Assurance Management of Information Technology Goods and Services Acquired Through Interagency Agreements.

  8. Helical Axis Data Visualization and Analysis of the Knee Joint Articulation.

    PubMed

    Millán Vaquero, Ricardo Manuel; Vais, Alexander; Dean Lynch, Sean; Rzepecki, Jan; Friese, Karl-Ingo; Hurschler, Christof; Wolter, Franz-Erich

    2016-09-01

    We present processing methods and visualization techniques for accurately characterizing and interpreting kinematical data of flexion-extension motion of the knee joint based on helical axes. We make use of the Lie group of rigid body motions and particularly its Lie algebra for a natural representation of motion sequences. This allows us to analyze and compute the finite helical axis (FHA) and instantaneous helical axis (IHA) in a unified way without redundant degrees of freedom or singularities. A polynomial fitting based on Legendre polynomials within the Lie algebra is applied to provide a smooth description of a given discrete knee motion sequence, which is essential for obtaining stable instantaneous helical axes for further analysis. Moreover, this allows an efficient overall similarity comparison across several motion sequences in order to differentiate among several cases. Our approach combines a specifically designed patient-specific three-dimensional visualization, based on the processed helical axis information and incorporating computed tomography (CT) scans, for an intuitive interpretation of the axes and their geometrical relation to the knee joint anatomy. In addition, in the context of the study of diseases affecting musculoskeletal articulation, we propose integrating the above tools into a multiscale framework for exploring related data sets distributed across multiple spatial scales. We demonstrate the utility of our methods by processing a collection of motion sequences acquired from experimental data involving several surgery techniques. Our approach enables an accurate analysis, visualization, and comparison of knee joint articulation, contributing to evaluation and diagnosis in medical applications.
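
    The smoothing step described above can be sketched as an independent Legendre-polynomial fit to each Lie-algebra coordinate of the motion sequence; the conversion of rigid-body poses to se(3) coordinates is assumed already done, and the function below is a hypothetical helper, not the authors' code.

```python
# Sketch (assumes poses already expressed as se(3) coordinates): fit each of the six
# Lie-algebra coordinates with a Legendre polynomial to obtain a smooth trajectory and
# its derivative, from which stable instantaneous helical axes can then be computed.
import numpy as np
from numpy.polynomial import Legendre

def smooth_se3(times, xi, degree=8):
    """times: (T,) samples; xi: (T, 6) se(3) coordinates. Returns smoothed values, velocities."""
    fits = [Legendre.fit(times, xi[:, k], degree) for k in range(xi.shape[1])]
    smoothed = np.stack([f(times) for f in fits], axis=1)
    velocity = np.stack([f.deriv()(times) for f in fits], axis=1)
    return smoothed, velocity
```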

  9. View-Invariant Visuomotor Processing in Computational Mirror Neuron System for Humanoid

    PubMed Central

    Dawood, Farhan; Loo, Chu Kiong

    2016-01-01

    Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant to their own body and engage in a perceptual communication with themselves. We assume that a crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons in encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this paper we present a computational model for the development of a mirror neuron system (MNS) for a humanoid, based on the hypothesis that infants acquire MNS by sensorimotor associative learning through self-exploration capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through self-image using a camera) in order to obtain the associative relationship between its own motor-generated actions and its own visual body image. In the learning process the network first forms a mapping from each motor representation onto a visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on the DARwIn-OP humanoid robot. PMID:26998923

  10. Training eye movements for visual search in individuals with macular degeneration

    PubMed Central

    Janssen, Christian P.; Verghese, Preeti

    2016-01-01

    We report a method to train individuals with central field loss due to macular degeneration to improve the efficiency of visual search. Our method requires participants to make a same/different judgment on two simple silhouettes. One silhouette is presented in an area that falls within the binocular scotoma while they are fixating the center of the screen with their preferred retinal locus (PRL); the other silhouette is presented diametrically opposite within the intact visual field. Over the course of 480 trials (approximately 6 hr), we gradually reduced the amount of time that participants have to make a saccade and judge the similarity of stimuli. This requires that they direct their PRL first toward the stimulus that is initially hidden behind the scotoma. Results from nine participants show that all participants could complete the task faster with training without sacrificing accuracy on the same/different judgment task. Although a majority of participants were able to direct their PRL toward the initially hidden stimulus, the ability to do so varied between participants. Specifically, six of nine participants made faster saccades with training. A smaller set (four of nine) made accurate saccades inside or close to the target area and retained this strategy 2 to 3 months after training. Subjective reports suggest that training increased awareness of the scotoma location for some individuals. However, training did not transfer to a different visual search task. Nevertheless, our study suggests that increasing scotoma awareness and training participants to look toward their scotoma may help them acquire missing information. PMID:28027382

  12. New Multibeam Bathymetry Mosaic at NOAA/NCEI

    NASA Astrophysics Data System (ADS)

    Varner, J. D.; Cartwright, J.; Rosenberg, A. M.; Amante, C.; Sutherland, M.; Jencks, J. H.

    2017-12-01

    NOAA's National Centers for Environmental Information (NCEI) maintains an ever-growing archive of multibeam bathymetric data acquired from U.S. and international government and academic sources. The data are partitioned in the individual survey files in which they were originally received, and are stored in various formats not directly accessible by popular analysis and visualization tools. In order to improve the discoverability and accessibility of the data, NCEI created a new Multibeam Bathymetry Mosaic. Each survey was gridded at 3 arcsecond cell size and organized in an ArcGIS mosaic dataset, which was published as a set of standards-based web services usable in desktop GIS and web clients. In addition to providing a "seamless" grid of all surveys, a filter can be applied to isolate individual surveys. Both depth values in meters and shaded relief visualizations are available. The product represents the current state of the archive; no QA/QC was performed on the data before being incorporated, and the mosaic will be updated incrementally as new surveys are added to the archive. We expect the mosaic will address customer needs for visualization/extraction that existing tools (e.g. NCEI's AutoGrid) are unable to meet, and also assist data managers in identifying problem surveys, missing data, quality control issues, etc. This project complements existing efforts such as the Global Multi-Resolution Topography Data Synthesis (GMRT) at LDEO. Comprehensive visual displays of bathymetric data holdings are invaluable tools for seafloor mapping initiatives, such as Seabed 2030, that will aid in minimizing data collection redundancies and ensuring that valuable data are made available to the broadest community.
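
    As a toy illustration of the per-survey gridding mentioned above, the sketch below bins soundings into 3-arc-second cells by median depth; NCEI's production gridding is more sophisticated, so this only conveys the idea.

```python
# Toy sketch of gridding soundings at a 3-arc-second cell size (median depth per cell);
# not NCEI's production workflow.
import numpy as np

CELL_DEG = 3.0 / 3600.0  # 3 arc-seconds in degrees

def grid_soundings(lon, lat, depth, lon0, lat0, n_cols, n_rows):
    lon, lat, depth = map(np.asarray, (lon, lat, depth))
    col = np.floor((lon - lon0) / CELL_DEG).astype(int)
    row = np.floor((lat - lat0) / CELL_DEG).astype(int)
    grid = np.full((n_rows, n_cols), np.nan)
    for r, c in set(zip(row.tolist(), col.tolist())):
        if 0 <= r < n_rows and 0 <= c < n_cols:
            sel = (row == r) & (col == c)
            grid[r, c] = np.median(depth[sel])
    return grid
```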

  13. EMPIRICAL STUDY ON USABILITY OF CROSSING SUPPORT SYSTEM FOR VISUALLY DISABLED AT SIGNALIZED INTERSECTION

    NASA Astrophysics Data System (ADS)

    Suzuki, Koji; Fujita, Motohiro; Matsuura, Kazuma; Fukuzono, Kazuyuki

    This paper evaluates, through outdoor experiments, the adjustment process for a crossing support system for visually disabled pedestrians at signalized intersections that uses pedestrian traffic signals in concert with visible light communication (VLC) technology. In the experiments, sighted participants were blindfolded with an eye mask in order to analyze the behavior of people with acquired visual disability, and a full-scale crosswalk was used that took into account the crossing slope, the bumps at the edge of the crosswalk between the roadway and the sidewalk, and the crosswalk lines. The results show that repeated use of the VLC system decreased the number of participants who lost their bearings completely and ended up standing immobile, and reduced each person's crossing time. They also show that the performance of our VLC system is nearly equal to that of the existing support system in terms of crossing time and the number of participants standing immobile, and regression analyses clarified the factors affecting guidance accuracy. We then grouped the test subjects into patterns by cluster analysis and describe the walking characteristics of each group as they used the VLC system. In addition, we conducted further surveys with quasi-blind subjects who had difficulty walking while using the VLC system and with visually impaired users. These revealed that guidance accuracy was improved by providing information about the receiving movement at several points on the crosswalk and about the walking habits of each user.

  14. Accessing Cloud Properties and Satellite Imagery: A tool for visualization and data mining

    NASA Astrophysics Data System (ADS)

    Chee, T.; Nguyen, L.; Minnis, P.; Spangenberg, D.; Palikonda, R.

    2016-12-01

    Providing public access to imagery of cloud macro- and microphysical properties and the underlying satellite imagery is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a tool and system that allows end users to easily browse cloud information and satellite imagery that is otherwise difficult to acquire and manipulate. The tool has two uses: one to visualize the data and the other to access the data directly. It uses widely adopted access protocols, the Open Geospatial Consortium's Web Map and Web Processing Services, to encourage users to access the data we produce. Internally, we leverage our practical experience with large, scalable applications to develop a system with strong potential for scalability as well as the ability to be deployed on the cloud. One goal of the tool is to demonstrate the back-end capability to end users so that they can use the dynamically generated imagery and data as input to their own workflows or to set up data mining constraints. We build upon the NASA Langley Cloud and Radiation Group's experience in making real-time and historical satellite cloud product information and satellite imagery accessible and easily searchable. Increasingly, information is used in a "mash-up" form, where multiple sources of information are combined to add value to disparate but related information. In support of NASA strategic goals, our group aims to make as much cutting-edge scientific knowledge, observations, and products as possible available to the citizen science, research, and interested communities for these kinds of "mash-ups", as well as to provide a means for automated systems to mine our information. This tool and access method provide a valuable research tool to a wide audience, both as a standalone research tool and as an easily accessed data source that can be mined or used with existing tools.
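
    As an illustration of how such a standards-based service is typically consumed, the snippet below assembles an OGC WMS GetMap request; the endpoint URL and layer name are placeholders, not the actual NASA Langley service parameters.

```python
# Hypothetical WMS GetMap request; the endpoint and layer name are placeholders only.
import urllib.parse

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "cloud_effective_temperature",   # placeholder layer name
    "CRS": "EPSG:4326",
    "BBOX": "25,-110,50,-70",                  # bounding box (axis order per WMS 1.3.0)
    "WIDTH": "1024",
    "HEIGHT": "640",
    "FORMAT": "image/png",
    "TIME": "2016-08-01T18:00:00Z",
}
url = "https://example.org/wms?" + urllib.parse.urlencode(params)  # placeholder endpoint
print(url)
```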

  15. 32 CFR 811.3 - Official requests for visual information productions or materials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... THE AIR FORCE SALES AND SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.3 Official requests for visual information productions or materials. (a) Send official Air Force... 32 National Defense 6 2010-07-01 2010-07-01 false Official requests for visual information...

  16. 32 CFR 811.4 - Selling visual information materials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.4 Selling visual information materials. (a) Air Force VI activities cannot sell materials. (b) HQ AFCIC/ITSM may approve the... 32 National Defense 6 2010-07-01 2010-07-01 false Selling visual information materials. 811.4...

  17. 30 CFR 280.72 - What procedure will MMS follow to disclose acquired data and information to a contractor for...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... acquired data and information to a contractor for reproduction, processing, and interpretation? 280.72... data and information to a contractor for reproduction, processing, and interpretation? (a) When... intent to provide the data or information to an independent contractor or agent for reproduction...

  18. VisGets: coordinated visualizations for web-based information exploration and discovery.

    PubMed

    Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey

    2008-01-01

    In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets--interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.
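
    The kind of combined query the VisGets construct can be illustrated with a toy conjunctive filter over feed items; the data and field layout below are hypothetical and are not the VisGets implementation.

```python
# Toy conjunctive filter over feed items, illustrating a combined temporal + spatial +
# topical query of the kind the coordinated VisGets construct. Data are hypothetical.
from datetime import date

items = [  # (title, published, (lat, lon), tags)
    ("Flood warning issued", date(2008, 5, 2), (51.0, -114.1), {"weather", "flood"}),
    ("Election recap", date(2008, 5, 3), (45.4, -75.7), {"politics"}),
]

def matches(item, start, end, bbox, topic):
    title, published, (lat, lon), tags = item
    lat_min, lat_max, lon_min, lon_max = bbox
    return (start <= published <= end
            and lat_min <= lat <= lat_max
            and lon_min <= lon <= lon_max
            and topic in tags)

hits = [it for it in items if matches(it, date(2008, 5, 1), date(2008, 5, 31),
                                      (40.0, 60.0, -120.0, -100.0), "weather")]
print([title for title, *_ in hits])
```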

  19. Anatomical and Physiological Characteristics of the Ferret Lateral Rectus Muscle and Abducena Nucleus

    DTIC Science & Technology

    2007-01-25

    concerned with maintaining gaze control and the ability to acquire visual targets (36). A great deal has been written on the physiology of EOM in animal...borrows, the need for rapid nystagmus control is reduced. The ferret eyes are more laterally placed than either cats or monkeys which increases the visual...20. Hein A, Courjon JH, Flandrin JM and Arzi M. Optokinetic nystagmus in the ferret: including selected comparisons with the cat. Exp Brain Res 79

  20. Age-equivalent top-down modulation during cross-modal selective attention.

    PubMed

    Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam

    2014-12-01

    Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.

  1. Constructing and Reading Visual Information: Visual Literacy for Library and Information Science Education

    ERIC Educational Resources Information Center

    Ma, Yan

    2015-01-01

    This article examines visual literacy education and research for library and information science profession to educate the information professionals who will be able to execute and implement the ACRL (Association of College and Research Libraries) Visual Literacy Competency Standards successfully. It is a continuing call for inclusion of visual…

  2. Planning, implementation and optimization of future space missions using an immersive visualization environment (IVE) machine

    NASA Astrophysics Data System (ADS)

    Nathan Harris, E.; Morgenthaler, George W.

    2004-07-01

    Beginning in 1995, a team of 3-D engineering visualization experts assembled at the Lockheed Martin Space Systems Company and began to develop innovative virtual prototyping simulation tools for performing ground processing and real-time visualization of design and planning of aerospace missions. At the University of Colorado, a team of 3-D visualization experts also began developing the science of 3-D visualization and immersive visualization at the newly founded British Petroleum (BP) Center for visualization, which began operations in October, 2001. BP acquired ARCO in the year 2000 and awarded the 3-D flexible IVE developed by ARCO (beginning in 1990) to the University of Colorado, CU, the winner in a competition among 6 Universities. CU then hired Dr. G. Dorn, the leader of the ARCO team as Center Director, and the other experts to apply 3-D immersive visualization to aerospace and to other University Research fields, while continuing research on surface interpretation of seismic data and 3-D volumes. This paper recounts further progress and outlines plans in Aerospace applications at Lockheed Martin and CU.

  3. Using Prosopagnosia to Test and Modify Visual Recognition Theory.

    PubMed

    O'Brien, Alexander M

    2018-02-01

    Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks versus the control participants' additional ability to use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.

  4. [3-dimensional computer animation--a new medium for supporting patient education before surgery. Acceptance and assessment of patients based on a prospective randomized study--picture versus text].

    PubMed

    Hermann, M

    2002-05-01

    The rigorous implementation of clear preoperative information is mandatory for the patient's understanding, acceptance and written informed consent to all diagnostic and surgical procedures. In the present study, I evaluated whether new media are suitable for conveying basic information to patients; I analysed the merits of computerized animation to illustrate a difficult treatment process, i.e., the progressive steps of a thyroid operation, in comparison to the use of conventional flyers. 3D animation software was employed to illustrate the basic anatomy of the thyroid and the larynx; the principle of thyroidectomy was explained by visualizing the surgical procedure step by step. Finally, the possible complications that may result from the intraoperative manipulations were also visually explained. Eighty patients entered a prospective randomisation: on the day before surgery, group 1 watched the computer animation, whereas group 2 was given the identical information in a written text (= standard flyer). The evaluation included a questionnaire with scores of 1-5, rating the patients' understanding, subjective and objective knowledge, emotional factors like anxiety and trust, and the willingness to undergo an operation. Understanding of and subjective knowledge about the surgical procedure and possible complications, the degree of trust in professional treatment, the reduction in anxiety and readiness for the operation were significantly better after watching the computer animation than after reading the text. However, active knowledge did not improve significantly. The interest in the preoperative information was high in both groups. The benefit of computer animation was confirmed in a second inquiry; patients who had only read the text showed a significant improvement in these parameters after an additional exposure to the video animation. Preoperative surgical information can therefore be optimized by presenting the operative procedure via computer animation. Nowadays, several types of new media such as the world wide web, CD, DVD, and digital TV are readily available and, as shown here, suitable for effective visual explanation. Most patients are familiar with acquiring new information by one of these means. An appropriately designed 3D representation is met with a high level of acceptance, as the present study clearly shows. Modern patient-based information systems are necessary. They can no longer be the sole responsibility of the medical profession, but must be on the agenda of hospital managements and of medical care systems as well.

  5. Cross-Over Trial of Gabapentin and Memantine as Treatment for Acquired Nystagmus

    PubMed Central

    Thurtell, Matthew J.; Joshi, Anand C.; Leone, Alice C.; Tomsak, Robert L.; Kosmorsky, Gregory S.; Stahl, John S.; Leigh, R. John

    2010-01-01

    We conducted a masked, cross-over, therapeutic trial of gabapentin (1200mg/day) versus memantine (40mg/day) for acquired nystagmus in 10 patients (28–61 years; 7 female; MS: 3, post-stroke: 6, post-traumatic: 1). Nystagmus was pendular in 6 patients (oculopalatal tremor: 4, MS: 2) and jerk upbeat, hemi-seesaw, torsional, or upbeat-diagonal in each of the others. Both drugs reduced median eye speed (p<0.001), gabapentin by 32.8% and memantine by 27.8%, and improved visual acuity (p<0.05). Each patient improved with one or both drugs. Side-effects included unsteadiness with gabapentin and lethargy with memantine. Both drugs should be considered as treatment for acquired forms of nystagmus. PMID:20437565

  6. Research on robot mobile obstacle avoidance control based on visual information

    NASA Astrophysics Data System (ADS)

    Jin, Jiang

    2018-03-01

    Detecting obstacles and controlling robots so that they avoid them has long been a key research topic in robot control. In this paper, a scheme for visual information acquisition is proposed: the acquired visual information is interpreted and transformed into an information source for path processing. While the robot follows an established route, the algorithm adjusts the trajectory in real time whenever obstacles are encountered, achieving intelligent control of the mobile robot. Simulation results show that, by integrating visual sensing information, obstacle information is fully captured while the real-time performance and accuracy of the robot's motion control are preserved.
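
    A minimal sketch of the kind of reactive, vision-driven avoidance step this abstract describes: obstacles reported by an upstream vision module (as bearing and range) push the commanded heading away from the robot's planned direction. This is an illustration under assumptions, not the paper's algorithm; the data structures, gains, and thresholds below are hypothetical.

    ```python
    # Reactive obstacle-avoidance sketch: blend the goal direction with
    # repulsion away from visually detected obstacles. All names, gains, and
    # thresholds are illustrative, not taken from the paper.
    import math
    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        bearing_rad: float   # angle relative to the robot heading (+ = left)
        distance_m: float    # range estimated by the vision module

    def adjust_heading(goal_bearing_rad: float,
                       obstacles: list[Obstacle],
                       safe_distance_m: float = 1.0,
                       repulsion_gain: float = 0.8) -> float:
        """Steer toward the goal while pushing away from nearby obstacles."""
        steer = goal_bearing_rad
        for obs in obstacles:
            if obs.distance_m < safe_distance_m:
                # Repulsion grows as the obstacle gets closer.
                weight = repulsion_gain * (safe_distance_m - obs.distance_m) / safe_distance_m
                steer -= math.copysign(weight, obs.bearing_rad)
        # Clamp to a plausible steering range.
        return max(-math.pi / 2, min(math.pi / 2, steer))

    # Example: goal straight ahead, one obstacle slightly to the left and close.
    print(adjust_heading(0.0, [Obstacle(bearing_rad=0.2, distance_m=0.5)]))
    ```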

  7. New Information about Albert Einstein's Brain.

    PubMed

    Falk, Dean

    2009-01-01

    In order to glean information about hominin (or other) brains that no longer exist, details of external neuroanatomy that are reproduced on endocranial casts (endocasts) from fossilized braincases may be described and interpreted. Despite being, of necessity, speculative, such studies can be very informative when conducted in light of the literature on comparative neuroanatomy, paleontology, and functional imaging studies. Albert Einstein's brain no longer exists in an intact state, but there are photographs of it in various views. Applying techniques developed from paleoanthropology, previously unrecognized details of external neuroanatomy are identified on these photographs. This information should be of interest to paleoneurologists, comparative neuroanatomists, historians of science, and cognitive neuroscientists. The new identifications of cortical features should also be archived for future scholars who will have access to additional information from improved functional imaging technology. Meanwhile, to the extent possible, Einstein's cerebral cortex is investigated in light of available data about variation in human sulcal patterns. Although much of his cortical surface was unremarkable, regions in and near Einstein's primary somatosensory and motor cortices were unusual. It is possible that these atypical aspects of Einstein's cerebral cortex were related to the difficulty with which he acquired language, his preference for thinking in sensory impressions including visual images rather than words, and his early training on the violin.

  8. PREVAIL: Predicting Recovery through Estimation and Visualization of Active and Incident Lesions.

    PubMed

    Dworkin, Jordan D; Sweeney, Elizabeth M; Schindler, Matthew K; Chahin, Salim; Reich, Daniel S; Shinohara, Russell T

    2016-01-01

    The goal of this study was to develop a model that integrates imaging and clinical information observed at lesion incidence for predicting the recovery of white matter lesions in multiple sclerosis (MS) patients. Demographic, clinical, and magnetic resonance imaging (MRI) data were obtained from 60 subjects with MS as part of a natural history study at the National Institute of Neurological Disorders and Stroke. A total of 401 lesions met the inclusion criteria and were used in the study. Imaging features were extracted from the intensity-normalized T1-weighted (T1w) and T2-weighted sequences as well as magnetization transfer ratio (MTR) sequence acquired at lesion incidence. T1w and MTR signatures were also extracted from images acquired one-year post-incidence. Imaging features were integrated with clinical and demographic data observed at lesion incidence to create statistical prediction models for long-term damage within the lesion. The performance of the T1w and MTR predictions was assessed in two ways: first, the predictive accuracy was measured quantitatively using leave-one-lesion-out cross-validated (CV) mean-squared predictive error. Then, to assess the prediction performance from the perspective of expert clinicians, three board-certified MS clinicians were asked to individually score how similar the CV model-predicted one-year appearance was to the true one-year appearance for a random sample of 100 lesions. The cross-validated root-mean-square predictive error was 0.95 for normalized T1w and 0.064 for MTR, compared to the estimated measurement errors of 0.48 and 0.078 respectively. The three expert raters agreed that T1w and MTR predictions closely resembled the true one-year follow-up appearance of the lesions in both degree and pattern of recovery within lesions. This study demonstrates that by using only information from a single visit at incidence, we can predict how a new lesion will recover using relatively simple statistical techniques. The potential to visualize the likely course of recovery has implications for clinical decision-making, as well as trial enrichment.
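
    The leave-one-lesion-out cross-validated predictive error described above can be sketched with a generic linear model on synthetic features; scikit-learn is assumed, and the feature matrix below is a stand-in, not the study's actual imaging and clinical covariates or model specification.

    ```python
    # Sketch of leave-one-out cross-validated RMSE for predicting a one-year
    # lesion outcome from features observed at incidence (synthetic data).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(401, 5))                    # placeholder imaging + clinical features
    y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=401)  # synthetic outcome

    pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    print(f"leave-one-out CV RMSE: {rmse:.3f}")
    ```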

  9. Ultrahigh speed Spectral / Fourier domain OCT ophthalmic imaging at 70,000 to 312,500 axial scans per second

    PubMed Central

    Potsaid, Benjamin; Gorczynska, Iwona; Srinivasan, Vivek J.; Chen, Yueli; Jiang, James; Cable, Alex; Fujimoto, James G.

    2009-01-01

    We demonstrate ultrahigh speed spectral / Fourier domain optical coherence tomography (OCT) using an ultrahigh speed CMOS line scan camera at rates of 70,000 - 312,500 axial scans per second. Several design configurations are characterized to illustrate trade-offs between acquisition speed, resolution, imaging range, sensitivity and sensitivity roll-off performance. Ultrahigh resolution OCT with 2.5 - 3.0 micron axial image resolution is demonstrated at ∼ 100,000 axial scans per second. A high resolution spectrometer design improves sensitivity roll-off and imaging range performance, trading off imaging speed to 70,000 axial scans per second. Ultrahigh speed imaging at >300,000 axial scans per second with standard image resolution is also demonstrated. Ophthalmic OCT imaging of the normal human retina is investigated. The high acquisition speeds enable dense raster scanning to acquire densely sampled volumetric three dimensional OCT (3D-OCT) data sets of the macula and optic disc with minimal motion artifacts. Imaging with ∼ 8 - 9 micron axial resolution at 250,000 axial scans per second, a 512 × 512 × 400 voxel volumetric 3D-OCT data set can be acquired in only ∼ 1.3 seconds. Orthogonal registration scans are used to register OCT raster scans and remove residual axial eye motion, resulting in 3D-OCT data sets which preserve retinal topography. Rapid repetitive imaging over small volumes can visualize small retinal features without motion induced distortions and enables volume registration to remove eye motion. Cone photoreceptors in some regions of the retina can be visualized without adaptive optics or active eye tracking. Rapid repetitive imaging of 3D volumes also provides dynamic volumetric information (4D-OCT) which is shown to enhance visualization of retinal capillaries and should enable functional imaging. Improvements in the speed and performance of 3D-OCT volumetric imaging promise to enable earlier diagnosis and improved monitoring of disease progression and response to therapy in ophthalmology, as well as have a wide range of research and clinical applications in other areas. PMID:18795054
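
    As a quick check on the quoted volume acquisition time: it is roughly the number of A-scans in the raster divided by the scan rate (the 400-voxel depth dimension is sampled within each A-scan, so it does not add to the count); the gap to the quoted ~1.3 s presumably reflects scanner flyback and other overhead.

    ```python
    # Back-of-the-envelope acquisition time for a 512 x 512 A-scan raster at
    # 250,000 axial scans per second (depth samples come with each A-scan).
    a_scans = 512 * 512
    rate_per_s = 250_000
    print(f"{a_scans / rate_per_s:.2f} s of pure A-scan time")  # ~1.05 s; ~1.3 s quoted with overhead
    ```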

  10. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    ERIC Educational Resources Information Center

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search.…

  11. 32 CFR 813.1 - Purpose of the visual information documentation (VIDOC) program.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    This section of the Code of Federal Regulations (32 CFR 813.1; National Defense; Department of Defense; Department of the Air Force; Sales and Services; Visual Information Documentation Program) states the purpose of the visual information documentation (VIDOC) program.

  12. Emotional Effects in Visual Information Processing

    DTIC Science & Technology

    2009-10-24

    Emotional Effects in Visual Information Processing (AOARD 074018; contract FA4869-08-0004; report dated October 24, 2009). The objective of this research project was to investigate how emotion influences visual information processing and the neural correlates of these effects.

  13. On compensatory strategies and computational models: the case of pure alexia.

    PubMed

    Shallice, Tim

    2014-01-01

    The article is concerned with inferences from the behaviour of neurological patients to models of normal function. It takes the letter-by-letter reading strategy common in pure alexic patients as an example of the methodological problems that compensatory strategies create for such inferences. Evidence is discussed for three ways the letter-by-letter reading process might operate: "reversed spelling"; use of the phonological input buffer as a temporary holding store during word building; and serial input to the visual word-form system entirely within the visual-orthographic domain, as in the model of Plaut [1999. A connectionist approach to word reading and acquired dyslexia: Extension to sequential processing. Cognitive Science, 23, 543-568]. The compensatory strategy used by at least one pure alexic patient does not fit the third of these possibilities. On the more general question, it is argued that even if compensatory strategies are being used, the behaviour of neurological patients can be useful for the development and assessment of first-generation information-processing models of normal function, but it is unlikely to be useful for the development and assessment of second-generation computational models.

  14. Learning enhances the relative impact of top-down processing in the visual cortex

    PubMed Central

    Makino, Hiroshi; Komiyama, Takaki

    2015-01-01

    Theories have proposed that in sensory cortices learning can enhance top-down modulation by higher brain areas while reducing bottom-up sensory inputs. To address circuit mechanisms underlying this process, we examined the activity of layer 2/3 (L2/3) excitatory neurons in the mouse primary visual cortex (V1) as well as L4 neurons, the main bottom-up source, and long-range top-down projections from the retrosplenial cortex (RSC) during associative learning over days using chronic two-photon calcium imaging. During learning, L4 responses gradually weakened, while RSC inputs became stronger. Furthermore, L2/3 acquired a ramp-up response temporal profile with learning, coinciding with a similar change in RSC inputs. Learning also reduced the activity of somatostatin-expressing inhibitory neurons (SOM-INs) in V1 that could potentially gate top-down inputs. Finally, RSC inactivation or SOM-IN activation was sufficient to partially reverse the learning-induced changes in L2/3. Together, these results reveal a learning-dependent dynamic shift in the balance between bottom-up and top-down information streams and uncover a role of SOM-INs in controlling this process. PMID:26167904

  15. Course for undergraduate students: analysis of the retinal image quality of a human eye model

    NASA Astrophysics Data System (ADS)

    del Mar Pérez, Maria; Yebra, Ana; Fernández-Oliveras, Alicia; Ghinea, Razvan; Ionescu, Ana M.; Cardona, Juan C.

    2014-07-01

    In the teaching of Vision Physics or Physiological Optics, knowledge and analysis of the aberrations of the human eye are of great interest, since this information allows a proper evaluation of the quality of the retinal image. The objective of the present work is for students to acquire the competencies required to evaluate the optical quality of the human visual system for emmetropic and ametropic eyes, both with and without optical compensation. For this purpose, an optical system corresponding to the Navarro-Escudero eye model, which allows the aberrations of this eye model to be calculated and evaluated under different ametropic conditions, was developed using the OSLO LT software. The optical quality of the visual system is assessed through determination of the third- and fifth-order aberration coefficients, the impact (spot) diagram, wavefront analysis, and calculation of the Point Spread Function and the Modulation Transfer Function for ametropic individuals with myopia or hyperopia, both with and without optical compensation. This course is expected to be of great interest to students of Optics and Optometry, of the final years of Physics degrees, and of medical sciences related to human vision.
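
    Of the metrics listed, the Modulation Transfer Function is the normalized magnitude of the Fourier transform of the Point Spread Function. The sketch below computes it for a synthetic Gaussian PSF purely to show the mechanics; in the course itself, the PSF would come from the OSLO LT eye model rather than being generated analytically.

    ```python
    # MTF as the normalized magnitude of the Fourier transform of a (synthetic,
    # Gaussian) Point Spread Function sampled on a regular pixel grid.
    import numpy as np

    n, pixel_um = 256, 1.0
    x = (np.arange(n) - n // 2) * pixel_um
    xx, yy = np.meshgrid(x, x)
    psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))   # 3 µm blur as a stand-in PSF

    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf) / np.abs(otf).max()           # normalize so MTF(0) = 1

    freq_cyc_per_mm = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_um * 1e-3))
    print(mtf[n // 2, n // 2], freq_cyc_per_mm[n // 2])  # 1.0 at zero spatial frequency
    ```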

  16. Multi-scale Visualization of Remote Sensing and Topographic Data of the Amazon Rain Forest for Environmental Monitoring of the Petroleum Industry.

    NASA Astrophysics Data System (ADS)

    Fonseca, L.; Miranda, F. P.; Beisl, C. H.; Souza-Fonseca, J.

    2002-12-01

    PETROBRAS (the Brazilian national oil company) built a pipeline to transport crude oil from the Urucu River region to a terminal in the vicinity of Coari, a city located on the right bank of the Solimoes River. The oil is then shipped by tankers to another terminal in Manaus, capital city of the Amazonas state. At the city of Coari, changes in water level between dry and wet seasons reach up to 14 meters. This strong seasonal character of the Amazonian climate gives rise to four distinct scenarios in the annual hydrological cycle: low water, high water, receding water, and rising water. These scenarios constitute the main reference for the definition of oil spill response planning in the region, since flooded forests and flooded vegetation are the fluvial environments most sensitive to oil spills. This study focuses on improving information about oil spill environmental sensitivity in the Western Amazon by using 3D visualization techniques to support the analysis and interpretation of remote sensing and digital topographic data, as follows: (a) 1995 low flood and 1996 high flood JERS-1 SAR mosaics, band LHH, 100 m pixel; (b) 2000 low flood and 2001 high flood RADARSAT-1 W1 images, band CHH, 30 m pixel; (c) 2002 high flood airborne SAR images from the SIVAM project (System for Surveillance of the Amazon), band LHH, 3 m pixel, and band XHH, 6 m pixel; (d) GTOPO30 digital elevation model, 30' resolution; (e) digital elevation model derived from topographic information acquired during seismic surveys, 25 m resolution; (f) panoramic views obtained from low-altitude helicopter flights. The methodology applied includes image processing, cartographic conversion, and generation of value-added products using 3D visualization. A semivariogram textural classification was applied to the SAR images in order to identify areas of flooded forest and flooded vegetation. The digital elevation models were color shaded to highlight subtle topographic features. Both datasets were then converted to the same cartographic projection and inserted into the Fledermaus 3D visualization environment. 3D visualization proved to be an important aid in understanding the spatial distribution pattern of the environmentally sensitive vegetation cover. The dynamics of the hydrological cycle were depicted at a basin-wide scale, revealing new geomorphic information relevant to assessing the environmental risk of oil spills. Results demonstrate that pipelines constitute an environmentally safer option for oil transportation in the region when compared to fluvial tanker routes.
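
    The semivariogram textural classification mentioned above rests on the empirical semivariogram, gamma(h) = 0.5 * mean[(z(x) - z(x+h))^2], computed over image windows. A minimal sketch on a synthetic patch follows; a real input would be a window of the SAR mosaic, and lags in other directions would normally be included as well.

    ```python
    # Empirical semivariogram of an image patch along the row direction,
    # the texture measure underlying the SAR classification described above.
    import numpy as np

    def semivariogram(patch: np.ndarray, max_lag: int) -> np.ndarray:
        """gamma(h) = 0.5 * mean((z(x) - z(x+h))^2) for h = 1 .. max_lag."""
        gamma = np.empty(max_lag)
        for h in range(1, max_lag + 1):
            diff = patch[:, h:] - patch[:, :-h]
            gamma[h - 1] = 0.5 * np.mean(diff ** 2)
        return gamma

    rng = np.random.default_rng(1)
    patch = rng.normal(size=(64, 64))   # synthetic stand-in for a SAR window
    print(semivariogram(patch, max_lag=5))
    ```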

  17. Web-GIS-based SARS epidemic situation visualization

    NASA Astrophysics Data System (ADS)

    Lu, Xiaolin

    2004-03-01

    In order to study, statistically analyze, and broadcast information about the SARS epidemic according to its spatial location, this paper proposes a unified global visualization platform for the SARS epidemic situation based on Web-GIS and scientific visualization technology. To set up the platform, a Web-GIS-based interoperable information system architecture is adopted, allowing the public to report SARS information to health centers visually through web visualization technology. A GIS Java applet visualizes the relationship between spatial graphical data and virus distribution, while other web-based graphics such as curves, bars, maps, and multi-dimensional figures visualize how the epidemic evolves with time, patient numbers, and locations. The platform is designed to display SARS information in real time, visually simulate the actual epidemic situation, and offer analysis tools that support decision-making by health departments and policy-making government departments in preventing the spread of the virus. It could be used to analyze the epidemic through a visual graphical interface, isolate source areas, and bring the outbreak under control in the shortest possible time. It could be applied in SARS prevention systems for information broadcasting, data management, statistical analysis, and decision support.
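
    As a rough illustration of the time/location views described (curves, bars, maps), the sketch below plots synthetic cumulative case counts per district over time; the district names and numbers are invented, and a real deployment would read reports from the platform's Web-GIS database rather than generating them.

    ```python
    # Synthetic time-series view of cumulative case counts per district,
    # illustrating the kind of curve plots the platform would serve.
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(4)
    dates = pd.date_range("2003-03-01", periods=60, freq="D")
    cases = pd.DataFrame(
        {district: np.cumsum(rng.poisson(3, size=60)) for district in ["North", "Central", "South"]},
        index=dates,
    )
    cases.plot(title="Cumulative reported cases by district (synthetic)")
    plt.ylabel("cases")
    plt.show()
    ```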

  18. Discrimination among Panax species using spectral fingerprinting

    USDA-ARS?s Scientific Manuscript database

    Spectral fingerprints of samples of three Panax species (P. quinquefolius L., P. ginseng, and P. notoginseng) were acquired using UV, NIR, and MS spectrometry. With principal components analysis (PCA), all three methods allowed visual discrimination between all three species. All three methods wer...
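
    The PCA step can be sketched on synthetic spectra standing in for the UV, NIR, and MS fingerprints; scikit-learn is assumed, and the channel counts, noise levels, and group sizes below are arbitrary choices for illustration.

    ```python
    # Project synthetic spectral fingerprints onto the first two principal
    # components and summarize the per-species scores (visual discrimination
    # would be done by plotting these scores).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    species_means = rng.normal(size=(3, 200))     # 3 species, 200 spectral channels
    spectra = np.vstack([m + 0.1 * rng.normal(size=(20, 200)) for m in species_means])
    labels = np.repeat(["quinquefolius", "ginseng", "notoginseng"], 20)

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
    for name in np.unique(labels):
        print(name, scores[labels == name].mean(axis=0).round(2))
    ```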

  19. Feature-Based Memory-Driven Attentional Capture: Visual Working Memory Content Affects Visual Attention

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…

  20. Interactive visual optimization and analysis for RFID benchmarking.

    PubMed

    Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C

    2009-01-01

    Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.
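
    One of the views introduced here, the parallel coordinate plot, can be sketched with pandas on made-up benchmark readings; the column names and value ranges below are hypothetical and do not reflect the aGate measurement schema.

    ```python
    # Parallel-coordinates view of multidimensional benchmark readings,
    # grouped by a (hypothetical) antenna configuration label.
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd
    from pandas.plotting import parallel_coordinates

    rng = np.random.default_rng(3)
    df = pd.DataFrame({
        "read_rate": rng.uniform(0.6, 1.0, 30),
        "rssi_dbm": rng.uniform(-70, -40, 30),
        "antenna_angle_deg": rng.uniform(0, 90, 30),
        "tag_distance_m": rng.uniform(0.5, 3.0, 30),
        "config": rng.choice(["A", "B", "C"], 30),
    })
    parallel_coordinates(df, class_column="config", colormap="viridis")
    plt.tight_layout()
    plt.show()
    ```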
