Sample records for one-time general visual

  1. Visual Analysis among Novices: Training and Trend Lines as Graphic Aids

    ERIC Educational Resources Information Center

    Nelson, Peter M.; Van Norman, Ethan R.; Christ, Theodore J.

    2017-01-01

    The current study evaluated the degree to which novice visual analysts could discern trends in simulated time-series data across differing levels of variability and extreme values. Forty-five novice visual analysts were trained in general principles of visual analysis. One group received brief training on how to identify and omit extreme values.…

  2. 75 FR 71534 - Airworthiness Directives; The Boeing Company Model 737-900ER Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... the products listed above. This AD requires doing a one-time general visual inspection for a keyway in..., contact Boeing Commercial Airplanes, Attention: Data & Services Management, P.O. Box 3707, MC 2H-65... Register on August 10, 2010 (75 FR 48281). That NPRM proposed to require a general visual inspection for a...

  3. 75 FR 48281 - Airworthiness Directives; The Boeing Company Model 737-900ER Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-10

    ... a one-time general visual inspection for a keyway in two fuel tank access door cutouts, and related... proposed AD, contact Boeing Commercial Airplanes, Attention: Data & Services Management, P. O. Box 3707, MC... bulletin describes procedures for a general visual inspection for a keyway in the fuel tank access door...

  4. Choice reaction time to visual motion during prolonged rotary motion in airline pilots

    NASA Technical Reports Server (NTRS)

    Stewart, J. D.; Clark, B.

    1975-01-01

    Thirteen airline pilots were studied to determine the effect of preceding rotary accelerations on the choice reaction time to the horizontal acceleration of a vertical line on a cathode-ray tube. On each trial, one of three levels of rotary and visual acceleration was presented with the rotary stimulus preceding the visual by one of seven periods. The two accelerations were always equal and were presented in the same or opposite directions. The reaction time was found to increase with increases in the time the rotary acceleration preceded the visual acceleration, and to decrease with increased levels of visual and rotary acceleration. The reaction time was found to be shorter when the accelerations were in the same direction than when they were in opposite directions. These results suggest that these findings are a special case of a general effect that the authors have termed 'gyrovisual modulation'.

  5. Choice-reaction time to visual motion with varied levels of simultaneous rotary motion

    NASA Technical Reports Server (NTRS)

    Clark, B.; Stewart, J. D.

    1974-01-01

    Twelve airline pilots were studied to determine the effects of whole-body rotation on choice-reaction time to the horizontal motion of a line on a cathode-ray tube. On each trial, one of five levels of visual acceleration and five corresponding proportions of rotary acceleration were presented simultaneously. Reaction time to the visual motion decreased with increasing levels of visual motion and increased with increasing proportions of rotary acceleration. The results conflict with general theories of facilitation during double stimulation but are consistent with a neural-clock model of sensory interaction in choice-reaction time.

  6. Shade determination using camouflaged visual shade guides and an electronic spectrophotometer.

    PubMed

    Kvalheim, S F; Øilo, M

    2014-03-01

    The aim of the present study was to compare a camouflaged visual shade guide to a spectrophotometer designed for restorative dentistry. Two operators performed analyses of 66 subjects. One central upper incisor was measured four times by each operator; twice with a camouflaged visual shade guide and twice with a spectrophotometer. Both methods had acceptable repeatability rates, but the electronic shade determination showed higher repeatability. In general, the electronically determined shades were darker than the visually determined shades. The use of a camouflaged visual shade guide seems to be an adequate method to reduce operator bias.

  7. 38 CFR 4.75 - General considerations for evaluating visual impairment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-connected visual impairment of only one eye. Subject to the provisions of 38 CFR 3.383(a), if visual impairment of only one eye is service-connected, the visual acuity of the other eye will be considered to be... visual impairment of one eye. The evaluation for visual impairment of one eye must not exceed 30 percent...

  8. 38 CFR 4.75 - General considerations for evaluating visual impairment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-connected visual impairment of only one eye. Subject to the provisions of 38 CFR 3.383(a), if visual impairment of only one eye is service-connected, the visual acuity of the other eye will be considered to be... visual impairment of one eye. The evaluation for visual impairment of one eye must not exceed 30 percent...

  9. Changes in search rate but not in the dynamics of exogenous attention in action videogame players.

    PubMed

    Hubert-Wallander, Bjorn; Green, C Shawn; Sugarman, Michael; Bavelier, Daphne

    2011-11-01

    Many previous studies have shown that the speed of processing in attentionally demanding tasks seems enhanced following habitual action videogame play. However, using one of the diagnostic tasks for efficiency of attentional processing, a visual search task, Castel and collaborators (Castel, Pratt, & Drummond, Acta Psychologica 119:217-230, 2005) reported no difference in visual search rates, instead proposing that action gaming may change response execution time rather than the efficiency of visual selective attention per se. Here we used two hard visual search tasks, one measuring reaction time and the other accuracy, to test whether visual search rate may be changed by action videogame play. We found greater search rates in the gamer group than in the nongamer controls, consistent with increased efficiency in visual selective attention. We then asked how general the change in attentional throughput noted so far in gamers might be by testing whether exogenous attentional cues would lead to a disproportional enhancement in throughput in gamers as compared to nongamers. Interestingly, exogenous cues were found to enhance throughput equivalently between gamers and nongamers, suggesting that not all mechanisms known to enhance throughput are similarly enhanced in action videogamers.

  10. Proper poster presentation: a visual and verbal ABC.

    PubMed

    Wright, V; Moll, J M

    1987-08-01

    The 58 posters exhibited at the 1985 Annual General Meeting of the British Society for Rheumatology have been analysed for 13 variables considered important in the construction of a good poster. In particular the attributes of information, simplicity and visual attractiveness were studied. The time spent by viewers was also measured for one selected poster each in immunology, biochemistry, therapeutics and clinical medicine. On the basis of this survey, nine recommendations for proper presentation were made.

  11. Comparing capacity coefficient and dual task assessment of visual multitasking workload

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaha, Leslie M.

    Capacity coefficient analysis could offer a theoretically grounded alternative approach to subjective measures and dual task assessment of cognitive workload. Workload capacity or workload efficiency is a human information processing modeling construct defined as the amount of information that can be processed by the visual cognitive system given a specified amount of time. In this paper, I explore the relationship between capacity coefficient analysis of workload efficiency and dual task response time measures. To capture multitasking performance, I examine how the relatively simple assumptions underlying the capacity construct generalize beyond single visual decision-making tasks. The fundamental tools for measuring workload efficiency are the integrated hazard and reverse hazard functions of response times, which are defined by log transforms of the response time distribution. These functions are used in the capacity coefficient analysis to provide a functional assessment of the amount of work completed by the cognitive system over the entire range of response times. For the study of visual multitasking, capacity coefficient analysis enables a comparison of visual information throughput as the number of tasks increases from one to two to any number of simultaneous tasks. I illustrate the use of capacity coefficients for visual multitasking on sample data from dynamic multitasking in the modified Multi-attribute Task Battery.
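
    For readers unfamiliar with the construct: the integrated hazard mentioned above is a log transform of the response-time survivor function, H(t) = -ln S(t), and a Townsend-style capacity coefficient compares the integrated hazard obtained under multitasking to the sum of the single-task hazards (C(t) = 1 indicates unlimited-capacity parallel processing, C(t) < 1 limited capacity). The sketch below is a minimal illustration of that computation on simulated response times; the data and parameter values are hypothetical and are not taken from the paper.

```python
import numpy as np

def integrated_hazard(rts, t):
    """Empirical integrated hazard H(t) = -ln S(t), where S(t) is the survivor function."""
    rts = np.sort(np.asarray(rts, dtype=float))
    survivor = 1.0 - np.searchsorted(rts, t, side="right") / rts.size
    survivor = np.clip(survivor, 1e-6, 1.0)   # avoid log(0) in the right tail
    return -np.log(survivor)

def capacity_coefficient(rt_dual, rt_a, rt_b, t):
    """OR-style capacity: C(t) = H_dual(t) / (H_a(t) + H_b(t)); C(t) < 1 suggests limited capacity."""
    return integrated_hazard(rt_dual, t) / (
        integrated_hazard(rt_a, t) + integrated_hazard(rt_b, t)
    )

# Hypothetical response times (seconds) for two single tasks and their dual-task combination.
rng = np.random.default_rng(0)
rt_a = rng.gamma(5.0, 0.08, 500)
rt_b = rng.gamma(5.0, 0.09, 500)
rt_dual = rng.gamma(5.0, 0.11, 500)           # slower overall: limited-capacity multitasking

t = np.linspace(0.2, 1.0, 9)
print(np.round(capacity_coefficient(rt_dual, rt_a, rt_b, t), 2))
```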

  12. 77 FR 24833 - Airworthiness Directives; Airbus Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-26

    ... the hydraulic high pressure hose and electrical wiring of the green electrical motor pump (EMP). This... panel; doing a one-time general visual inspection for correct condition and installation of hydraulic... electrical wiring of the green EMPs, which in combination with a system failure, could cause an uncontrolled...

  13. 75 FR 39189 - Airworthiness Directives; The Boeing Company Model 747 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    ... cracking in the body skin and the skin splice plate; for certain airplanes, an inspection for steel cross... inspections for cracking of the bulkhead frame web and body skin; and corrective actions if necessary. This... modification doublers; and, for certain airplanes, and a one-time external general visual inspection for steel...

  14. Animation Strategies for Smooth Transformations Between Discrete Lods of 3d Building Models

    NASA Astrophysics Data System (ADS)

    Kada, Martin; Wichmann, Andreas; Filippovska, Yevgeniya; Hermes, Tobias

    2016-06-01

    The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real-time and thus require advanced techniques to improve the quality and time response of dynamic scenes. The main focus of this article concentrates on the discussion of strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.

  15. Visualizing TZVOLCANO GNSS Data with Grafana via the EarthCube Cyberinfrastructure CHORDS: an Example of Dashboard Creation for the Geosciences

    NASA Astrophysics Data System (ADS)

    Nguyen, T. T.; Stamps, D. S.

    2017-12-01

    Visualizing societally relevant data in easy-to-comprehend formats is necessary for making informed decisions by non-scientist stakeholders. Despite scientists' efforts to inform the public, there continues to be a disconnect in information between stakeholders and scientists. Closing the gap in knowledge requires increased communication between the two groups, facilitated by models and data visualizations. In this work we use real-time streaming data from TZVOLCANO, a network of GNSS/GPS sensors that monitor the active volcano Ol Doinyo Lengai in Tanzania, as a test case for visualizing societally relevant data. Real-time data from TZVOLCANO are streamed into the US NSF Geodesy Facility UNAVCO archive (www.unavco.org), from which data are made available through the EarthCube cyberinfrastructure CHORDS (Cloud-Hosted Real-Time Data Services for the geosciences). CHORDS uses InfluxDB to make streaming data accessible in Grafana, open-source software that specializes in displaying time-series data. With over 350 downloadable "dashboards", Grafana is an emerging platform for data visualization. Creating user-friendly visualizations ("dashboards") for the TZVOLCANO GNSS/GPS data in Tanzania can help scientists and stakeholders communicate effectively so that informed decisions can be made about volcanic hazards during a time-sensitive crisis. Our use of Grafana's dashboards for one specific case study provides an example for other geoscientists to develop analogous visualizations, with the objectives of increasing the knowledge of the general public and facilitating a more informed decision-making process.
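
    As a rough illustration of the kind of pipeline the abstract describes, the sketch below pulls a window of recent GNSS positions from a time-series backend over HTTP, the sort of query a dashboard panel might issue. The portal URL, endpoint path, and field names are purely hypothetical placeholders and are not the actual CHORDS or UNAVCO API.

```python
import requests

# Hypothetical CHORDS-style portal and instrument id; the real endpoint, parameter
# names, and response schema may differ, so consult the portal's own API docs.
PORTAL = "http://chords-portal.example.org"
INSTRUMENT_ID = 1

def fetch_recent_positions(start, end):
    """Fetch a time window of streamed GNSS measurements as JSON (assumed schema)."""
    response = requests.get(
        f"{PORTAL}/api/v1/data/{INSTRUMENT_ID}",
        params={"start": start, "end": end, "format": "json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    data = fetch_recent_positions("2017-09-01T00:00:00Z", "2017-09-01T01:00:00Z")
    # A Grafana panel would plot fields such as east/north/up offsets against time.
    print(len(data.get("measurements", [])), "records received")
```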

  16. Supervised spike-timing-dependent plasticity: a spatiotemporal neuronal learning rule for function approximation and decisions.

    PubMed

    Franosch, Jan-Moritz P; Urban, Sebastian; van Hemmen, J Leo

    2013-12-01

    How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as "supervisor." Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.

  17. Change in vision, visual disability, and health after cataract surgery.

    PubMed

    Helbostad, Jorunn L; Oedegaard, Maria; Lamb, Sarah E; Delbaere, Kim; Lord, Stephen R; Sletvold, Olav

    2013-04-01

    Cataract surgery improves vision and visual functioning; the effect on general health is not established. We investigated if vision, visual functioning, and general health follow the same trajectory of change the year after cataract surgery and if changes in vision explain changes in visual disability and general health. One-hundred forty-eight persons, with a mean (SD) age of 78.9 (5.0) years (70% bilateral surgery), were assessed before and 6 weeks and 12 months after surgery. Visual disability and general health were assessed by the CatQuest-9SF and the Short Form-36. Corrected binocular visual acuity, visual field, stereo acuity, and contrast vision improved (P < 0.001) from before to 6 weeks after surgery, with further improvements of visual acuity evident up to 12 months (P = 0.034). Cataract surgery had an effect on visual disability 1 year later (P < 0.001). Physical and mental health improved after surgery (P < 0.01) but had returned to presurgery level after 12 months. Vision changes did not explain visual disability and general health 6 weeks after surgery. Vision improved and visual disability decreased in the year after surgery, whereas changes in general health and visual functioning were short-term effects. Lack of associations between changes in vision and self-reported disability and general health suggests that the degree of vision changes and self-reported health do not have a linear relationship.

  18. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
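
    The stimulus-response correlation analysis described above is, in essence, a reverse-correlation estimate of a linear filter: eye-velocity responses are correlated with stimulus motion fluctuations over a range of temporal lags (and, in the paper, spatial positions). The toy sketch below illustrates the temporal part of that idea on simulated signals; the filter shape, noise level, and sampling are arbitrary assumptions, not the paper's data or code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_lags = 5000, 40

# Simulated motion-direction fluctuations (stimulus) and a hidden temporal filter.
stimulus = rng.normal(size=n_samples)
true_filter = np.exp(-np.arange(n_lags) / 8.0) * np.sin(np.arange(n_lags) / 4.0)
eye_velocity = np.convolve(stimulus, true_filter, mode="full")[:n_samples]
eye_velocity += rng.normal(scale=0.5, size=n_samples)   # measurement noise

# Reverse correlation: cross-correlate the response with the stimulus at each lag.
estimated = np.array([
    np.dot(eye_velocity[lag:], stimulus[:n_samples - lag]) / (n_samples - lag)
    for lag in range(n_lags)
])

# With white-noise input, the cross-correlation recovers the filter up to a scale factor.
print("correlation with true filter:",
      round(float(np.corrcoef(estimated, true_filter)[0, 1]), 3))
```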

  19. Visual Attention during Spatial Language Comprehension

    PubMed Central

    Burigo, Michele; Knoeferle, Pia

    2015-01-01

    Spatial terms such as “above”, “in front of”, and “on the left of” are all essential for describing the location of one object relative to another object in everyday communication. Apprehending such spatial relations involves relating linguistic to object representations by means of attention. This requires at least one attentional shift, and models such as the Attentional Vector Sum (AVS) predict the direction of that attention shift, from the sausage to the box for spatial utterances such as “The box is above the sausage”. To the extent that this prediction generalizes to overt gaze shifts, a listener’s visual attention should shift from the sausage to the box. However, listeners tend to rapidly look at referents in their order of mention and even anticipate them based on linguistic cues, a behavior that predicts a converse attentional shift from the box to the sausage. Four eye-tracking experiments assessed the role of overt attention in spatial language comprehension by examining to which extent visual attention is guided by words in the utterance and to which extent it also shifts “against the grain” of the unfolding sentence. The outcome suggests that comprehenders’ visual attention is predominantly guided by their interpretation of the spatial description. Visual shifts against the grain occurred only when comprehenders had some extra time, and their absence did not affect comprehension accuracy. However, the timing of this reverse gaze shift on a trial correlated with that trial’s verification time. Thus, while the timing of these gaze shifts is subtly related to the verification time, their presence is not necessary for successful verification of spatial relations. PMID:25607540

  20. More is still not better: testing the perturbation model of temporal reference memory across different modalities and tasks.

    PubMed

    Ogden, Ruth S; Jones, Luke A

    2009-05-01

    The ability of the perturbation model (Jones & Wearden, 2003) to account for reference memory function in a visual temporal generalization task and auditory and visual reproduction tasks was examined. In all tasks the number of presentations of the standard was manipulated (1, 3, or 5), and its effect on performance was compared. In visual temporal generalization the number of presentations of the standard did not affect the number of times the standard was correctly identified, nor did it affect the overall temporal generalization gradient. In auditory reproduction there was no effect of the number of times the standard was presented on mean reproductions. In visual reproduction mean reproductions were shorter when the standard was only presented once; however, this effect was reduced when a visual cue was provided before the first presentation of the standard. Whilst the results of all experiments are best accounted for by the perturbation model there appears to be some attentional benefit to multiple presentations of the standard in visual reproduction.

  1. Direct Visualization of Valence Electron Motion Using Strong-Field Photoelectron Holography

    NASA Astrophysics Data System (ADS)

    He, Mingrui; Li, Yang; Zhou, Yueming; Li, Min; Cao, Wei; Lu, Peixiang

    2018-03-01

    Watching the valence electron move in molecules on its intrinsic timescale has been one of the central goals of attosecond science and it requires measurements with subatomic spatial and attosecond temporal resolutions. The time-resolved photoelectron holography in strong-field tunneling ionization holds the promise to access this realm. However, it remains to be a challenging task hitherto. Here we reveal how the information of valence electron motion is encoded in the hologram of the photoelectron momentum distribution (PEMD) and develop a novel approach of retrieval. As a demonstration, applying it to the PEMDs obtained by solving the time-dependent Schrödinger equation for the prototypical molecule H2+ , the attosecond charge migration is directly visualized with picometer spatial and attosecond temporal resolutions. Our method represents a general approach for monitoring attosecond charge migration in more complex polyatomic and biological molecules, which is one of the central tasks in the newly emerging attosecond chemistry.

  2. Direct Visualization of Valence Electron Motion Using Strong-Field Photoelectron Holography.

    PubMed

    He, Mingrui; Li, Yang; Zhou, Yueming; Li, Min; Cao, Wei; Lu, Peixiang

    2018-03-30

    Watching the valence electron move in molecules on its intrinsic timescale has been one of the central goals of attosecond science and it requires measurements with subatomic spatial and attosecond temporal resolutions. The time-resolved photoelectron holography in strong-field tunneling ionization holds the promise to access this realm. However, it remains to be a challenging task hitherto. Here we reveal how the information of valence electron motion is encoded in the hologram of the photoelectron momentum distribution (PEMD) and develop a novel approach of retrieval. As a demonstration, applying it to the PEMDs obtained by solving the time-dependent Schrödinger equation for the prototypical molecule H_{2}^{+}, the attosecond charge migration is directly visualized with picometer spatial and attosecond temporal resolutions. Our method represents a general approach for monitoring attosecond charge migration in more complex polyatomic and biological molecules, which is one of the central tasks in the newly emerging attosecond chemistry.

  3. Spoken words can make the invisible visible-Testing the involvement of low-level visual representations in spoken word processing.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-03-01

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Visual Image Sensor Organ Replacement: Implementation

    NASA Technical Reports Server (NTRS)

    Maluf, A. David (Inventor)

    2011-01-01

    Method and system for enhancing or extending visual representation of a selected region of a visual image, where visual representation is interfered with or distorted, by supplementing a visual signal with at least one audio signal having one or more audio signal parameters that represent one or more visual image parameters, such as vertical and/or horizontal location of the region; region brightness; dominant wavelength range of the region; change in a parameter value that characterizes the visual image, with respect to a reference parameter value; and time rate of change in a parameter value that characterizes the visual image. Region dimensions can be changed to emphasize change with time of a visual image parameter.
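
    The patent abstract describes supplementing a visual signal with audio whose parameters encode visual-image parameters. A minimal sonification sketch in that spirit is given below: region brightness is mapped to tone frequency and horizontal position to stereo panning. The mapping ranges and function names are invented for illustration and are not taken from the patent.

```python
import numpy as np

SAMPLE_RATE = 44_100

def sonify_region(brightness, x_position, duration=0.5):
    """Map brightness (0..1) to pitch and horizontal position (0..1) to stereo pan.

    Returns a (samples, 2) stereo array; the mapping ranges are illustrative only.
    """
    frequency = 220.0 + 660.0 * brightness     # 220-880 Hz
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    tone = 0.3 * np.sin(2.0 * np.pi * frequency * t)
    left, right = (1.0 - x_position) * tone, x_position * tone
    return np.column_stack([left, right])

# A bright region near the right edge of the image -> a high tone panned right.
audio = sonify_region(brightness=0.9, x_position=0.95)
print(audio.shape)
```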

  5. Neural processing of visual information under interocular suppression: a critical review

    PubMed Central

    Sterzer, Philipp; Stein, Timo; Ludwig, Karin; Rothkirch, Marcus; Hesselmann, Guido

    2014-01-01

    When dissimilar stimuli are presented to the two eyes, only one stimulus dominates at a time while the other stimulus is invisible due to interocular suppression. When both stimuli are equally potent in competing for awareness, perception alternates spontaneously between the two stimuli, a phenomenon called binocular rivalry. However, when one stimulus is much stronger, e.g., due to higher contrast, the weaker stimulus can be suppressed for prolonged periods of time. A technique that has recently become very popular for the investigation of unconscious visual processing is continuous flash suppression (CFS): High-contrast dynamic patterns shown to one eye can render a low-contrast stimulus shown to the other eye invisible for up to minutes. Studies using CFS have produced new insights but also controversies regarding the types of visual information that can be processed unconsciously as well as the neural sites and the relevance of such unconscious processing. Here, we review the current state of knowledge in regard to neural processing of interocularly suppressed information. Focusing on recent neuroimaging findings, we discuss whether and to what degree such suppressed visual information is processed at early and more advanced levels of the visual processing hierarchy. We review controversial findings related to the influence of attention on early visual processing under interocular suppression, the putative differential roles of dorsal and ventral areas in unconscious object processing, and evidence suggesting privileged unconscious processing of emotional and other socially relevant information. On a more general note, we discuss methodological and conceptual issues, from practical issues of how unawareness of a stimulus is assessed to the overarching question of what constitutes an adequate operational definition of unawareness. Finally, we propose approaches for future research to resolve current controversies in this exciting research area. PMID:24904469

  6. [Quality of life in visually impaired children treated with Early Visual Stimulation].

    PubMed

    Messa, Alcione Aparecida; Nakanami, Célia Regina; Lopes, Marcia Caires Bestilleiro

    2012-01-01

    To evaluate the quality of life of visually impaired children followed at the Early Visual Stimulation Ambulatory of Unifesp at two time points, before and after the rehabilitative intervention of a multiprofessional team. The CVFQ quality-of-life questionnaire was used. This instrument has a version for children under three years of age and another for children older than three years (three to seven years), divided into six subscales: General health, General vision health, Competence, Personality, Family impact, and Treatment. The correlation between the subscales at the two time points was significant. There was a statistically significant difference in general vision health (p=0.029), and other important differences were obtained in general health, family impact, and the overall quality-of-life score. The questionnaire proved effective for measuring vision-related quality of life in the families followed at this ambulatory. The multidisciplinary interventions improved visual function and family quality of life. Vision-related quality of life in children followed at the Early Visual Stimulation Ambulatory of Unifesp showed a significant improvement in general vision health.

  7. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588

  8. Audio-Visual Situational Awareness for General Aviation Pilots

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Lodha, Suresh K.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Weather is one of the major causes of general aviation accidents. Researchers are addressing this problem from various perspectives including improving meteorological forecasting techniques, collecting additional weather data automatically via on-board sensors and "flight" modems, and improving weather data dissemination and presentation. We approach the problem from the improved presentation perspective and propose weather visualization and interaction methods tailored for general aviation pilots. Our system, Aviation Weather Data Visualization Environment (AWE), utilizes information visualization techniques, a direct manipulation graphical interface, and a speech-based interface to improve a pilot's situational awareness of relevant weather data. The system design is based on a user study and feedback from pilots.

  9. Visualizing time: how linguistic metaphors are incorporated into displaying instruments in the process of interpreting time-varying signals

    NASA Astrophysics Data System (ADS)

    Garcia-Belmonte, Germà

    2017-06-01

    Spatial visualization is a well-established topic of education research that has allowed improving science and engineering students' skills on spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes mainly relies upon static spatial representations or images. Visualization of time is inherently problematic because time can be conceptualized in terms of two opposite conceptual metaphors based on spatial relations as inferred from conventional linguistic patterns. The situation is particularly demanding when time-varying signals are recorded using displaying electronic instruments, and the image should be properly interpreted. This work deals with the interplay between linguistic metaphors, visual thinking and scientific instrument mediation in the process of interpreting time-varying signals displayed by electronic instruments. The analysis draws on a simplified version of a communication system as example of practical signal recording and image visualization in a physics and engineering laboratory experience. Instrumentation delivers meaningful signal representations because it is designed to incorporate a specific and culturally favored time view. It is suggested that difficulties in interpreting time-varying signals are linked with the existing dual perception of conflicting time metaphors. The activation of specific space-time conceptual mapping might allow for a proper signal interpretation. Instruments play then a central role as visualization mediators by yielding an image that matches specific perception abilities and practical purposes. Here I have identified two ways of understanding time as used in different trajectories through which students are located. Interestingly specific displaying instruments belonging to different cultural traditions incorporate contrasting time views. One of them sees time in terms of a dynamic metaphor consisting of a static observer looking at passing events. This is a general and widespread practice common in the contemporary mass culture, which lies behind the process of making sense to moving images usually visualized by means of movie shots. In contrast scientific culture favored another way of time conceptualization (static time metaphor) that historically fostered the construction of graphs and the incorporation of time-dependent functions, as represented on the Cartesian plane, into displaying instruments. Both types of cultures, scientific and mass, are considered highly technological in the sense that complex instruments, apparatus or machines participate in their visual practices.

  10. Effects of color combination and ambient illumination on visual perception time with TFT-LCD.

    PubMed

    Lin, Chin-Chiuan; Huang, Kuo-Chen

    2009-10-01

    An empirical study was carried out to examine the effects of color combination and ambient illumination on visual perception time using a TFT-LCD. The effect of color combination was broken down into two subfactors, luminance contrast ratio and chromaticity contrast. Analysis indicated that the luminance contrast ratio and ambient illumination had significant, though small, effects on visual perception. Visual perception time was better at high luminance contrast ratio than at low luminance contrast ratio. Visual perception time under normal ambient illumination was better than at other ambient illumination levels, although the stimulus color had a confounding effect on visual perception time. In general, visual perception time was better for the primary colors than the middle-point colors. Based on the results, a normal ambient illumination level and a high luminance contrast ratio seemed to be the optimal choice for the design of workplaces with TFT-LCD video display terminals.
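
    The abstract does not state how the luminance contrast ratio was defined; a common convention in display ergonomics is simply the ratio of the brighter to the darker luminance of the character/background pair, as in the small helper below (the example values are illustrative, not the study's stimuli).

```python
def luminance_contrast_ratio(l_foreground: float, l_background: float) -> float:
    """Ratio of the brighter to the darker luminance (cd/m^2); always >= 1."""
    brighter = max(l_foreground, l_background)
    darker = min(l_foreground, l_background)
    return brighter / darker

# Example: light characters (150 cd/m^2) on a mid-grey background (30 cd/m^2).
print(luminance_contrast_ratio(150.0, 30.0))   # 5.0
```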

  11. Advances in Visualization of 3D Time-Dependent CFD Solutions

    NASA Technical Reports Server (NTRS)

    Lane, David A.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than those revealed by instantaneous streamlines. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several millions of grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
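
    To make the streakline/streamline distinction concrete: a streakline is traced by releasing a particle at a fixed seed point at every time step and advecting all released particles through the unsteady velocity field, whereas an instantaneous streamline integrates through a single frozen time step. The sketch below shows time-dependent particle advection with simple Euler steps over a synthetic 2D field; it is a generic illustration under assumed names and step sizes, not the NASA Ames system described in the abstract.

```python
import numpy as np

def velocity(point, t):
    """Synthetic unsteady 2D velocity field: a rotation whose rate varies in time."""
    x, y = point
    omega = 1.0 + 0.5 * np.sin(t)
    return np.array([-omega * y, omega * x])

def streakline(seed, t_steps, dt=0.05):
    """Release one particle at `seed` per time step and advect all of them forward (Euler)."""
    particles = []
    for t in t_steps:
        particles.append(np.array(seed, dtype=float))             # new release each step
        particles = [p + dt * velocity(p, t) for p in particles]  # advect every particle
    return np.array(particles)

line = streakline(seed=(1.0, 0.0), t_steps=np.arange(0.0, 5.0, 0.05))
print(line.shape)   # one point per released particle
```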

  12. Influence of Visual Prism Adaptation on Auditory Space Representation.

    PubMed

    Pochopien, Klaudia; Fahle, Manfred

    2017-01-01

    Prisms shifting the visual input sideways produce a mismatch between the visual and the felt position of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. We here show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes only indirectly to auditory space perception.

  13. Improved Discrimination of Visual Stimuli Following Repetitive Transcranial Magnetic Stimulation

    PubMed Central

    Waterston, Michael L.; Pack, Christopher C.

    2010-01-01

    Background Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently rTMS is often assumed to introduce a “virtual lesion” in stimulated brain regions, with correspondingly diminished behavioral performance. Methodology/Principal Findings Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. Conclusions/Significance Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception. PMID:20442776

  14. One year test-retest reliability of neurocognitive baseline scores in 10- to 12-year olds.

    PubMed

    Moser, Rosemarie Scolaro; Schatz, Philip; Grosner, Emily; Kollias, Kelly

    2017-01-01

    How often youth athletes 10-12 years of age should undergo neurocognitive baseline testing remains an unanswered question. We sought to examine the test-retest reliability of annual ImPACT data in a sample of middle school athletes. Participants were 30 youth athletes, ages 10-12 years (Mean = 11.6, SD = 0.6) selected from a larger database of 10-18 year old athletes, who completed two consecutive annual baseline evaluations using the online version of ImPACT. Athlete assent and parental consent were obtained for all participants. Assessments were conducted either individually or in small groups of 2 to 3 athletes, under the supervision of a neuropsychologist or post-doctoral fellow. Test-retest coefficients were as follows: Verbal Memory .71, Visual Memory .35, Visual Motor Speed .69, Reaction Time .34. Intra-class Correlation Coefficients (single/average) were as follows: Verbal Memory .70/.83, Visual Memory .35/.52, Visual Motor Speed .69/.82, Reaction Time .34/.50. Regression-based measures to correct for practice effects revealed that only a small percentage of cases fell outside 90 and 95% confidence intervals, reflecting stability across assessments. Findings indicate that test-retest reliability of Verbal Memory and Visual Motor Speed are generally stable in 10-12 year old athletes. Nevertheless, Visual Memory Index, Reaction Time Index, and Symptom Checklist scores appear to be less reliable over time, especially compared to published data on high school athletes, suggesting the utility of re-testing on an annual basis in this younger age group.
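
    For context, the test-retest coefficients reported above are Pearson correlations between the two annual baselines, and a single-measure intraclass correlation can be derived from a one-way ANOVA decomposition. The snippet below computes both for made-up paired scores; the simulated data and the choice of the one-way ICC(1,1) formula are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def test_retest(year1, year2):
    """Pearson r and one-way, single-measure ICC(1,1) for paired baseline scores."""
    year1, year2 = np.asarray(year1, float), np.asarray(year2, float)
    pearson_r = np.corrcoef(year1, year2)[0, 1]

    scores = np.column_stack([year1, year2])                  # subjects x sessions
    n, k = scores.shape
    grand_mean = scores.mean()
    ms_between = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    return pearson_r, icc

rng = np.random.default_rng(2)
true_ability = rng.normal(85, 10, 30)         # hypothetical composite scores for 30 athletes
year1 = true_ability + rng.normal(0, 6, 30)
year2 = true_ability + rng.normal(0, 6, 30)
print([round(float(v), 2) for v in test_retest(year1, year2)])
```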

  15. Pharmacological therapy for amblyopia

    PubMed Central

    Singh, Anupam; Nagpal, Ritu; Mittal, Sanjeev Kumar; Bahuguna, Chirag; Kumar, Prashant

    2017-01-01

    Amblyopia is the most common cause of preventable blindness in children and young adults. Most of the amblyopic visual loss is reversible if detected and treated at appropriate time. It affects 1.0 to 5.0% of the general population. Various treatment modalities have been tried like refractive correction, patching (both full time and part time), penalization and pharmacological therapy. Refractive correction alone improves visual acuity in one third of patients with anisometropic amblyopia. Various drugs have also been tried of which carbidopa & levodopa have been popular. Most of these agents are still in experimental stage, though levodopa-carbidopa combination therapy has been widely studied in human amblyopes with good outcomes. Levodopa therapy may be considered in cases with residual amblyopia, although occlusion therapy remains the initial treatment choice. Regression of effect after stoppage of therapy remains a concern. Further studies are therefore needed to evaluate the full efficacy and side effect profile of these agents. PMID:29018759

  16. Promoting Leisure-Time Physical Activity for Students with Visual Impairments Using Generalization Tactics

    ERIC Educational Resources Information Center

    Haegele, Justin A.

    2015-01-01

    Important and favorable health effects of physical activity have been well documented. Unfortunately, school-aged individuals who are visually impaired tend to be less physically active than their peers without visual impairments. Although students with visual impairments learn skills to participate in physical activities during their PE classes,…

  17. A Parallel Pipelined Renderer for the Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Chiueh, Tzi-Cker; Ma, Kwan-Liu

    1997-01-01

    This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take large storage space and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in a 40-50% saving in overall rendering time.
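
    The pipelining idea in the abstract, partitioning processors into groups so that loading one time step from disk overlaps with rendering others, can be mimicked at small scale with a pool of workers, as in the sketch below. The group size, the load/render stubs, and the timings are invented placeholders; the actual system ran a parallel volume renderer on distributed-memory Intel Paragon machines.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def load_volume(step):
    """Stand-in for reading one time step of volume data from disk (I/O bound)."""
    time.sleep(0.05)
    return f"volume_{step:04d}"

def render(volume):
    """Stand-in for ray-casting one loaded volume (compute bound)."""
    time.sleep(0.08)
    return f"image_of_{volume}"

def pipelined_render(n_steps, n_groups=3):
    """Each worker group owns a subset of time steps, so loading one step
    overlaps with rendering others; that overlap is the essence of the pipeline."""
    def worker(step):
        return render(load_volume(step))
    with ThreadPoolExecutor(max_workers=n_groups) as pool:
        return list(pool.map(worker, range(n_steps)))

if __name__ == "__main__":
    start = time.perf_counter()
    images = pipelined_render(12)
    print(len(images), "frames in", round(time.perf_counter() - start, 2), "s")
```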

  18. Variability in visual working memory ability limits the efficiency of perceptual decision making.

    PubMed

    Ester, Edward F; Ho, Tiffany C; Brown, Scott D; Serences, John T

    2014-04-02

    The ability to make rapid and accurate decisions based on limited sensory information is a critical component of visual cognition. Available evidence suggests that simple perceptual discriminations are based on the accumulation and integration of sensory evidence over time. However, the memory system(s) mediating this accumulation are unclear. One candidate system is working memory (WM), which enables the temporary maintenance of information in a readily accessible state. Here, we show that individual variability in WM capacity is strongly correlated with the speed of evidence accumulation in speeded two-alternative forced choice tasks. This relationship generalized across different decision-making tasks, and could not be easily explained by variability in general arousal or vigilance. Moreover, we show that performing a difficult discrimination task while maintaining a concurrent memory load has a deleterious effect on the latter, suggesting that WM storage and decision making are directly linked.
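
    The "speed of evidence accumulation" above refers to the drift-rate parameter of sequential-sampling models of two-alternative forced choice. The toy random-walk simulation below shows why a higher drift rate produces faster and more accurate decisions; the threshold, noise, and drift values are arbitrary assumptions, not estimates from the study.

```python
import numpy as np

def simulate_2afc(drift, threshold=1.0, noise=1.0, dt=0.001, n_trials=500, seed=3):
    """Accumulate noisy evidence to +/- threshold; return mean RT (s) and accuracy."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        evidence, elapsed = 0.0, 0.0
        while abs(evidence) < threshold:
            evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
            elapsed += dt
        rts.append(elapsed)
        correct.append(evidence > 0)            # positive drift drives the correct response
    return float(np.mean(rts)), float(np.mean(correct))

for drift in (0.5, 1.5, 3.0):                   # slow vs. fast evidence accumulation
    mean_rt, accuracy = simulate_2afc(drift)
    print(f"drift={drift}: mean RT={mean_rt:.2f} s, accuracy={accuracy:.2f}")
```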

  19. Frida Kahlo: Visual Articulations of Suffering and Loss.

    ERIC Educational Resources Information Center

    Nixon, Lois LaCivita

    1996-01-01

    Illustrates the value of interdisciplinary approaches to patient care by exploring visual articulations of suffering as rendered by one artist. Makes general observations about the nature of humanities courses offered to medical students and depicts a visual portrayal of an illness story representing personal perspectives about patient suffering…

  20. Interference within the focus of attention: working memory tasks reflect more than temporary maintenance.

    PubMed

    Shipstead, Zach; Engle, Randall W

    2013-01-01

    One approach to understanding working memory (WM) holds that individual differences in WM capacity arise from the amount of information a person can store in WM over short periods of time. This view is especially prevalent in WM research conducted with the visual arrays task. Within this tradition, many researchers have concluded that the average person can maintain approximately 4 items in WM. The present study challenges this interpretation by demonstrating that performance on the visual arrays task is subject to time-related factors that are associated with retrieval from long-term memory. Experiment 1 demonstrates that memory for an array does not decay as a product of absolute time, which is consistent with both maintenance- and retrieval-based explanations of visual arrays performance. Experiment 2 introduced a manipulation of temporal discriminability by varying the relative spacing of trials in time. We found that memory for a target array was significantly influenced by its temporal compression with, or isolation from, a preceding trial. Subsequent experiments extend these effects to sub-capacity set sizes and demonstrate that changes in the size of k are meaningful to prediction of performance on other measures of WM capacity as well as general fluid intelligence. We conclude that performance on the visual arrays task does not reflect a multi-item storage system but instead measures a person's ability to accurately retrieve information in the face of proactive interference.
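
    The "size of k" discussed above is conventionally estimated with Cowan's formula for single-probe change detection, k = set size x (hit rate - false alarm rate). The brief helper below applies it to made-up hit and false-alarm rates; using this particular formula is the field's standard convention rather than something stated in the abstract itself.

```python
def cowans_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Cowan's k for single-probe change detection: k = N * (H - FA)."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical performance on a set-size-6 visual arrays trial block.
print(cowans_k(6, hit_rate=0.80, false_alarm_rate=0.15))   # 3.9 items
```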

  1. Perception, Cognition, and Effectiveness of Visualizations with Applications in Science and Engineering

    NASA Astrophysics Data System (ADS)

    Borkin, Michelle A.

    Visualization is a powerful tool for data exploration and analysis. With data ever-increasing in quantity and becoming integrated into our daily lives, having effective visualizations is necessary. But how does one design an effective visualization? To answer this question we need to understand how humans perceive, process, and understand visualizations. Through visualization evaluation studies we can gain deeper insight into the basic perception and cognition theory of visualizations, both through domain-specific case studies as well as generalized laboratory experiments. This dissertation presents the results of four evaluation studies, each of which contributes new knowledge to the theory of perception and cognition of visualizations. The results of these studies include a deeper clearer understanding of how color, data representation dimensionality, spatial layout, and visual complexity affect a visualization's effectiveness, as well as how visualization types and visual attributes affect the memorability of a visualization. We first present the results of two domain-specific case study evaluations. The first study is in the field of biomedicine in which we developed a new heart disease diagnostic tool, and conducted a study to evaluate the effectiveness of 2D versus 3D data representations as well as color maps. In the second study, we developed a new visualization tool for filesystem provenance data with applications in computer science and the sciences more broadly. We additionally developed a new time-based hierarchical node grouping method. We then conducted a study to evaluate the effectiveness of the new tool with its radial layout versus the conventional node-link diagram, and the new node grouping method. Finally, we discuss the results of two generalized studies designed to understand what makes a visualization memorable. In the first evaluation we focused on visualization memorability and conducted an online study using Amazon's Mechanical Turk with hundreds of users and thousands of visualizations. For the second evaluation we designed an eye-tracking laboratory study to gain insight into precisely which elements of a visualization contribute to memorability as well as visualization recognition and recall.

  2. Vision-related quality of life in first stroke patients with homonymous visual field defects

    PubMed Central

    2010-01-01

    Background To evaluate vision-related and health-related quality of life (VRQoL, HRQoL) in first stroke patients with homonymous visual field defects (VFD) with respect to the extent of the lesion. Since VFD occur in approximately 10% of stroke patients the main purpose of the study was to investigate the additional impact of VFD in stroke patients hypothesizing that VFD causes diminished VRQoL. Methods In 177 first stroke patients with persisting VFD 2.5 years after posterior-parietal lesions VRQoL was assessed by the National-Eye-Institute-Visual-Functioning-Questionnaire (NEI-VFQ) and HRQoL by the Medical-Outcome-Study Short-Form-36 Health-Survey (SF-36). Questionnaire results of VFD-patients were compared with age- and sex-matched healthy controls and with general non-selected stroke samples as published elsewhere. VFD-type and visual acuity were partially correlated with questionnaire results. Results Compared to healthy controls VFD-patients had lower NEI-VFQ scores except ocular pain (Z-range -11.34 to -3.35) and lower SF-36 scores except emotional role limitations (Z-range -7.21 to -3.34). VFD-patients were less impaired in SF-36 scores than general stroke patients one month post lesion (6/8 subscales) but had lower SF-36 scores compared to stroke patients six months post lesion (5/8 subscales). Visual acuity significantly correlated with NEI-VFQ scores (r-range 0.27 to 0.48) and VFD-type with SF-36 mental subscales (r-range -0.26 to -0.36). Conclusions VFD-patients showed substantial reductions of VRQoL and HRQoL compared to healthy normals, but better HRQoL compared to stroke patients one month post lesion. VFD-patients (although their lesion age was four times higher) had significantly lower HRQoL than a general stroke population at six months post-stroke. This indicates that the stroke-related subjective level of HRQoL impairment is significantly exacerbated by VFD. While VRQoL was primarily influenced by visual acuity, mental components of HRQoL were influenced by VFD-type with larger VFD being associated with more distress. PMID:20346125

  3. Timing of Visual Bodily Behavior in Repair Sequences: Evidence from Three Languages

    ERIC Educational Resources Information Center

    Floyd, Simeon; Manrique, Elizabeth; Rossi, Giovanni; Torreira, Francisco

    2016-01-01

    This article expands the study of other-initiated repair in conversation--when one party signals a problem with producing or perceiving another's turn at talk--into the domain of visual bodily behavior. It presents one primary cross-linguistic finding about the timing of visual bodily behavior in repair sequences: if the party who initiates repair…

  4. Visual Criterion for Understanding the Notion of Convergence if Integrals in One Parameter

    ERIC Educational Resources Information Center

    Alves, Francisco Regis Vieira

    2014-01-01

    Admittedly, the notion of generalized integrals in one parameter has a fundamental role. In view of that, in this paper we discuss and characterize an approach to promote the visualization of this mathematical concept. We also indicate the possibilities of graphical interpretation of formal properties related to the notion of…

  5. Changes in Near Visual Acuity Over Time in the Astronaut Corps

    NASA Technical Reports Server (NTRS)

    Taiym, Wafa; Wear, Mary L.; Locke, James; Mason, Sara; VanBaalen, Mary

    2014-01-01

    We hypothesized that visual impairment due to intracranial pressure (VIIP) would increase the rate at which presbyopia occurs in the astronaut population, with long-duration flyers at an especially high risk. Presbyopia is characterized as the gradual loss of near visual acuity over time due to a loss of the ability to accommodate. It generally develops in the mid-40s and progresses until about age 65. This analysis considered annual vision exams conducted on active NASA astronauts with spaceflight experience currently between 40 and 60 years of age. Onset of presbyopia was characterized as a shift of at least 20 units on the standard Snellen test from one annual exam to the next. There were 236 short-duration and 48 long-duration flyers, the majority of whom did experience onset of presbyopia between age 40 and 60. This shift, however, did not necessarily come after spaceflight. In comparing the short- and long-duration flyers, the mean age of onset was 47 years (SD +/- 3.7). The mean age of onset in the general population is 45 to 47 years [1, 2]. Comparing the mean age of onset of presbyopia with that of the general population indicates that spaceflight does not induce early development of presbyopia.

  6. Streptococcus Endophthalmitis Outbreak after Intravitreal Injection of Bevacizumab: One-year Outcomes and Investigative Results

    PubMed Central

    Goldberg, Roger A.; Flynn, Harry W.; Miller, Darlene; Gonzalez, Serafin; Isom, Ryan F.

    2013-01-01

    Purpose To report the one-year clinical outcomes of an outbreak of Streptococcus endophthalmitis after intravitreal injection of bevacizumab, including visual acuity outcomes, microbiological testing and compound pharmacy investigations by the Food and Drug Administration (FDA). Design Retrospective consecutive case series. Participants 12 eyes of 12 patients who developed endophthalmitis after receiving intravitreal bevacizumab prepared by a single compounding pharmacy. Methods Medical records of patients were reviewed; phenotypic and DNA analyses were performed on microbes cultured from patients and from unused syringes. An inspection report by the FDA based on site-visits to the pharmacy that prepared the bevacizumab syringes was summarized. Main Outcome Measures Visual acuity, interventions received, time-to-intervention; microbiological consistency; FDA inspection findings. Results Between July 5 and July 8, 2011, 12 patients developed endophthalmitis after intravitreal bevacizumab from syringes prepared by a single compounding pharmacy. All patients received initial vitreous tap and injection, and eight (67%) subsequently underwent pars plana vitrectomy (PPV). After twelve months follow-up, outcomes have been poor: 7 patients (58%) required evisceration or enucleation, and only one patient regained pre-injection visual acuity. Molecular testing using real time polymerase chain reaction, partial sequencing of the groEL gene, and multilocus sequencing of 7 housekeeping genes confirmed the presence of a common strain of Streptococcus mitis/oralis in vitreous specimens and seven unused syringes prepared by the compounding pharmacy at the same time. An FDA investigation of the compounding pharmacy noted deviations from standard sterile technique, inconsistent documentation, and inadequate testing of equipment required for safe preparation of medications. Conclusions In this outbreak of endophthalmitis, outcomes have been generally poor and PPV did not improve visual results at one year follow-up. Molecular testing confirmed a common strain of Streptococcus mitis/oralis. Contamination appears to have occurred at the compounding pharmacy, where numerous problems in sterile technique were noted by public health investigators. PMID:23453511

  7. Processing speed in recurrent visual networks correlates with general intelligence.

    PubMed

    Jolij, Jacob; Huisman, Danielle; Scholte, Steven; Hamel, Ronald; Kemner, Chantal; Lamme, Victor A F

    2007-01-08

    Studies on the neural basis of general fluid intelligence strongly suggest that a smarter brain processes information faster. Different brain areas, however, are interconnected by both feedforward and feedback projections. Whether both types of connections or only one of the two types are faster in smarter brains remains unclear. Here we show, by measuring visual evoked potentials during a texture discrimination task, that general fluid intelligence shows a strong correlation with processing speed in recurrent visual networks, while there is no correlation with speed of feedforward connections. The hypothesis that a smarter brain runs faster may need to be refined: a smarter brain's feedback connections run faster.

  8. Performance drifts in two-finger cyclical force production tasks performed by one and two actors.

    PubMed

    Hasanbarani, Fariba; Reschechtko, Sasha; Latash, Mark L

    2018-03-01

    We explored changes caused by turning visual feedback off in a cyclical two-finger force production task performed either by the index and middle fingers of the dominant hand or by the index fingers of two persons. Based on an earlier study, we expected drifts in finger force amplitude and midpoint without a drift in relative phase. The subjects performed two rhythmical tasks at 1 Hz while paced by an auditory metronome. One of the tasks required cyclical changes in total force magnitude without changes in the sharing of the force between the two fingers. The other task required cyclical changes in the force sharing without changing total force magnitude. Subjects were provided with visual feedback, which showed total force magnitude and force sharing via cursor motion along the vertical and horizontal axes, respectively. Visual feedback was then turned off, first on the variable that was not required to change and then on both variables. Turning visual feedback off led to a mean force drift toward lower magnitudes while force amplitude increased. There was a consistent drift in the relative phase in the one-hand task, with the index finger leading the middle finger. No consistent relative phase drift was seen in the two-person tasks. The shape of the force cycle changed without visual feedback, as reflected in the lower similarity to a perfect cosine shape and in the greater time spent at lower force magnitudes. The data confirm findings of earlier studies regarding force amplitude and midpoint changes, but falsify predictions of an earlier proposed model with respect to the relative phase changes. We discuss factors that could contribute to the observed relative phase drift in the one-hand tasks, including the leader-follower pattern generalized for two-effector tasks performed by one person.

  9. Volumic visual perception: principally novel concept

    NASA Astrophysics Data System (ADS)

    Petrov, Valery

    1996-01-01

    The general concept of volumic view (VV) as a universal property of space is introduced. VV exists at every point of the universe that electromagnetic (EM) waves can reach and where a point or quasi-point receiver (detector) of EM waves can be placed. A classification of receivers is given for the first time. They are classified into three main categories: biological, man-made non-biological, and mathematically specified hypothetical receivers. The principally novel concept of volumic perception is introduced. It differs chiefly from the traditional concept, which traces back to Euclid and pre-Euclidean times and, much later, to Leonardo da Vinci's and Giovanni Battista della Porta's discoveries and to practical stereoscopy as introduced by C. Wheatstone. The basic idea of the novel concept is that humans and animals acquire volumic visual data flows in series rather than in parallel. In this case the brain is freed from the extremely sophisticated real-time parallel processing of two volumic visual data flows that would be required to combine them. Such a procedure seems hardly probable even for humans, who are unable to combine two primitive static stereoscopic images into one in less than a few seconds. Some people are unable to perform this procedure at all.

  10. [Relationship between employees' management factor of visual display terminal (VDT) work time and 28-item General Health Questionnaire (GHQ-28) at one Japanese IT company's computer worksite].

    PubMed

    Sugimura, Hisamichi; Horiguchi, Itsuko; Shimizu, Takashi; Marui, Eiji

    2007-09-01

    We studied 1365 male workers at a Japanese computer worksite in 2004 to determine the relationship between employees' time management factor of visual display terminal (VDT) work and General Health Questionnaire (GHQ) score. We developed questionnaires concerning age, management factor of VDT work time (total daily VDT work time, duration of continuous work), other work-related conditions (commuting time, job rank, type of job, hours of monthly overtime), lifestyle (smoking, alcohol consumption, exercise, having breakfast, sleeping hours), and the Japanese version of the 28-item General Health Questionnaire (GHQ). Multivariate logistic regression analyses were performed to estimate the odds ratios (ORs) of the high-GHQ groups (>6.0) associated with age and the time management factor of VDT work. Multivariate logistic regression analyses indicated lower ORs for certain groups: workers older than 50 years had a significantly lower OR than those younger than 30 years; workers sleeping less than 6 h showed a lower OR than those sleeping more than 6 h. In contrast, significantly higher ORs were shown for workers with continuous work durations of more than 3 h compared with those with less than 1 h, for those doing VDT work of more than 7.5 h/day compared with those doing less than 4.5 h/day, and for those with more than 25 h/mo of overtime compared with those with less. Male Japanese computer workers' GHQ scores are significantly associated with time management factors of VDT work.
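    The analysis above estimates odds ratios from a multivariate logistic regression. The hedged sketch below shows the general mechanics (fit a logistic model for membership in the high-GHQ group, then exponentiate the coefficients to obtain ORs) on made-up data; the variable names and coefficients are assumptions for illustration, not the study's actual dataset or model.

    ```python
    # Minimal sketch of OR estimation via logistic regression on synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical predictors: daily VDT hours and monthly overtime hours
    vdt_hours = rng.uniform(2, 10, n)
    overtime = rng.uniform(0, 60, n)
    X = sm.add_constant(np.column_stack([vdt_hours, overtime]))
    # Hypothetical outcome: membership in the high-GHQ (>6) group
    p = 1 / (1 + np.exp(-(-3 + 0.2 * vdt_hours + 0.03 * overtime)))
    high_ghq = rng.binomial(1, p)

    fit = sm.Logit(high_ghq, X).fit(disp=False)
    odds_ratios = np.exp(fit.params)  # exponentiated coefficients are the ORs
    print(odds_ratios)
    ```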

  11. The effects of presentation pace and modality on learning a multimedia science lesson

    NASA Astrophysics Data System (ADS)

    Chung, Wen-Hung

    Working memory is a system that consists of multiple components. The visuospatial sketchpad is the main entrance for visual and spatial information, whereas acoustic and verbal information is processed in the phonological loop. The central executive works as a coordinator of information from these two subsystems. Numerous studies have shown that working memory has a very limited capacity. Based on these characteristics of working memory, theories such as cognitive load theory and the cognitive theory of multimedia learning provide multimedia design principles. One of these principles is that when verbal information accompanying pictures is presented in audio mode instead of visually, learning can be more effective than if both text and pictures are presented visually. This is called the modality effect. However, some studies have found that the modality effect does not occur in some situations. In most experiments examining the modality effect, the multimedia is presented as system-paced. If learners are able to repeat listening as many times as they need, the superiority of spoken text over visual text seems lessened. One aim of this study was to examine the modality effect in a learner-controlled condition. This study also used the one-word-at-a-time technique to investigate whether the modality effect would still occur if both reading and listening rates were equal. There were 182 college students recruited for this study. Participants were randomly assigned to seven groups: a self-paced listening group, a self-paced reading group, a self text-block reading group, a general-paced listening group, a general-paced reading group, a fast-paced listening group, and a fast-paced reading group. The experimental material was a cardiovascular multimedia module. A three-by-two between-subjects design was used to test the main effect. Results showed that modality effect was still present but not between the self-paced listening group and the self text-block reading group. A post-study survey showed participants' different responses to the two modalities and their preferences as well. Results and research limitations are discussed and applications and future directions are also addressed.

  12. Two speed factors of visual recognition independently correlated with fluid intelligence.

    PubMed

    Tachibana, Ryosuke; Namba, Yuri; Noguchi, Yasuki

    2014-01-01

    Growing evidence indicates a moderate but significant relationship between processing speed in visuo-cognitive tasks and general intelligence. On the other hand, findings from neuroscience propose that the primate visual system consists of two major pathways: the ventral pathway for object recognition and the dorsal pathway for spatial processing and attentive analysis. Previous studies seeking visuo-cognitive factors of human intelligence indicated a significant correlation between fluid intelligence and the inspection time (IT), an index of the speed of object recognition performed in the ventral pathway. We thus examined the possibility that neural processing speed in the dorsal pathway also represents a factor of intelligence. Specifically, we used the mental rotation (MR) task, a popular psychometric measure for mental speed of spatial processing in the dorsal pathway. We found that the speed of MR was significantly correlated with intelligence scores, while it had no correlation with IT (the recognition speed of visual objects). Our results support the new possibility that intelligence can be explained by two types of mental speed, one related to object recognition (IT) and another related to the manipulation of mental images (MR).

  13. Direct experimental visualization of the global Hamiltonian progression of two-dimensional Lagrangian flow topologies from integrable to chaotic state.

    PubMed

    Baskan, O; Speetjens, M F M; Metcalfe, G; Clercx, H J H

    2015-10-01

    Countless theoretical/numerical studies on transport and mixing in two-dimensional (2D) unsteady flows lean on the assumption that Hamiltonian mechanisms govern the Lagrangian dynamics of passive tracers. However, experimental studies specifically investigating said mechanisms are rare. Moreover, they typically concern local behavior in specific states (usually far away from the integrable state) and generally expose this indirectly by dye visualization. Laboratory experiments explicitly addressing the global Hamiltonian progression of the Lagrangian flow topology entirely from integrable to chaotic state, i.e., the fundamental route to efficient transport by chaotic advection, appear non-existent. This motivates our study on experimental visualization of this progression by direct measurement of Poincaré sections of passive tracer particles in a representative 2D time-periodic flow. This admits (i) accurate replication of the experimental initial conditions, facilitating true one-to-one comparison of simulated and measured behavior, and (ii) direct experimental investigation of the ensuing Lagrangian dynamics. The analysis reveals a close agreement between computations and observations and thus experimentally validates the full global Hamiltonian progression at a great level of detail.
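    For readers unfamiliar with the measurement concept, the sketch below computes a Poincaré section numerically for a generic 2D time-periodic flow by sampling passive-tracer positions once per forcing period. The particular velocity field, time step, and parameter values are arbitrary stand-ins for illustration, not the flow used in the experiment.

    ```python
    # Sketch: Poincaré section of passive tracers in a 2D time-periodic flow.
    import numpy as np

    T = 1.0          # forcing period
    dt = 1e-3        # integration step
    n_periods = 200  # number of stroboscopic samples per tracer

    def velocity(x, y, t):
        # Simple time-periodic cellular flow (illustrative choice only)
        a = np.sin(2 * np.pi * t / T)
        return (-np.sin(np.pi * (x + a)) * np.cos(np.pi * y),
                 np.cos(np.pi * (x + a)) * np.sin(np.pi * y))

    def poincare_section(x0, y0):
        """Return tracer positions sampled once per period (explicit Euler)."""
        x, y, t = x0, y0, 0.0
        samples = []
        for _ in range(n_periods):
            for _ in range(int(T / dt)):
                u, v = velocity(x, y, t)
                x, y, t = x + u * dt, y + v * dt, t + dt
            samples.append((x, y))
        return samples

    # Seed a few tracers along a line and collect their stroboscopic maps
    section = [poincare_section(x0, 0.3) for x0 in np.linspace(0.1, 0.9, 5)]
    ```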

  14. Using NASA's Giovanni Web Portal to Access and Visualize Satellite-based Earth Science Data in the Classroom

    NASA Technical Reports Server (NTRS)

    Lloyd, Steven; Acker, James G.; Prados, Ana I.; Leptoukh, Gregory G.

    2008-01-01

    One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing data sets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES-DISC) alone, on the order of hundreds of Terabytes of data are available for distribution to scientists, students and the general public. The single biggest and time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly sub-setted and manageable data set to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface.

  15. Visual evoked potentials and selective attention to points in space

    NASA Technical Reports Server (NTRS)

    Van Voorhis, S.; Hillyard, S. A.

    1977-01-01

    Visual evoked potentials (VEPs) were recorded to sequences of flashes delivered to the right and left visual fields while subjects responded promptly to designated stimuli in one field at a time (focused attention), in both fields at once (divided attention), or to neither field (passive). Three stimulus schedules were used: the first was a replication of a previous study (Eason, Harter, and White, 1969) where left- and right-field flashes were delivered quasi-independently, while in the other two the flashes were delivered to the two fields in random order (Bernoulli sequence). VEPs to attended-field stimuli were enhanced at both occipital (O2) and central (Cz) recording sites under all stimulus sequences, but different components were affected at the two scalp sites. It was suggested that the VEP at O2 may reflect modality-specific processing events, while the response at Cz, like its auditory homologue, may index more general aspects of selective attention.

  16. Novel Scientific Visualization Interfaces for Interactive Information Visualization and Sharing

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2012-12-01

    As geoscientists are confronted with increasingly massive datasets from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data, and modify the parameters to create custom views of the data to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component to build comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools in the Iowa Flood Information System (IFIS), developed in light of these challenges. The IFIS is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, flood forecasts, both short-term and seasonal, and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and rainfall conditions are available in the IFIS. 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms including tablets and mobile devices. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and details. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage.

  17. Is More Better? - Night Vision Enhancement System's Pedestrian Warning Modes and Older Drivers.

    PubMed

    Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas

    2010-01-01

    Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during day time. Poor visibility due to darkness is believed to be one of the causes for the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers' workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of a NVES with six different configurations of warning cues, including: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to pedestrian threat at the onset of braking response revealed that tactile and auditory warnings performed the best, while visual warnings performed the worst. When tactile or auditory warnings were presented in combination with visual warning, their effectiveness decreased. This result demonstrated that, contrary to general sense regarding warning systems, multi-modal warnings involving visual cues degraded the effectiveness of NVES for older drivers.

  18. Visualization of metabolic interaction networks in microbial communities using VisANT 5.0

    DOE PAGES

    Granger, Brian R.; Chang, Yi -Chien; Wang, Yan; ...

    2016-04-15

    Here, the complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique meta-graph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues.

  19. Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression.

    PubMed

    Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam

    2011-08-03

    Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains, which can have timing as precise as 1 ms, is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses only occur in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
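    To make the described computation concrete, the sketch below implements only the qualitative idea: a response is allowed in brief windows where an excitatory drive exceeds a delayed suppressive drive. The filter shape, delay, scaling, and threshold are invented placeholders, not the fitted model from the study.

    ```python
    # Qualitative sketch: spikes occur only where excitation exceeds delayed suppression.
    import numpy as np

    rng = np.random.default_rng(1)
    dt_ms = 1.0
    stimulus = rng.normal(size=2000)                      # stand-in luminance sequence
    kernel = np.exp(-np.arange(50) / 10.0)                # assumed excitatory filter
    excitation = np.convolve(stimulus, kernel, mode="same")
    delay = 5                                             # assumed suppressive delay (ms)
    suppression = 0.8 * np.roll(excitation, delay)        # delayed, scaled copy as suppression
    drive = excitation - suppression
    spike_times_ms = np.flatnonzero(drive > 1.5) * dt_ms  # brief windows above threshold
    ```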

  1. Real-time scalable visual analysis on mobile devices

    NASA Astrophysics Data System (ADS)

    Pattath, Avin; Ebert, David S.; May, Richard A.; Collins, Timothy F.; Pike, William

    2008-02-01

    Interactive visual presentation of information can help an analyst gain faster and better insight from data. When combined with situational or context information, visualization on mobile devices is invaluable to in-field responders and investigators. However, several challenges are posed by the form-factor of mobile devices in developing such systems. In this paper, we classify these challenges into two broad categories - issues in general mobile computing and issues specific to visual analysis on mobile devices. Using NetworkVis and Infostar as example systems, we illustrate some of the techniques that we employed to overcome many of the identified challenges. NetworkVis is an OpenVG-based real-time network monitoring and visualization system developed for Windows Mobile devices. Infostar is a flash-based interactive, real-time visualization application intended to provide attendees access to conference information. Linked time-synchronous visualization, stylus/button-based interactivity, vector graphics, overview-context techniques, details-on-demand and statistical information display are some of the highlights of these applications.

  2. Salary-Trend Studies of Faculty for the Years 1993-94 and 1996-97 in the Following 26 Academic Disciplines/Major Fields: History, General;...Visual and Performing Arts.

    ERIC Educational Resources Information Center

    Howe, Richard D.

    This document provides comparative salary trend data for full-time faculty at 307 public institutions and 490 private colleges and universities based on two surveys, one for the baseline year 1993-94 and the other for the trend year 1996-97. For each of the 26 disciplines, a summary includes a definition of the discipline; information on average…

  3. Visual screening for malignant melanoma: a cost-effectiveness analysis.

    PubMed

    Losina, Elena; Walensky, Rochelle P; Geller, Alan; Beddingfield, Frederick C; Wolf, Lindsey L; Gilchrest, Barbara A; Freedberg, Kenneth A

    2007-01-01

    To evaluate the cost-effectiveness of various melanoma screening strategies proposed in the United States. We developed a computer simulation Markov model to evaluate alternative melanoma screening strategies. Hypothetical cohort of the general population and siblings of patients with melanoma. Intervention We considered the following 4 strategies: background screening only, and screening 1 time, every 2 years, and annually, all beginning at age 50 years. Prevalence, incidence, and mortality data were taken from the Surveillance, Epidemiology, and End Results Program. Sibling risk, recurrence rates, and treatment costs were taken from the literature. Outcomes included life expectancy, quality-adjusted life expectancy, and lifetime costs. Cost-effectiveness ratios were in dollars per quality-adjusted life year ($/QALY) gained. In the general population, screening 1 time, every 2 years, and annually saved 1.6, 4.4, and 5.2 QALYs per 1000 persons screened, with incremental cost-effectiveness ratios of $10,100/QALY, $80,700/QALY, and $586,800/QALY, respectively. In siblings of patients with melanoma (relative risk, 2.24 compared with the general population), 1-time, every-2-years, and annual screenings saved 3.6, 9.8, and 11.4 QALYs per 1000 persons screened, with incremental cost-effectiveness ratios of $4000/QALY, $35,500/QALY, and $257,800/QALY, respectively. In higher risk siblings of patients with melanoma (relative risk, 5.56), screening was more cost-effective. Results were most sensitive to screening cost, melanoma progression rate, and specificity of visual screening. One-time melanoma screening of the general population older than 50 years is very cost-effective compared with other cancer screening programs in the United States. Screening every 2 years in siblings of patients with melanoma is also cost-effective.
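    As a reminder of how the reported ratios are constructed, an incremental cost-effectiveness ratio divides the extra lifetime cost of a strategy by the extra QALYs it gains relative to the next-less-intensive strategy. The sketch below applies that formula; the cost figures are invented placeholders chosen only to roughly reproduce the quoted general-population ratios, since the abstract reports ratios and QALY gains but not costs.

    ```python
    # ICER sketch: incremental cost / incremental QALYs, ordered by screening intensity.
    # Costs are invented placeholders; QALY gains per 1000 screened loosely follow
    # the general-population figures quoted in the abstract.
    strategies = [
        # (name, total cost per 1000 persons, QALYs gained per 1000 persons)
        ("background only",       0.0, 0.0),
        ("one-time screen",  16_000.0, 1.6),
        ("every 2 years",   242_000.0, 4.4),
        ("annual",          711_000.0, 5.2),
    ]

    for (_, c0, q0), (name, c1, q1) in zip(strategies, strategies[1:]):
        icer = (c1 - c0) / (q1 - q0)
        print(f"{name}: ${icer:,.0f}/QALY")
    ```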

  4. A visual ergonomics intervention in mail sorting facilities: effects on eyes, muscles and productivity.

    PubMed

    Hemphälä, Hillevi; Eklund, Jörgen

    2012-01-01

    Visual requirements are high when sorting mail. The purpose of this visual ergonomics intervention study was to evaluate the visual environment in mail sorting facilities and to explore opportunities for improving the work situation by reducing visual strain, improving the visual work environment and reducing mail sorting time. Twenty-seven postmen/women participated in a pre-intervention study, which included questionnaires on their experiences of light, visual ergonomics, health, and musculoskeletal symptoms. Measurements of lighting conditions and productivity were also performed along with eye examinations of the postmen/women. The results from the pre-intervention study showed that the postmen/women who suffered from eyestrain had a higher prevalence of musculoskeletal disorders (MSD) and sorted slower, than those without eyestrain. Illuminance and illuminance uniformity improved as a result of the intervention. The two post-intervention follow-ups showed a higher prevalence of MSD among the postmen/women with eyestrain than among those without. The previous differences in sorting time for employees with and without eyestrain disappeared. After the intervention, the postmen/women felt better in general, experienced less work induced stress, and considered that the total general lighting had improved. The most pronounced decreases in eyestrain, MSD, and mail sorting time were seen among the younger participants of the group. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming

    PubMed Central

    Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy

    2013-01-01

    Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons onto a 2D image creates the illusion of intersecting structural parts and creates challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that utilizes an interesting connection of the optimization problem regarding USIV to the protein structure prediction problem. Adopting the integer linear programming-based formulation for the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison of the results with the other optimization technique previously reported elsewhere suggests that, in most aspects, the quality of the visualization is comparable to that of the previous one, with a significant gain in the computation time of the algorithm. PMID:22291148

  6. Subjective and objective evaluation of visual fatigue on viewing 3D display continuously

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang

    2015-03-01

    In recent years, three-dimensional (3D) displays have become more and more popular in many fields. Although they can provide a better viewing experience, they cause extra problems, e.g., visual fatigue. Subjective or objective methods are usually used in discrete viewing processes to evaluate visual fatigue. However, little research combines subjective indicators and objective ones in an entirely continuous viewing process. In this paper, we propose a method to evaluate real-time visual fatigue both subjectively and objectively. Subjects watch stereo contents on a polarized 3D display continuously. Visual Reaction Time (VRT), Critical Flicker Frequency (CFF), Punctum Maximum Accommodation (PMA) and subjective scores of visual fatigue are collected before and after viewing. During the viewing process, the subjects rate the visual fatigue whenever it changes, without breaking the viewing process. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject are recorded for comparison with previous research. The results show that subjective visual fatigue and PERCLOS increase with time and are greater in a continuous process than in a discrete one. BF increased with time during the continuous viewing process. In addition, visual fatigue also induced significant changes in VRT, CFF and PMA.

  7. Foundations of Education, Volume I: History and Theory of Teaching Children and Youths with Visual Impairments. Second Edition.

    ERIC Educational Resources Information Center

    Holbrook, M. Cay, Ed.; Koenig, Alan J., Ed.

    This text, one of two volumes on the instruction of students with visual impairments, focuses on the history and theory of teaching such students. The following chapters are included: (1) "Historical Perspectives" (Phil Hatlen) with emphasis on the last 50 years; (2) "Visual Impairment" (Kathleen M. Huebner) which provides general information…

  8. Metadata Mapper: a web service for mapping data between independent visual analysis components, guided by perceptual rules

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Matasci, Naim

    2011-03-01

    The explosion of online scientific data from experiments, simulations, and observations has given rise to an avalanche of algorithmic, visualization and imaging methods. There has also been enormous growth in the introduction of tools that provide interactive interfaces for exploring these data dynamically. Most systems, however, do not support the realtime exploration of patterns and relationships across tools and do not provide guidance on which colors, colormaps or visual metaphors will be most effective. In this paper, we introduce a general architecture for sharing metadata between applications and a "Metadata Mapper" component that allows the analyst to decide how metadata from one component should be represented in another, guided by perceptual rules. This system is designed to support "brushing [1]," in which highlighting a region of interest in one application automatically highlights corresponding values in another, allowing the scientist to develop insights from multiple sources. Our work builds on the component-based iPlant Cyberinfrastructure [2] and provides a general approach to supporting interactive, exploration across independent visualization and visual analysis components.
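    The linked-highlighting ("brushing") behavior described above can be pictured as components sharing a selection through a small publish/subscribe hub. The sketch below is a generic illustration of that pattern under assumed class and callback names; it is not the Metadata Mapper's actual interface or web-service API.

    ```python
    # Generic publish/subscribe sketch of linked brushing between visualization components.
    class BrushHub:
        def __init__(self):
            self._subscribers = []

        def subscribe(self, callback):
            self._subscribers.append(callback)

        def brush(self, source, selected_ids):
            # Notify every registered component that a selection was made in `source`.
            for callback in self._subscribers:
                callback(source, selected_ids)

    hub = BrushHub()
    hub.subscribe(lambda src, ids: print(f"scatterplot highlights {ids} (from {src})"))
    hub.subscribe(lambda src, ids: print(f"heatmap highlights {ids} (from {src})"))
    hub.brush("parallel-coordinates", {12, 47, 301})
    ```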

  9. Progressive 3D shape abstraction via hierarchical CSG tree

    NASA Astrophysics Data System (ADS)

    Chen, Xingyou; Tang, Jin; Li, Chenglong

    2017-06-01

    A constructive solid geometry (CSG) tree model is proposed to progressively abstract the 3D geometric shape of a general object from a 2D image. Unlike conventional methods, ours applies to general objects without the need for massive CAD models and represents object shapes in a coarse-to-fine manner that allows users to view intermediate shape representations at any time. It stands in a transitional position between 2D image features and CAD models: it benefits from state-of-the-art object detection approaches, better initializes a CAD model for finer fitting, and estimates the 3D shape and pose parameters of an object at different levels according to a visual perception objective, in a coarse-to-fine manner. The two main contributions are the application of the CSG building-up procedure to visual perception, and the ability to extend object estimation results into a more flexible and expressive model than 2D/3D primitive shapes. Experimental results demonstrate the feasibility and effectiveness of the proposed approach.
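    To clarify what a CSG tree represents, the sketch below gives a minimal point-membership evaluation over a tree whose leaves are primitives and whose internal nodes combine children with Boolean set operations. It is a generic illustration of the data structure with assumed class names, not the fitting procedure proposed in the paper.

    ```python
    # Minimal CSG tree: leaves are primitives with a point-membership test,
    # internal nodes combine children with Boolean set operations.
    class Sphere:
        def __init__(self, cx, cy, cz, r):
            self.c, self.r = (cx, cy, cz), r
        def contains(self, p):
            return sum((a - b) ** 2 for a, b in zip(p, self.c)) <= self.r ** 2

    class Node:
        def __init__(self, op, left, right):
            self.op, self.left, self.right = op, left, right
        def contains(self, p):
            a, b = self.left.contains(p), self.right.contains(p)
            return {"union": a or b, "intersect": a and b, "difference": a and not b}[self.op]

    # Coarse-to-fine idea: start from one primitive, then refine by adding nodes.
    shape = Node("difference", Sphere(0, 0, 0, 1.0), Sphere(0.5, 0, 0, 0.6))
    print(shape.contains((-0.8, 0, 0)), shape.contains((0.7, 0, 0)))  # True False
    ```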

  10. Accelerating Demand Paging for Local and Remote Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David

    2001-01-01

    This paper describes a new algorithm that improves the performance of application-controlled demand paging for the out-of-core visualization of data sets that are on either local disks or disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and by performing multiple page reads in parallel. The new algorithm can be applied to many different visualization algorithms since application-controlled demand paging is not specific to any visualization algorithm. The paper includes measurements that show that the new multi-threaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by up to 60%. Visualization runs using data from remote disk ran about as fast as ones using data from local disk because the remote runs were able to make use of the remote server's high performance disk array.
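    The performance idea in the abstract (overlap computation with page reads and keep several reads in flight at once) can be sketched generically with a thread pool that prefetches upcoming pages while the current one is processed. The page size, file layout, and function names below are assumptions for illustration, not the paper's actual algorithm or data format.

    ```python
    # Sketch of overlapping computation with parallel, application-controlled page reads.
    from concurrent.futures import ThreadPoolExecutor

    PAGE_SIZE = 1 << 20  # assumed 1 MiB pages

    def read_page(path, page_index):
        with open(path, "rb") as f:
            f.seek(page_index * PAGE_SIZE)
            return f.read(PAGE_SIZE)

    def process(data):
        pass  # stand-in for the per-page visualization computation

    def visualize(path, page_order, prefetch_depth=4):
        """Process pages in `page_order`, prefetching a few pages ahead in parallel."""
        with ThreadPoolExecutor(max_workers=prefetch_depth) as pool:
            futures = {}
            for i, page in enumerate(page_order):
                # Keep a window of outstanding reads ahead of the consumer.
                for upcoming in page_order[i:i + prefetch_depth]:
                    if upcoming not in futures:
                        futures[upcoming] = pool.submit(read_page, path, upcoming)
                data = futures.pop(page).result()  # blocks only if the read is not done yet
                process(data)
    ```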

  11. Spatial attention improves reliability of fMRI retinotopic mapping signals in occipital and parietal cortex

    PubMed Central

    Bressler, David W.; Silver, Michael A.

    2010-01-01

    Spatial attention improves visual perception and increases the amplitude of neural responses in visual cortex. In addition, spatial attention tasks and fMRI have been used to discover topographic visual field representations in regions outside visual cortex. We therefore hypothesized that requiring subjects to attend to a retinotopic mapping stimulus would facilitate the characterization of visual field representations in a number of cortical areas. In our study, subjects attended either a central fixation point or a wedge-shaped stimulus that rotated about the fixation point. Response reliability was assessed by computing coherence between the fMRI time series and a sinusoid with the same frequency as the rotating wedge stimulus. When subjects attended to the rotating wedge instead of ignoring it, the reliability of retinotopic mapping signals increased by approximately 50% in early visual cortical areas (V1, V2, V3, V3A/B, V4) and ventral occipital cortex (VO1) and by approximately 75% in lateral occipital (LO1, LO2) and posterior parietal (IPS0, IPS1 and IPS2) cortical areas. Additionally, one 5-minute run of retinotopic mapping in the attention-to-wedge condition produced responses as reliable as the average of three to five (early visual cortex) or more than five (lateral occipital, ventral occipital, and posterior parietal cortex) attention-to-fixation runs. These results demonstrate that allocating attention to the retinotopic mapping stimulus substantially reduces the amount of scanning time needed to determine the visual field representations in occipital and parietal topographic cortical areas. Attention significantly increased response reliability in every cortical area we examined and may therefore be a general mechanism for improving the fidelity of neural representations of sensory stimuli at multiple levels of the cortical processing hierarchy. PMID:20600961
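    One common way to compute the reliability measure described above is to take the Fourier amplitude of the voxel time series at the stimulus frequency and divide it by the root-sum-square of amplitudes across all frequencies. The sketch below follows that convention on synthetic data; the exact definition and preprocessing used in the study are not specified in the abstract, so this is an assumption for illustration.

    ```python
    # Coherence of a voxel time series at the stimulus (wedge rotation) frequency:
    # amplitude at that frequency divided by the root-sum-square of all amplitudes.
    import numpy as np

    def coherence_at(ts, stim_cycles_per_run):
        spectrum = np.abs(np.fft.rfft(ts - ts.mean()))
        return spectrum[stim_cycles_per_run] / np.sqrt(np.sum(spectrum ** 2))

    # Synthetic example: 8 stimulus cycles in a 240-volume run, plus noise.
    rng = np.random.default_rng(2)
    t = np.arange(240)
    ts = np.sin(2 * np.pi * 8 * t / 240) + 0.5 * rng.normal(size=240)
    print(coherence_at(ts, 8))
    ```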

  12. Evaluation of incubation time for dermatophytes cultures.

    PubMed

    Rezusta, Antonio; de la Fuente, Sonia; Gilaberte, Yolanda; Vidal-García, Matxalen; Alcalá, Leticia; López-Calleja, Ana; Ruiz, Maria Angeles; Revillo, Maria José

    2016-07-01

    In general, it is recommended to incubate dermatophyte cultures for a minimum of 4 weeks. Several aspects of routine fungal cultures should be evaluated in order to implement appropriate and necessary changes. The aim of this study was to determine the optimum incubation time for routine dermatophyte cultures by analysing the time to first fungal growth detected by visual observation. We recorded the time at which initial growth was detected for all dermatophyte isolates during a 4-year period. A total of 5459 dermatophyte cultures were submitted to our laboratory. Only 16 (1.42%) isolates were recovered after more than 17 days of incubation, and only three dermatophyte species were recovered after that point. Fourteen isolates belonged to Trichophyton rubrum, one isolate to the Trichophyton mentagrophytes complex and one isolate to Epidermophyton floccosum. We concluded that an incubation period of 17 days is enough to establish a microbiological diagnosis of dermatophytosis. © 2016 Blackwell Verlag GmbH.

  13. Visual short-term memory always requires general attention.

    PubMed

    Morey, Candice C; Bieler, Malte

    2013-02-01

    The role of attention in visual memory remains controversial; while some evidence has suggested that memory for binding between features demands no more attention than does memory for the same features, other evidence has indicated cognitive costs or mnemonic benefits for explicitly attending to bindings. We attempted to reconcile these findings by examining how memory for binding, for features, and for features during binding is affected by a concurrent attention-demanding task. We demonstrated that performing a concurrent task impairs memory for as few as two visual objects, regardless of whether each object includes one or more features. We argue that this pattern of results reflects an essential role for domain-general attention in visual memory, regardless of the simplicity of the to-be-remembered stimuli. We then discuss the implications of these findings for theories of visual working memory.

  14. Endogenous modulation of human visual cortex activity improves perception at twilight.

    PubMed

    Cordani, Lorenzo; Tagliazucchi, Enzo; Vetter, Céline; Hassemer, Christian; Roenneberg, Till; Stehle, Jörg H; Kell, Christian A

    2018-04-10

    Perception, particularly in the visual domain, is drastically influenced by rhythmic changes in ambient lighting conditions. Anticipation of daylight changes by the circadian system is critical for survival. However, the neural bases of time-of-day-dependent modulation in human perception are not yet understood. We used fMRI to study brain dynamics during resting-state and close-to-threshold visual perception repeatedly at six times of the day. Here we report that resting-state signal variance drops endogenously at times coinciding with dawn and dusk, notably in sensory cortices only. In parallel, perception-related signal variance in visual cortices decreases and correlates negatively with detection performance, identifying an anticipatory mechanism that compensates for the deteriorated visual signal quality at dawn and dusk. Generally, our findings imply that decreases in spontaneous neural activity improve close-to-threshold perception.

  15. One size doesn't fit all: time to revisit patient-reported outcome measures (PROMs) in paediatric ophthalmology?

    PubMed Central

    Tadić, V; Rahi, J S

    2017-01-01

    The purpose of this article is to summarise methodological challenges and opportunities in the development and application of patient reported outcome measures (PROMs) for the rare and complex population of children with visually impairing disorders. Following a literature review on the development and application of PROMs in children in general, including those with disabilities and/or chronic conditions, we identified and discuss here 5 key issues that are specific to children with visual impairment: (1) the conflation between theoretically distinct vision-related constructs and outcomes, (2) the importance of developmentally appropriate approaches to design and application of PROMs, (3) feasibility of standard questionnaire formats and administration for children with different levels of visual impairment, (4) feasibility and nature of self-reporting by visually impaired children, and (5) epidemiological, statistical and ethical considerations. There is an established need for vision-specific age-appropriate PROMs for use in paediatric ophthalmology, but there are significant practical and methodological challenges in developing and applying appropriate measures. Further understanding of the characteristics and needs of visually impaired children as questionnaire respondents is necessary for development of quality PROMs and their meaningful application in clinical practice and research. PMID:28085146

  16. Direction discriminating hearing aid system

    NASA Technical Reports Server (NTRS)

    Jhabvala, M.; Lin, H. C.; Ward, G.

    1991-01-01

    A visual display was developed for people with substantial hearing loss in either one or both ears. The system consists of three discrete units: an eyeglass assembly for the visual display of the origin or direction of sounds; a stationary general-purpose noise alarm; and a noise seeker wand.

  17. Sleep-dependent consolidation benefits fast transfer of time interval training.

    PubMed

    Chen, Lihan; Guo, Lu; Bao, Ming

    2017-03-01

    Previous study has shown that short training (15 min) for explicitly discriminating temporal intervals between two paired auditory beeps, or between two paired tactile taps, can significantly improve observers' ability to classify the perceptual states of visual Ternus apparent motion while the training of task-irrelevant sensory properties did not help to improve visual timing (Chen and Zhou in Exp Brain Res 232(6):1855-1864, 2014). The present study examined the role of 'consolidation' after training of temporal task-irrelevant properties, or whether a pure delay (i.e., blank consolidation) following pretest of the target task would give rise to improved ability of visual interval timing, typified in visual Ternus display. A procedure of pretest-training-posttest was adopted, with the probe of discriminating Ternus apparent motion. The extended implicit training of timing in which the time intervals between paired auditory beeps or paired tactile taps were manipulated but the task was discrimination of the auditory pitches or tactile intensities, did not lead to the training benefits (Exps 1 and 3); however, a delay of 24 h after implicit training of timing, including solving 'Sudoku puzzles,' made the otherwise absent training benefits observable (Exps 2, 4, 5 and 6). The above improvements in performance were not due to a practice effect of Ternus motion (Exp 7). A general 'blank' consolidation period of 24 h also made improvements of visual timing observable (Exp 8). Taken together, the current findings indicated that sleep-dependent consolidation imposed a general effect, by potentially triggering and maintaining neuroplastic changes in the intrinsic (timing) network to enhance the ability of time perception.

  18. A generalized 3D framework for visualization of planetary data.

    NASA Astrophysics Data System (ADS)

    Larsen, K. W.; De Wolfe, A. W.; Putnam, B.; Lindholm, D. M.; Nguyen, D.

    2016-12-01

    As the volume and variety of data returned from planetary exploration missions continues to expand, new tools and technologies are needed to explore the data and answer questions about the formation and evolution of the solar system. We have developed a 3D visualization framework that enables the exploration of planetary data from multiple instruments on the MAVEN mission to Mars. This framework not only provides the opportunity for cross-instrument visualization, but is extended to include model data as well, helping to bridge the gap between theory and observation. This is made possible through the use of new web technologies, namely LATIS, a data server that can stream data and spacecraft ephemerides to a web browser, and Cesium, a Javascript library for 3D globes. The common visualization framework we have developed is flexible and modular so that it can easily be adapted for additional missions. In addition to demonstrating the combined data and modeling capabilities of the system for the MAVEN mission, we will display the first ever near real-time `QuickLook', interactive, 4D data visualization for the Magnetospheric Multiscale Mission (MMS). In this application, data from all four spacecraft can be manipulated and visualized as soon as the data is ingested into the MMS Science Data Center, less than one day after collection.

  19. Visiting the Gödel universe.

    PubMed

    Grave, Frank; Buser, Michael

    2008-01-01

    Visualization of general relativity illustrates aspects of Einstein's insights into the curved nature of space and time to the expert as well as the layperson. One of the most interesting models which came up with Einstein's theory was developed by Kurt Gödel in 1949. The Gödel universe is a valid solution of Einstein's field equations, making it a possible physical description of our universe. It offers remarkable features like the existence of an optical horizon beyond which time travel is possible. Although we know that our universe is not a Gödel universe, it is interesting to visualize physical aspects of a world model resulting from a theory which is highly confirmed in scientific history. Standard techniques to adopt an egocentric point of view in a relativistic world model have shortcomings with respect to the time needed to render an image as well as difficulties in applying a direct illumination model. In this paper we want to face both issues to reduce the gap between common visualization standards and relativistic visualization. We will introduce two techniques to speed up recalculation of images by means of preprocessing and lookup tables and to increase image quality through a special optimization applicable to the Gödel universe. The first technique allows the physicist to understand the different effects of general relativity faster and better by generating images from existing datasets interactively. By using the intrinsic symmetries of Gödel's spacetime which are expressed by the Killing vector field, we are able to reduce the necessary calculations to simple cases using the second technique. This even makes it feasible to account for a direct illumination model during the rendering process. Although the presented methods are applied to Gödel's universe, they can also be extended to other manifolds, for example light propagation in moving dielectric media. Therefore, other areas of research can benefit from these generic improvements.

  20. The processing of auditory and visual recognition of self-stimuli.

    PubMed

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. A randomized controlled trial comparing acetaminophen plus ibuprofen versus acetaminophen plus codeine plus caffeine after outpatient general surgery.

    PubMed

    Mitchell, Alex; van Zanten, Sander Veldhuyzen; Inglis, Karen; Porter, Geoffrey

    2008-03-01

    Narcotics are used extensively in outpatient general surgery but are often poorly tolerated with variable efficacy. Acetaminophen combined with NSAIDs is a possible alternative. The objective of this study was to compare the efficacy of acetaminophen, codeine, and caffeine (Tylenol No. 3) with acetaminophen and ibuprofen for management of pain after outpatient general surgery procedures. A double-blind randomized controlled trial was performed in patients undergoing outpatient inguinal/umbilical/ventral hernia repair or laparoscopic cholecystectomy. Patients were randomized to receive acetaminophen plus codeine plus caffeine (Tylenol No. 3) or acetaminophen plus ibuprofen (AcIBU) 4 times daily for 7 days or until pain-free. Pain intensity, measured four times daily by visual analogue scale, was the primary outcome. Secondary end points included incidence of side effects, patient satisfaction, number of days until patient was pain-free, and use of alternative analgesia. One hundred forty-six patients were randomized (74 Tylenol No. 3 and 72 AcIBU), and 139 (95%) patients completed the study. No significant differences in mean or maximum daily visual analogue scale scores were identified between the 2 groups, except on postoperative day 2, when pain was improved in AcIBU patients (p = 0.025). During the entire week, mean visual analogue scale score was modestly lower in AcIBU patients (p = 0.018). More patients in the AcIBU group, compared with Tylenol No. 3, were satisfied with their analgesia (83% versus 64%, respectively; p = 0.02). There were more side effects with Tylenol No. 3 (57% versus 41%, p = 0.045), and the discontinuation rate was also higher in Tylenol No. 3-treated patients (11% versus 3%, p = 0.044). When compared with Tylenol No. 3, AcIBU was not an inferior analgesic and was associated with fewer side effects and higher patient satisfaction. AcIBU is an effective, low-cost, and safe alternative to codeine-based narcotic analgesia for outpatient general surgery procedures.

  2. Visual impairment in children and adolescents in Norway.

    PubMed

    Haugen, Olav H; Bredrup, Cecilie; Rødahl, Eyvind

    2016-06-01

    BACKGROUND Due to failures in reporting and poor data security, the Norwegian Registry of Blindness was closed down in 1995. Since that time, no registration of visual impairment has taken place in Norway. All the other Nordic countries have registries for children and adolescents with visual impairment. The purpose of this study was to survey visual impairments and their causes in children and adolescents, and to assess the need for an ophthalmic registry.MATERIAL AND METHOD Data were collected via the county teaching centres for the visually impaired in the period from 2005 - 2010 on children and adolescents aged less than 20 years with impaired vision (n = 628). This was conducted as a point prevalence study as of 1 January 2004. Visual function, ophthalmological diagnosis, systemic diagnosis and additional functional impairments were recorded.RESULTS Approximately two-thirds of children and adolescents with visual impairment had reduced vision, while one-third were blind. The three largest diagnostic groups were neuro-ophthalmic diseases (37 %), retinal diseases (19 %) and conditions affecting the eyeball in general (14 %). The prevalence of additional functional impairments was high, at 53 %, most often in the form of motor problems or cognitive impairments.INTERPRETATION The results of the study correspond well with similar investigations in the other Nordic countries. Our study shows that the registries associated with teaching for the visually impaired are inadequate in terms of medical data, and this underlines the need for an ophthalmic registry of children and adolescents with visual impairment.

  3. Why is quality estimation judgment fast? Comparison of gaze control strategies in quality and difference estimation tasks

    NASA Astrophysics Data System (ADS)

    Radun, Jenni; Leisti, Tuomas; Virtanen, Toni; Nyman, Göte; Häkkinen, Jukka

    2014-11-01

    To understand the viewing strategies employed in a quality estimation task, we compared two visual tasks: quality estimation and difference estimation. The estimation was done for a pair of natural images having small global changes in quality. Two groups of observers estimated the same set of images, but with different instructions: one group estimated the difference in quality and the other the difference between image pairs. The results demonstrated the use of different visual strategies in the tasks. The quality estimation was found to include more visual planning during the first fixation than the difference estimation, but afterward needed only a few long fixations on the semantically important areas of the image. The difference estimation used many short fixations. Salient image areas were mainly attended to when these areas were also semantically important. The results support the hypothesis that these tasks' general characteristics (evaluation time, number of fixations, area fixated on) show differences in processing, but also suggest that examining only single fixations when comparing tasks is too narrow a view. When planning a subjective experiment, one must remember that a small change in the instructions might lead to a noticeable change in viewing strategy.

  4. Direct experimental visualization of the global Hamiltonian progression of two-dimensional Lagrangian flow topologies from integrable to chaotic state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baskan, O.; Clercx, H. J. H; Speetjens, M. F. M.

    Countless theoretical/numerical studies on transport and mixing in two-dimensional (2D) unsteady flows lean on the assumption that Hamiltonian mechanisms govern the Lagrangian dynamics of passive tracers. However, experimental studies specifically investigating said mechanisms are rare. Moreover, they typically concern local behavior in specific states (usually far away from the integrable state) and generally expose this indirectly by dye visualization. Laboratory experiments explicitly addressing the global Hamiltonian progression of the Lagrangian flow topology entirely from integrable to chaotic state, i.e., the fundamental route to efficient transport by chaotic advection, appear non-existent. This motivates our study on experimental visualization of this progression by direct measurement of Poincaré sections of passive tracer particles in a representative 2D time-periodic flow. This admits (i) accurate replication of the experimental initial conditions, facilitating true one-to-one comparison of simulated and measured behavior, and (ii) direct experimental investigation of the ensuing Lagrangian dynamics. The analysis reveals a close agreement between computations and observations and thus experimentally validates the full global Hamiltonian progression at a great level of detail.

  5. Including Students with Visual Impairments: Softball

    ERIC Educational Resources Information Center

    Brian, Ali; Haegele, Justin A.

    2014-01-01

    Research has shown that while students with visual impairments are likely to be included in general physical education programs, they may not be as active as their typically developing peers. This article provides ideas for equipment modifications and game-like progressions for one popular physical education unit, softball. The purpose of these…

  6. The science of badminton: game characteristics, anthropometry, physiology, visual fitness and biomechanics.

    PubMed

    Phomsoupha, Michael; Laffaye, Guillaume

    2015-04-01

    Badminton is a racket sport for two or four people, with a temporal structure characterized by actions of short duration and high intensity. This sport has five events: men's and women's singles, men's and women's doubles, and mixed doubles, each requiring specific preparation in terms of technique, control and physical fitness. Badminton is one of the most popular sports in the world, with 200 million adherents. The decision to include badminton in the 1992 Olympic Games increased participation in the game. This review focuses on the game characteristics, anthropometry, physiology, visual attributes and biomechanics of badminton. Players are generally tall and lean, with an ectomesomorphic body type suited to the high physiological demands of a match. Indeed, a typical match characteristic is a rally time of 7 s and a resting time of 15 s, with an effective playing time of 31%. This sport is highly demanding, with an average heart rate (HR) of over 90% of the player's maximal HR. The intermittent actions during a game are demanding on both the aerobic and anaerobic systems: 60-70% on the aerobic system and approximately 30% on the anaerobic system, with greater demand on the alactic metabolism with respect to the lactic anaerobic metabolism. The shuttlecock has an atypical trajectory, and the players perform specific movements such as lunging and jumping, and powerful strokes using a specific pattern of movement. Lastly, badminton players are visually fit, picking up accurate visual information in a short time. Knowledge of badminton can help to improve coaching and badminton skills.

  7. Visual search performance among persons with schizophrenia as a function of target eccentricity.

    PubMed

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2010-03-01

    The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also impacted to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric; their performance was more similar to that of healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia. Copyright 2010 APA, all rights reserved

  8. Hiding the Disk and Network Latency of Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David

    2001-01-01

    This paper describes an algorithm that improves the performance of application-controlled demand paging for out-of-core visualization by hiding the latency of reading data from both local disks and disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and from performing multiple page reads in parallel. The paper includes measurements showing that the new multithreaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk, and by two thirds when using one processor and reading data from remote disk. Visualization runs using data from remote disk actually ran faster than ones using data from local disk because the remote runs were able to make use of the remote server's high-performance disk array.
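    The multithreaded demand-paging scheme itself is not reproduced in this record, but the core idea of overlapping computation with parallel page reads can be sketched as follows (a hypothetical Python illustration; the page size, the per-step page schedule, and the compute_step callback are assumptions, not details from the paper):

        # Hypothetical sketch: prefetch the pages needed for the next visualization
        # step on worker threads while the current step is being computed.
        from concurrent.futures import ThreadPoolExecutor

        PAGE_SIZE = 1 << 20  # assumed 1 MiB pages, purely illustrative

        def read_page(path, page_id):
            """Blocking read of one page from a local or network-mounted file."""
            with open(path, "rb") as f:
                f.seek(page_id * PAGE_SIZE)
                return f.read(PAGE_SIZE)

        def run_visualization(path, schedule, compute_step, workers=4):
            """schedule: list of page-id lists, one list per visualization step."""
            with ThreadPoolExecutor(max_workers=workers) as pool:
                pending = [pool.submit(read_page, path, p) for p in schedule[0]]
                for step in range(len(schedule)):
                    # Issue the next step's reads before blocking on the current ones.
                    upcoming = ([pool.submit(read_page, path, p) for p in schedule[step + 1]]
                                if step + 1 < len(schedule) else [])
                    pages = [f.result() for f in pending]  # wait only for this step's pages
                    compute_step(step, pages)              # compute while the next reads proceed
                    pending = upcoming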

  9. [The Performance Analysis for Lighting Sources in Highway Tunnel Based on Visual Function].

    PubMed

    Yang, Yong; Han, Wen-yuan; Yan, Ming; Jiang, Hai-feng; Zhu, Li-wei

    2015-10-01

    Under mesopic vision, the spectral luminous efficiency function is a family of curves whose peak wavelength and magnitude depend on the light spectrum, background luminance and other factors, so the visibility provided by a light source cannot be characterized by a single optical parameter. In this experiment, the reaction time of visual recognition was used as the evaluation index, and visual recognition was tested with the visual function method under different speeds and luminous environments. The light sources included high-pressure sodium, an electrodeless fluorescent lamp and white LEDs at three color temperatures (ranging from 1958 to 5537 K). The background luminance values, 1 to 5 cd/m2, are typical of the basic sections of highway tunnel lighting and of general outdoor lighting, and all lie within the mesopic range. The results show that, under the same speed and luminance, the visual recognition reaction time for high color temperature sources is shorter than for low color temperature sources, and the reaction time to a visual target is shorter at high speed than at low speed; at the final moment, however, the visual angle subtended by the target in the observer's visual field is larger at low speed than at high speed. Based on the MOVE model, the mesopic equivalent luminance was calculated for the different emission spectra and background luminances produced by the test sources. Compared with the photopic result, the standard deviation (CV) of the reaction-time curve plotted against mesopic equivalent luminance is smaller. Under mesopic conditions, the discrepancy between the equivalent luminance of different light sources and the photopic luminance is one of the main causes of differences in visual recognition. Because the emission spectrum peak of the GaN chip is close to the peak wavelength of the photopic luminous efficiency function, the visual lighting effect of white LEDs at high color temperature is better than at low color temperature and better than that of the electrodeless fluorescent lamp, whereas the high-pressure sodium lamp performs poorly because its peak lies near the Na+ characteristic spectra.

  10. Visualization of Metabolic Interaction Networks in Microbial Communities Using VisANT 5.0

    PubMed Central

    Wang, Yan; DeLisi, Charles; Segrè, Daniel; Hu, Zhenjun

    2016-01-01

    The complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT’s unique metagraph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the “symbiotic layout” of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues. VisANT is freely available at: http://visant.bu.edu and COMETS at http://comets.bu.edu. PMID:27081850

  11. Visualization of Metabolic Interaction Networks in Microbial Communities Using VisANT 5.0.

    PubMed

    Granger, Brian R; Chang, Yi-Chien; Wang, Yan; DeLisi, Charles; Segrè, Daniel; Hu, Zhenjun

    2016-04-01

    The complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique metagraph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues. VisANT is freely available at: http://visant.bu.edu and COMETS at http://comets.bu.edu.

  12. Comparison of the effects of sedation and general anesthesia in surgically assisted rapid palatal expansion.

    PubMed

    Satilmis, Tulin; Ugurlu, Faysal; Garip, Hasan; Sener, Bedrettin C; Goker, Kamil

    2011-06-01

    To compare the effects of sedation and general anesthesia for surgically assisted rapid palatal expansion (SARPE). This randomized prospective study included 30 patients who were scheduled for SARPE, and was performed between January 2008 and February 2010 in the Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Marmara University, Istanbul, Turkey. Patients were allocated into Group S (midazolam + fentanyl sedation; n=15) and Group G (general anesthesia; n=15). Hemodynamic parameters, duration of anesthesia, duration of surgery, recovery time, time to discharge, visual analogue scale (VAS) pain scores at 30 minutes (min), one hour (hr), 4 hours, 12 hours, and 24 hours, time to first analgesic consumption, total analgesic consumption, patient and surgeon satisfaction, nausea, and vomiting were recorded. Time to first analgesic consumption was significantly longer in Group S (p=0.008), and total analgesic consumption was significantly lower in Group S than in Group G (p=0.031). Patient satisfaction was statistically higher in Group S (p=0.035). At 30 min, one hr, and 12 hrs, VAS pain scores in Group S were statistically lower than those in Group G, and at 4 hrs and 24 hrs there was no statistical difference in VAS scores between the groups. The use of sedation for outpatient SARPE resulted in lower pain scores at discharge, lower analgesic consumption, and greater patient satisfaction.

  13. Surfing a spike wave down the ventral stream.

    PubMed

    VanRullen, Rufin; Thorpe, Simon J

    2002-10-01

    Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories, however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and spike relative timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to a visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i) the selectivity of the cortical neurons, (ii) lateral interactions, and (iii) top-down attentional influences from higher order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.

  14. Visualization of Coastal Data Through KML

    NASA Astrophysics Data System (ADS)

    Damsma, T.; Baart, F.; de Boer, G.; van Koningsveld, M.; Bruens, A.

    2009-12-01

    As a country that lies mostly below sea level, the Netherlands has a long history of coastal engineering and is world renowned for its leading role in Integrated Coastal Zone Management (ICZM). Within the framework of Building with Nature (a Dutch ICZM research program), an OPeNDAP server is used to host several datasets of the Dutch coast, among them bathymetric data, cross-shore profiles, and water level time series, some of which date back to the eighteenth century. The challenge with hosting this amount of data lies more in dissemination and accessibility than in the technical aspects (tracing, accessing, gathering, unifying and storing). With so much data in different sets, how can one easily know when and where data are available, and of what quality? Recent work using Google Earth as a visual front-end for this database has proven very encouraging. Taking full advantage of the four-dimensional (3D+time) visualization capabilities allows researchers, consultants and the general public to view, access and interact with the data. Within MATLAB, a set of generic tools has been developed for the easy creation of, among others, the products listed below (a small illustrative sketch of the kind of time-tagged KML these tools produce follows the list):

    • A high resolution, time animated, historic bathymetry of the entire Dutch coast.
    • 3D curvilinear computation grids.
    • A 3D contour plot of the Westerschelde Estuary.
    • Time animated wind and water flow fields, both with traditional quiver diagrams and arrows that move with the flow field.
    • Various overviews of markers containing direct web links to data and metadata on the OPeNDAP server.

    [Figure captions: Wind field (arrows) and water level elevation for model calculations of Katrina (animated over 14 days); coastal cross sections (with exaggerated height) and 2D positions of high and low water lines (animated over 40 years).]
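    As a rough illustration of the kind of time-tagged KML these tools emit for Google Earth (the actual toolbox is MATLAB-based; this Python sketch, including the record format and file name, is hypothetical):

        # Hypothetical sketch: write Placemarks with TimeSpans so Google Earth can
        # animate markers that link back to data/metadata on the OPeNDAP server.
        import xml.etree.ElementTree as ET

        KML_NS = "http://www.opengis.net/kml/2.2"

        def write_marker_kml(records, out_path="markers.kml"):
            """records: iterable of (name, lon, lat, begin_iso, end_iso, url)."""
            ET.register_namespace("", KML_NS)
            kml = ET.Element(f"{{{KML_NS}}}kml")
            doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
            for name, lon, lat, begin, end, url in records:
                pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
                ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
                ET.SubElement(pm, f"{{{KML_NS}}}description").text = url
                span = ET.SubElement(pm, f"{{{KML_NS}}}TimeSpan")
                ET.SubElement(span, f"{{{KML_NS}}}begin").text = begin
                ET.SubElement(span, f"{{{KML_NS}}}end").text = end
                pt = ET.SubElement(pm, f"{{{KML_NS}}}Point")
                ET.SubElement(pt, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
            ET.ElementTree(kml).write(out_path, xml_declaration=True, encoding="UTF-8")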

    • Visual imagery in autobiographical memory: The role of repeated retrieval in shifting perspective

      PubMed Central

      Butler, Andrew C.; Rice, Heather J.; Wooldridge, Cynthia L.; Rubin, David C.

      2016-01-01

      Recent memories are generally recalled from a first-person perspective whereas older memories are often recalled from a third-person perspective. We investigated how repeated retrieval affects the availability of visual information, and whether it could explain the observed shift in perspective with time. In Experiment 1, participants performed mini-events and nominated memories of recent autobiographical events in response to cue words. Next, they described their memory for each event and rated its phenomenological characteristics. Over the following three weeks, they repeatedly retrieved half of the mini-event and cue-word memories. No instructions were given about how to retrieve the memories. In Experiment 2, participants were asked to adopt either a first- or third-person perspective during retrieval. One month later, participants retrieved all of the memories and again provided phenomenology ratings. When first-person visual details from the event were repeatedly retrieved, this information was retained better and the shift in perspective was slowed. PMID:27064539

    • Visualizing Sound: Demonstrations to Teach Acoustic Concepts

      NASA Astrophysics Data System (ADS)

      Rennoll, Valerie

      Interference, a phenomenon in which two sound waves superpose to form a resultant wave of greater or lower amplitude, is a key concept when learning about the physics of sound waves. Typical interference demonstrations involve students listening for changes in sound level as they move throughout a room. Here, new tools are developed to teach this concept that provide a visual component, allowing individuals to see changes in sound level on a light display. This is accomplished using a microcontroller that analyzes sound levels collected by a microphone and displays the sound level in real-time on an LED strip. The light display is placed on a sliding rail between two speakers to show the interference occurring between two sound waves. When a long-exposure photograph is taken of the light display being slid from one end of the rail to the other, a wave of the interference pattern can be captured. By providing a visual component, these tools will help students and the general public to better understand interference, a key concept in acoustics.
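    The spatial pattern that the sliding light display reveals follows directly from superposition. Idealizing the setup as two equal-amplitude waves travelling in opposite directions between the speakers (an assumption made only for illustration),

        p(x,t) = A\sin(kx - \omega t) + A\sin(kx + \omega t) = 2A\sin(kx)\cos(\omega t),

    so the amplitude envelope is 2A|\sin(kx)|, with minima spaced half a wavelength apart. At 1 kHz, with c \approx 343 m/s, \lambda \approx 0.34 m, so the LED display should dim roughly every 17 cm along the rail.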

    • Visualizing Mobility of Public Transportation System.

      PubMed

      Zeng, Wei; Fu, Chi-Wing; Arisona, Stefan Müller; Erath, Alexander; Qu, Huamin

      2014-12-01

      Public transportation systems (PTSs) play an important role in modern cities, providing shared/massive transportation services that are essential for the general public. However, due to their increasing complexity, designing effective methods to visualize and explore PTS is highly challenging. Most existing techniques employ network visualization methods and focus on showing the network topology across stops while ignoring various mobility-related factors such as riding time, transfer time, waiting time, and round-the-clock patterns. This work aims to visualize and explore passenger mobility in a PTS with a family of analytical tasks based on inputs from transportation researchers. After exploring different design alternatives, we come up with an integrated solution with three visualization modules: isochrone map view for geographical information, isotime flow map view for effective temporal information comparison and manipulation, and OD-pair journey view for detailed visual analysis of mobility factors along routes between specific origin-destination pairs. The isotime flow map linearizes a flow map into a parallel isoline representation, maximizing the visualization of mobility information along the horizontal time axis while presenting clear and smooth pathways from origin to destinations. Moreover, we devise several interactive visual query methods for users to easily explore the dynamics of PTS mobility over space and time. Lastly, we also construct a PTS mobility model from millions of real passenger trajectories, and evaluate our visualization techniques with assorted case studies with the transportation researchers.
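    As a minimal sketch of the isochrone idea only (not the authors' system; the graph format and band thresholds are assumptions), travel times from an origin stop can be computed over a transit graph and binned into the bands that an isochrone map colours:

        # Hypothetical sketch: shortest travel times from an origin stop, then
        # grouped into isochrone bands (e.g. reachable within 10/20/30 minutes).
        import heapq

        def travel_times(graph, origin):
            """graph: {stop: [(neighbor, minutes), ...]}; returns minutes to each reachable stop."""
            dist = {origin: 0.0}
            heap = [(0.0, origin)]
            while heap:
                d, stop = heapq.heappop(heap)
                if d > dist.get(stop, float("inf")):
                    continue
                for nxt, minutes in graph.get(stop, []):
                    nd = d + minutes
                    if nd < dist.get(nxt, float("inf")):
                        dist[nxt] = nd
                        heapq.heappush(heap, (nd, nxt))
            return dist

        def isochrone_bands(dist, bands=(10, 20, 30)):
            """Group stops by the smallest travel-time band (in minutes) containing them."""
            grouped = {b: [] for b in bands}
            grouped["beyond"] = []
            for stop, minutes in dist.items():
                for b in bands:
                    if minutes <= b:
                        grouped[b].append(stop)
                        break
                else:
                    grouped["beyond"].append(stop)
            return grouped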

    • Do we need more famous fluid dynamicists?

      NASA Astrophysics Data System (ADS)

      Reckinger, Shanon; Brinkman, Bethany; Fenner, Raenita; London, Mara

      2015-11-01

    One of the main reasons students do not join the STEM fields is that they lack interest in technical topics. But do people (young students, the general public, or even our own engineering students) know what an engineer is and/or does? In this talk, results from a recent study on the perceptions of different professions will be presented. The study was designed based on ``draw-an-engineer'' and ``draw-a-scientist'' tests used with elementary school kids. The idea is to have participants visualize professionals (engineers, lawyers, and medical doctors were chosen for this study), and determine whether there are any patterns within different demographic groups. The demographics that were focused on include gender, race, age, college major, highest level of education, and profession. One of the main findings of this survey was that participants had the most difficult time visualizing an engineer compared to a lawyer or a medical doctor. Therefore, maybe we need more famous engineers (and fluid dynamicists)?

    • Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

      DOE PAGES

      Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...

      2017-08-29

    Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
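    The evaluation described above, scoring model saliency maps against human eye-tracking data, can be illustrated with a deliberately simple baseline (this is not the DVS model; the center-surround saliency and the AUC-style score below are generic assumptions):

        # Hypothetical sketch: a difference-of-Gaussians saliency map and a rough
        # AUC-style score of how well it predicts fixation locations.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def contrast_saliency(luminance, sigma_center=2, sigma_surround=16):
            """Center-surround saliency on a 2-D luminance array, normalized to [0, 1]."""
            img = luminance.astype(float)
            sal = np.abs(gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround))
            return (sal - sal.min()) / (np.ptp(sal) + 1e-12)

        def fixation_score(saliency, fixations, n_random=5000, seed=0):
            """Probability that a fixated pixel out-ranks a randomly chosen pixel (ties count half)."""
            rng = np.random.default_rng(seed)
            fix = np.array([saliency[y, x] for (y, x) in fixations])
            rand = saliency[rng.integers(0, saliency.shape[0], n_random),
                            rng.integers(0, saliency.shape[1], n_random)]
            greater = (fix[:, None] > rand[None, :]).mean()
            ties = (fix[:, None] == rand[None, :]).mean()
            return greater + 0.5 * ties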

    • Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

      DOE Office of Scientific and Technical Information (OSTI.GOV)

      Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.

    Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.

  1. Repetition blindness and illusory conjunctions: errors in binding visual types with visual tokens.

    PubMed

    Kanwisher, N

    1991-05-01

    Repetition blindness (Kanwisher, 1986, 1987) has been defined as the failure to detect or recall repetitions of words presented in rapid serial visual presentation (RSVP). The experiments presented here suggest that repetition blindness (RB) is a more general visual phenomenon, and examine its relationship to feature integration theory (Treisman & Gelade, 1980). Experiment 1 shows RB for letters distributed through space, time, or both. Experiment 2 demonstrates RB for repeated colors in RSVP lists. In Experiments 3 and 4, RB was found for repeated letters and colors in spatial arrays. Experiment 5 provides evidence that the mental representations of discrete objects (called "visual tokens" here) that are necessary to detect visual repetitions (Kanwisher, 1987) are the same as the "object files" (Kahneman & Treisman, 1984) in which visual features are conjoined. In Experiment 6, repetition blindness for the second occurrence of a repeated letter resulted only when the first occurrence was attended to. The overall results suggest that a general dissociation between types and tokens in visual information processing can account for both repetition blindness and illusory conjunctions.

  2. Multivariable manual control with simultaneous visual and auditory presentation of information. [for improved compensatory tracking performance of human operator

    NASA Technical Reports Server (NTRS)

    Uhlemann, H.; Geiser, G.

    1975-01-01

    Multivariable manual compensatory tracking experiments were carried out in order to determine typical strategies of the human operator and the conditions under which his performance improves when one of the visual displays of the tracking errors is supplemented by an auditory feedback. Because the tracking error of the exclusively visually displayed system was found to decrease, while that of the auditorily supported system generally did not, it was concluded that the auditory feedback unloads the operator's visual system, which can then concentrate on the remaining, exclusively visual displays.

  3. Young children's coding and storage of visual and verbal material.

    PubMed

    Perlmutter, M; Myers, N A

    1975-03-01

    36 preschool children (mean age 4.2 years) were each tested on 3 recognition memory lists differing in test mode (visual only, verbal only, combined visual-verbal). For one-third of the children, original list presentation was visual only, for another third, presentation was verbal only, and the final third received combined visual-verbal presentation. The subjects generally performed at a high level of correct responding. Verbal-only presentation resulted in less correct recognition than did either visual-only or combined visual-verbal presentation. However, because performances under both visual-only and combined visual-verbal presentation were statistically comparable, and a high level of spontaneous labeling was observed when items were presented only visually, a dual-processing conceptualization of memory in 4-year-olds was suggested.

  4. Longitudinal decrease in blood oxygenation level dependent response in cerebral amyloid angiopathy.

    PubMed

    Switzer, Aaron R; McCreary, Cheryl; Batool, Saima; Stafford, Randall B; Frayne, Richard; Goodyear, Bradley G; Smith, Eric E

    2016-01-01

    Lower blood oxygenation level dependent (BOLD) signal changes in response to a visual stimulus in functional magnetic resonance imaging (fMRI) have been observed in cross-sectional studies of cerebral amyloid angiopathy (CAA), and are presumed to reflect impaired vascular reactivity. We used fMRI to detect a longitudinal change in BOLD responses to a visual stimulus in CAA, and to determine any correlations between these changes and other established biomarkers of CAA progression. Data were acquired from 22 patients diagnosed with probable CAA (using the Boston Criteria) and 16 healthy controls at baseline and one year. BOLD data were generated from the 200 most active voxels of the primary visual cortex during the fMRI visual stimulus (passively viewing an alternating checkerboard pattern). In general, BOLD amplitudes were lower at one year compared to baseline in patients with CAA (p = 0.01) but were unchanged in controls (p = 0.18). The longitudinal difference in BOLD amplitudes was significantly lower in CAA compared to controls (p < 0.001). White matter hyperintensity (WMH) volumes and number of cerebral microbleeds, both presumed to reflect CAA-mediated vascular injury, increased over time in CAA (p = 0.007 and p = 0.001, respectively). Longitudinal increases in WMH (rs = 0.04, p = 0.86) or cerebral microbleeds (rs = -0.18, p = 0.45) were not associated with the longitudinal decrease in BOLD amplitudes.

  5. Iowa Flood Information System: Towards Integrated Data Management, Analysis and Visualization

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.; Goska, R.; Mantilla, R.; Weber, L. J.; Young, N.

    2012-04-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts (both short-term and seasonal), flood-related data, information and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 500 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage. This presentation provides an overview and live demonstration of the tools and interfaces in the IFIS developed to date to provide a platform for one-stop access to flood-related data, visualizations, flood conditions, and forecasts.

  6. Reverse phase protein arrays in signaling pathways: a data integration perspective

    PubMed Central

    Creighton, Chad J; Huang, Shixia

    2015-01-01

    The reverse phase protein array (RPPA) data platform provides expression data for a prespecified set of proteins, across a set of tissue or cell line samples. Being able to measure either total proteins or posttranslationally modified proteins, even ones present at lower abundances, RPPA represents an excellent way to capture the state of key signaling transduction pathways in normal or diseased cells. RPPA data can be combined with those of other molecular profiling platforms, in order to obtain a more complete molecular picture of the cell. This review offers perspective on the use of RPPA as a component of integrative molecular analysis, using recent case examples from The Cancer Genome Atlas consortium, showing how RPPA may provide additional insight into cancer beyond what other data platforms may provide. There also exists a clear need for effective visualization approaches to RPPA-based proteomic results; this was highlighted by the recent challenge, put forth by the HPN-DREAM consortium, to develop visualization methods for a highly complex RPPA dataset involving many cancer cell lines, stimuli, and inhibitors applied over a time course. In this review, we put forth a number of general guidelines for effective visualization of complex molecular datasets, namely, showing the data, ordering data elements deliberately, enabling generalization, focusing on relevant specifics, and putting things into context. We give examples of how these principles can be utilized in visualizing the intrinsic subtypes of breast cancer and in meaningfully displaying the entire HPN-DREAM RPPA dataset within a single page. PMID:26185419

  7. Terminology model discovery using natural language processing and visualization techniques.

    PubMed

    Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol

    2006-12-01

    Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.

  8. Labor Force Participation Rates among Working-Age Individuals with Visual Impairments

    ERIC Educational Resources Information Center

    Kelly, Stacy M.

    2013-01-01

    The present study analyzes four consecutive years of monthly labor force participation rates reported by the Current Population Survey that included nationally representative samples of the general U.S. population and nationally representative samples of the U.S. population with specifically identified disabilities. Visual impairment is one of the…

  9. From UNIX to PC via X-Windows: Molecular Modeling for the General Chemistry Lab

    NASA Astrophysics Data System (ADS)

    Pavia, Donald; Wicholas, Mark

    1997-04-01

    The emphasis of molecular modeling in the undergraduate curriculum has generally been directed toward sophomore organic and higher-level chemistry instruction, especially when UNIX systems are used. When developing plans for incorporating molecular modeling into the curriculum, we decided to also include it in our first-year general chemistry course. Modeling would serve primarily as a visualization tool to augment the general chemistry coverage of bonding and structure. Our first thoughts were rather naive: we would set up a number of workstations and somehow get our general chemistry students, as many as 480 in one academic quarter, directly onto these machines at some time in a 1-2 week period during their weekly 3-hour lab. Further exploration of our options revealed that a better approach was to use PCs as dummy terminals for UNIX workstations. Described below are the hardware and software for this venture and the modeling experiment done by our students in general chemistry.

  10. The Processing of Visual and Phonological Configurations of Chinese One- and Two-Character Words in a Priming Task of Semantic Categorization.

    PubMed

    Ma, Bosen; Wang, Xiaoyun; Li, Degao

    2015-01-01

    To separate the contribution of phonological from that of visual-orthographic information in the recognition of a Chinese word that is composed of one or two Chinese characters, we conducted two experiments in a priming task of semantic categorization (PTSC), in which length (one- or two-character words), relation, prime (related or unrelated prime-target pairs), and SOA (47, 87, or 187 ms) were manipulated. The prime was similar to the target in meaning or in visual configuration in Experiment A and in meaning or in pronunciation in Experiment B. The results indicate that the two-character words were similar to the one-character words but were less demanding of cognitive resources than the one-character words in the processing of phonological, visual-orthographic, and semantic information. The phonological primes had a facilitating effect at the SOA of 47 ms but an inhibitory effect at the SOA of 187 ms on the participants' reaction times; the visual-orthographic primes only had an inhibitory influence on the participants' reaction times at the SOA of 187 ms. The visual configuration of a Chinese word of one or two Chinese characters has its own contribution in helping retrieve the word's meanings; similarly, the phonological configuration of a one- or two-character word plays its own role in triggering activations of the word's semantic representations.

  11. Driving with indirect viewing sensors: understanding the visual perception issues

    NASA Astrophysics Data System (ADS)

    O'Kane, Barbara L.

    1996-05-01

    Visual perception is one of the most important elements of driving in that it enables the driver to understand and react appropriately to the situation along the path of the vehicle. The visual perception of the driver is enabled to the greatest extent while driving during the day. Noticeable decrements in visual acuity, range of vision, depth of field and color perception occur at night and under certain weather conditions. Indirect viewing sensors, utilizing various technologies and spectral bands, may assist the driver's normal mode of driving. Critical applications in the military as well as other official activities may require driving at night without headlights. In these latter cases, it is critical that the device, being the only source of scene information, provide the required scene cues needed for driving on, and oftentimes off, road. One can speculate about the scene information that a driver needs, such as road edges, terrain orientation, and the detection of people and objects in or near the path of the vehicle. But the perceptual qualities of the scene that give rise to these perceptions are little known and thus not quantified for the evaluation of indirect viewing devices. This paper discusses driving with headlights and compares the scene content with that provided by a thermal system in the 8-12 micrometer spectral band, which may be used for driving at some time. The benefits and advantages of each are discussed, as well as their limitations in providing information useful for the driver, who must make rapid and critical decisions based upon the scene content available. General recommendations are made for potential avenues of development to overcome some of these limitations.

  12. Left Gastric Vein Visualization with Hepatopetal Flow Information in Healthy Subjects Using Non-Contrast-Enhanced Magnetic Resonance Angiography with Balanced Steady-State Free-Precession Sequence and Time-Spatial Labeling Inversion Pulse.

    PubMed

    Furuta, Akihiro; Isoda, Hiroyoshi; Ohno, Tsuyoshi; Ono, Ayako; Yamashita, Rikiya; Arizono, Shigeki; Kido, Aki; Sakashita, Naotaka; Togashi, Kaori

    2018-01-01

    To selectively visualize the left gastric vein (LGV) with hepatopetal flow information by non-contrast-enhanced magnetic resonance angiography, under the hypothesis that a change in the LGV flow direction can predict the development of esophageal varices, and to optimize the acquisition protocol in healthy subjects. Respiratory-gated three-dimensional balanced steady-state free-precession scans were conducted on 31 healthy subjects using two methods (A and B) for visualizing the LGV with hepatopetal flow. In method A, two time-spatial labeling inversion pulses (Time-SLIP) were placed on the whole abdomen and on the area from the gastric fornix to the upper body, excluding the LGV area. In method B, a nonselective inversion recovery pulse was used and one Time-SLIP was placed on the esophagogastric junction. The detectability and consistency of the LGV were evaluated using the two methods and ultrasonography (US). The LGV was detected by method A, method B, and US in 30 (97%), 24 (77%), and 23 (74%) subjects, respectively. LGV flow by US was hepatopetal in 22 subjects and stagnant in one subject. All hepatopetal LGVs by US coincided with the visualized vessels in both methods. The one subject with a non-visualized LGV in method A showed stagnant LGV flow by US. Hepatopetal LGV could be selectively visualized by method A in healthy subjects.

  13. Entrainment to an auditory signal: Is attention involved?

    PubMed

    Kunert, Richard; Jongman, Suzanne R

    2017-01-01

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict rhythm entrainment to also influence memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Visual memory, the long and the short of it: A review of visual working memory and long-term memory.

    PubMed

    Schurgin, Mark W

    2018-04-23

    The majority of research on visual memory has taken a compartmentalized approach, focusing exclusively on memory over shorter or longer durations, that is, visual working memory (VWM) or visual episodic long-term memory (VLTM), respectively. This tutorial provides a review spanning the two areas, with readers in mind who may only be familiar with one or the other. The review is divided into six sections. It starts by distinguishing VWM and VLTM from one another, in terms of how they are generally defined and their relative functions. This is followed by a review of the major theories and methods guiding VLTM and VWM research. The final section is devoted toward identifying points of overlap and distinction across the two literatures to provide a synthesis that will inform future research in both fields. By more intimately relating methods and theories from VWM and VLTM to one another, new advances can be made that may shed light on the kinds of representational content and structure supporting human visual memory.

  15. Musician Map: visualizing music collaborations over time

    NASA Astrophysics Data System (ADS)

    Yim, Ji-Dong; Shaw, Chris D.; Bartram, Lyn

    2009-01-01

    In this paper we introduce Musician Map, a web-based interactive tool for visualizing relationships among popular musicians who have released recordings since 1950. Musician Map accepts search terms from the user, and in turn uses these terms to retrieve data from MusicBrainz.org and AudioScrobbler.net, and visualizes the results. Musician Map visualizes relationships of various kinds between music groups and individual musicians, such as band membership, musical collaborations, and linkage to other artists that are generally regarded as being similar in musical style. These relationships are plotted between artists using a new timeline-based visualization where a node in a traditional node-link diagram has been transformed into a Timeline-Node, which allows the visualization of an evolving entity over time, such as the membership in a band. This allows the user to pursue social trend queries such as "Do Hip-Hop artists collaborate differently than Rock artists".

  16. Timing Is Everything: One Teacher's Exploration of the Best Time to Use Visual Media in a Science Unit

    ERIC Educational Resources Information Center

    Drury, Debra

    2006-01-01

    Kids today are growing up with televisions, movies, videos and DVDs, so it's logical to assume that this type of media could be motivating and used to great effect in the classroom. But at what point should film and other visual media be used? Are there times in the inquiry process when showing a film or incorporating other visual media is more…

  17. Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Alan E.; Crow, Vernon L.; Payne, Deborah A.

    Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a data visualization method includes accessing a plurality of initial documents at a first moment in time, first processing the initial documents providing processed initial documents, first identifying a plurality of first associations of the initial documents using the processed initial documents, generating a first visualization depicting the first associations, accessing a plurality of additional documents at a second moment in time after the first moment in time, second processing the additional documents providing processed additional documents, second identifying a plurality of second associations of the additional documents and at least some of the initial documents, wherein the second identifying comprises identifying using the processed initial documents and the processed additional documents, and generating a second visualization depicting the second associations.
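    A hypothetical Python sketch of the general idea (not the described method itself; the tokenizer, similarity measure, and threshold are illustrative assumptions): pairwise associations among the initial documents are computed and visualized at the first moment in time, then recomputed once additional documents arrive at the second moment in time.

        # Hypothetical sketch: term-vector associations among documents, recomputed
        # after additional documents arrive at a later moment in time.
        from collections import Counter
        from itertools import combinations
        import math

        def term_vector(text):
            return Counter(text.lower().split())

        def cosine(a, b):
            dot = sum(a[t] * b[t] for t in a if t in b)
            norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        def associations(docs, threshold=0.2):
            """Return (doc_i, doc_j, similarity) edges for a {doc_id: text} mapping."""
            vecs = {i: term_vector(t) for i, t in docs.items()}
            edges = []
            for i, j in combinations(sorted(vecs), 2):
                sim = cosine(vecs[i], vecs[j])
                if sim >= threshold:
                    edges.append((i, j, sim))
            return edges

        # First visualization: associations among the initial documents.
        initial = {"d1": "flood map of the river", "d2": "river gauge data", "d3": "museum art tour"}
        first_edges = associations(initial)
        # Second visualization: associations recomputed with the additional documents.
        later = dict(initial, d4="flood gauge sensor data")
        second_edges = associations(later)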

  18. Visual perception and interception of falling objects: a review of evidence for an internal model of gravity.

    PubMed

    Zago, Myrka; Lacquaniti, Francesco

    2005-09-01

    Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. However, there are limitations in the visual system that raise questions about the general validity of these theories. Most notably, vision is poorly sensitive to arbitrary accelerations. How then does the brain deal with the motion of objects accelerated by Earth's gravity? Here we review evidence in favor of the view that the brain makes the best estimate about target motion based on visually measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from the expected kinetics in the Earth's gravitational field.
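    A worked example (not taken from the review itself) shows why a purely first-order, vision-only estimate is biased for gravitational motion. With current gap d and approach speed v, the first-order time-to-contact is \tau = d/v, whereas a target accelerating under gravity g actually arrives after

        t = \frac{-v + \sqrt{v^{2} + 2gd}}{g}   (from d = vt + \tfrac{1}{2}gt^{2}).

    For d = 2 m, v = 2 m/s and g = 9.8 m/s^2, \tau = 1.0 s but t \approx 0.47 s, so an interceptor relying on the first-order estimate alone would respond far too late; folding an a priori gravity term into the prediction removes this bias.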

  19. The Eye Phone Study: reliability and accuracy of assessing Snellen visual acuity using smartphone technology

    PubMed Central

    Perera, C; Chakrabarti, R; Islam, F M A; Crowston, J

    2015-01-01

    Purpose Smartphone-based Snellen visual acuity charts have become popular; however, their accuracy has not been established. This study aimed to evaluate the equivalence of a smartphone-based visual acuity chart with a standard 6-m Snellen visual acuity (6SVA) chart. Methods First, a review of available Snellen chart applications on iPhone was performed to determine the most accurate application based on optotype size. Subsequently, a prospective comparative study was performed by measuring conventional 6SVA and then iPhone visual acuity using the 'Snellen' application on an Apple iPhone 4. Results Eleven applications were identified, with the accuracy of optotype size ranging from 4.4–39.9%. Eighty-eight patients from general medical and surgical wards in a tertiary hospital took part in the second part of the study. The mean difference in logMAR visual acuity between the two charts was 0.02 logMAR (95% limit of agreement −0.332, 0.372 logMAR). The largest mean difference in logMAR acuity was noted in the subgroup of patients with 6SVA worse than 6/18 (n=5), who had a mean difference of two Snellen visual acuity lines between the charts (0.276 logMAR). Conclusion We did not identify a Snellen visual acuity app at the time of the study that could predict a patient's standard Snellen visual acuity within one line. There was considerable variability in the optotype accuracy of the apps. Further validation is required for the assessment of acuity in patients with severe vision impairment. PMID:25931170
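    For reference, the logMAR values reported above relate to Snellen notation through a standard conversion (general optometric convention, not specific to this study):

        \mathrm{logMAR} = \log_{10}\left(\frac{\text{Snellen denominator}}{\text{Snellen numerator}}\right),

    so 6/6 corresponds to logMAR 0.00 and 6/18 to \log_{10}(3) \approx 0.48. With the lines of a logMAR-progression chart spaced 0.1 apart, the reported 0.276 logMAR difference amounts to roughly two to three lines.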

  20. Leveraging Earth and Planetary Datasets to Support Student Investigations in an Introductory Geoscience Course

    NASA Astrophysics Data System (ADS)

    Ryan, Jeffrey; De Paor, Declan

    2016-04-01

    Engaging undergraduates in discovery-based research during their first two years of college was a listed priority in the 2012 Report of the USA President's Council of Advisors on Science and Technology (PCAST), and has been the focus of events and publications sponsored by the National Academies (NAS, 2015). Challenges faced in moving undergraduate courses and curricula in this direction are the paired questions of how to effectively provide such experiences to large numbers of students, and how to do so in ways that are cost- and time-efficient for institutions and instructional faculty. In the geosciences, free access to a growing number of global earth and planetary data resources and associated visualization tools permits one to build into introductory-level courses straightforward data interrogation and analysis activities that provide students with valuable experience in the compilation and critical investigation of earth and planetary data. Google Earth provides global Earth and planetary imagery databases that span large ranges in resolution and in time, permitting easy examination of earth surface features and surface features on Mars or the Moon. As well, "community" data sources (i.e., Gigapan photographic collections and 3D visualizations of geologic features, as are supported by the NSF GEODE project) allow for intensive interrogation of specific geologic phenomena. Google Earth Engine provides access to rich satellite-based earth observation data, supporting studies of weather and related student efforts. GeoMapApp, the freely available visualization tool of the Interdisciplinary Earth Data Alliance (IEDA), permits examination of the seafloor and the integration of a range of third-party data. The "Earth" meteorological website (earth.nullschool.net) provides near real-time visualization of global weather and oceanic conditions, which in combination with weather option data from Google Earth permits a deeper interrogation of atmospheric conditions. In combination, these freely accessible data resources permit one to transform general-audience geoscience courses into extended investigations, in which students discover key information about the workings of our planet.

  1. Research progress on Drosophila visual cognition in China.

    PubMed

    Guo, AiKe; Zhang, Ke; Peng, YueQin; Xi, Wang

    2010-03-01

    Visual cognition, as one of the fundamental aspects of cognitive neuroscience, is generally associated with high-order brain functions in animals and humans. Drosophila, as a model organism, shares certain features of visual cognition in common with mammals at the genetic, molecular, cellular, and even higher behavioral levels. From learning and memory to decision making, Drosophila covers a broad spectrum of higher cognitive behaviors beyond what we had expected. Armed with powerful tools of genetic manipulation in Drosophila, an increasing number of studies have been conducted in order to elucidate the neural circuit mechanisms underlying these cognitive behaviors from a genes-brain-behavior perspective. The goal of this review is to integrate the most important studies on visual cognition in Drosophila carried out in mainland China during the last decade into a body of knowledge encompassing both the basic neural operations and circuitry of higher brain function in Drosophila. Here, we consider a series of higher cognitive behaviors beyond learning and memory, such as visual pattern recognition, feature and context generalization, different feature memory traces, salience-based decision making, attention-like behavior, and cross-modal learning and memory. We discuss the possible general gain-gating mechanism implemented by the dopamine-mushroom body circuit in the fly's visual cognition. We hope that our brief review of this aspect will inspire further study of visual cognition in flies, or even beyond.

  2. Impaired Visual Attention in Children with Dyslexia.

    ERIC Educational Resources Information Center

    Heiervang, Einar; Hugdahl, Kenneth

    2003-01-01

    A cue-target visual attention task was administered to 25 children (ages 10-12) with dyslexia. Results showed a general pattern of slower responses in the children with dyslexia compared to controls. Subjects also had longer reaction times in the short and long cue-target interval conditions (covert and overt shift of attention). (Contains…

  3. Real-time vision, tactile cues, and visual form agnosia: removing haptic feedback from a “natural” grasping task induces pantomime-like grasps

    PubMed Central

    Whitwell, Robert L.; Ganel, Tzvi; Byrne, Caitlin M.; Goodale, Melvyn A.

    2015-01-01

    Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. “Natural” prehensile acts are directed at the goal object and are executed using real-time vision. Typically, they also entail the use of tactile, proprioceptive, and kinesthetic sources of haptic feedback about the object (“haptics-based object information”) once contact with the object has been made. Natural and simulated (pantomimed) forms of prehension are thought to recruit different cortical structures: patient DF, who has visual form agnosia following bilateral damage to her temporal-occipital cortex, loses her ability to scale her grasp aperture to the size of targets (“grip scaling”) when her prehensile movements are based on a memory of a target previewed 2 s before the cue to respond or when her grasps are directed towards a visible virtual target but she is denied haptics-based information about the target. In the first of two experiments, we show that when DF performs real-time pantomimed grasps towards a 7.5 cm displaced imagined copy of a visible object such that her fingers make contact with the surface of the table, her grip scaling is in fact quite normal. This finding suggests that real-time vision and terminal tactile feedback are sufficient to preserve DF’s grip scaling slopes. In the second experiment, we examined an “unnatural” grasping task variant in which a tangible target (along with any proxy such as the surface of the table) is denied (i.e., no terminal tactile feedback). To do this, we used a mirror-apparatus to present virtual targets with and without a spatially coincident copy for the participants to grasp. We compared the grasp kinematics from trials with and without terminal tactile feedback to a real-time-pantomimed grasping task (one without tactile feedback) in which participants visualized a copy of the visible target as instructed in our laboratory in the past. Compared to natural grasps, removing tactile feedback increased RT, slowed the velocity of the reach, reduced in-flight grip aperture, increased the slopes relating grip aperture to target width, and reduced the final grip aperture (FGA). All of these effects were also observed in the real time-pantomime grasping task. These effects seem to be independent of those that arise from using the mirror in general as we also compared grasps directed towards virtual targets to those directed at real ones viewed directly through a pane of glass. These comparisons showed that the grasps directed at virtual targets increased grip aperture, slowed the velocity of the reach, and reduced the slopes relating grip aperture to the widths of the target. Thus, using the mirror has real consequences on grasp kinematics, reflecting the importance of task-relevant sources of online visual information for the programming and updating of natural prehensile movements. Taken together, these results provide compelling support for the view that removing terminal tactile feedback, even when the grasps are target-directed, induces a switch from real-time visual control towards one that depends more on visual perception and cognitive supervision. Providing terminal tactile feedback and real-time visual information can evidently keep the dorsal visuomotor system operating normally for prehensile acts. PMID:25999834

  4. Real-time vision, tactile cues, and visual form agnosia: removing haptic feedback from a "natural" grasping task induces pantomime-like grasps.

    PubMed

    Whitwell, Robert L; Ganel, Tzvi; Byrne, Caitlin M; Goodale, Melvyn A

    2015-01-01

    Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. "Natural" prehensile acts are directed at the goal object and are executed using real-time vision. Typically, they also entail the use of tactile, proprioceptive, and kinesthetic sources of haptic feedback about the object ("haptics-based object information") once contact with the object has been made. Natural and simulated (pantomimed) forms of prehension are thought to recruit different cortical structures: patient DF, who has visual form agnosia following bilateral damage to her temporal-occipital cortex, loses her ability to scale her grasp aperture to the size of targets ("grip scaling") when her prehensile movements are based on a memory of a target previewed 2 s before the cue to respond or when her grasps are directed towards a visible virtual target but she is denied haptics-based information about the target. In the first of two experiments, we show that when DF performs real-time pantomimed grasps towards a 7.5 cm displaced imagined copy of a visible object such that her fingers make contact with the surface of the table, her grip scaling is in fact quite normal. This finding suggests that real-time vision and terminal tactile feedback are sufficient to preserve DF's grip scaling slopes. In the second experiment, we examined an "unnatural" grasping task variant in which a tangible target (along with any proxy such as the surface of the table) is denied (i.e., no terminal tactile feedback). To do this, we used a mirror-apparatus to present virtual targets with and without a spatially coincident copy for the participants to grasp. We compared the grasp kinematics from trials with and without terminal tactile feedback to a real-time-pantomimed grasping task (one without tactile feedback) in which participants visualized a copy of the visible target as instructed in our laboratory in the past. Compared to natural grasps, removing tactile feedback increased RT, slowed the velocity of the reach, reduced in-flight grip aperture, increased the slopes relating grip aperture to target width, and reduced the final grip aperture (FGA). All of these effects were also observed in the real time-pantomime grasping task. These effects seem to be independent of those that arise from using the mirror in general as we also compared grasps directed towards virtual targets to those directed at real ones viewed directly through a pane of glass. These comparisons showed that the grasps directed at virtual targets increased grip aperture, slowed the velocity of the reach, and reduced the slopes relating grip aperture to the widths of the target. Thus, using the mirror has real consequences on grasp kinematics, reflecting the importance of task-relevant sources of online visual information for the programming and updating of natural prehensile movements. Taken together, these results provide compelling support for the view that removing terminal tactile feedback, even when the grasps are target-directed, induces a switch from real-time visual control towards one that depends more on visual perception and cognitive supervision. Providing terminal tactile feedback and real-time visual information can evidently keep the dorsal visuomotor system operating normally for prehensile acts.

  5. Spatial Sequences, but Not Verbal Sequences, Are Vulnerable to General Interference during Retention in Working Memory

    ERIC Educational Resources Information Center

    Morey, Candice C.; Miron, Monica D.

    2016-01-01

    Among models of working memory, there is not yet a consensus about how to describe functions specific to storing verbal or visual-spatial memories. We presented aural-verbal and visual-spatial lists simultaneously and sometimes cued one type of information after presentation, comparing accuracy in conditions with and without informative…

  6. Development and Exchange of Instructional Resources in Water Quality Control Programs, III: Selecting Audio-Visual Equipment.

    ERIC Educational Resources Information Center

    Moon, Donald K.

    This document is one in a series of reports which reviews instructional materials and equipment and offers suggestions about how to select equipment. Topics discussed include: (1) the general criteria for audio-visual equipment selection such as performance, safety, comparability, sturdiness and repairability; and (2) specific equipment criteria…

  7. Time-Sharing-Based Synchronization and Performance Evaluation of Color-Independent Visual-MIMO Communication.

    PubMed

    Kwon, Tae-Ho; Kim, Jai-Eun; Kim, Ki-Doo

    2018-05-14

    In the field of communication, synchronization is always an important issue. The communication between a light-emitting diode (LED) array (LEA) and a camera is known as visual multiple-input multiple-output (MIMO), for which the data transmitter and receiver must be synchronized for seamless communication. In visual-MIMO, LEDs generally have a faster data rate than the camera. Hence, we propose an effective time-sharing-based synchronization technique whose color-independent characteristics are the key to overcoming this synchronization problem in visual-MIMO communication. We also evaluated the performance of our synchronization technique by varying the distance between the LEA and the camera. A graphical analysis is also presented to compare the symbol error rate (SER) at different distances.

  8. A comparative study on visual choice reaction time for different colors in females.

    PubMed

    Balakrishnan, Grrishma; Uppinakudru, Gurunandan; Girwar Singh, Gaur; Bangera, Shobith; Dutt Raghavendra, Aswini; Thangavel, Dinesh

    2014-01-01

    Reaction time is one of the important methods for studying a person's central information-processing speed and coordinated peripheral movement response. Visual choice reaction time is a type of reaction time that is particularly important for drivers, pilots, security guards, and so forth. Previous studies focused mainly on simple reaction time, and there are very few studies on visual choice reaction time. The aim of our study was to compare the visual choice reaction times for red, green, and yellow in 60 healthy undergraduate female volunteers. After adequate practice, visual choice reaction time was recorded for red, green, and yellow using a reaction time machine (RTM 608, Medicaid, Chandigarh). Repeated-measures ANOVA and Bonferroni multiple comparisons were used for analysis, and P < 0.05 was considered statistically significant. The results showed that visual choice reaction times for both red and green were significantly shorter than for yellow (P < 0.0001 and P = 0.0002, respectively). This could be because the mental processing time for yellow is longer than that for red and green.
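
    For readers who want to run a comparable analysis, here is a minimal sketch of a repeated-measures ANOVA with Bonferroni-corrected pairwise comparisons. The data file, column names, and layout are hypothetical; pandas, scipy, and statsmodels are assumed.

      # Sketch of a repeated-measures ANOVA with Bonferroni-corrected paired
      # comparisons. Hypothetical layout: one row per (subject, color) with the
      # mean choice reaction time in milliseconds.
      from itertools import combinations

      import pandas as pd
      from scipy import stats
      from statsmodels.stats.anova import AnovaRM

      # Expected columns: 'subject', 'color' (red/green/yellow), 'rt_ms'.
      df = pd.read_csv("choice_rt.csv")  # hypothetical file

      anova = AnovaRM(df, depvar="rt_ms", subject="subject", within=["color"]).fit()
      print(anova)

      # Bonferroni-corrected paired t-tests between colors.
      colors = list(df["color"].unique())
      n_pairs = len(list(combinations(colors, 2)))
      alpha = 0.05 / n_pairs
      for c1, c2 in combinations(colors, 2):
          a = df[df["color"] == c1].sort_values("subject")["rt_ms"].to_numpy()
          b = df[df["color"] == c2].sort_values("subject")["rt_ms"].to_numpy()
          t, p = stats.ttest_rel(a, b)
          print(f"{c1} vs {c2}: t={t:.2f}, p={p:.4f}, significant={p < alpha}")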

  9. Stimulus-related independent component and voxel-wise analysis of human brain activity during free viewing of a feature film.

    PubMed

    Lahnakoski, Juha M; Salmi, Juha; Jääskeläinen, Iiro P; Lampinen, Jouko; Glerean, Enrico; Tikka, Pia; Sams, Mikko

    2012-01-01

    Understanding how the brain processes stimuli in a rich natural environment is a fundamental goal of neuroscience. Here, we showed a feature film to 10 healthy volunteers during functional magnetic resonance imaging (fMRI) of hemodynamic brain activity. We then annotated auditory and visual features of the motion picture to inform analysis of the hemodynamic data. The annotations were fitted to both voxel-wise data and brain network time courses extracted by independent component analysis (ICA). Auditory annotations correlated with two independent components (ICs) disclosing two functional networks, one responding to a variety of auditory stimulation and another responding preferentially to speech, with parts of the network also responding to non-verbal communication. Visual feature annotations correlated with four ICs delineating visual areas according to their sensitivity to different visual stimulus features. In comparison, a separate voxel-wise general linear model (GLM)-based analysis disclosed brain areas preferentially responding to sound energy, speech, music, visual contrast edges, body motion, and hand motion, which largely overlapped with the results revealed by ICA. Differences between the results of the IC- and voxel-based analyses demonstrate that thorough analysis of voxel time courses is important for understanding the activity of specific sub-areas of the functional networks, while ICA is a valuable tool for revealing novel information about functional connectivity that need not be explained by the predefined model. Our results encourage the use of naturalistic stimuli and tasks in cognitive neuroimaging to study how the brain processes stimuli in rich natural environments.

  10. Stimulus-Related Independent Component and Voxel-Wise Analysis of Human Brain Activity during Free Viewing of a Feature Film

    PubMed Central

    Lahnakoski, Juha M.; Salmi, Juha; Jääskeläinen, Iiro P.; Lampinen, Jouko; Glerean, Enrico; Tikka, Pia; Sams, Mikko

    2012-01-01

    Understanding how the brain processes stimuli in a rich natural environment is a fundamental goal of neuroscience. Here, we showed a feature film to 10 healthy volunteers during functional magnetic resonance imaging (fMRI) of hemodynamic brain activity. We then annotated auditory and visual features of the motion picture to inform analysis of the hemodynamic data. The annotations were fitted to both voxel-wise data and brain network time courses extracted by independent component analysis (ICA). Auditory annotations correlated with two independent components (ICs) disclosing two functional networks, one responding to a variety of auditory stimulation and another responding preferentially to speech, with parts of the network also responding to non-verbal communication. Visual feature annotations correlated with four ICs delineating visual areas according to their sensitivity to different visual stimulus features. In comparison, a separate voxel-wise general linear model (GLM)-based analysis disclosed brain areas preferentially responding to sound energy, speech, music, visual contrast edges, body motion, and hand motion, which largely overlapped with the results revealed by ICA. Differences between the results of the IC- and voxel-based analyses demonstrate that thorough analysis of voxel time courses is important for understanding the activity of specific sub-areas of the functional networks, while ICA is a valuable tool for revealing novel information about functional connectivity that need not be explained by the predefined model. Our results encourage the use of naturalistic stimuli and tasks in cognitive neuroimaging to study how the brain processes stimuli in rich natural environments. PMID:22496909
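
    As a rough illustration of the annotation-fitting idea described above, the sketch below correlates one stimulus-feature annotation with ICA network time courses after convolving the annotation with a simplified canonical HRF. The file names, TR, and array shapes are hypothetical, and this is a minimal stand-in rather than the authors' actual pipeline.

      # Sketch: correlate a stimulus-feature annotation (e.g. "speech present")
      # with ICA time courses after convolution with a simple double-gamma HRF.
      import numpy as np
      from scipy.stats import gamma, pearsonr

      TR = 2.0  # repetition time in seconds (assumed)

      def canonical_hrf(tr, duration=32.0):
          """Simplified double-gamma HRF sampled at the TR."""
          t = np.arange(0, duration, tr)
          return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

      # Hypothetical inputs: one value per fMRI volume.
      annotation = np.loadtxt("speech_annotation.txt")        # shape (n_volumes,)
      ic_timecourses = np.loadtxt("ica_timecourses.txt")      # shape (n_volumes, n_ics)

      regressor = np.convolve(annotation, canonical_hrf(TR))[: len(annotation)]

      for ic in range(ic_timecourses.shape[1]):
          r, p = pearsonr(regressor, ic_timecourses[:, ic])
          print(f"IC {ic:02d}: r = {r:+.3f}, p = {p:.3g}")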

  11. Spatial Visualization in Introductory Geology Courses

    NASA Astrophysics Data System (ADS)

    Reynolds, S. J.

    2004-12-01

    Visualization is critical to solving most geologic problems, which involve events and processes across a broad range of space and time. Accordingly, spatial visualization is an essential part of undergraduate geology courses. In such courses, students learn to visualize three-dimensional topography from two-dimensional contour maps, to observe landscapes and extract clues about how that landscape formed, and to imagine the three-dimensional geometries of geologic structures and how these are expressed on the Earth's surface or on geologic maps. From such data, students reconstruct the geologic history of areas, trying to visualize the sequence of ancient events that formed a landscape. To understand the role of visualization in student learning, we developed numerous interactive QuickTime Virtual Reality animations to teach students the most important visualization skills and approaches. For topography, students can spin and tilt contour-draped, shaded-relief terrains, flood virtual landscapes with water, and slice into terrains to understand profiles. To explore 3D geometries of geologic structures, they interact with virtual blocks that can be spun, sliced into, faulted, and made partially transparent to reveal internal structures. They can tilt planes to see how they interact with topography, and spin and tilt geologic maps draped over digital topography. The GeoWall system allows students to see some of these materials in true stereo. We used various assessments to research the effectiveness of these materials and to document visualization strategies students use. Our research indicates that, compared to control groups, students using such materials improve more in their geologic visualization abilities and in their general visualization abilities as measured by a standard spatial visualization test. Also, females achieve greater gains, improving their general visualization abilities to the same level as males. Misconceptions that students carry obstruct learning, but are largely undocumented. Many students, for example, cannot visualize that the landscape in which rock layers were deposited was different than the landscape in which the rocks are exposed today, even in the Grand Canyon.

  12. Visualization of Time-Series Sensor Data to Inform the Design of Just-In-Time Adaptive Stress Interventions.

    PubMed

    Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh

    2015-09-01

    We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into its usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs.

  13. Visualization of Time-Series Sensor Data to Inform the Design of Just-In-Time Adaptive Stress Interventions

    PubMed Central

    Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J. Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh

    2015-01-01

    We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into its usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs. PMID:26539566
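
    As a rough illustration of the granularity trade-off discussed above, the following sketch renders a stress time series at several temporal granularities. The column names and input file are hypothetical; pandas and matplotlib are assumed.

      # Sketch: plot a sensed stress series at raw, hourly, and daily granularity.
      import pandas as pd
      import matplotlib.pyplot as plt

      # Expected columns: 'timestamp' (parseable datetimes) and 'stress' (0-1 likelihood).
      df = pd.read_csv("stress_minutes.csv", parse_dates=["timestamp"])
      df = df.set_index("timestamp").sort_index()

      granularities = {"raw (1 min)": None, "hourly mean": "1H", "daily mean": "1D"}

      fig, axes = plt.subplots(len(granularities), 1, sharex=True, figsize=(10, 6))
      for ax, (label, rule) in zip(axes, granularities.items()):
          series = df["stress"] if rule is None else df["stress"].resample(rule).mean()
          ax.plot(series.index, series.values, linewidth=0.8)
          ax.set_ylabel(label)
      axes[-1].set_xlabel("time")
      plt.tight_layout()
      plt.show()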

  14. The presentation of expert testimony via live audio-visual communication.

    PubMed

    Miller, R D

    1991-01-01

    As part of a national effort to improve efficiency in court procedures, the American Bar Association has recommended, on the basis of a number of pilot studies, increased use of current audio-visual technology, such as telephone and live video communication, to eliminate delays caused by unavailability of participants in both civil and criminal procedures. Although these recommendations were made to facilitate court proceedings, and for the convenience of attorneys and judges, they also have the potential to save significant time for clinical expert witnesses. The author reviews the studies of telephone testimony that were done by the American Bar Association and other legal research groups, as well as the experience in one state forensic evaluation and treatment center. He also reviews the case law on the issue of remote testimony. He then presents data from a national survey of state attorneys general concerning the admissibility of testimony via audio-visual means, including video depositions. Finally, he concludes that the option to testify by telephone provides a significant savings in precious clinical time for forensic clinicians in public facilities, and urges that such clinicians work actively to convince courts and/or legislatures in states that do not permit such testimony (currently the majority) to consider accepting it, in order to improve the effective use of scarce clinical resources in public facilities.

  15. Predictive and postdictive mechanisms jointly contribute to visual awareness.

    PubMed

    Soga, Ryosuke; Akaishi, Rei; Sakai, Katsuyuki

    2009-09-01

    One of the fundamental issues in visual awareness is how we are able to perceive the scene in front of our eyes on time despite the delay in processing visual information. The prediction theory postulates that our visual system predicts the future to compensate for such delays. On the other hand, the postdiction theory postulates that our visual awareness is inevitably a delayed product. In the present study we used flash-lag paradigms in motion and color domains and examined how the perception of visual information at the time of flash is influenced by prior and subsequent visual events. We found that both types of event additively influence the perception of the present visual image, suggesting that our visual awareness results from joint contribution of predictive and postdictive mechanisms.

  16. Car driving in schizophrenia: can visual memory and organization make a difference?

    PubMed

    Lipskaya-Velikovsky, Lena; Kotler, Moshe; Weiss, Penina; Kaspi, Maya; Gamzo, Shimrit; Ratzon, Navah

    2013-09-01

    Driving is a meaningful occupation that contributes to functional independence in schizophrenia. Although it is estimated that individuals with schizophrenia have twice as many traffic accidents, little research has been done in this field. The present research explores differences in mental status, visual working memory, and visual organization between drivers and non-drivers with schizophrenia, in comparison to healthy drivers. There were three groups in the study: 20 drivers with schizophrenia, 20 non-driving individuals with schizophrenia, and 20 drivers without schizophrenia (DWS). Visual perception was measured with the Rey-Osterrieth Complex Figure test, and general cognitive status with the Mini-Mental State Examination. General cognitive status predicted actual driving status in people with schizophrenia. No statistically significant differences were found between driving and non-driving persons with schizophrenia on any of the visual parameters tested, although these abilities were significantly lower than those of the DWS group. The research demonstrates that impairment of visual abilities does not prevent people with schizophrenia from driving and emphasizes the importance of general cognitive status for complex and multidimensional everyday tasks. The findings support the need for further investigation of car driving in this population, which would contribute considerably to participation and well-being. Implications for Rehabilitation: A dedicated approach to driving evaluation in schizophrenia should be designed, since direct applications of knowledge and practice acquired from other populations are not reliable. This research demonstrates that visual perception deficits in schizophrenia do not prevent clients from driving, and that general cognitive status appears to be a valid determinant of actual driving. We recommend using a general test of cognition such as the Mini-Mental State Examination, or a combination of cognitive factors such as executive functions (e.g., Trail Making Test) and attention (e.g., Continuous Performance Test), in addition to spatial-visual ability tests (e.g., Rey-Osterrieth Complex Figure test), when considering driving status in schizophrenia.

  17. Reconsideration of Serial Visual Reversal Learning in Octopus (Octopus vulgaris) from a Methodological Perspective

    PubMed Central

    Bublitz, Alexander; Weinhold, Severine R.; Strobel, Sophia; Dehnhardt, Guido; Hanke, Frederike D.

    2017-01-01

    Octopuses (Octopus vulgaris) are generally considered to possess extraordinary cognitive abilities including the ability to successfully perform in a serial reversal learning task. During reversal learning, an animal is presented with a discrimination problem and after reaching a learning criterion, the signs of the stimuli are reversed: the former positive becomes the negative stimulus and vice versa. If an animal improves its performance over reversals, it is ascribed advanced cognitive abilities. Reversal learning has been tested in octopus in a number of studies. However, the experimental procedures adopted in these studies involved pre-training on the new positive stimulus after a reversal, strong negative reinforcement or might have enabled secondary cueing by the experimenter. These procedures could have all affected the outcome of reversal learning. Thus, in this study, serial visual reversal learning was revisited in octopus. We trained four common octopuses (O. vulgaris) to discriminate between 2-dimensional stimuli presented on a monitor in a simultaneous visual discrimination task and reversed the signs of the stimuli each time the animals reached the learning criterion of ≥80% in two consecutive sessions. The animals were trained using operant conditioning techniques including a secondary reinforcer, a rod that was pushed up and down the feeding tube, which signaled the correctness of a response and preceded the subsequent primary reinforcement of food. The experimental protocol did not involve negative reinforcement. One animal completed four reversals and showed progressive improvement, i.e., it decreased its errors to criterion the more reversals it experienced. This animal developed a generalized response strategy. In contrast, another animal completed only one reversal, whereas two animals did not learn to reverse during the first reversal. In conclusion, some octopus individuals can learn to reverse in a visual task demonstrating behavioral flexibility even with a refined methodology. PMID:28223940

  18. Interactive Visualization of a Thin Disc around a Schwarzschild Black Hole

    ERIC Educational Resources Information Center

    Muller, Thomas; Frauendiener, Jorg

    2012-01-01

    In a first course in general relativity, the Schwarzschild spacetime is the most discussed analytic solution to Einstein's field equations. Unfortunately, there is rarely enough time to study the optical consequences of the bending of light for some advanced examples. In this paper, we present how the visual appearance of a thin disc around a…

  19. Art Teachers as Leaders of Authentic Art Integration

    ERIC Educational Resources Information Center

    Smilan, Cathy; Miraglia, Kathy Marzilli

    2009-01-01

    A myriad of issues affect PK-12 public school art educators' work lives, including how and by whom art is taught in schools. Chief among these issues are budgetary shortfalls, time constraints, and general misconceptions that anyone who enjoys the visual arts is capable of teaching the visual arts. Perpetuation of this myth impacts art education,…

  20. Uncovering neurodevelopmental windows of susceptibility to manganese exposure using dentine microspatial analyses.

    PubMed

    Claus Henn, Birgit; Austin, Christine; Coull, Brent A; Schnaas, Lourdes; Gennings, Chris; Horton, Megan K; Hernández-Ávila, Mauricio; Hu, Howard; Téllez-Rojo, Martha Maria; Wright, Robert O; Arora, Manish

    2018-02-01

    Associations between manganese (Mn) and neurodevelopment may depend on dose and exposure timing, but most studies cannot measure exposure variability over time well. We apply temporally informative tooth-matrix biomarkers to uncover windows of susceptibility in early life when Mn is associated with visual motor ability in childhood. We also explore effect modification by lead (Pb) and child sex. Participants were drawn from the ELEMENT (Early Life Exposures in MExico and NeuroToxicology) longitudinal birth cohort studies. We reconstructed dose and timing of prenatal and early postnatal Mn and Pb exposures for 138 children by analyzing deciduous teeth using laser ablation-inductively coupled plasma-mass spectrometry. Neurodevelopment was assessed between 6 and 16 years of age using the Wide Range Assessment of Visual Motor Abilities (WRAVMA). Mn associations with total WRAVMA scores and subscales were estimated with multivariable generalized additive mixed models. We examined Mn interactions with Pb and child sex in stratified models. Levels of dentine Mn were highest in the second trimester and declined steeply over the prenatal period, with a slower rate of decline after birth. Mn was positively associated with visual spatial and total WRAVMA scores in the second trimester, among children with lower (< median) tooth Pb levels: one standard deviation (SD) increase in ln-transformed dentine Mn at 150 days before birth was associated with a 0.15 [95% CI: 0.04, 0.26] SD increase in total score. This positive association was not observed at high Pb levels. In contrast to the prenatal period, significant negative associations were found in the postnatal period from ~ 6 to 12 months of age, among boys only: one SD increase in ln-transformed dentine Mn was associated with a 0.11 [95% CI: - 0.001, - 0.22] to 0.16 [95% CI: - 0.04, - 0.28] SD decrease in visual spatial score. Using tooth-matrix biomarkers with fine scale temporal profiles of exposure, we found discrete developmental windows in which Mn was associated with visual-spatial abilities. Our results suggest that Mn associations are driven in large part by exposure timing, with beneficial effects found for prenatal levels and toxic effects found for postnatal levels. Copyright © 2017 Elsevier Inc. All rights reserved.
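
    For orientation, the following is a simplified sketch of an additive model relating an exposure measure to an outcome score. The study above used generalized additive mixed models; this sketch ignores the random-effects (repeated-measures) structure, and the data file and column names are hypothetical.

      # Simplified (non-mixed) generalized additive model: smooth terms for an
      # exposure measure and a covariate, fit with pygam. Hypothetical data.
      import pandas as pd
      from pygam import LinearGAM, s

      df = pd.read_csv("mn_wravma.csv")  # hypothetical file
      X = df[["log_dentine_mn", "age_at_test"]].to_numpy()
      y = df["wravma_total"].to_numpy()

      # Smooth terms for both predictors; smoothing penalty chosen by grid search.
      gam = LinearGAM(s(0) + s(1)).gridsearch(X, y)
      gam.summary()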

  1. An interactive visualization tool for mobile objects

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tetsuo

    Recent advancements in mobile devices---such as Global Positioning System (GPS), cellular phones, car navigation system, and radio-frequency identification (RFID)---have greatly influenced the nature and volume of data about individual-based movement in space and time. Due to the prevalence of mobile devices, vast amounts of mobile objects data are being produced and stored in databases, overwhelming the capacity of traditional spatial analytical methods. There is a growing need for discovering unexpected patterns, trends, and relationships that are hidden in the massive mobile objects data. Geographic visualization (GVis) and knowledge discovery in databases (KDD) are two major research fields that are associated with knowledge discovery and construction. Their major research challenges are the integration of GVis and KDD, enhancing the ability to handle large volume mobile objects data, and high interactivity between the computer and users of GVis and KDD tools. This dissertation proposes a visualization toolkit to enable highly interactive visual data exploration for mobile objects datasets. Vector algebraic representation and online analytical processing (OLAP) are utilized for managing and querying the mobile object data to accomplish high interactivity of the visualization tool. In addition, reconstructing trajectories at user-defined levels of temporal granularity with time aggregation methods allows exploration of the individual objects at different levels of movement generality. At a given level of generality, individual paths can be combined into synthetic summary paths based on three similarity measures, namely, locational similarity, directional similarity, and geometric similarity functions. A visualization toolkit based on the space-time cube concept exploits these functionalities to create a user-interactive environment for exploring mobile objects data. Furthermore, the characteristics of visualized trajectories are exported to be utilized for data mining, which leads to the integration of GVis and KDD. Case studies using three movement datasets (personal travel data survey in Lexington, Kentucky, wild chicken movement data in Thailand, and self-tracking data in Utah) demonstrate the potential of the system to extract meaningful patterns from the otherwise difficult to comprehend collections of space-time trajectories.
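
    As a toy illustration of two of the ideas above, temporal aggregation of trajectories and a locational similarity measure, consider the sketch below. It is not the dissertation's vector-algebraic or OLAP implementation; the file and column names are hypothetical.

      # Sketch: aggregate GPS trajectories to a coarser temporal granularity and
      # compare two aggregated paths with a simple locational-similarity measure
      # (mean pointwise distance within matching time bins).
      import numpy as np
      import pandas as pd

      def aggregate(track: pd.DataFrame, rule: str = "15min") -> pd.DataFrame:
          """Mean position per time bin; expects 'timestamp', 'x', 'y' columns."""
          return (track.set_index("timestamp")[["x", "y"]]
                       .resample(rule).mean().dropna())

      def locational_similarity(a: pd.DataFrame, b: pd.DataFrame) -> float:
          """Mean distance between positions observed in the same time bins."""
          joined = a.join(b, how="inner", lsuffix="_a", rsuffix="_b")
          d = np.hypot(joined["x_a"] - joined["x_b"], joined["y_a"] - joined["y_b"])
          return float(d.mean())

      # Hypothetical input: projected coordinates (metres) with timestamps.
      trip1 = pd.read_csv("trip1.csv", parse_dates=["timestamp"])
      trip2 = pd.read_csv("trip2.csv", parse_dates=["timestamp"])
      print(locational_similarity(aggregate(trip1), aggregate(trip2)), "m mean separation")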

  2. Towards an Integrated Flood Preparedness and Response: Centralized Data Access, Analysis, and Visualization

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2014-12-01

    Recent advances in internet and cyberinfrastructure technologies have provided the capability to understand hydrological and meteorological systems at space and time scales that are critical for accurate understanding and prediction of flooding and for emergency preparedness. A novel example of a cyberinfrastructure platform for flood preparedness and response is the Iowa Flood Center's Iowa Flood Information System (IFIS). IFIS is a one-stop web platform for accessing community-based flood conditions, forecasts, visualizations, inundation maps, and flood-related data, information, and applications. An enormous volume of real-time observational data from a variety of sensors and remote sensing resources (radars, rain gauges, stream sensors, etc.) and complex flood inundation models is staged in a user-friendly map environment that is accessible to the general public. IFIS has developed into a very successful tool used by agencies, decision-makers, and the general public throughout Iowa to better understand their local watershed and their personal and community flood risk, and to monitor local stream and river levels. IFIS helps communities make better-informed decisions on the occurrence of floods, and alerts communities in advance to help minimize flood damages. IFIS is widely used by the general public in Iowa and the Midwest region, with over 120,000 unique users, and has become a main source of information for many newspapers and TV stations in Iowa. IFIS has features for the general public to improve emergency preparedness, and for decision makers to support emergency response and recovery efforts. IFIS is also a valuable platform for educators and local authorities to educate students and the public on flooding through games, an easy-to-use interactive environment, and a data-rich system.

  3. The effect of visual-motion time-delays on pilot performance in a simulated pursuit tracking task

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1977-01-01

    An experimental study was made to determine the effect on pilot performance of time delays in the visual and motion feedback loops of a simulated pursuit tracking task. Three major interrelated factors were identified: task difficulty either in the form of airplane handling qualities or target frequency, the amount and type of motion cues, and time delay itself. In general, the greater the task difficulty, the smaller the time delay that could exist without degrading pilot performance. Conversely, the greater the motion fidelity, the greater the time delay that could be tolerated. The effect of motion was, however, pilot dependent.

  4. Long-Term Audience Impacts of Live Fulldome Planetarium Lectures for Earth Science and Global Change Education

    NASA Astrophysics Data System (ADS)

    Yu, K. C.; Champlin, D. M.; Goldsworth, D. A.; Raynolds, R. G.; Dechesne, M.

    2011-09-01

    Digital Earth visualization technologies, from ArcGIS to Google Earth, have allowed for the integration of complex, disparate data sets to produce visually rich and compelling three-dimensional models of sub-surface and surface resource distribution patterns. The rendering of these models allows the public to quickly understand complicated geospatial relationships that would otherwise take much longer to explain using traditional media. At the Denver Museum of Nature & Science (DMNS), we have used such visualization technologies, including real-time virtual reality software running in the immersive digital "fulldome" Gates Planetarium, to impact the community through topical policy presentations. DMNS public lectures have covered regional issues like water resources, as well as global topics such as earthquakes, tsunamis, and resource depletion. The Gates Planetarium allows an audience to have an immersive experience, similar to virtual reality "CAVE" environments found in academia, that would otherwise not be available to the general public. Public lectures in the dome allow audiences of over 100 people to comprehend dynamically changing geospatial datasets in an exciting and engaging fashion. Surveys and interviews show that these talks are effective in heightening visitor interest in the subjects weeks or months after the presentation. Many visitors take additional steps to learn more, while one was so inspired that she actively worked to bring the same programming to her children's school. These preliminary findings suggest that fulldome real-time visualizations can have a substantial long-term impact on an audience's engagement and interest in science topics.

  5. Integrating a geographic information system, a scientific visualization system and an orographic precipitation model

    USGS Publications Warehouse

    Hay, L.; Knapp, L.

    1996-01-01

    Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements, and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components including simulation modelling, spatial analysis and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS and a SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.

  6. Iowa Flood Information System

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.; Goska, R.; Mantilla, R.; Weber, L. J.; Young, N.

    2011-12-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts both short-term and seasonal, flood-related data, information and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to general public. Users are able to filter data sources for their communities and selected rivers. The data and information on IFIS is also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 500 communities in Iowa. Multiple view modes in the IFIS accommodate different user types from general public to researchers and decision makers by providing different level of tools and details. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding condition along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize damage of floods. This presentation provides an overview of the tools and interfaces in the IFIS developed to date to provide a platform for one-stop access to flood related data, visualizations, flood conditions, and forecast.

  7. Flood Risk Management in Iowa through an Integrated Flood Information System

    NASA Astrophysics Data System (ADS)

    Demir, Ibrahim; Krajewski, Witold

    2013-04-01

    The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts both short-term and seasonal, flood-related data, information and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to general public. Users are able to filter data sources for their communities and selected rivers. The data and information on IFIS is also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 1100 communities in Iowa. Multiple view modes in the IFIS accommodate different user types from general public to researchers and decision makers by providing different level of tools and details. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding condition along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize damage of floods. This presentation provides an overview and live demonstration of the tools and interfaces in the IFIS developed to date to provide a platform for one-stop access to flood related data, visualizations, flood conditions, and forecast.

  8. Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.

    2010-01-01

    Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
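
    As a rough sketch of the modeling idea, the snippet below fits a Poisson GLM with lagged-stimulus and spike-history regressors on simulated placeholder data. The bin size, lag counts, and data are arbitrary choices for illustration, not the authors' fitted model.

      # Sketch: Poisson GLM combining lagged stimulus terms with spike-history terms.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n_bins, stim_lags, hist_lags = 5000, 10, 5

      stimulus = rng.standard_normal(n_bins)      # e.g. stimulus intensity per 1-ms bin
      spikes = rng.binomial(1, 0.05, n_bins)      # placeholder spike counts per bin

      def lagged(x, n_lags):
          """Columns of x delayed by 1..n_lags bins (np.roll wraps; early bins dropped below)."""
          return np.column_stack([np.roll(x, k) for k in range(1, n_lags + 1)])

      cut = max(stim_lags, hist_lags)             # discard bins contaminated by wrap-around
      X = np.column_stack([lagged(stimulus, stim_lags), lagged(spikes, hist_lags)])[cut:]
      y = spikes[cut:]

      glm = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
      print(glm.summary())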

  9. Dynamic neuroanatomy at subcellular resolution in the zebrafish.

    PubMed

    Faucherre, Adèle; López-Schier, Hernán

    2014-01-01

    Genetic means to visualize and manipulate neuronal circuits in the intact animal have revolutionized neurobiology. "Dynamic neuroanatomy" defines a range of approaches aimed at quantifying the architecture or subcellular organization of neurons over time during their development, regeneration, or degeneration. A general feature of these approaches is their reliance on the optical isolation of defined neurons in toto by genetically expressing markers in one or few cells. Here we use the afferent neurons of the lateral line as an example to describe a simple method for the dynamic neuroanatomical study of axon terminals in the zebrafish by laser-scanning confocal microscopy.

  10. Visualizing Syllables: Real-Time Computerized Feedback within a Speech-Language Intervention

    ERIC Educational Resources Information Center

    DeThorne, Laura; Aparicio Betancourt, Mariana; Karahalios, Karrie; Halle, Jim; Bogue, Ellen

    2015-01-01

    Computerized technologies now offer unprecedented opportunities to provide real-time visual feedback to facilitate children's speech-language development. We employed a mixed-method design to examine the effectiveness of two speech-language interventions aimed at facilitating children's multisyllabic productions: one incorporated a novel…

  11. Filming the invisible - time-resolved visualization of compressible flows

    NASA Astrophysics Data System (ADS)

    Kleine, H.

    2010-04-01

    Essentially all processes in gasdynamics are invisible to the naked eye as they occur in a transparent medium. The task to observe them is further complicated by the fact that most of these processes are also transient, often with characteristic times that are considerably below the threshold of human perception. Both difficulties can be overcome by combining visualization methods that reveal changes in the transparent medium, and high-speed photography techniques that “stop” the motion of the flow. The traditional approach is to reconstruct a transient process from a series of single images, each taken in a different experiment at a different instant. This approach, which is still widely used today, can only be expected to give reliable results when the process is reproducible. Truly time-resolved visualization, which yields a sequence of flow images in a single experiment, has been attempted for more than a century, but many of the developed camera systems were characterized by a high level of complexity and limited quality of the results. Recent advances in digital high-speed photography have changed this situation and have provided the tools to investigate, with relative ease and in sufficient detail, the true development of a transient flow with characteristic time scales down to one microsecond. This paper discusses the potential and the limitations one encounters when using density-sensitive visualization techniques in time-resolved mode. Several examples illustrate how this approach can reveal and explain a number of previously undetected phenomena in a variety of highly transient compressible flows. It is demonstrated that time-resolved visualization offers numerous advantages which normally outweigh its shortcomings, mainly the often-encountered loss in resolution. Apart from the capability to track the location and/or shape of flow features in space and time, adequate time-resolved visualization allows one to observe the development of deliberately introduced near-isentropic perturbation wavelets. This new diagnostic tool can be used to qualitatively and quantitatively determine otherwise inaccessible thermodynamic properties of a compressible flow.

  12. Cognitive programs: software for attention's executive

    PubMed Central

    Tsotsos, John K.; Kruijne, Wouter

    2014-01-01

    What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations goes beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure they match expectations. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure on the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of the Visual Task Executive (vTE), the Visual Attention Executive (vAE), and Visual Working Memory (vWM). Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention. PMID:25505430

  13. Accelerating Large Data Analysis By Exploiting Regularities

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
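
    The "transformed reference mesh" idea can be sketched as follows: store one reference mesh plus a rigid-body transform per key time, and synthesize the geometry for any requested time by interpolating the transform. The points and key-frame values below are placeholders for illustration, not the paper's data model.

      # Sketch: serve mesh geometry on demand by applying a time-interpolated
      # rigid-body transform (slerped rotation, linear translation) to one
      # reference mesh instead of storing a mesh per time step.
      import numpy as np
      from scipy.spatial.transform import Rotation, Slerp

      rng = np.random.default_rng(0)
      reference_points = rng.random((1000, 3))          # placeholder reference mesh vertices

      key_times = np.array([0.0, 1.0, 2.0])
      key_rotations = Rotation.from_euler("z", [0.0, 45.0, 90.0], degrees=True)
      key_translations = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])

      slerp = Slerp(key_times, key_rotations)

      def mesh_at(t: float) -> np.ndarray:
          """Reference mesh rigidly transformed to time t."""
          rot = slerp([t])[0]
          trans = np.array([np.interp(t, key_times, key_translations[:, i]) for i in range(3)])
          return rot.apply(reference_points) + trans

      points_t = mesh_at(1.5)   # geometry synthesized on demand for t = 1.5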

  14. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specifications, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granger, Brian R.; Chang, Yi -Chien; Wang, Yan

    Here, the complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique meta-graph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues.

  16. Effect of single-visit VIA and cryotherapy cervical cancer prevention program in Roi Et, Thailand: a preliminary report.

    PubMed

    Chumworathayi, Bandit; Blumenthal, Paul D; Limpaphayom, Khunying Kobchitt; Kamsa-Ard, Supot; Wongsena, Metee; Supaatakorn, Pongsatorn

    2010-02-01

    To assess the effect of introducing visual inspection with acetic acid and cryotherapy on cervical cancer incidence rates in Roi Et province over time, between 1997 and 2006, and compare this with two nearby provinces. Data from two cancer registration units, one in Srinagarind Hospital and another in Ubon Ratchathani Cancer Center (to which all cervical cancer patients were referred from the three study provinces) were registered, extracted, combined and analyzed using a generalized estimating equation. Cervical cancer detection rates improved. These are represented by the apparent increased incidence rates in Roi Et province during the study period compared with two nearby provinces (P = 0.01), equivalent to a doubling of the previously reported age-standardized incidence ratio and three times its baseline in 2006. A single-visit approach to cervical cancer prevention in Roi Et province using visual inspection with acetic acid and cryotherapy appeared to have an effect in revealing an increased cervical cancer incidence rate by achieving higher coverage, resulting in increased case finding.

  17. Classification of EEG abnormalities in partial epilepsy with simultaneous EEG-fMRI recordings.

    PubMed

    Pedreira, C; Vaudano, A E; Thornton, R C; Chaudhary, U J; Vulliemoz, S; Laufs, H; Rodionov, R; Carmichael, D W; Lhatoo, S D; Guye, M; Quian Quiroga, R; Lemieux, L

    2014-10-01

    Scalp EEG recordings and the classification of interictal epileptiform discharges (IED) in patients with epilepsy provide valuable information about the epileptogenic network, particularly by defining the boundaries of the "irritative zone" (IZ), and hence are helpful during pre-surgical evaluation of patients with severe refractory epilepsies. The current detection and classification of epileptiform signals essentially rely on expert observers. This is a very time-consuming procedure, which also leads to inter-observer variability. Here, we propose a novel approach to automatically classify epileptic activity and show how this method provides critical and reliable information related to the IZ localization beyond the one provided by previous approaches. We applied Wave_clus, an automatic spike sorting algorithm, for the classification of IED visually identified from pre-surgical simultaneous Electroencephalogram-functional Magnetic Resonance Imaging (EEG-fMRI) recordings in 8 patients affected by refractory partial epilepsy who were candidates for surgery. For each patient, two fMRI analyses were performed: one based on the visual classification and one based on the algorithmic sorting. This novel approach successfully identified a total of 29 IED classes (compared to 26 for visual identification). The general concordance between methods was good, providing a full match of EEG patterns in 2 cases, additional EEG information in 2 other cases and, in general, covering EEG patterns of the same areas as expert classification in 7 of the 8 cases. Most notably, evaluation of the method with EEG-fMRI data analysis showed hemodynamic maps related to the majority of IED classes, representing improved performance compared with the visual IED classification-based analysis (72% versus 50%). Furthermore, the IED-related BOLD changes revealed by using the algorithm were localized within the presumed IZ for a larger number of IED classes (9) in a greater number of patients than the expert classification (7 and 5, respectively). In contrast, in only one case did the new algorithm result in fewer classes and activation areas. We propose that the use of automated spike sorting algorithms to classify IED provides an efficient tool for mapping IED-related fMRI changes and increases the EEG-fMRI clinical value for the pre-surgical assessment of patients with severe epilepsy. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. First seizure while driving (FSWD)--an underestimated phenomenon?

    PubMed

    Pohlmann-Eden, Bernd; Hynick, Nina; Legg, Karen

    2013-07-01

    Seizures while driving are a well-known occurrence in established epilepsy and have significant impact on driving privileges. There are no data available on patients who experience their first (diagnosed) seizure while driving (FSWD). Out of 311 patients presenting to the Halifax First Seizure Clinic between 2008 and 2011, 158 patients met the criteria of a first seizure (FS) or drug-naïve, newly diagnosed epilepsy (NDE). A retrospective chart review was conducted. FSWD was evaluated for 1) prevalence, 2) clinical presentation, 3) coping strategies, and 4) length of time driving before seizure occurrence. The prevalence of FSWD was 8.2%. All 13 patients experienced impaired consciousness. Eleven patients had generalized tonic-clonic seizures, one starting with a déjà-vu evolving into a visual aura and a complex partial seizure, and three starting directly from visual auras. Two patients had complex partial seizures, one starting with an autonomic seizure. In response to their seizure, patients reported they were i) able to actively stop the car (n=4, three had visual auras), ii) not able to stop the car, resulting in an accident (n=7), or iii) a passenger was able to pull the car over (n=2). One accident was fatal to the other party. Twelve out of 13 patients had been driving for less than one hour. FSWD is frequent and possibly underrecognized. FSWD often leads to accidents, which occur less often if preceded by simple partial seizures. Pathophysiological mechanisms remain uncertain; it is still speculative whether complex visuo-motor tasks required while driving play a role in this scenario.

  19. The GPlates Portal: Cloud-based interactive 3D and 4D visualization of global geological and geophysical data and models in a browser

    NASA Astrophysics Data System (ADS)

    Müller, Dietmar; Qin, Xiaodong; Sandwell, David; Dutkiewicz, Adriana; Williams, Simon; Flament, Nicolas; Maus, Stefan; Seton, Maria

    2017-04-01

    The pace of scientific discovery is being transformed by the availability of 'big data' and open access, open source software tools. These innovations open up new avenues for how scientists communicate and share data and ideas with each other, and with the general public. Here, we describe our efforts to bring to life our studies of the Earth system, both at present day and through deep geological time. The GPlates Portal (portal.gplates.org) is a gateway to a series of virtual globes based on the Cesium Javascript library. The portal allows fast interactive visualization of global geophysical and geological data sets, draped over digital terrain models. The globes use WebGL for hardware-accelerated graphics and are cross-platform and cross-browser compatible with complete camera control. The globes include a visualization of a high-resolution global digital elevation model and the vertical gradient of the global gravity field, highlighting small-scale seafloor fabric such as abyssal hills, fracture zones and seamounts in unprecedented detail. The portal also features globes portraying seafloor geology and a global data set of marine magnetic anomaly identifications. The portal is specifically designed to visualize models of the Earth through geological time. These space-time globes include tectonic reconstructions of the Earth's gravity and magnetic fields, and several models of long-wavelength surface dynamic topography through time, including the interactive plotting of vertical motion histories at selected locations. The portal has been visited over half a million times since its inception in October 2015, as tracked by Google Analytics, and the globes have been featured in numerous media articles around the world. This demonstrates the high demand for fast visualization of global spatial big data, both for the present-day as well as through geological time. The globes put the on-the-fly visualization of massive data sets at the fingertips of end-users to stimulate teaching and learning and novel avenues of inquiry. This technology offers many future opportunities for providing additional functionality, especially on-the-fly big data analytics. Müller, R.D., Qin, X., Sandwell, D.T., Dutkiewicz, A., Williams, S.E., Flament, N., Maus, S. and Seton, M., 2016, The GPlates Portal: Cloud-based interactive 3D visualization of global geophysical and geological data in a web browser, PLoS ONE 11(3): e0150883. doi:10.1371/journal.pone.0150883

  20. Retinotopic maps and foveal suppression in the visual cortex of amblyopic adults.

    PubMed

    Conner, Ian P; Odom, J Vernon; Schwartz, Terry L; Mendola, Janine D

    2007-08-15

    Amblyopia is a developmental visual disorder associated with loss of monocular acuity and sensitivity as well as profound alterations in binocular integration. Abnormal connections in visual cortex are known to underlie this loss, but the extent to which these abnormalities are regionally or retinotopically specific has not been fully determined. This functional magnetic resonance imaging (fMRI) study compared the retinotopic maps in visual cortex produced by each individual eye in 19 adults (7 esotropic strabismics, 6 anisometropes and 6 controls). In our standard viewing condition, the non-tested eye viewed a dichoptic homogeneous mid-level grey stimulus, thereby permitting some degree of binocular interaction. Regions-of-interest analysis was performed for extrafoveal V1, extrafoveal V2 and the foveal representation at the occipital pole. In general, the blood oxygenation level-dependent (BOLD) signal was reduced for the amblyopic eye. At the occipital pole, population receptive fields were shifted to represent more parafoveal locations for the amblyopic eye, compared with the fellow eye, in some subjects. Interestingly, occluding the fellow eye caused an expanded foveal representation for the amblyopic eye in one early-onset strabismic subject with binocular suppression, indicating real-time cortical remapping. In addition, a few subjects actually showed increased activity in parietal and temporal cortex when viewing with the amblyopic eye. We conclude that, even in a heterogeneous population, abnormal early visual experience commonly leads to regionally specific cortical adaptations.

  1. Visualization of the hot chocolate sound effect by spectrograms

    NASA Astrophysics Data System (ADS)

    Trávníček, Z.; Fedorchenko, A. I.; Pavelka, M.; Hrubý, J.

    2012-12-01

    We present an experimental and a theoretical analysis of the hot chocolate effect. The sound effect is evaluated using time-frequency signal processing, resulting in a quantitative visualization by spectrograms. This method allows us to capture the whole phenomenon, namely to quantify the dynamics of the rising pitch. A general form of the time dependence of the bubble volume fraction is proposed. We show that the effect occurs due to the nonlinear dependence of the speed of sound in the gas/liquid mixture on the volume fraction of the bubbles and the nonlinear time dependence of the volume fraction of the bubbles.
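    The time-frequency visualization itself can be sketched with standard tools. The following is a minimal illustration, assuming a mono recording of the tapped mug stored as a NumPy array; the file name, sampling rate, and display limits are placeholders, not values from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

fs = 44100                          # assumed sampling rate of the recording
x = np.load("tapping_sound.npy")    # hypothetical mono recording of the tapped mug

# Short-time Fourier analysis: the rising pitch appears as an upward-drifting ridge.
f, t, Sxx = spectrogram(x, fs=fs, nperseg=2048, noverlap=1536)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.ylim(0, 4000)                   # the effect lives well below a few kHz
plt.show()
```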

  2. Five-year study of ocular injuries due to fireworks in India.

    PubMed

    Malik, Archana; Bhala, Soniya; Arya, Sudesh K; Sood, Sunandan; Narang, Subina

    2013-08-01

    To study the demographic profile, cause, type and severity of ocular injuries, their complications and final visual outcome following fireworks around the time of Deepawali in India. Case records of patients who presented with firework-related injuries during 2005-2009 at the time of Deepawali were reviewed. Data with respect to demographic profile of patients, cause and time of injury, time of presentation and types of intervention were analyzed. Visual acuity at presentation and final follow-up, anterior and posterior segment findings, and any diagnostic and surgical interventions carried out were noted. One hundred and one patients presented with firework-related ocular injuries, of whom 77.5% were male. The mean age was 17.60 ± 11.9 years, with 54% being ≤14 years of age. The mean time of presentation was 8.9 h. Seventeen patients had open globe injury (OGI) and 84 had closed globe injury (CGI). Fountains were the most common cause of CGI and bullet bombs were the most common cause of OGI. Mean logMAR visual acuity at presentation was 0.64 and 1.22 and at last follow-up was 0.09 and 0.58 for CGI and OGI, respectively (p < 0.05). Patients with CGI had a better visual outcome. Three patients with OGI developed permanent blindness. Factors associated with poor visual outcome included poor initial visual acuity, OGI, intraocular foreign body (IOFB), retinal detachment and development of endophthalmitis. Firework injuries were seen mostly in males and children. Poor visual outcome was associated with poor initial visual acuity, OGI, IOFB, retinal detachment and development of endophthalmitis, while most patients with CGI regained good vision.

  3. Spurious One-Month and One-Year Periods in Visual Observations of Variable Stars

    NASA Astrophysics Data System (ADS)

    Percy, J. R.

    2015-12-01

    Visual observations of variable stars, when analyzed with some time-series algorithms such as DC-DFT in VStar, show spurious periods at or close to one synodic month (29.5306 days), and also at about a year, with an amplitude of typically a few hundredths of a magnitude. The one-year periods have been attributed to the Ceraski effect, which was believed to be a physiological effect of the visual observing process. This paper reports on time-series analysis, using DC-DFT in VStar, of visual observations (and in some cases, V observations) of a large number of stars in the AAVSO International Database, initially to investigate the one-month periods. The results suggest that both the one-month and one-year periods are actually due to aliasing of the stars' very low-frequency variations, though they do not rule out very low-amplitude signals (typically 0.01 to 0.02 magnitude) which may be due to a different process, such as a physiological one. Most or all of these aliasing effects may be avoided by using a different algorithm, which takes explicit account of the window function of the data, and/or by being fully aware of the possible presence of, and aliasing by, very low-frequency variations.
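    The aliasing mechanism can be illustrated with a toy simulation: a star that varies only on a very long timescale, observed through a window with monthly and seasonal gaps, produces spurious periodogram power near one synodic month and one year. This sketch uses SciPy's Lomb-Scargle periodogram as a stand-in for DC-DFT; all numbers are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Ten years of nightly observations, with gaps near full moon and a seasonal gap.
t = np.arange(0.0, 3650.0)                     # days
phase_moon = (t % 29.5306) / 29.5306
phase_year = (t % 365.25) / 365.25
observable = (np.abs(phase_moon - 0.5) > 0.15) & (np.abs(phase_year - 0.5) > 0.1)
t = t[observable]

# A purely low-frequency variation (period ~ 1200 d); no monthly signal at all.
y = 0.5 * np.sin(2 * np.pi * t / 1200.0) + 0.02 * rng.standard_normal(t.size)

periods = np.linspace(5, 500, 20000)           # trial periods in days
freqs = 2 * np.pi / periods                    # lombscargle expects angular frequencies
power = lombscargle(t, y - y.mean(), freqs, normalize=True)

# Any peaks near 29.53 d and 365.25 d here are aliases of the slow variation,
# introduced by the observing window, not real periodicities in the star.
for p0 in (29.5306, 365.25):
    sel = np.abs(periods - p0) < 2
    print(f"peak power near {p0:.2f} d: {power[sel].max():.3f}")
```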

  4. Isolating Visual and Proprioceptive Components of Motor Sequence Learning in ASD.

    PubMed

    Sharer, Elizabeth A; Mostofsky, Stewart H; Pascual-Leone, Alvaro; Oberman, Lindsay M

    2016-05-01

    In addition to the defining impairments in social communication skills, individuals with autism spectrum disorder (ASD) also show impairments in more basic sensory and motor skills. Development of new skills involves integrating information from multiple sensory modalities. This input is then used to form internal models of action that can be accessed both when performing skilled movements and when understanding actions performed by others. Learning skilled gestures is particularly reliant on integration of visual and proprioceptive input. We used a modified serial reaction time task (SRTT) to decompose proprioceptive and visual components and examine whether patterns of implicit motor skill learning differ in ASD participants as compared with healthy controls. While both groups learned the implicit motor sequence during training, healthy controls showed robust generalization whereas ASD participants demonstrated little generalization when visual input was constant. In contrast, no group differences in generalization were observed when proprioceptive input was constant, with both groups showing limited degrees of generalization. The findings suggest that, when learning a motor sequence, individuals with ASD tend to rely less on visual feedback than do healthy controls. Visuomotor representations are considered to underlie imitative learning and action understanding and are thereby crucial to social skill and cognitive development. Thus, anomalous patterns of implicit motor learning, with a tendency to discount visual feedback, may be an important contributor to core social communication deficits that characterize ASD. Autism Res 2016, 9: 563-569. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  5. Synergies between optical and physical variables in intercepting parabolic targets

    PubMed Central

    Gómez, José; López-Moliner, Joan

    2013-01-01

    Interception requires precise estimation of time-to-contact (TTC) information. A long-standing view posits that all relevant information for extracting TTC is available in the angular variables, which result from the projection of distal objects onto the retina. The different timing models rooted in this tradition have consequently relied on combining visual angle and its rate of expansion in different ways, with tau being the best-known solution for TTC. The generalization of these models to timing parabolic trajectories is not straightforward. For example, these different combinations rely on isotropic expansion and usually assume first-order information only, neglecting acceleration. As a consequence, no optical formulations have been put forward so far to specify TTC of parabolic targets with enough accuracy. It is only recently that context-dependent physical variables have been shown to play an important role in TTC estimation. Known physical size and gravity can adequately explain observed data of linear and free-falling trajectories, respectively. Yet, a full timing model for specifying parabolic TTC has remained elusive. We here derive two formulations that specify TTC for parabolic ball trajectories. The first specification extends previous models in which known size is combined with thresholding visual angle or its rate of expansion to the case of fly balls. To efficiently use this model, observers need to recover the 3D radial velocity component of the trajectory which conveys the isotropic expansion. The second one uses knowledge of size and gravity combined with ball visual angle and elevation angle. Taking into account the noise due to sensory measurements, we simulate the expected performance of these models in terms of accuracy and precision. While the model that combines expansion information and size knowledge is more efficient during the late trajectory, the second one is shown to be efficient throughout the entire flight. PMID:23720614
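    For context, the classical first-order tau relation that the abstract refers to (not the paper's new parabolic formulations) can be written as follows, where θ is the visual angle subtended by the approaching object.

```latex
% Classical first-order approximation for time-to-contact (TTC), valid for an
% object approaching at constant speed and subtending a small visual angle:
\mathrm{TTC}(t) \;\approx\; \tau(t) \;=\; \frac{\theta(t)}{\dot{\theta}(t)}
```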

  6. Overt attention toward oriented objects in free-viewing barn owls.

    PubMed

    Harmening, Wolf Maximilian; Orlowski, Julius; Ben-Shahar, Ohad; Wagner, Hermann

    2011-05-17

    Visual saliency based on orientation contrast is a perceptual product attributed to the functional organization of the mammalian brain. We examined this visual phenomenon in barn owls by mounting a wireless video microcamera on the owls' heads and confronting them with visual scenes that contained one differently oriented target among similarly oriented distracters. Without being confined by any particular task, the owls looked significantly longer, more often, and earlier at the target, thus exhibiting visual search strategies so far demonstrated in similar conditions only in primates. Given the considerable differences in phylogeny and the structure of visual pathways between owls and humans, these findings suggest that orientation saliency has computational optimality in a wide variety of ecological contexts, and thus constitutes a universal building block for efficient visual information processing in general.

  7. Grip Strength Is Associated With Cognitive Performance in Schizophrenia and the General Population: A UK Biobank Study of 476559 Participants.

    PubMed

    Firth, Joseph; Stubbs, Brendon; Vancampfort, Davy; Firth, Josh A; Large, Matthew; Rosenbaum, Simon; Hallgren, Mats; Ward, Philip B; Sarris, Jerome; Yung, Alison R

    2018-06-06

    Handgrip strength may provide an easily-administered marker of cognitive functional status. However, further population-scale research examining relationships between grip strength and cognitive performance across multiple domains is needed. Additionally, relationships between grip strength and cognitive functioning in people with schizophrenia, who frequently experience cognitive deficits, have yet to be explored. Baseline data from the UK Biobank (2007-2010) were analyzed, including 475397 individuals from the general population and 1162 individuals with schizophrenia. Linear mixed models and generalized linear mixed models were used to assess the relationship between grip strength and 5 cognitive domains (visual memory, reaction time, reasoning, prospective memory, and number memory), controlling for age, gender, bodyweight, education, and geographical region. In the general population, maximal grip strength was positively and significantly related to visual memory (coefficient [coeff] = -0.1601, standard error [SE] = 0.003), reaction time (coeff = -0.0346, SE = 0.0004), reasoning (coeff = 0.2304, SE = 0.0079), number memory (coeff = 0.1616, SE = 0.0092), and prospective memory (coeff = 0.3486, SE = 0.0092: all P < .001). In the schizophrenia sample, grip strength was strongly related to visual memory (coeff = -0.155, SE = 0.042, P < .001) and reaction time (coeff = -0.049, SE = 0.009, P < .001), while prospective memory approached statistical significance (coeff = 0.233, SE = 0.132, P = .078), and no statistically significant association was found with number memory and reasoning (P > .1). Grip strength is significantly associated with cognitive functioning in the general population and individuals with schizophrenia, particularly for working memory and processing speed. Future research should establish directionality, examine if grip strength also predicts functional and physical health outcomes in schizophrenia, and determine whether interventions which improve muscular strength impact on cognitive and real-world functioning.

  8. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    NASA Astrophysics Data System (ADS)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

    Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data along with other environmental data in real time, regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus shifted to high-performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observation types (surface obs, upper air, etc.), into one place. Our server-side architecture provides a real-time stream processing system which utilizes server-based NVIDIA Graphics Processing Units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization, allowing NEIS to minimize the bandwidth and latency of data delivery to end-users. On the client side, users interact with NEIS services through TerraViz, the visualization application developed at ESRL. TerraViz is developed using the Unity game engine and takes advantage of GPUs, allowing a user to interact in real time with large data sets in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' along with providing tools allowing novel visualization and seamless integration of data across time and space, regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction. Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new ideas. This presentation will provide an update on the recent enhancements of the NEIS architecture and visualization capabilities, challenges faced, as well as ongoing research activities related to this project.

  9. Impact of paracervical block on postabortion pain in patients undergoing abortion under general anesthesia.

    PubMed

    Lazenby, Gweneth B; Fogelson, Nicholas S; Aeby, Tod

    2009-12-01

    Paracervical block is used as a way to decrease postoperative pain in patients having abortions under general anesthesia. To date, no studies have evaluated the efficacy of this practice. Patients were recruited from a university-based family planning clinic. Seventy-two patients seeking abortion under general anesthesia were enrolled in the single-blinded study. Thirty-nine patients were randomized to receive a paracervical block, and 33 were randomized to no local anesthesia. The patients completed a demographic survey and visual analog pain scales for pain prior to and at several time points after the procedure. Data regarding the need for additional pain medications postoperatively were recorded. Single-factor analysis of variance and two-sample one-sided t tests were used for data analysis. Experimental and control groups were similar in all measured demographic characteristics. They were also similar in gestational age, number of laminaria required, preoperative dilation, operative time, estimated blood loss and reported complications. Postoperative pain was not significantly affected by placement of a paracervical block prior to abortion under general anesthesia. The need for postoperative pain medication during recovery was similar between groups. This study does not support the hypothesized benefit of local anesthesia prior to surgical abortion under general anesthesia to reduce postoperative pain.

  10. Visualizing inequality

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2016-07-01

    The study of socioeconomic inequality is of substantial importance, scientific and general alike. The graphic visualization of inequality is commonly conveyed by Lorenz curves. While Lorenz curves are a highly effective statistical tool for quantifying the distribution of wealth in human societies, they are less effective a tool for the visual depiction of socioeconomic inequality. This paper introduces an alternative to Lorenz curves: the hill curves. On the one hand, the hill curves are a potent scientific tool: they provide detailed scans of the rich-poor gaps in human societies under consideration, and are capable of accommodating infinitely many degrees of freedom. On the other hand, the hill curves are a powerful infographic tool: they visualize inequality in a most vivid and tangible way, with no quantitative skills required in order to grasp the visualization. The application of hill curves extends far beyond socioeconomic inequality. Indeed, the hill curves are highly effective 'hyperspectral' measures of statistical variability that are applicable in the context of size distributions at large. This paper establishes the notion of hill curves, analyzes them, and describes their application in the context of general size distributions.
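    The hill curves are the paper's own construct, so no attempt is made to reproduce them here; as background, a minimal sketch of the conventional Lorenz curve and Gini coefficient that the paper argues against, computed for a toy heavy-tailed "wealth" sample.

```python
import numpy as np

def lorenz_curve(wealth):
    """Cumulative population share vs. cumulative wealth share."""
    w = np.sort(np.asarray(wealth, dtype=float))
    x = np.arange(1, w.size + 1) / w.size          # population share
    y = np.cumsum(w) / w.sum()                     # wealth share
    return np.insert(x, 0, 0.0), np.insert(y, 0, 0.0)

def gini(wealth):
    """Gini coefficient via the standard rank-weighted formula."""
    w = np.sort(np.asarray(wealth, dtype=float))
    n = w.size
    return 2.0 * np.sum(np.arange(1, n + 1) * w) / (n * w.sum()) - (n + 1) / n

sample = np.random.default_rng(1).pareto(2.0, 10_000) + 1.0    # toy heavy-tailed data
print(f"Gini coefficient of the toy sample: {gini(sample):.3f}")
```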

  11. AWE: Aviation Weather Data Visualization Environment

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Lodha, Suresh K.; Norvig, Peter (Technical Monitor)

    2000-01-01

    Weather is one of the major causes of aviation accidents. General aviation (GA) flights account for 92% of all aviation accidents. In spite of all the official and unofficial sources of weather visualization tools available to pilots, there is an urgent need for visualizing weather-related data tailored for general aviation pilots. Our system, the Aviation Weather Data Visualization Environment (AWE), presents graphical displays of meteorological observations, terminal area forecasts, and winds aloft forecasts onto a cartographic grid specific to the pilot's area of interest. Decisions regarding the graphical display and design are made based on careful consideration of user needs. The integrated visual display of these elements of weather reports is designed for use by GA pilots as a weather briefing and route selection tool. AWE links the weather information to the flight's path and schedule. The pilot can interact with the system to obtain aviation-specific weather for the entire area or for a specific route, to explore what-if scenarios, and to make "go/no-go" decisions. The system, as evaluated by some pilots at NASA Ames Research Center, was found to be useful.

  12. Seeing in the deep-sea: visual adaptations in lanternfishes.

    PubMed

    de Busserolles, Fanny; Marshall, N Justin

    2017-04-05

    Ecological and behavioural constraints play a major role in shaping the visual system of different organisms. In the mesopelagic zone of the deep-sea, between 200 and 1000 m, very low intensities of downwelling light remain, creating one of the dimmest habitats in the world. This ambient light is, however, enhanced by a multitude of bioluminescent signals emitted by its inhabitants, but these are generally dim and intermittent. As a result, the visual system of mesopelagic organisms has been pushed to its sensitivity limits in order to function in this extreme environment. This review covers the current body of knowledge on the visual system of one of the most abundant and intensely studied groups of mesopelagic fishes: the lanternfish (Myctophidae). We discuss how the plasticity, performance and novelty of its visual adaptations, compared with other deep-sea fishes, might have contributed to the diversity and abundance of this family. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).

  13. Seeing in the deep-sea: visual adaptations in lanternfishes

    PubMed Central

    2017-01-01

    Ecological and behavioural constraints play a major role in shaping the visual system of different organisms. In the mesopelagic zone of the deep-sea, between 200 and 1000 m, very low intensities of downwelling light remain, creating one of the dimmest habitats in the world. This ambient light is, however, enhanced by a multitude of bioluminescent signals emitted by its inhabitants, but these are generally dim and intermittent. As a result, the visual system of mesopelagic organisms has been pushed to its sensitivity limits in order to function in this extreme environment. This review covers the current body of knowledge on the visual system of one of the most abundant and intensely studied groups of mesopelagic fishes: the lanternfish (Myctophidae). We discuss how the plasticity, performance and novelty of its visual adaptations, compared with other deep-sea fishes, might have contributed to the diversity and abundance of this family. This article is part of the themed issue ‘Vision in dim light’. PMID:28193815

  14. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics

    PubMed Central

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-01-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379

  15. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics.

    PubMed

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-09-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most "useful" or "interesting". The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics.
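    A rough sketch of the deviation-based idea follows, assuming pandas DataFrames: score each candidate group-by/aggregate view by how much its distribution on the target subset deviates from the same view on the full (reference) data. The distance measure, column handling, and function names are illustrative assumptions, not SeeDB's actual implementation, which additionally relies on pruning and computation-sharing optimizations.

```python
import numpy as np
import pandas as pd

def view_utility(df, subset_mask, dimension, measure):
    """Deviation-based score for the view GROUP BY `dimension`, AVG(`measure`):
    distance between the normalized aggregate distribution on the target subset
    and the one on the full reference data."""
    target = df[subset_mask].groupby(dimension)[measure].mean()
    reference = df.groupby(dimension)[measure].mean()
    target, reference = target.align(reference, fill_value=0.0)
    p = target / target.sum() if target.sum() else target
    q = reference / reference.sum() if reference.sum() else reference
    return float(np.linalg.norm(p - q))            # one possible deviation metric

def recommend_views(df, subset_mask, dimensions, measures, k=3):
    """Return the k (dimension, measure) views with the largest deviation."""
    scores = {(d, m): view_utility(df, subset_mask, d, m)
              for d in dimensions for m in measures}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```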

  16. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.

    2013-03-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  17. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S. [Los Angeles, CA]; Dyer, James D. [La Mirada, CA]; Martinez Morales, Carlos A. [Upland, CA]

    2011-11-15

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  18. Real-time colouring and filtering with graphics shaders

    NASA Astrophysics Data System (ADS)

    Vohl, D.; Fluke, C. J.; Barnes, D. G.; Hassan, A. H.

    2017-11-01

    Despite the popularity of the Graphics Processing Unit (GPU) for general purpose computing, one should not forget about the practicality of the GPU for fast scientific visualization. As astronomers have increasing access to three-dimensional (3D) data from instruments and facilities like integral field units and radio interferometers, visualization techniques such as volume rendering offer means to quickly explore spectral cubes as a whole. As most 3D visualization techniques have been developed in fields of research like medical imaging and fluid dynamics, many transfer functions are not optimal for astronomical data. We demonstrate how transfer functions and graphics shaders can be exploited to provide new astronomy-specific explorative colouring methods. We present 12 shaders, including four novel transfer functions specifically designed to produce intuitive and informative 3D visualizations of spectral cube data. We compare their utility to classic colour mapping. The remaining shaders highlight how common computation like filtering, smoothing and line ratio algorithms can be integrated as part of the graphics pipeline. We discuss how this can be achieved by utilizing the parallelism of modern GPUs along with a shading language, letting astronomers apply these new techniques at interactive frame rates. All shaders investigated in this work are included in the open source software shwirl (Vohl 2017).
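    As a language-neutral illustration of what a transfer function does (the paper's shaders implement this mapping in a shading language on the GPU), here is a toy Python version that maps normalized voxel intensity to RGBA. The particular colour ramp is an arbitrary assumption for illustration, not one of the 12 shaders described in the paper.

```python
import numpy as np

def transfer_function(intensity):
    """Toy transfer function: map normalized voxel intensity in [0, 1] to RGBA.

    Low intensities become transparent blue, high intensities opaque red;
    astronomy-specific shaders would replace this mapping with more informative ones.
    """
    v = np.clip(np.asarray(intensity, dtype=float), 0.0, 1.0)
    r = v
    g = 0.2 * np.ones_like(v)
    b = 1.0 - v
    alpha = v ** 2                 # emphasise bright voxels, suppress faint background
    return np.stack([r, g, b, alpha], axis=-1)

# Example: colour one channel map (slice) of a spectral cube, values pre-normalized.
cube_slice = np.random.default_rng(0).random((64, 64))
rgba = transfer_function(cube_slice)   # shape (64, 64, 4)
```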

  19. Facial decoding in schizophrenia is underpinned by basic visual processing impairments.

    PubMed

    Belge, Jan-Baptist; Maurage, Pierre; Mangelinckx, Camille; Leleux, Dominique; Delatte, Benoît; Constant, Eric

    2017-09-01

    Schizophrenia is associated with a strong deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specific for emotions or due to a more general impairment for any type of facial processing. This study was designed to clarify this issue. Thirty patients suffering from schizophrenia and 30 matched healthy controls performed several tasks evaluating the recognition of both changeable (i.e. eyes orientation and emotions) and stable (i.e. gender, age) facial characteristics. Accuracy and reaction times were recorded. Schizophrenic patients presented a performance deficit (accuracy and reaction times) in the perception of both changeable and stable aspects of faces, without any specific deficit for emotional decoding. Our results demonstrate a generalized face recognition deficit in schizophrenic patients, probably caused by a perceptual deficit in basic visual processing. It seems that the deficit in the decoding of emotional facial expression (EFE) is not a specific deficit of emotion processing, but is at least partly related to a generalized perceptual deficit in lower-level perceptual processing, occurring before the stage of emotion processing, and underlying more complex cognitive dysfunctions. These findings should encourage future investigations to explore the neurophysiologic background of these generalized perceptual deficits, and stimulate a clinical approach focusing on more basic visual processing. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  20. Generalized plasma dispersion function: One-solve-all treatment, visualizations, and application to Landau damping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Hua-Sheng

    2013-09-15

    A unified, fast, and effective approach is developed for numerical calculation of the well-known plasma dispersion function with extensions from Maxwellian distribution to almost arbitrary distribution functions, such as the δ, flat top, triangular, κ or Lorentzian, slowing down, and incomplete Maxwellian distributions. The singularity and analytic continuation problems are also solved generally. Given that the usual conclusion γ ∝ ∂f₀/∂v is only a rough approximation when discussing the distribution function effects on Landau damping, this approach provides a useful tool for rigorous calculations of the linear wave and instability properties of plasma for general distribution functions. The results are also verified via a linear initial value simulation approach. Intuitive visualizations of the generalized plasma dispersion function are also provided.
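    For reference, the classical Maxwellian plasma dispersion function (the special case that the paper generalizes) can be evaluated directly through the Faddeeva function; the identities below are standard, not the paper's one-solve-all scheme.

```python
import numpy as np
from scipy.special import wofz   # Faddeeva function w(z) = exp(-z^2) * erfc(-i z)

def plasma_dispersion_Z(zeta):
    """Maxwellian plasma dispersion function, Z(zeta) = i * sqrt(pi) * w(zeta)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def plasma_dispersion_Zprime(zeta):
    """Derivative Z'(zeta) = -2 * (1 + zeta * Z(zeta)), used in Landau-damping estimates."""
    return -2.0 * (1.0 + zeta * plasma_dispersion_Z(zeta))

print(plasma_dispersion_Z(0.0))   # expected: i * sqrt(pi) ≈ 1.7725j
```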

  1. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  2. A probabilistic model for analysing the effect of performance levels on visual behaviour patterns of young sailors in simulated navigation.

    PubMed

    Manzanares, Aarón; Menayo, Ruperto; Segado, Francisco; Salmerón, Diego; Cano, Juan Antonio

    2015-01-01

    Visual behaviour is a determining factor in sailing due to the influence of the environmental conditions. The aim of this research was to determine the visual behaviour pattern in sailors with different practice time during one start race, applying a probabilistic model based on Markov chains. The sample of this study consisted of 20 sailors, distributed in two groups, top ranking (n = 10) and bottom ranking (n = 10), all of whom competed in the Optimist Class. An automated system of measurement, which integrates the VSail-Trainer sail simulator and the Eye Tracking System(TM), was used. The variables under consideration were the sequence of fixations and the fixation recurrence time performed on each location by the sailors. The event consisted of one simulated regatta start, with stable conditions of wind, competitors and sea. Results show that top ranking sailors perform a low recurrence time on relevant locations and a higher one on irrelevant locations, while bottom ranking sailors make a low recurrence time in most of the locations. The visual pattern performed by bottom ranking sailors is focused around two visual pivots, which does not happen in the top ranking sailors' pattern. In conclusion, the Markov chain analysis has made it possible to characterize and compare the visual behaviour patterns of the top and bottom ranking sailors.
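    A minimal sketch of the underlying computation: estimating a first-order Markov transition matrix from a sequence of fixated locations. The areas of interest and the example sequence below are invented for illustration and do not come from the study.

```python
import numpy as np

def transition_matrix(fixation_sequence, locations):
    """First-order Markov transition probabilities between fixation locations.

    fixation_sequence : list of location labels in the order they were fixated
    locations         : list of all possible location labels (areas of interest)
    """
    index = {loc: i for i, loc in enumerate(locations)}
    counts = np.zeros((len(locations), len(locations)))
    for a, b in zip(fixation_sequence[:-1], fixation_sequence[1:]):
        counts[index[a], index[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize each row to probabilities; rows with no outgoing transitions stay zero.
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical areas of interest and fixation sequence for a regatta start.
aois = ["start_line", "competitor", "sail", "wind_indicator"]
seq = ["start_line", "competitor", "start_line", "sail", "start_line", "wind_indicator"]
print(transition_matrix(seq, aois))
```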

  3. On the learning difficulty of visual and auditory modal concepts: Evidence for a single processing system.

    PubMed

    Vigo, Ronaldo; Doan, Karina-Mikayla C; Doan, Charles A; Pinegar, Shannon

    2018-02-01

    The logic operators (e.g., "and," "or," "if, then") play a fundamental role in concept formation, syntactic construction, semantic expression, and deductive reasoning. In spite of this very general and basic role, there are relatively few studies in the literature that focus on their conceptual nature. In the current investigation, we examine, for the first time, the learning difficulty experienced by observers in classifying members belonging to these primitive "modal concepts" instantiated with sets of acoustic and visual stimuli. We report results from two categorization experiments that suggest the acquisition of acoustic and visual modal concepts is achieved by the same general cognitive mechanism. Additionally, we attempt to account for these results with two models of concept learning difficulty: the generalized invariance structure theory model (Vigo in Cognition 129(1):138-162, 2013, Mathematical principles of human conceptual behavior, Routledge, New York, 2014) and the generalized context model (Nosofsky in J Exp Psychol Learn Mem Cogn 10(1):104-114, 1984, J Exp Psychol 115(1):39-57, 1986).

  4. Ultrasound visual feedback treatment and practice variability for residual speech sound errors

    PubMed Central

    Preston, Jonathan L.; McCabe, Patricia; Rivera-Campos, Ahmed; Whittle, Jessica L.; Landry, Erik; Maas, Edwin

    2014-01-01

    Purpose: The goals were to (1) test the efficacy of a motor-learning based treatment that includes ultrasound visual feedback for individuals with residual speech sound errors, and (2) explore whether the addition of prosodic cueing facilitates speech sound learning. Method: A multiple baseline single subject design was used, replicated across 8 participants. For each participant, one sound context was treated with ultrasound plus prosodic cueing for 7 sessions, and another sound context was treated with ultrasound but without prosodic cueing for 7 sessions. Sessions included ultrasound visual feedback as well as non-ultrasound treatment. Word-level probes assessing untreated words were used to evaluate retention and generalization. Results: For most participants, increases in accuracy of target sound contexts at the word level were observed with the treatment program regardless of whether prosodic cueing was included. Generalization between onset singletons and clusters was observed, as well as generalization to sentence-level accuracy. There was evidence of retention during post-treatment probes, including at a two-month follow-up. Conclusions: A motor-based treatment program that includes ultrasound visual feedback can facilitate learning of speech sounds in individuals with residual speech sound errors. PMID:25087938

  5. The WebCam vs. the Particle Beam: A CRaTER Visualization of the Effects of Radiation

    NASA Astrophysics Data System (ADS)

    Case, A. W.; Gross, N. A.; Spence, H. E.

    2008-12-01

    The term "radiation" can cause significant anxiety to a general audience in part because of the associated health risks, but also because of lack of a conceptual framework about the nature of radiation. A visual depiction of radiation may go a long way towards providing just such a framework. The CRaTER Team had an opportunity to create just such a video. The Cosmic Ray Telescope for the Effects of Radiation (CRaTER) is a radiation instrument that will fly on the Lunar Reconnaissance Orbiter (LRO) and is designed to determine the effects of energetic particles on living tissue. In order to calibrate CRaTER and characterize its reaction to various radiation environments, the CRaTER team has used particle beam facilities include the Proton Radiation Therapy Facility at Massachusetts General Hospital (MGH). During one of the sessions at MGH, the team placed an off the shelf web camera into the beam and recorded the visual effects. This video recording was used as the basis for an edited video describing what was done and the results. The hope is that this video will provide a general audience with a visual framework for the nature and effects of radiation

  6. An Issue of Learning: The Effect of Visual Split Attention in Classes for Deaf and Hard of Hearing Students

    ERIC Educational Resources Information Center

    Mather, Susan M.; Clark, M. Diane

    2012-01-01

    One of the ongoing challenges teachers of students who are deaf or hard of hearing face is managing the visual split attention implicit in multimedia learning. When a teacher presents various types of visual information at the same time, visual learners have no choice but to divide their attention among those materials and the teacher and…

  7. PSQM-based RR and NR video quality metrics

    NASA Astrophysics Data System (ADS)

    Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu

    2003-06-01

    This paper presents a new and general concept, PQSM (Perceptual Quality Significance Map), to be used in measuring visual distortion. It makes use of the selectivity characteristic of the HVS (Human Visual System), which pays more attention to certain areas/regions of the visual signal due to one or more of the following factors: salient features in image/video, cues from domain knowledge, and association of other media (e.g., speech or audio). PQSM is an array whose elements represent the relative perceptual-quality significance levels for the corresponding areas/regions of images or video. Due to its generality, PQSM can be incorporated into any visual distortion metrics: to improve the effectiveness and/or efficiency of perceptual metrics; or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.
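    One plausible way such a significance map can be folded into a distortion measure is sketched below; this illustrates the general idea only and is not the paper's three-stage estimation method or its exact pooling scheme.

```python
import numpy as np

def pqsm_weighted_mse(reference, distorted, pqsm):
    """Weight a per-pixel squared-error map by a perceptual-quality significance map.

    reference, distorted : 2-D luminance images of identical shape
    pqsm                 : non-negative significance map of the same shape
                           (e.g. derived from motion, texture, or skin-colour cues)
    """
    error = (np.asarray(reference, float) - np.asarray(distorted, float)) ** 2
    weights = np.asarray(pqsm, float)
    return float((weights * error).sum() / weights.sum())
```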

  8. A systematic review: the influence of real time feedback on wheelchair propulsion biomechanics.

    PubMed

    Symonds, Andrew; Barbareschi, Giulia; Taylor, Stephen; Holloway, Catherine

    2018-01-01

    Clinical guidelines recommend that, in order to minimize upper limb injury risk, wheelchair users adopt a semi-circular pattern with a slow cadence and a large push arc. To examine whether real time feedback can be used to influence manual wheelchair propulsion biomechanics. Clinical trials and case series comparing the use of real time feedback against no feedback were included. A general review was performed and methodological quality assessed by two independent practitioners using the Downs and Black checklist. The review was completed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta Analyses (PRISMA) guidelines. Six papers met the inclusion criteria. Selected studies involved 123 participants and analysed the effect of visual and, in one case, haptic feedback. Across the studies it was shown that participants were able to achieve significant changes in propulsion biomechanics when provided with real time feedback. However, targeting a single propulsion variable might lead to unwanted alterations in other parameters. Methodological assessment identified weaknesses in external validity. Visual feedback could be used to consistently increase push arc and decrease push rate, and may be the best focus for feedback training. Further investigation is required to assess such interventions during outdoor propulsion. Implications for Rehabilitation: Upper limb pain and injuries are common secondary disorders that negatively affect wheelchair users' physical activity and quality of life. Clinical guidelines suggest that manual wheelchair users should aim to propel with a semi-circular pattern with a low push rate and a large push arc in order to minimise upper limb loading. Real time visual and haptic feedback are effective tools for improving propulsion biomechanics in both complete novices and experienced manual wheelchair users.

  9. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing

    PubMed Central

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

    Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed- and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes it ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expansible. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGA have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented, along with discussions of their differences, pros, and cons. PMID:22518097

  10. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing.

    PubMed

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; Lecun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

    Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed- and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes it ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expansible. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGA have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented, along with discussions of their differences, pros, and cons.

  11. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.

  12. Selective visual scaling of time-scale processes facilitates broadband learning of isometric force frequency tracking.

    PubMed

    King, Adam C; Newell, Karl M

    2015-10-01

    The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks, to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed-gain condition or with selective enhancement, in the visual feedback display, of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with error lowest in the intermediate scaling condition, followed by the high scaling and fixed-gain conditions. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the timescales in the learning and transfer of isometric force output frequency structures, consistent with 1/f process models of the time scales of motor output variability.
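
    A rough sketch of the kind of frequency-selective feedback manipulation described above: one band of the force signal is band-pass filtered and re-weighted before display. The sampling rate, gain, and test signal are assumptions for illustration; the study's actual display pipeline is not specified here.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def scale_band(force, fs, low, high, gain):
            # Amplify only the [low, high] Hz band of the force signal before display.
            b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
            band = filtfilt(b, a, force)
            return force + (gain - 1.0) * band     # original signal plus extra band weight

        fs = 100.0                                  # assumed display/sampling rate (Hz)
        t = np.arange(0, 10, 1 / fs)
        force = np.sin(2 * np.pi * 1 * t) + 0.2 * np.sin(2 * np.pi * 6 * t)
        displayed = scale_band(force, fs, 4.0, 8.0, gain=3.0)   # boost the 4-8 Hz band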

  13. Effects of visual grading on northern red oak (Quercus rubra L.) seedlings planted in two shelterwood stands on the Cumberland Plateau of Tennessee, USA

    Treesearch

    Stacy Clark; Scott Schlarbaum; Callie Schweitzer

    2015-01-01

    Artificial regeneration of oak has been generally unsuccessful in maintaining the oak component in productive upland forests of eastern North America. We tested visual grading effects on quality-grown northern red oak (Quercus rubra) seedlings planted in two submesic stands on the Cumberland Plateau escarpment of Tennessee, USA. Seedlings were grown for one year using...

  14. Relationship between the Short-Term Visual Memory and IQ in the Right-and Left-Handed Subjects Trained in Different Educational Programs: I-General Assessment

    ERIC Educational Resources Information Center

    Yilmaz, Yavuz; Yetkin, Yalçin

    2014-01-01

    The relationship between mean intelligence quotient (IQ), hand preference, and visual memory (VM) was investigated in male and female students (N = 612) trained in different educational programs, from the viewpoint of laterality. IQ was assessed with Cattell's Culture Fair Intelligence Test-A (CCFIT-A). The laterality of one side of the body was…

  15. Evolutionary relevance facilitates visual information processing.

    PubMed

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

    Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  16. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both, attending to the colour of a stimulus and its synchrony with the tone, enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
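
    The frequency-tagging logic described above can be sketched as follows: each stimulus's steady-state response is quantified as the spectral amplitude at the FFT bin nearest its tagging frequency. The sampling rate, epoch length, and simulated EEG are illustrative assumptions.

        import numpy as np

        def tagged_amplitude(eeg, fs, freq):
            # Spectral amplitude at the FFT bin nearest the tagging frequency.
            spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            return spectrum[np.argmin(np.abs(freqs - freq))]

        fs = 500.0                                  # assumed sampling rate (Hz)
        t = np.arange(0, 20, 1 / fs)                # a 20-s epoch gives 0.05-Hz resolution
        eeg = (np.sin(2 * np.pi * 3.14 * t) + 0.5 * np.sin(2 * np.pi * 3.63 * t)
               + np.random.randn(len(t)))           # simulated signal plus noise
        for f in (3.14, 3.63):
            print(f, tagged_amplitude(eeg, fs, f))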

  17. Baby Arithmetic: One Object Plus One Tone

    ERIC Educational Resources Information Center

    Kobayashi, Tessei; Hiraki, Kazuo; Mugitani, Ryoko; Hasegawa, Toshikazu

    2004-01-01

    Recent studies using a violation-of-expectation task suggest that preverbal infants are capable of recognizing basic arithmetical operations involving visual objects. There is still debate, however, over whether their performance is based on any expectation of the arithmetical operations, or on a general perceptual tendency to prefer visually…

  18. Mountain Plains Learning Experience Guide: Marketing. Course: Visual Merchandising.

    ERIC Educational Resources Information Center

    Preston, T.; Egan, B.

    One of thirteen individualized courses included in a marketing curriculum, this course covers the steps to be followed in planning, constructing, and evaluating the effectiveness of merchandise displays. The course is comprised of one unit, General Merchandise Displays. The unit begins with a Unit Learning Experience Guide that gives directions…

  19. Real-Time Strategy Video Game Experience and Visual Perceptual Learning.

    PubMed

    Kim, Yong-Hwan; Kang, Dong-Wha; Kim, Dongho; Kim, Hye-Jin; Sasaki, Yuka; Watanabe, Takeo

    2015-07-22

    Visual perceptual learning (VPL) is defined as long-term improvement in performance on a visual-perception task after visual experiences or training. Early studies have found that VPL is highly specific for the trained feature and location, suggesting that VPL is associated with changes in the early visual cortex. However, the generality of visual skills enhancement attributable to action video-game experience suggests that VPL can result from improvement in higher cognitive skills. If so, experience in real-time strategy (RTS) video-game play, which may heavily involve cognitive skills, may also facilitate VPL. To test this hypothesis, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and elucidated underlying structural and functional neural mechanisms. Healthy young human subjects underwent six training sessions on a texture discrimination task. Diffusion-tensor and functional magnetic resonance imaging were performed before and after training. VGPs performed better than NVGPs in the early phase of training. White-matter connectivity between the right external capsule and visual cortex and neuronal activity in the right inferior frontal gyrus (IFG) and anterior cingulate cortex (ACC) were greater in VGPs than NVGPs and were significantly correlated with RTS video-game experience. In both VGPs and NVGPs, there was task-related neuronal activity in the right IFG, ACC, and striatum, which was strengthened after training. These results indicate that RTS video-game experience, associated with changes in higher-order cognitive functions and connectivity between visual and cognitive areas, facilitates VPL in early phases of training. The results support the hypothesis that VPL does not rely solely on visual areas. Significance statement: Although early studies found that visual perceptual learning (VPL) is associated with involvement of the visual cortex, the generality of visual skills enhancement by action video-game experience suggests that higher-order cognition may be involved in VPL. If so, real-time strategy (RTS) video-game experience may facilitate VPL as a result of heavy involvement of cognitive skills. Here, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and investigated the underlying neural mechanisms. VGPs showed better performance in the early phase of training on the texture discrimination task and a greater level of neuronal activity in cognitive areas and structural connectivity between visual and cognitive areas than NVGPs. These results support the hypothesis that VPL can occur beyond the visual cortex. Copyright © 2015 the authors.

  20. Chirp-modulated visual evoked potential as a generalization of steady state visual evoked potential

    NASA Astrophysics Data System (ADS)

    Tu, Tao; Xin, Yi; Gao, Xiaorong; Gao, Shangkai

    2012-02-01

    Visual evoked potentials (VEPs) are of great concern in cognitive and clinical neuroscience as well as in the recent research field of brain-computer interfaces (BCIs). In this study, a chirp-modulated stimulation was employed to serve as a novel type of visual stimulus. Based on our empirical study, the chirp-stimulus visual evoked potential (Chirp-VEP) preserved frequency features of the chirp stimulus analogously to the steady-state visual evoked potential (SSVEP), and therefore it can be regarded as a generalization of the SSVEP. Specifically, we first investigated the characteristics of the Chirp-VEP in the time-frequency domain and the fractional domain via the fractional Fourier transform. We also proposed a group delay technique to derive the apparent latency from the Chirp-VEP. Results on EEG data showed that our approach outperformed the traditional SSVEP-based method in efficiency and ease of apparent latency estimation. For the six recruited subjects, the average apparent latencies ranged from 100 to 130 ms. Finally, we implemented a BCI system with six targets to validate the feasibility of Chirp-VEP as a potential candidate in the field of BCIs.
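
    To make the group-delay idea concrete, the sketch below generates a linear chirp, simulates a delayed response, and recovers the apparent latency from the slope of the cross-spectrum phase. The sweep range, sampling rate, delay, and noise level are assumptions; the paper's fractional-Fourier analysis is not reproduced here.

        import numpy as np
        from scipy.signal import chirp

        fs = 1000.0
        t = np.arange(0, 10, 1 / fs)
        stim = chirp(t, f0=8.0, f1=20.0, t1=t[-1], method="linear")   # 8-20 Hz sweep

        delay_s = 0.115                               # simulated apparent latency (s)
        shift = int(delay_s * fs)
        resp = np.concatenate([np.zeros(shift), stim[:-shift]]) + 0.3 * np.random.randn(len(t))

        # Cross-spectrum phase slope -> group delay (apparent latency)
        S = np.fft.rfft(stim)
        R = np.fft.rfft(resp)
        freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)
        band = (freqs > 8.0) & (freqs < 20.0)
        phase = np.unwrap(np.angle(np.conj(S) * R)[band])
        slope = np.polyfit(freqs[band], phase, 1)[0]  # radians per Hz
        print("estimated latency (ms):", -slope / (2 * np.pi) * 1000)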

  1. IN31A-1734 Development and Evaluation of a Gridded CrIS/ATMS Visualization for Operational Forecasting

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Smith, Nadia; Dostalek, Jack; Stevens, Eric; Nelson, Kristine; Weisz, Elisabeth; Berndt, Emily; Line, Bill; Barnet, Chris; Gambacorta, Antonia; hide

    2016-01-01

    A collaborative effort between SPoRT, CIMSS, CIRA, GINA, and NOAA has produced a unique gridded visualization of real-time CrIS/ATMS sounding products. This product uses the NUCAPS retrieval algorithm and polar2grid software to generate plan-view and cross-section visualization for forecast challenges associated with cold air aloft and convective potential. Forecasters at select partner offices have been able to view the Gridded NUCAPS products in AWIPS alongside other operational data products with generally favorable feedback.
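
    The gridding step that turns scattered sounding footprints into a plan-view field can be approximated as below with SciPy. The footprint locations, retrieved values, and grid spacing are made-up placeholders rather than actual NUCAPS output, and polar2grid performs a more careful resampling in practice.

        import numpy as np
        from scipy.interpolate import griddata

        # Hypothetical scattered retrieval footprints: lon, lat, temperature at one level
        rng = np.random.default_rng(0)
        lons = rng.uniform(-110, -90, 500)
        lats = rng.uniform(35, 50, 500)
        temp = 250 + 0.5 * (lats - 35) + rng.normal(0, 0.5, 500)

        # Regular 0.25-degree grid for a plan-view display
        glon, glat = np.meshgrid(np.arange(-110, -90, 0.25), np.arange(35, 50, 0.25))
        gridded = griddata((lons, lats), temp, (glon, glat), method="linear")
        print(gridded.shape)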

  2. Effects of Passion Flower Extract, as an Add-On Treatment to Sertraline, on Reaction Time in Patients ‎with Generalized Anxiety Disorder: A Double-Blind Placebo-Controlled Study

    PubMed Central

    Nojoumi, Mandana; Ghaeli, Padideh; Salimi, Samrand; Sharifi, Ali; Raisi, Firoozeh

    2016-01-01

    Objective: Because of the functional impairment caused by generalized anxiety disorder and the cognitive side effects of many anti-anxiety agents, this study aimed to evaluate the influence of a standardized Passion flower extract on reaction time in patients with generalized anxiety disorder. Method: Thirty patients aged 18 to 50 years, who were diagnosed with generalized anxiety disorder and fulfilled the study criteria, entered this double-blind placebo-controlled study. Reaction time was measured at baseline and after one month of treatment using computerized software. Correct responses, omission and substitution errors, and the mean time of correct responses (reaction time) in both visual and auditory tests were collected. The analysis was performed between the two groups and within each group using SPSS PASW Statistics, Version 18. A p-value less than 0.05 was considered statistically significant. Results: All the participants were initiated on Sertraline 50 mg/day, and the dosage was increased to 100 mg/day after two weeks. Fourteen patients received Pasipy (Passion Flower) 15 drops three times daily and 16 received placebo concurrently. Inter-group comparison showed no significant difference in any of the test items between groups, while a significant decline was observed in auditory omission errors in the passion flower group after one month of treatment in the intra-group analysis. Conclusion: This study noted that passion flower might be suitable as an add-on in the treatment of generalized anxiety disorder with low side effects. Further studies with longer duration are recommended to confirm the results of this study. PMID:27928252

  3. Effects of Passion Flower Extract, as an Add-On Treatment to Sertraline, on Reaction Time in Patients ‎with Generalized Anxiety Disorder: A Double-Blind Placebo-Controlled Study.

    PubMed

    Nojoumi, Mandana; Ghaeli, Padideh; Salimi, Samrand; Sharifi, Ali; Raisi, Firoozeh

    2016-07-01

    Objective: Because of the functional impairment caused by generalized anxiety disorder and the cognitive side effects of many anti-anxiety agents, this study aimed to evaluate the influence of a standardized Passion flower extract on reaction time in patients with generalized anxiety disorder. Method: Thirty patients aged 18 to 50 years, who were diagnosed with generalized anxiety disorder and fulfilled the study criteria, entered this double-blind placebo-controlled study. Reaction time was measured at baseline and after one month of treatment using computerized software. Correct responses, omission and substitution errors, and the mean time of correct responses (reaction time) in both visual and auditory tests were collected. The analysis was performed between the two groups and within each group using SPSS PASW Statistics, Version 18. A p-value less than 0.05 was considered statistically significant. Results: All the participants were initiated on Sertraline 50 mg/day, and the dosage was increased to 100 mg/day after two weeks. Fourteen patients received Pasipy (Passion Flower) 15 drops three times daily and 16 received placebo concurrently. Inter-group comparison showed no significant difference in any of the test items between groups, while a significant decline was observed in auditory omission errors in the passion flower group after one month of treatment in the intra-group analysis. Conclusion: This study noted that passion flower might be suitable as an add-on in the treatment of generalized anxiety disorder with low side effects. Further studies with longer duration are recommended to confirm the results of this study.

  4. Interference with olfactory memory by visual and verbal tasks.

    PubMed

    Annett, J M; Cook, N M; Leslie, J C

    1995-06-01

    It has been claimed that olfactory memory is distinct from memory in other modalities. This study investigated the effectiveness of visual and verbal tasks in interfering with olfactory memory and included methodological changes from other recent studies. Subjects were allocated to one of four experimental conditions involving interference tasks [no interference task; visual task; verbal task; visual-plus-verbal task] and presented with 15 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Recognition and recall performance both showed interference effects from the visual and verbal tasks, but there was no effect of time of testing. While the results may be accommodated within a dual coding framework, further work is indicated to resolve theoretical issues relating to task complexity.

  5. Orienting attention in visual space by nociceptive stimuli: investigation with a temporal order judgment task based on the adaptive PSI method.

    PubMed

    Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry

    2017-07-01

    Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated if, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, to facilitate the processing of visual stimuli occurring on the same side of space as the limb on which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to either hand in either side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally, on a single hand, or bilaterally, on both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred on the same side of space as the hand on which a unilateral nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms, but importantly, biases increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
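
    For readers unfamiliar with how such judgments are summarised, the sketch below fits a cumulative Gaussian to hypothetical "right-first" proportions as a function of stimulus onset asynchrony to estimate the point of subjective simultaneity (PSS). The study itself used the adaptive PSI procedure, which this simple post-hoc fit does not reproduce; the data values are invented.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Hypothetical TOJ data: SOA (ms, right-minus-left onset) and P("right first")
        soa = np.array([-90, -60, -30, 0, 30, 60, 90], dtype=float)
        p_right_first = np.array([0.08, 0.18, 0.35, 0.55, 0.74, 0.90, 0.97])

        def cum_gauss(x, pss, sigma):
            return norm.cdf(x, loc=pss, scale=sigma)

        (pss, sigma), _ = curve_fit(cum_gauss, soa, p_right_first, p0=(0.0, 40.0))
        print(f"PSS = {pss:.1f} ms, JND ~ {0.675 * sigma:.1f} ms")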

  6. Enhancing online timeline visualizations with events and images

    NASA Astrophysics Data System (ADS)

    Pandya, Abhishek; Mulye, Aniket; Teoh, Soon Tee

    2011-01-01

    The use of timelines to visualize time-series data is one of the most intuitive and common methods, appearing in widely used applications such as stock market data visualization and the tracking of election candidates' poll data over time. While useful, these timeline visualizations lack contextual information about the events that relate to, or cause, changes in the data. We have developed a system that enhances timeline visualization with a display of relevant news events and their corresponding images, so that users can not only see the changes in the data but also understand the reasons behind them. We have also conducted a user study to test the effectiveness of our ideas.
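
    A minimal Matplotlib sketch of the idea of pairing a timeline with event annotations; the series and event labels are invented placeholders, not data from the described system.

        import numpy as np
        import matplotlib.pyplot as plt

        # Hypothetical daily series (e.g., a stock index) plus dated events to annotate
        days = np.arange(120)
        value = 100 + np.cumsum(np.random.randn(120))
        events = {30: "Earnings report", 75: "Product recall"}

        fig, ax = plt.subplots(figsize=(8, 3))
        ax.plot(days, value, lw=1.5)
        for day, label in events.items():
            ax.axvline(day, color="gray", ls="--", lw=0.8)
            ax.annotate(label, xy=(day, value[day]), xytext=(day + 2, value[day] + 3),
                        arrowprops=dict(arrowstyle="->"), fontsize=8)
        ax.set_xlabel("Day")
        ax.set_ylabel("Value")
        plt.tight_layout()
        plt.show()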

  7. Investigation of visual fatigue/discomfort generated by S3D video using eye-tracking data

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2013-03-01

    Stereoscopic 3D is undoubtedly one of the most attractive types of content. It has been deployed intensively during the last decade through movies and games. Among the advantages of 3D are the strong involvement of viewers and the increased feeling of presence. However, the health effects that can be generated by 3D are still not precisely known. For example, visual fatigue and visual discomfort are among the symptoms that an observer may feel. In this paper, we propose an investigation of visual fatigue generated by 3D video watching, with the help of eye-tracking. On one side, a questionnaire covering the most frequent symptoms linked with 3D is used in order to measure their variation over time. On the other side, visual characteristics such as pupil diameter, eye movements (fixations and saccades) and eye blinking have been explored thanks to data provided by the eye-tracker. The statistical analysis showed an important link of blinking duration and number of saccades with visual fatigue, while pupil diameter and fixations are not precise enough and are highly dependent on content. Finally, time and content play an important role in the growth of visual fatigue due to 3D watching.

  8. Phenomenological reliving and visual imagery during autobiographical recall in Alzheimer’s disease

    PubMed Central

    El Haj, Mohamad; Kapogiannis, Dimitrios; Antoine, Pascal

    2016-01-01

    Multiple studies have shown compromise of autobiographical memory and phenomenological reliving in Alzheimer’s disease (AD). We investigated various phenomenological features of autobiographical memory to determine their relative vulnerability in AD. To this aim, participants with early AD and cognitively normal older adult controls were asked to retrieve an autobiographical event and rate on a 5-point scale metacognitive judgments (i.e., reliving, back in time, remembering, and realness), component processes (i.e., visual imagery, auditory imagery, language, and emotion), narrative properties (i.e., rehearsal and importance), and spatiotemporal specificity (i.e., spatial details and temporal details). AD participants showed lower general autobiographical recall than controls, and poorer reliving, travel in time, remembering, realness, visual imagery, auditory imagery, language, rehearsal, and spatial detail – a decrease that was especially pronounced for visual imagery. Yet, AD participants showed high rating for emotion and importance. Early AD seems to compromise many phenomenological features, especially visual imagery, but also seems to preserve some other features. PMID:27003216

  9. Phenomenological Reliving and Visual Imagery During Autobiographical Recall in Alzheimer's Disease.

    PubMed

    El Haj, Mohamad; Kapogiannis, Dimitrios; Antoine, Pascal

    2016-03-16

    Multiple studies have shown compromise of autobiographical memory and phenomenological reliving in Alzheimer's disease (AD). We investigated various phenomenological features of autobiographical memory to determine their relative vulnerability in AD. To this aim, participants with early AD and cognitively normal older adult controls were asked to retrieve an autobiographical event and rate on a five-point scale metacognitive judgments (i.e., reliving, back in time, remembering, and realness), component processes (i.e., visual imagery, auditory imagery, language, and emotion), narrative properties (i.e., rehearsal and importance), and spatiotemporal specificity (i.e., spatial details and temporal details). AD participants showed lower general autobiographical recall than controls, and poorer reliving, travel in time, remembering, realness, visual imagery, auditory imagery, language, rehearsal, and spatial detail-a decrease that was especially pronounced for visual imagery. Yet, AD participants showed high rating for emotion and importance. Early AD seems to compromise many phenomenological features, especially visual imagery, but also seems to preserve some other features.

  10. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network.
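
    The core layout idea can be sketched with NetworkX's force-directed (spring) layout on a toy transaction graph; the addresses and amounts below are hypothetical, and the paper's system renders far larger graphs in real time on dedicated infrastructure.

        import networkx as nx
        import matplotlib.pyplot as plt

        # Hypothetical transaction edges: (sender address, receiver address, amount)
        edges = [("A", "B", 1.2), ("B", "C", 0.4), ("B", "D", 0.7),
                 ("C", "A", 0.1), ("D", "E", 0.6), ("E", "B", 0.5)]

        G = nx.DiGraph()
        G.add_weighted_edges_from(edges)

        pos = nx.spring_layout(G, seed=42)       # force-directed placement
        nx.draw_networkx(G, pos, node_size=300, arrows=True)
        plt.axis("off")
        plt.show()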

  11. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715

  12. "Einstein's Playground": An Interactive Planetarium Show on Special Relativity

    ERIC Educational Resources Information Center

    Sherin, Zachary; Tan, Philip; Fairweather, Heather; Kortemeyer, Gerd

    2017-01-01

    The understanding of many aspects of astronomy is closely linked with relativity and the finite speed of light, yet relativity is generally not discussed in great detail during planetarium shows for the general public. One reason may be the difficulty to visualize these phenomena in a way that is appropriate for planetariums; another may be their…

  13. Hemispheric differences in visual search of simple line arrays.

    PubMed

    Polich, J; DeFrancesco, D P; Garon, J F; Cohen, W

    1990-01-01

    The effects of perceptual organization on hemispheric visual-information processing were assessed with stimulus arrays composed of short lines arranged in columns. A visual-search task was employed in which subjects judged whether all the lines were vertical (same) or whether a single horizontal line was present (different). Stimulus-display organization was manipulated in two experiments by variation of line density, linear organization, and array size. In general, left-visual-field/right-hemisphere presentations demonstrated more rapid and accurate responses when the display was perceived as a whole. Right-visual-field/left-hemisphere superiorities were observed when the display organization coerced assessment of individual array elements because the physical qualities of the stimulus did not effect a gestalt whole. Response times increased somewhat with increases in array size, although these effects interacted with other stimulus variables. Error rates tended to follow the reaction-time patterns. The results suggest that laterality differences in visual search are governed by stimulus properties which contribute to, or inhibit, the perception of a display as a gestalt. The implications of these findings for theoretical interpretations of hemispheric specialization are discussed.

  14. 41 CFR 51-8.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... request. Such copies can take the form of paper copy, audio-visual materials, or machine readable materials (e.g., magnetic tape or disk), among others. (g) The term search includes all time spent looking... time spent resolving general legal or policy issues regarding the application of exemptions. [54 FR...

  15. 41 CFR 51-8.3 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... request. Such copies can take the form of paper copy, audio-visual materials, or machine readable materials (e.g., magnetic tape or disk), among others. (g) The term search includes all time spent looking... time spent resolving general legal or policy issues regarding the application of exemptions. [54 FR...

  16. [Slowing down the flow of facial information enhances facial scanning in children with autism spectrum disorders: A pilot eye tracking study].

    PubMed

    Charrier, A; Tardif, C; Gepner, B

    2017-02-01

    Face and gaze avoidance are among the most characteristic and salient symptoms of autism spectrum disorders (ASD). Studies using eye tracking have highlighted early and lifelong ASD-specific abnormalities in attention to faces, such as decreased attention to internal facial features. These specificities could be partly explained by disorders in the perception and integration of rapid and complex information, such as that conveyed by facial movements and, more broadly, by the biological and physical environment. Therefore, we wished to test whether slowing down facial dynamics may improve the way children with ASD attend to a face. We used an eye tracking method to examine gaze patterns of children with ASD aged 3 to 8 (n=23) and TD controls (n=29) while viewing the face of a speaker telling a story. The story was divided into 6 sequences that were randomly displayed at 3 different speeds, i.e. a real-time speed (RT), a slow speed (S70=70% of RT speed), and a very slow speed (S50=50% of RT speed). S70 and S50 were displayed thanks to software called Logiral™, aimed at slowing down visual and auditory stimuli simultaneously and without tone distortion. The visual scene was divided into four regions of interest (ROI): eyes region; mouth region; whole face region; outside the face region. The total time, number and mean duration of visual fixations on the whole visual scene and the four ROI were compared between and within the two groups. Compared to TD children, children with ASD spent significantly less time attending to the visual scenes and, when they looked at the scene, they spent less time scanning the speaker's face in general and her mouth in particular, and more time looking outside the facial area. Within the ASD group, the mean duration of fixation increased on the whole scene, and particularly on the mouth area, in S50 compared to RT. Children with mild autism spent more time looking at the face than the two other groups of ASD children, and spent more time attending to the face and mouth, as well as showing longer mean durations of visual fixation on the mouth and eyes, at slow speeds (S50 and/or S70) than at the RT speed. Slowing down facial dynamics enhances looking time on the face, and particularly on the mouth and/or eyes, in a group of 23 children with ASD and particularly in a small subgroup with mild autism. Given the crucial role of reading the eyes for emotional processing and that of lip-reading for language processing, our present result and other converging ones could pave the way for novel socio-emotional and verbal rehabilitation methods for the autistic population. Further studies should investigate whether increased attention to the face, and particularly the eyes and mouth, is correlated with emotional/social and/or verbal/language improvements. Copyright © 2016 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.

  17. Effect of Cognitive Demand on Functional Visual Field Performance in Senior Drivers with Glaucoma.

    PubMed

    Gangeddula, Viswa; Ranchet, Maud; Akinwuntan, Abiodun E; Bollinger, Kathryn; Devos, Hannes

    2017-01-01

    Purpose: To investigate the effect of cognitive demand on functional visual field performance in drivers with glaucoma. Method: This study included 20 drivers with open-angle glaucoma and 13 age- and sex-matched controls. Visual field performance was evaluated under different degrees of cognitive demand: a static visual field condition (C1), dynamic visual field condition (C2), and dynamic visual field condition with active driving (C3) using an interactive, desktop driving simulator. The number of correct responses (accuracy) and response times on the visual field task were compared between groups and between conditions using Kruskal-Wallis tests. General linear models were employed to compare cognitive workload, recorded in real-time through pupillometry, between groups and conditions. Results: Adding cognitive demand (C2 and C3) to the static visual field test (C1) adversely affected accuracy and response times, in both groups ( p < 0.05). However, drivers with glaucoma performed worse than did control drivers when the static condition changed to a dynamic condition [C2 vs. C1 accuracy; glaucoma: median difference (Q1-Q3) 3 (2-6.50) vs. 2 (0.50-2.50); p = 0.05] and to a dynamic condition with active driving [C3 vs. C1 accuracy; glaucoma: 2 (2-6) vs. 1 (0.50-2); p = 0.02]. Overall, drivers with glaucoma exhibited greater cognitive workload than controls ( p = 0.02). Conclusion: Cognitive demand disproportionately affects functional visual field performance in drivers with glaucoma. Our results may inform the development of a performance-based visual field test for drivers with glaucoma.
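
    A minimal sketch of the reported accuracy comparison across the three display conditions using a Kruskal-Wallis test; the response counts below are invented for illustration and do not reflect the study's data.

        from scipy.stats import kruskal

        # Hypothetical correct-response counts under the three display conditions
        c1_static  = [18, 19, 17, 20, 18, 19]
        c2_dynamic = [15, 16, 14, 17, 15, 16]
        c3_driving = [13, 14, 12, 15, 13, 14]

        stat, p = kruskal(c1_static, c2_dynamic, c3_driving)
        print(f"H = {stat:.2f}, p = {p:.4f}")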

  18. Circadian timed episodic-like memory - a bee knows what to do when, and also where.

    PubMed

    Pahl, Mario; Zhu, Hong; Pix, Waltraud; Tautz, Juergen; Zhang, Shaowu

    2007-10-01

    This study investigates how the colour, shape and location of patterns could be memorized within a time frame. Bees were trained to visit two Y-mazes, one of which presented yellow vertical (rewarded) versus horizontal (non-rewarded) gratings at one site in the morning, while another presented blue horizontal (rewarded) versus vertical (non-rewarded) gratings at another site in the afternoon. The bees could perform well in the learning tests and various transfer tests, in which (i) all contextual cues from the learning test were present; (ii) the colour cues of the visual patterns were removed, but the location cue, the orientation of the visual patterns and the temporal cue still existed; (iii) the location cue was removed, but other contextual cues, i.e. the colour and orientation of the visual patterns and the temporal cue still existed; (iv) the location cue and the orientation cue of the visual patterns were removed, but the colour cue and temporal cue still existed; (v) the location cue, and the colour cue of the visual patterns were removed, but the orientation cue and the temporal cue still existed. The results reveal that the honeybee can recall the memory of the correct visual patterns by using spatial and/or temporal information. The relative importance of different contextual cues is compared and discussed. The bees' ability to integrate elements of circadian time, place and visual stimuli is akin to episodic-like memory; we have therefore named this kind of memory circadian timed episodic-like memory.

  19. To wear or not to wear: current contact lens use in the Royal Canadian Mounted Police.

    PubMed

    Wells, G A; Brown, J J; Casson, E J; Easterbrook, M; Trottier, A J

    1997-04-01

    The Canadian Ophthalmological Society was asked by the Royal Canadian Mounted Police (RCMP) and the Canadian Human Rights Commission to render an opinion on the acceptability of contact lenses as a reasonable accommodation to the uncorrected visual acuity standard. Survey by mailed questionnaire. Canada. All RCMP general duty constables with a visual acuity code of V3, V4, V5 or V6 (n = 348) and a random sample of approximately 25% of the constables with an acuity code of V2 (n = 809). Of the 1040 questionnaires returned, 1037 were usable (final response rate 89.6%). Of the 1037 respondents, 316 were in the V3 to V6 group and 721 were in the V2 group. Reported frequency of problems with spectacles or contact lenses, weighted according to sampling fraction. A total of 934 respondents indicated that they used some form of visual acuity correction while on duty; of the 934, 360 reported that they wore contact lenses at least some of the time. Approximately 75% of the spectacle wearers reported having to remove their spectacles because of fogging or rain. Although contact lens dislodgement or fogging (21.2%) was less frequent than spectacle dislodgement (59.2%), 35.4% of the contact lens wearers reported that they were unable to wear their lenses because of irritation on at least one occasion in the previous 2 years; the median length of time was 3.14 days. When the additional amount of time due to other causes is factored in, it is clear that contact lens users wear spectacles for substantial periods while on duty. Not only are RCMP general duty constables who usually wear contact lenses likely to have to wear spectacles at some time, but it is also possible that they will have to remove their spectacles and function in an uncorrected state in critical situations. Thus, altering the current standard to allow the use of contact lenses as a reasonable accommodation would not ensure effective and safe job performance.

  20. Exploratory visualization of earth science data in a Semantic Web context

    NASA Astrophysics Data System (ADS)

    Ma, X.; Fox, P. A.

    2012-12-01

    Earth science data are increasingly unlocked from their local 'safes' and shared online with the global science community as well as the average citizen. The European Union (EU)-funded project OneGeology-Europe (1G-E, www.onegeology-europe.eu) is a typical project that promotes works in that direction. The 1G-E web portal provides easy access to distributed geological data resources across participating EU member states. Similar projects can also be found in other countries or regions, such as the geoscience information network USGIN (www.usgin.org) in United States, the groundwater information network GIN-RIES (www.gw-info.net) in Canada and the earth science infrastructure AuScope (www.auscope.org.au) in Australia. While data are increasingly made available online, we currently face a shortage of tools and services that support information and knowledge discovery with such data. One reason is that earth science data are recorded in professional language and terms, and people without background knowledge cannot understand their meanings well. The Semantic Web provides a new context to help computers as well as users to better understand meanings of data and conduct applications. In this study we aim to chain up Semantic Web technologies (e.g., vocabularies/ontologies and reasoning), data visualization (e.g., an animation underpinned by an ontology) and online earth science data (e.g., available as Web Map Service) to develop functions for information and knowledge discovery. We carried out a case study with data of the 1G-E project. We set up an ontology of geological time scale using the encoding languages of SKOS (Simple Knowledge Organization System) and OWL (Web Ontology Language) from W3C (World Wide Web Consortium, www.w3.org). Then we developed a Flash animation of geological time scale by using the ActionScript language. The animation is underpinned by the ontology and the interrelationships between concepts of geological time scale are visualized in the animation. We linked the animation and the ontology to the online geological data of 1G-E project and developed interactive applications. The animation was used to show legends of rock age layers in geological maps dynamically. In turn, these legends were used as control panels to filter out and generalize geospatial features of certain rock ages on map layers. We tested the functions with maps of various EU member states. As a part of the initial results, legends for rock age layers of EU individual national maps were generated respectively, and the functions for filtering and generalization were examined with the map of United Kingdom. Though new challenges are rising in the tests, like those caused by synonyms (e.g., 'Lower Cambrian' and 'Terreneuvian'), the initial results achieved the designed goals of information and knowledge discovery by using the ontology-underpinned animation. This study shows that (1) visualization lowers the barrier of ontologies, (2) integrating ontologies and visualization adds value to online earth science data services, and (3) exploratory visualization supports the procedure of data processing as well as the display of results.
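
    A small rdflib sketch of the kind of SKOS encoding described above, including a synonym handled with an alternative label; the namespace and the two concepts are illustrative placeholders and do not reproduce the project's actual vocabulary.

        from rdflib import Graph, Namespace, Literal
        from rdflib.namespace import RDF, SKOS

        GTS = Namespace("http://example.org/geologic-time/")   # hypothetical namespace
        g = Graph()
        g.bind("skos", SKOS)

        g.add((GTS.Cambrian, RDF.type, SKOS.Concept))
        g.add((GTS.Cambrian, SKOS.prefLabel, Literal("Cambrian", lang="en")))
        g.add((GTS.Terreneuvian, RDF.type, SKOS.Concept))
        g.add((GTS.Terreneuvian, SKOS.prefLabel, Literal("Terreneuvian", lang="en")))
        g.add((GTS.Terreneuvian, SKOS.broader, GTS.Cambrian))
        # "Lower Cambrian" kept as an alternative label for the same concept (synonym handling)
        g.add((GTS.Terreneuvian, SKOS.altLabel, Literal("Lower Cambrian", lang="en")))

        print(g.serialize(format="turtle"))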

  1. Face-specific and domain-general visual processing deficits in children with developmental prosopagnosia.

    PubMed

    Dalrymple, Kirsten A; Elison, Jed T; Duchaine, Brad

    2017-02-01

    Evidence suggests that face and object recognition depend on distinct neural circuitry within the visual system. Work with adults with developmental prosopagnosia (DP) demonstrates that some individuals have preserved object recognition despite severe face recognition deficits. This face selectivity in adults with DP indicates that face- and object-processing systems can develop independently, but it is unclear at what point in development these mechanisms are separable. Determining when individuals with DP first show dissociations between faces and objects is one means to address this question. In the current study, we investigated face and object processing in six children with DP (5-12-years-old). Each child was assessed with one face perception test, two different face memory tests, and two object memory tests that were matched to the face memory tests in format and difficulty. Scores from the DP children on the matched face and object tasks were compared to within-subject data from age-matched controls. Four of the six DP children, including the 5-year-old, showed evidence of face-specific deficits, while one child appeared to have more general visual-processing deficits. The remaining child had inconsistent results. The presence of face-specific deficits in children with DP suggests that face and object perception depend on dissociable processes in childhood.

  2. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    PubMed Central

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  3. Effects of auditory and visual modalities in recall of words.

    PubMed

    Gadzella, B M; Whitehead, D A

    1975-02-01

    Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of data showed the auditory modality was superior to visual (pictures) ones but was not significantly different from visual (printed words) modality. In visual modalities, printed words were superior to colored pictures. Generally, conditions with multiple modes of representation of stimuli were significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.

  4. Alternative Audio Solution to Enhance Immersion in Deployable Synthetic Environments

    DTIC Science & Technology

    2003-09-01

    sense of presence. For example, the musical score of a movie increases the viewers’ emotional involvement in a cinematic feature. The character...photo-realistic way can make mental immersion difficult, because any flaw in the realism will spoil the effect [SHER 03].” One way to overcome spoiling...the visual realism is to reinforce visual clues with those from other modalities. 3. Aural Modality a. General Aural displays can be

  5. False memory for context and true memory for context similarly activate the parahippocampal cortex.

    PubMed

    Karanian, Jessica M; Slotnick, Scott D

    2017-06-01

    The role of the parahippocampal cortex is currently a topic of debate. One view posits that the parahippocampal cortex specifically processes spatial layouts and sensory details (i.e., the visual-spatial processing view). In contrast, the other view posits that the parahippocampal cortex more generally processes spatial and non-spatial contexts (i.e., the general contextual processing view). A large number of studies have found that true memories activate the parahippocampal cortex to a greater degree than false memories, which would appear to support the visual-spatial processing view as true memories are typically associated with greater visual-spatial detail than false memories. However, in previous studies, contextual details were also greater for true memories than false memories. Thus, such differential activity in the parahippocampal cortex may have reflected differences in contextual processing, which would challenge the visual-spatial processing view. In the present functional magnetic resonance imaging (fMRI) study, we employed a source memory paradigm to investigate the functional role of the parahippocampal cortex during true memory and false memory for contextual information to distinguish between the visual-spatial processing view and the general contextual processing view. During encoding, abstract shapes were presented to the left or right of fixation. During retrieval, old shapes were presented at fixation and participants indicated whether each shape was previously on the "left" or "right" followed by an "unsure", "sure", or "very sure" confidence rating. The conjunction of confident true memories for context and confident false memories for context produced activity in the parahippocampal cortex, which indicates that this region is associated with contextual processing. Furthermore, the direct contrast of true memory and false memory produced activity in the visual cortex but did not produce activity in the parahippocampal cortex. The present evidence suggests that the parahippocampal cortex is associated with general contextual processing rather than only being associated with visual-spatial processing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. The Flowering of Identity: Tracing the History of Cuba through the Visual Arts

    ERIC Educational Resources Information Center

    Smith, Noel

    2007-01-01

    Teaching history through the visual arts is one way of bringing the past into the present. In Cuba, the visual arts and architecture have reflected the country's "flowering of identity" through time, as a multi-ethnic population has grown to recognize its own distinct history, values and attributes, and Cuban artists have portrayed the…

  7. Two Speed Factors of Visual Recognition Independently Correlated with Fluid Intelligence

    PubMed Central

    Tachibana, Ryosuke; Namba, Yuri; Noguchi, Yasuki

    2014-01-01

    Growing evidence indicates a moderate but significant relationship between processing speed in visuo-cognitive tasks and general intelligence. On the other hand, findings from neuroscience proposed that the primate visual system consists of two major pathways, the ventral pathway for objects recognition and the dorsal pathway for spatial processing and attentive analysis. Previous studies seeking for visuo-cognitive factors of human intelligence indicated a significant correlation between fluid intelligence and the inspection time (IT), an index for a speed of object recognition performed in the ventral pathway. We thus presently examined a possibility that neural processing speed in the dorsal pathway also represented a factor of intelligence. Specifically, we used the mental rotation (MR) task, a popular psychometric measure for mental speed of spatial processing in the dorsal pathway. We found that the speed of MR was significantly correlated with intelligence scores, while it had no correlation with one’s IT (recognition speed of visual objects). Our results support the new possibility that intelligence could be explained by two types of mental speed, one related to object recognition (IT) and another for manipulation of mental images (MR). PMID:24825574

  8. Personal factors influencing the visual reaction time of pedestrians to detect turn indicators in the presence of Daytime Running Lamps.

    PubMed

    Peña-García, Antonio; de Oña, Rocío; García, Pedro Antonio; de Oña, Juan

    2016-12-01

    Daytime running lamps (DRL) on vehicles have proven to be an effective measure to prevent accidents during the daytime, particularly when pedestrians and cyclists are involved. However, there are negative interactions of DRL with other functions in automotive lighting, such as delays in pedestrians' visual reaction time (VRT) when turn indicators are activated in the presence of DRL. These negative interactions need to be reduced. This work analyses the influence of variables inherent to pedestrians, such as height, gender and visual defects, on the VRT using a classification and regression tree as an exploratory analysis and a generalized linear model to validate the results. Some pedestrian characteristics, such as gender, alone or combined with the DRL colour, and visual defects, were found to have a statistically significant influence on VRT and, hence, on traffic safety. These results and conclusions concerning the interaction between pedestrians and vehicles are presented and discussed. Practitioner Summary: Visual interactions of vehicle daytime running lamps (DRL) with other functions in automotive lighting, such as turn indicators, have an important impact on a vehicle's conspicuity for pedestrians. Depending on several factors inherent to pedestrians, the visual reaction time (VRT) can be remarkably delayed, which has implications in traffic safety.
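
    A hedged sketch of a generalized linear model relating VRT to pedestrian factors and DRL colour using statsmodels; the data frame holds invented values, the default Gaussian family is used, and the variable coding need not match the study's.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Invented observations: visual reaction time (s) with pedestrian and DRL factors
        df = pd.DataFrame({
            "vrt":     [0.62, 0.71, 0.58, 0.80, 0.66, 0.75, 0.60, 0.78, 0.64, 0.73, 0.59, 0.81],
            "gender":  ["m", "f", "m", "f", "m", "f", "m", "f", "m", "f", "m", "f"],
            "defect":  [0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1],
            "drl_col": ["white", "white", "amber", "amber", "white", "amber",
                        "white", "amber", "white", "white", "amber", "amber"],
        })

        model = smf.glm("vrt ~ gender + drl_col + defect", data=df).fit()
        print(model.summary())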

  9. Retinotopic maps and foveal suppression in the visual cortex of amblyopic adults

    PubMed Central

    Conner, Ian P; Odom, J Vernon; Schwartz, Terry L; Mendola, Janine D

    2007-01-01

    Amblyopia is a developmental visual disorder associated with loss of monocular acuity and sensitivity as well as profound alterations in binocular integration. Abnormal connections in visual cortex are known to underlie this loss, but the extent to which these abnormalities are regionally or retinotopically specific has not been fully determined. This functional magnetic resonance imaging (fMRI) study compared the retinotopic maps in visual cortex produced by each individual eye in 19 adults (7 esotropic strabismics, 6 anisometropes and 6 controls). In our standard viewing condition, the non-tested eye viewed a dichoptic homogeneous mid-level grey stimulus, thereby permitting some degree of binocular interaction. Regions-of-interest analysis was performed for extrafoveal V1, extrafoveal V2 and the foveal representation at the occipital pole. In general, the blood oxygenation level-dependent (BOLD) signal was reduced for the amblyopic eye. At the occipital pole, population receptive fields were shifted to represent more parafoveal locations for the amblyopic eye, compared with the fellow eye, in some subjects. Interestingly, occluding the fellow eye caused an expanded foveal representation for the amblyopic eye in one early–onset strabismic subject with binocular suppression, indicating real-time cortical remapping. In addition, a few subjects actually showed increased activity in parietal and temporal cortex when viewing with the amblyopic eye. We conclude that, even in a heterogeneous population, abnormal early visual experience commonly leads to regionally specific cortical adaptations. PMID:17627994

  10. Study of Auditory, Visual Reaction Time and Glycemic Control (HBA1C) in Chronic Type II Diabetes Mellitus.

    PubMed

    M, Muhil; Sembian, Umapathy; Babitha; N, Ethiya; K, Muthuselvi

    2014-09-01

    Diabetes mellitus is a disease of insulin deficiency that leads to micro- and macrovascular disorders. Neuropathy is one of the major complications of chronic uncontrolled diabetes and affects reaction time. The aim was to study the correlation between glycosylated haemoglobin (HbA1c) and auditory and visual reaction times in patients aged 40-60 years with chronic type II diabetes of more than 10 years' duration on oral hypoglycemic drugs, in two groups (n = 100 in each group, both males and females), compared within the study groups and with an age-matched control group (n = 100). Glycosylated HbA1c was measured by a particle-enhanced immunoturbidimetric test. Auditory and visual reaction times (ART, VRT) were measured with a PC 1000 reaction timer for the control and study groups, i.e. Group I, chronic type II DM for >10 years with HbA1c < 7.0, and Group II, chronic type II DM for >10 years with HbA1c > 7.0, i.e. impaired glycemic control. Exclusion criteria were auditory and visual disturbances, alcoholism and smoking. Statistical analysis was performed using one-way ANOVA in SPSS 21. Both study groups had longer ART and VRT than controls. Among the study groups, Group II (DM with HbA1c > 7) had longer auditory and visual reaction times than Group I, which was statistically significant (p < 0.05). Impairment of sensorimotor function of the peripheral nervous system is greater in chronic diabetics with poorer glycemic control (HbA1c > 7), who showed longer auditory and visual reaction times than chronic diabetics with HbA1c < 7. The severity of peripheral neuropathy in type II diabetics could be due to elevated HbA1c.
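
    A minimal sketch of the one-way ANOVA comparison named in the abstract, using invented reaction-time values for the control and two diabetic groups.

        from scipy.stats import f_oneway

        # Hypothetical visual reaction times (ms) for controls and the two diabetic groups
        controls     = [215, 220, 212, 230, 225, 218]
        dm_hba1c_lt7 = [245, 250, 240, 255, 248, 252]
        dm_hba1c_gt7 = [280, 290, 275, 295, 285, 288]

        F, p = f_oneway(controls, dm_hba1c_lt7, dm_hba1c_gt7)
        print(f"F = {F:.2f}, p = {p:.4f}")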

  11. General Aviation Flight Test of Advanced Operations Enabled by Synthetic Vision

    NASA Technical Reports Server (NTRS)

    Glaab, Louis J.; Hughhes, Monica F.; Parrish, Russell V.; Takallu, Mohammad A.

    2014-01-01

    A flight test was performed to compare the use of three advanced primary flight and navigation display concepts to a baseline, round-dial concept to assess the potential for advanced operations. The displays were evaluated during visual and instrument approach procedures including an advanced instrument approach resembling a visual airport traffic pattern. Nineteen pilots from three pilot groups, reflecting the diverse piloting skills of the General Aviation pilot population, served as evaluation subjects. The experiment had two thrusts: 1) an examination of the capabilities of low-time (i.e., <400 hours), non-instrument-rated pilots to perform nominal instrument approaches, and 2) an exploration of potential advanced Visual Meteorological Conditions (VMC)-like approaches in Instrument Meteorological Conditions (IMC). Within this context, advanced display concepts are considered to include integrated navigation and primary flight displays with either aircraft attitude flight directors or Highway In The Sky (HITS) guidance with and without a synthetic depiction of the external visuals (i.e., synthetic vision). Relative to the first thrust, the results indicate that, using an advanced display concept as tested herein, low-time, non-instrument-rated pilots can exhibit flight-technical performance, subjective workload and situation awareness ratings as good as or better than high-time Instrument Flight Rules (IFR)-rated pilots using Baseline Round Dials for a nominal IMC approach. For the second thrust, the results indicate that advanced VMC-like approaches in IMC are feasible for all pilot groups tested, but only with the Synthetic Vision System (SVS) advanced display concept.

  12. Flood Damage and Loss Estimation for Iowa on Web-based Systems using HAZUS

    NASA Astrophysics Data System (ADS)

    Yildirim, E.; Sermet, M. Y.; Demir, I.

    2016-12-01

    The importance of decision support systems for flood emergency response and loss estimation increases with the social and economic impacts of flooding. Several software systems are available to researchers and decision makers for estimating flood damage. HAZUS-MH, developed by FEMA (Federal Emergency Management Agency), is one of the most widely used desktop programs for estimating the economic losses and social impacts of disasters such as earthquakes, hurricanes and flooding (riverine and coastal). HAZUS implements a standardized loss estimation methodology through a geographic information system (GIS) and contains structural, demographic, and vehicle information across the United States. It therefore allows decision makers to understand and predict possible casualties and damage from floods by running flood simulations through a GIS application. However, it does not represent real-time conditions because it relies on static data. To close this gap, this research presents an overview of a web-based infrastructure coupling HAZUS with real-time data provided by IFIS (Iowa Flood Information System). IFIS, developed by the Iowa Flood Center, is a one-stop web platform for accessing community-based flood conditions, forecasts, visualizations, inundation maps and flood-related data, information and applications. A large volume of real-time observational data from a variety of sensors and remote-sensing resources (radars, rain gauges, stream sensors, etc.) and flood inundation models is staged in a user-friendly map environment that is accessible to the general public. By providing cross-sectional analyses between HAZUS-MH and IFIS datasets, emergency managers can evaluate flood damage during flood events more easily and in real time. By matching data from the HAZUS-MH census tract layer with IFC gauges, the economic effects of flooding can be observed and evaluated by decision makers. The system will also provide visualization of the data, using augmented reality for see-through displays; emergency management experts can take advantage of this visualization mode to manage flood response activities in real time. A forecast system developed by the Iowa Flood Center will also be used to predict the probable damage of a flood.
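
    As a rough illustration of the coupling idea, the hypothetical sketch below joins a static exposure table (per census tract) to real-time gauge stages; all column names, identifiers and values are invented for illustration and do not reflect the actual HAZUS-MH or IFIS schemas.

      # Hypothetical HAZUS/IFIS-style join: static per-tract exposure data plus
      # real-time gauge stages, used to screen tracts as conditions change.
      import pandas as pd

      hazus = pd.DataFrame({                      # static exposure layer (invented)
          "tract_id": ["19103-01", "19103-02", "19113-05"],
          "building_value_usd": [42_000_000, 18_500_000, 60_200_000],
          "gauge_id": ["IFC-001", "IFC-001", "IFC-007"],
      })
      stages = pd.DataFrame({                     # real-time gauge readings (invented)
          "gauge_id": ["IFC-001", "IFC-007"],
          "stage_ft": [18.2, 9.1],
          "flood_stage_ft": [17.0, 12.0],
      })

      merged = hazus.merge(stages, on="gauge_id")
      merged["above_flood_stage"] = merged["stage_ft"] > merged["flood_stage_ft"]

      # Rank tracts whose nearest gauge is above flood stage by exposed value,
      # a crude real-time screening metric for emergency managers.
      at_risk = merged[merged["above_flood_stage"]].sort_values(
          "building_value_usd", ascending=False)
      print(at_risk[["tract_id", "gauge_id", "stage_ft", "building_value_usd"]])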

  13. Challenging Cognitive Control by Mirrored Stimuli in Working Memory Matching

    PubMed Central

    Wirth, Maria; Gaschler, Robert

    2017-01-01

    Cognitive conflict has often been investigated by placing automatic processing originating from learned associations in competition with instructed task demands. Here we explore whether mirror generalization as a congenital mechanism can be employed to create cognitive conflict. Past research suggests that the visual system automatically generates an invariant representation of visual objects and their mirrored counterparts (i.e., mirror generalization), and especially so for lateral reversals (e.g., a cup seen from the left side vs. right side). Prior work suggests that mirror generalization can be reduced or even overcome by learning (i.e., for those visual objects for which it is not appropriate, such as letters d and b). We, therefore, minimized prior practice on resolving conflicts involving mirror generalization by using kanji stimuli as non-verbal and unfamiliar material. In a 1-back task, participants had to check a stream of kanji stimuli for identical repetitions and avoid miscategorizing mirror-reversed stimuli as exact repetitions. Consistent with previous work, lateral reversals led to profound slowing of reaction times and lower accuracy in Experiment 1. Yet, in contrast to previous reports suggesting that lateral reversals lead to stronger conflict, similar slowing for vertical and horizontal mirror transformations was observed in Experiment 2. Taken together, the results suggest that transformations of visual stimuli can be employed to challenge cognitive control in the 1-back task. PMID:28503160

  14. Teaching school teachers to recognize respiratory distress in asthmatic children.

    PubMed

    Sapien, Robert E; Fullerton-Gleason, L; Allen, N

    2004-10-01

    To demonstrate that school teachers can be taught to recognize respiratory distress in asthmatic children. Forty-five school teachers received a one-hour educational session on childhood asthma. Each education session consisted of two portions: video footage of asthmatic children exhibiting respiratory distress, and a didactic presentation. Pre- and posttests on general asthma knowledge, recognition of signs of respiratory distress in video footage, and comfort level with asthma knowledge and medications were administered. General asthma knowledge median scores increased significantly, from 60% correct before the session to 70% after (p < 0.0001). The ability to visually recognize respiratory distress also improved significantly (pre-median = 66.7% correct, post = 88.9%; p < 0.0001). Teachers' comfort level with asthma knowledge and medications improved. Using video footage, school teachers can be taught to visually recognize respiratory distress in asthmatic children. Improvement in visual recognition of respiratory distress was greater than improvement in didactic asthma information.

  15. Technique and cue selection for graphical presentation of generic hyperdimensional data

    NASA Astrophysics Data System (ADS)

    Howard, Lee M.; Burton, Robert P.

    2013-12-01

    Several presentation techniques have been created for visualization of data with more than three variables. Packages have been written, each of which implements a subset of these techniques. However, these packages generally fail to provide all the features needed by the user during the visualization process. Further, packages generally limit support for presentation techniques to a few techniques. A new package called Petrichor accommodates all necessary and useful features together in one system. Any presentation technique may be added easily through an extensible plugin system. Features are supported by a user interface that allows easy interaction with data. Annotations allow users to mark up visualizations and share information with others. By providing a hyperdimensional graphics package that easily accommodates presentation techniques and includes a complete set of features, including those that are rarely or never supported elsewhere, the user is provided with a tool that facilitates improved interaction with multivariate data to extract and disseminate information.
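
    As an illustration of the kind of extensible plugin system mentioned above, the sketch below registers presentation techniques in a simple name-based registry; the class and function names are hypothetical and are not Petrichor's actual API.

      # Minimal plugin registry for presentation techniques (hypothetical names).
      from abc import ABC, abstractmethod

      class PresentationTechnique(ABC):
          """Base class every visualization plugin implements."""
          name: str = "unnamed"

          @abstractmethod
          def render(self, data):
              """Draw the hyperdimensional dataset with this technique."""

      _REGISTRY = {}

      def register(cls):
          """Class decorator that makes a technique available to the host package."""
          _REGISTRY[cls.name] = cls
          return cls

      @register
      class ParallelCoordinates(PresentationTechnique):
          name = "parallel_coordinates"
          def render(self, data):
              print(f"rendering {len(data)} records as parallel coordinates")

      # A third-party plugin module only needs to define and register a class;
      # the host package then looks techniques up by name at run time.
      technique = _REGISTRY["parallel_coordinates"]()
      technique.render([(1.0, 2.0, 3.0, 4.0), (2.0, 1.0, 0.5, 3.5)])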

  16. Implantability, Complications, and Follow-Up After Transjugular Intrahepatic Portosystemic Stent-Shunt Creation With the 6F Self-Expanding Sinus-SuperFlex-Visual Stent.

    PubMed

    Spira, Daniel; Wiskirchen, Jakub; Lauer, Ulrich; Ketelsen, Dominik; Nikolaou, Konstantin; Wiesinger, Benjamin

    2016-07-01

    The transjugular intrahepatic portosystemic stent-shunt (TIPSS) builds a shortcut between the portal vein and a liver vein, and represents a sophisticated alternative to open surgery in the management of portal hypertension or its complications. To describe clinical experiences with a low-profile nitinol stent system in TIPSS creation, and to assess primary and long-term success. Twenty-six patients (5 females, 21 males; mean age 54.6 years) were treated using a low-profile 6F self-expanding sinus-SuperFlex-Visual stent system. The indication for TIPSS creation was refractory bleeding in 9 of the 26 patients, refractory ascites in 18 patients, and acute thrombosis of the portal vein confluence in one patient. Portosystemic pressure gradients before and after TIPSS, periprocedural and long-term complications, and the time to orthotopic liver transplantation (OLT) or death were recorded. The portosystemic pressure gradient was significantly reduced, from 20.9 ± 6.3 mmHg before to 8.2 ± 2.3 mmHg after TIPSS creation (P < 0.001). Procedure-related complications included acute tract occlusion (n = 2), liver hematoma (n = 1), hepatic encephalopathy (n = 1), and cardiac failure (n = 1). Three of the 26 patients had late-onset TIPSS occlusion (at 12, 12, and 39 months after TIPSS creation). Three patients died within one week after the procedure due to their poor general condition (multiorgan failure, acute respiratory distress syndrome, necrotizing pancreatitis, and aspiration pneumonia). Another four patients succumbed to their underlying advanced liver disease within one year after TIPSS insertion. Seven patients underwent OLT at a mean time of 9.4 months after TIPSS creation. The sinus-SuperFlex-Visual stent system can be safely deployed as a TIPSS device. The pressure gradient reduction was clinically sufficient to treat the patients' symptoms, and periprocedural complications were due to the TIPSS procedure per se rather than to the particular stent system employed in this study.

  17. Drugs given by a syringe driver: a prospective multicentre survey of palliative care services in the UK.

    PubMed

    Wilcock, Andrew; Jacob, Jayin K; Charlesworth, Sarah; Harris, Elayne; Gibbs, Margaret; Allsop, Helen

    2006-10-01

    The use of a syringe driver to administer drugs by continuous subcutaneous infusion is common practice in the UK. Over time, drug combinations used in a syringe driver are likely to change and the aim of this survey was to obtain a more recent snapshot of practice. On four separate days, at two-week intervals, a questionnaire was completed for every syringe driver in use by 15 palliative care services. Of 336 syringe drivers, the majority contained either two or three drugs, but one-fifth contained only one drug. The median (range) volume of the infusions was 15 (9.5-48) mL, and duration of infusion was generally 24 hours. Only one combination was reported as visually incompatible, and there were 13 site reactions (4% of total). Laboratory physical and chemical compatibility data are available for less than half of the most frequently used combinations.

  18. The Timing of Visual Object Categorization

    PubMed Central

    Mack, Michael L.; Palmeri, Thomas J.

    2011-01-01

    An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing implies a relative timing of stages of visual processing that are tied to particular levels of object categorization: Fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: Fast simply means fast, it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorizations is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction. PMID:21811480
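
    The second account ("fast simply means fast") can be illustrated with a toy evidence-accumulation sketch in which decision time depends only on when evidence becomes available and how strong it is; all parameter values below are illustrative assumptions, not fitted values from the models reviewed.

      # Toy evidence accumulator: one noisy step per millisecond once evidence
      # for a given categorization level becomes available.
      import random

      def decision_time_ms(drift_per_ms, onset_ms, threshold=1.0, noise=0.005, seed=1):
          """Accumulate noisy evidence after `onset_ms`; return time to reach threshold."""
          rng = random.Random(seed)
          evidence, t = 0.0, 0
          while evidence < threshold:
              t += 1
              if t >= onset_ms:              # evidence only arrives after its onset
                  evidence += drift_per_ms + rng.gauss(0.0, noise)
          return t

      # Assume "basic-level" evidence arrives earlier and is of higher quality than
      # "subordinate-level" evidence, so it reaches the decision threshold sooner.
      print("basic-level RT (ms):      ", decision_time_ms(0.010, onset_ms=80))
      print("subordinate-level RT (ms):", decision_time_ms(0.004, onset_ms=150))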

  19. The visual properties of proximal and remote distractors differentially influence reaching planning times: evidence from pro- and antipointing tasks.

    PubMed

    Heath, Matthew; DeSimone, Jesse C

    2016-11-01

    The saccade literature has consistently reported that the presentation of a distractor remote to a target increases reaction time (i.e., the remote distractor effect: RDE). As well, some studies have shown that a proximal distractor facilitates saccade reaction time. The lateral inhibition hypothesis attributes the aforementioned findings to the inhibition/facilitation of target selection mechanisms operating in the intermediate layers of the superior colliculus (SC). Although the impact of remote and proximal distractors has been extensively examined in the saccade literature, a paucity of work has examined whether such findings generalize to reaching responses, and to our knowledge, no work has directly contrasted reaching RTs for remote and proximal distractors. To that end, the present investigation had participants complete reaches in target only trials (i.e., TO) and when distractors were presented at "remote" (i.e., the opposite visual field) and "proximal" (i.e., the same visual field) locations along the same horizontal meridian as the target. As well, participants reached to the target's veridical (i.e., propointing) and mirror-symmetrical (i.e., antipointing) location. The basis for contrasting pro- and antipointing was to determine whether the distractor's visual- or motor-related activity influence reaching RTs. Results demonstrated that remote and proximal distractors, respectively, increased and decreased reaching RTs and the effect was consistent for pro- and antipointing. Accordingly, results evince that the RDE and the facilitatory effects of a proximal distractor are effector independent and provide behavioral support for the contention that the SC serves as a general target selection mechanism. As well, the comparable distractor-related effects for pro- and antipointing trials indicate that the visual properties of remote and proximal distractors respectively inhibit and facilitate target selection.

  20. Multiscale Poincaré plots for visualizing the structure of heartbeat time series.

    PubMed

    Henriques, Teresa S; Mariani, Sara; Burykin, Anton; Rodrigues, Filipa; Silva, Tiago F; Goldberger, Ary L

    2016-02-09

    Poincaré delay maps are widely used in the analysis of cardiac interbeat interval (RR) dynamics. To facilitate visualization of the structure of these time series, we introduce multiscale Poincaré (MSP) plots. Starting with the original RR time series, the method employs a coarse-graining procedure to create a family of time series, each of which represents the system's dynamics in a different time scale. Next, the Poincaré plots are constructed for the original and the coarse-grained time series. Finally, as an optional adjunct, color can be added to each point to represent its normalized frequency. We illustrate the MSP method on simulated Gaussian white and 1/f noise time series. The MSP plots of 1/f noise time series reveal relative conservation of the phase space area over multiple time scales, while those of white noise show a marked reduction in area. We also show how MSP plots can be used to illustrate the loss of complexity when heartbeat time series from healthy subjects are compared with those from patients with chronic (congestive) heart failure syndrome or with atrial fibrillation. This generalized multiscale approach to Poincaré plots may be useful in visualizing other types of time series.
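
    The construction described above can be sketched in a few lines: coarse-grain the RR series at several scales and form the lag-1 Poincaré pairs for each scale. Plotting and the optional frequency-based coloring are omitted, and the RR series here is simulated white noise rather than recorded heartbeat data.

      # Multiscale Poincaré sketch: coarse-grain, then build (RR_n, RR_{n+1}) pairs.
      import numpy as np

      def coarse_grain(rr, scale):
          """Average consecutive non-overlapping windows of length `scale`."""
          n = len(rr) // scale
          return rr[:n * scale].reshape(n, scale).mean(axis=1)

      def poincare_pairs(series):
          """Return (RR_n, RR_{n+1}) pairs for a Poincaré delay map."""
          return np.column_stack((series[:-1], series[1:]))

      rr = 0.8 + 0.05 * np.random.randn(3000)   # toy white-noise RR series, seconds

      for scale in (1, 2, 4, 8):
          pairs = poincare_pairs(coarse_grain(rr, scale))
          # For white noise the cloud shrinks with scale, which corresponds to the
          # "marked reduction in area" the abstract describes.
          print(f"scale {scale}: {len(pairs)} points, SD of RR_n = {pairs[:, 0].std():.4f}")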

  1. Visual cortex entrains to sign language.

    PubMed

    Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel

    2017-06-13

    Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language <5 Hz, peaking at ~1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
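
    A generic proxy for the kind of visual-change metric described above is the mean absolute frame-to-frame pixel difference of the video; the sketch below assumes OpenCV is available and is not the paper's actual metric.

      # Frame-differencing proxy for "visual change over time" in a video clip.
      import cv2
      import numpy as np

      def visual_change_signal(video_path):
          """Return one value per frame transition: mean |frame_t - frame_{t-1}|."""
          cap = cv2.VideoCapture(video_path)
          changes, prev = [], None
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
              if prev is not None:
                  changes.append(np.abs(gray - prev).mean())
              prev = gray
          cap.release()
          return np.array(changes)   # sampled at the video frame rate, e.g., 30 Hz

      # The resulting time series can then be band-limited and compared with EEG
      # (e.g., via coherence) to test for entrainment. The file name is a placeholder.
      change = visual_change_signal("asl_clip.mp4")
      print(change[:10])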

  2. [Solar retinopathy].

    PubMed

    Kawa, P; Mańkowska, A; Mackiewicz, J; Zagórski, Z

    1998-01-01

    The purpose of this study is to present the clinical evaluation of 21 patients (33 affected eyes) who watched the eclipse of the sun on 12 October 1996. All patients had a general ophthalmic examination with emphasis on visual acuity, visual field, the Amsler test, fluorescein angiography and fundus appearance. Eleven of the 21 patients had at least one follow-up examination (17 affected eyes). None of the patients received any treatment. All patients revealed tiny central scotomata (positive Amsler test) and decreased visual acuity on the first visit; reading of the Snellen chart could be improved in all patients by an adequate head tilt or eye movement (improvement of up to 3 Snellen chart lines). No signs of retinopathy were observed in two eyes with uncorrected refractive error and one amblyopic eye. After 7-8 weeks the visual acuity was decreased to 5/30 in two eyes and to 5/10 in ten eyes. A tiny central scotoma persisted in all those eyes. Looking at an eclipse of the sun, even with primitive eye protection, may cause irreversible retinal damage. Return of visual acuity to 5/5 does not always imply complete recovery because of a persistent central scotoma.

  3. Space flight visual simulation.

    PubMed

    Xu, L

    1985-01-01

    In this paper, based on the scenes of stars seen by astronauts in orbital flight, we study the mathematical model that must be constructed for a CGI system to realize space flight visual simulation. Considering such factors as the revolution and rotation of the Earth; the exact date, time and site of orbital injection of the spacecraft; and its orbital flight and attitude motion, we first defined all the instantaneous lines of sight and visual fields of astronauts in space. Then, through a series of coordinate transforms, the pictures of the star scenes changing over time and space were generated one by one mathematically. In the procedure, we designed a method of three successive "mathematical cuttings." Finally, we obtained each instantaneous picture of the star scene observed by astronauts through the cockpit window. The dynamic shadowing of stars by the Earth could also be displayed in the varying pictures.

  4. Electroretinography and Visual Evoked Potentials in Childhood Brain Tumor Survivors.

    PubMed

    Pietilä, Sari; Lenko, Hanna L; Oja, Sakari; Koivisto, Anna-Maija; Pietilä, Timo; Mäkipernaa, Anne

    2016-07-01

    This population-based cross-sectional study evaluates the clinical value of electroretinography and visual evoked potentials in childhood brain tumor survivors. A flash electroretinography and a checkerboard reversal pattern visual evoked potential (or alternatively a flash visual evoked potential) were done for 51 survivors (age 3.8-28.7 years) after a mean follow-up time of 7.6 (1.5-15.1) years. Abnormal electroretinography was obtained in 1 case, bilaterally delayed abnormal visual evoked potentials in 22/51 (43%) cases. Nine of 25 patients with infratentorial tumor location, and altogether 12 out of 31 (39%) patients who did not have tumors involving the visual pathways, had abnormal visual evoked potentials. Abnormal electroretinographies are rarely observed, but abnormal visual evoked potentials are common even without evident anatomic lesions in the visual pathway. Bilateral changes suggest a general and possibly multifactorial toxic/adverse effect on the visual pathway. Electroretinography and visual evoked potential may have clinical and scientific value while evaluating long-term effects of childhood brain tumors and tumor treatment. © The Author(s) 2016.

  5. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    PubMed

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.

  6. The effect of linguistic and visual salience in visual world studies.

    PubMed

    Cavicchio, Federica; Melcher, David; Poesio, Massimo

    2014-01-01

    Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material-including verbs, prepositions and adjectives-can influence fixations to potential referents. More recent research has started to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up attention grabbing visual aspects) interact. We recorded participants' eye-movements during a MapTask, asking them to look from landmark to landmark displayed upon a map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual Salience influenced the time course of fixations at both the beginning and the end of the trial but did not show a significant effect on response times. Linguistic Salience reduced response times and increased fixations to landmarks when they were associated to a Linguistic Salient entity not present itself on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was linguistically salient only. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present.

  7. "Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; vanGelder, Allen

    1999-01-01

    During the four years of this grant (including the one-year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple zone grids, and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model to approximate samples in the regions covered by each node of the tree, and an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
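
    The hierarchical idea can be illustrated with a small sketch: a k-d tree over scalar samples in which each node stores the mean of the samples it covers (the approximation) and the maximum absolute deviation from that mean (one possible error metric). The splitting rule and metric are assumptions for illustration, not the project's actual implementation.

      # k-d tree in which every node carries an approximate value and an error bound.
      import numpy as np

      class KDNode:
          def __init__(self, points, values, depth=0, leaf_size=32):
              self.mean = float(values.mean())                       # approximation for this region
              self.error = float(np.abs(values - self.mean).max())   # simple error metric
              self.left = self.right = None
              if len(values) > leaf_size:
                  axis = depth % points.shape[1]                     # cycle through x, y, z
                  order = np.argsort(points[:, axis])
                  mid = len(order) // 2
                  self.left = KDNode(points[order[:mid]], values[order[:mid]], depth + 1, leaf_size)
                  self.right = KDNode(points[order[mid:]], values[order[mid:]], depth + 1, leaf_size)

      # A renderer can descend only until the node error falls below a tolerance,
      # trading accuracy for speed on large time-varying grids.
      pts = np.random.rand(5000, 3)
      vals = np.sin(10 * pts[:, 0]) + 0.1 * np.random.randn(5000)
      root = KDNode(pts, vals)
      print(f"root mean = {root.mean:.3f}, root error = {root.error:.3f}")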

  8. Social Cognition as Reinforcement Learning: Feedback Modulates Emotion Inference.

    PubMed

    Zaki, Jamil; Kallman, Seth; Wimmer, G Elliott; Ochsner, Kevin; Shohamy, Daphna

    2016-09-01

    Neuroscientific studies of social cognition typically employ paradigms in which perceivers draw single-shot inferences about the internal states of strangers. Real-world social inference features much different parameters: People often encounter and learn about particular social targets (e.g., friends) over time and receive feedback about whether their inferences are correct or incorrect. Here, we examined this process and, more broadly, the intersection between social cognition and reinforcement learning. Perceivers were scanned using fMRI while repeatedly encountering three social targets who produced conflicting visual and verbal emotional cues. Perceivers guessed how targets felt and received feedback about whether they had guessed correctly. Visual cues reliably predicted one target's emotion, verbal cues predicted a second target's emotion, and neither reliably predicted the third target's emotion. Perceivers successfully used this information to update their judgments over time. Furthermore, trial-by-trial learning signals-estimated using two reinforcement learning models-tracked activity in ventral striatum and ventromedial pFC, structures associated with reinforcement learning, and regions associated with updating social impressions, including TPJ. These data suggest that learning about others' emotions, like other forms of feedback learning, relies on domain-general reinforcement mechanisms as well as domain-specific social information processing.
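
    A minimal Rescorla-Wagner-style sketch of the kind of reinforcement learning described above: the perceiver tracks how reliable each cue type is for a target and updates that estimate from trial-by-trial feedback. The learning rate and outcome coding are illustrative assumptions, not the paper's fitted models.

      # One Rescorla-Wagner step: weight moves toward the outcome by alpha * error.
      def update(weight, outcome, alpha=0.2):
          prediction_error = outcome - weight
          return weight + alpha * prediction_error

      # Target whose emotion is reliably predicted by the visual cue: feedback marks
      # visual-cue-based guesses correct (1) and verbal-cue-based guesses wrong (0),
      # so the visual-cue weight grows while the verbal-cue weight decays.
      visual_w, verbal_w = 0.5, 0.5
      for trial in range(10):
          visual_w = update(visual_w, outcome=1.0)   # visual cue was correct
          verbal_w = update(verbal_w, outcome=0.0)   # verbal cue was misleading
          print(f"trial {trial + 1}: visual weight = {visual_w:.2f}, verbal weight = {verbal_w:.2f}")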

  9. Ultrasound visual feedback in articulation therapy following partial glossectomy.

    PubMed

    Blyth, Katrina M; Mccabe, Patricia; Madill, Catherine; Ballard, Kirrie J

    2016-01-01

    Disordered speech is common following treatment for tongue cancer; however, there is insufficient high-quality evidence to guide clinical decision making about treatment. This study investigated the use of ultrasound tongue imaging as a visual feedback tool to guide tongue placement during articulation therapy with two participants following partial glossectomy. A Phase I multiple-baseline design across behaviors was used to investigate the therapeutic effect of ultrasound visual feedback during speech rehabilitation. Percent consonants correct and speech intelligibility at the sentence level were used to measure acquisition, generalization and maintenance of speech skills for treated and untreated related phonemes, while unrelated phonemes were tested to demonstrate experimental control. Swallowing and oromotor measures were also taken to monitor change. Sentence intelligibility was not a sensitive measure of speech change, but both participants demonstrated significant change in percent consonants correct for treated phonemes. One participant also demonstrated generalization to non-treated phonemes. Control phonemes, along with swallow and oromotor measures, remained stable throughout the study. This study establishes the therapeutic benefit of ultrasound visual feedback in speech rehabilitation following partial glossectomy. Readers will be able to explain why and how tongue cancer surgery impacts articulation precision. Readers will also be able to explain the acquisition, generalization and maintenance effects in the study. Copyright © 2016. Published by Elsevier Inc.

  10. Timing variation in an analytically solvable chaotic system

    NASA Astrophysics Data System (ADS)

    Blakely, J. N.; Milosavljevic, M. S.; Corron, N. J.

    2017-02-01

    We present analytic solutions for a chaotic dynamical system that do not have the regular timing characteristic of recently reported solvable chaotic systems. The dynamical system can be viewed as a first order filter with binary feedback. The feedback state may be switched only at instants defined by an external clock signal. Generalizing from a period one clock, we show analytic solutions for period two and higher period clocks. We show that even when the clock 'ticks' randomly the chaotic system has an analytic solution. These solutions can be visualized in a stroboscopic map whose complexity increases with the complexity of the clock. We provide both analytic results as well as experimental data from an electronic circuit implementation of the system. Our findings bridge the gap between the irregular timing of well known chaotic systems such as Lorenz and Rossler and the well regulated oscillations of recently reported solvable chaotic systems.
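
    A minimal numerical sketch of such a system is given below: a first-order filter whose binary feedback state is latched only at clock instants, iterated here with the exact between-tick solution. The specific equation and parameter values are illustrative assumptions, not the authors' exact solvable system.

      # First-order filter dx/dt = beta*(x - s) with s in {-1, +1} latched at ticks.
      # Between ticks the exact solution is x(t) = s + (x0 - s) * exp(beta * t).
      import math
      import numpy as np

      def strobe_map(beta=0.35, clock_period=1.0, n_ticks=400, x0=0.3):
          r = math.exp(beta * clock_period)
          xs, x = [], x0
          for _ in range(n_ticks):
              s = 1.0 if x >= 0 else -1.0     # binary feedback, switched only at the tick
              x = s + (x - s) * r             # exact evolution over one clock period
              xs.append(x)
          return np.array(xs)

      xs = strobe_map()
      # Stroboscopic map: for exp(beta*T) < 2 the samples stay in [-1, 1] and fall on
      # two expanding linear branches, the hallmark of chaos in this simple sketch.
      pairs = np.column_stack((xs[:-1], xs[1:]))
      print(pairs[:5])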

  11. Direct manipulation of virtual objects

    NASA Astrophysics Data System (ADS)

    Nguyen, Long K.

    Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.

  12. A Bit of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Oss, Stefano; Rosi, Tommaso

    2015-04-01

    We have developed an app for iOS-based smartphones/tablets that allows a 3-D, complex-phase-based, colorful visualization of hydrogen atom wave functions. Several important features of the quantum behavior of atomic orbitals can easily be made evident, thus making this app a useful companion in introductory modern physics classes. There are many reasons why quantum mechanical systems and phenomena are difficult both to teach and to understand deeply. They are described by equations that are generally hard to visualize, and they often oppose the so-called "common sense" based on the human perception of the world, which is built on mental images such as locality and causality. Moreover, students cannot have direct experience of those systems and solutions, and generally do not even have the possibility of referring to pictures, videos, or experiments to fill this gap. Teachers often encounter quite serious trouble in finding a sensible way to speak about the wonders of quantum physics at the high school level, where complex formalisms are not accessible at all. One should, however, consider that this is quite a common issue in physics and, more generally, in science education. There are plenty of natural phenomena whose models (not only at microscopic and atomic levels) are difficult, if not impossible, to visualize. Just think of certain kinds of waves, fields of forces, velocities, energy, angular momentum, and so on. One should also notice that physical reality is not the same as the images we make of it. Pictures (formal, abstract ones, as well as artists' views) are a convenient bridge between these two aspects.
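
    A hedged sketch of the underlying computation: evaluate a hydrogen wavefunction on a plane (standard textbook formulas in atomic units) and map its complex phase to hue and its magnitude to brightness. This is illustrative only and is not the app's actual code.

      # Hydrogen orbital psi_nlm evaluated on the x-z plane, phase -> hue, |psi| -> value.
      import numpy as np
      from scipy.special import genlaguerre, sph_harm
      from math import factorial

      def psi_nlm(n, l, m, x, y, z):
          r = np.sqrt(x**2 + y**2 + z**2) + 1e-12
          polar = np.arccos(np.clip(z / r, -1.0, 1.0))        # colatitude
          azimuth = np.arctan2(y, x)
          rho = 2.0 * r / n
          norm = np.sqrt((2.0 / n)**3 * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
          radial = norm * np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)
          return radial * sph_harm(m, l, azimuth, polar)      # scipy order: (m, l, azimuthal, polar)

      xs = np.linspace(-20, 20, 200)                          # Bohr radii
      zs = np.linspace(-20, 20, 200)
      X, Z = np.meshgrid(xs, zs)
      psi = psi_nlm(3, 2, 1, X, 0.0 * X, Z)                   # a 3d orbital (n=3, l=2, m=1)
      hue = (np.angle(psi) + np.pi) / (2 * np.pi)             # complex phase -> [0, 1]
      value = np.abs(psi) / np.abs(psi).max()                 # magnitude -> brightness
      print(hue.shape, value.max())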

  13. The role of visual perception measures used in sports vision programmes in predicting actual game performance in Division I collegiate hockey players.

    PubMed

    Poltavski, Dmitri; Biberdorf, David

    2015-01-01

    In the growing field of sports vision little is still known about unique attributes of visual processing in ice hockey and what role visual processing plays in the overall athlete's performance. In the present study we evaluated whether visual, perceptual and cognitive/motor variables collected using the Nike SPARQ Sensory Training Station have significant relevance to the real game statistics of 38 Division I collegiate male and female hockey players. The results demonstrated that 69% of variance in the goals made by forwards in 2011-2013 could be predicted by their faster reaction time to a visual stimulus, better visual memory, better visual discrimination and a faster ability to shift focus between near and far objects. Approximately 33% of variance in game points was significantly related to better discrimination among competing visual stimuli. In addition, reaction time to a visual stimulus as well as stereoptic quickness significantly accounted for 24% of variance in the mean duration of the player's penalty time. This is one of the first studies to show that some of the visual skills that state-of-the-art generalised sports vision programmes are purported to target may indeed be important for hockey players' actual performance on the ice.

  14. Seeing with sound? exploring different characteristics of a visual-to-auditory sensory substitution device.

    PubMed

    Brown, David; Macpherson, Tom; Ward, Jamie

    2011-01-01

    Sensory substitution devices convert live visual images into auditory signals, for example with a web camera (to record the images), a computer (to perform the conversion) and headphones (to listen to the sounds). In a series of three experiments, the performance of one such device ('The vOICe') was assessed under various conditions on blindfolded sighted participants. The main task that we used involved identifying and locating objects placed on a table by holding a webcam (like a flashlight) or wearing it on the head (like a miner's light). Identifying objects on a table was easier with a hand-held device, but locating the objects was easier with a head-mounted device. Brightness converted into loudness was less effective than the reverse contrast (dark being loud), suggesting that performance under these conditions (natural indoor lighting, novice users) is related more to the properties of the auditory signal (i.e., the amount of noise in it) than to the cross-modal association between loudness and brightness. Individual differences in musical memory (detecting pitch changes in two sequences of notes) were related to the time taken to identify or recognise objects, but individual differences in self-reported vividness of visual imagery did not reliably predict performance across the experiments. In general, the results suggest that the auditory characteristics of the device may be more important for initial learning than visual associations.
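
    A hedged sketch of a vOICe-style conversion is shown below: a grayscale image is scanned left to right over about a second, with row position mapped to pitch and pixel brightness to loudness (plus an optional "dark is loud" inversion). The frequency range and scan time are illustrative assumptions, not the device's actual parameters.

      # Column-by-column image-to-sound sweep: rows -> pitch, brightness -> loudness.
      import numpy as np

      def image_to_sound(img, sample_rate=22050, scan_seconds=1.0,
                         f_low=200.0, f_high=5000.0, invert=False):
          """img: 2D array in [0, 1]; row 0 is the top of the image (highest pitch)."""
          rows, cols = img.shape
          samples_per_col = int(sample_rate * scan_seconds / cols)
          freqs = np.geomspace(f_high, f_low, rows)          # top row -> high pitch
          t = np.arange(samples_per_col) / sample_rate
          out = []
          for c in range(cols):                              # left-to-right scan
              column = img[:, c]
              if invert:                                     # "dark is loud" contrast
                  column = 1.0 - column
              tones = column[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
              out.append(tones.sum(axis=0))
          signal = np.concatenate(out)
          return signal / (np.abs(signal).max() + 1e-9)      # normalize to [-1, 1]

      toy = np.zeros((32, 64))
      toy[8, :] = 1.0                                        # one bright horizontal line
      audio = image_to_sound(toy)                            # heard as a steady high tone
      print(audio.shape)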

  15. Time-Hierarchical Clustering and Visualization of Weather Forecast Ensembles.

    PubMed

    Ferstl, Florian; Kanzler, Mathias; Rautenhaus, Marc; Westermann, Rudiger

    2017-01-01

    We propose a new approach for analyzing the temporal growth of the uncertainty in ensembles of weather forecasts which are started from perturbed but similar initial conditions. As an alternative to traditional approaches in meteorology, which use juxtaposition and animation of spaghetti plots of iso-contours, we make use of contour clustering and provide means to encode forecast dynamics and spread in one single visualization. Based on a given ensemble clustering in a specified time window, we merge clusters in time-reversed order to indicate when and where forecast trajectories start to diverge. We present and compare different visualizations of the resulting time-hierarchical grouping, including space-time surfaces built by connecting cluster representatives over time, and stacked contour variability plots. We demonstrate the effectiveness of our visual encodings with forecast examples of the European Centre for Medium-Range Weather Forecasts, which convey the evolution of specific features in the data as well as the temporally increasing spatial variability.

  16. Intensive video gaming improves encoding speed to visual short-term memory in young male adults.

    PubMed

    Wilms, Inge L; Petersen, Anders; Vangkilde, Signe

    2013-01-01

    The purpose of this study was to measure the effect of action video gaming on central elements of visual attention using Bundesen's (1990) Theory of Visual Attention. To examine the cognitive impact of action video gaming, we tested basic functions of visual attention in 42 young male adults. Participants were divided into three groups depending on the amount of time spent playing action video games: non-players (<2h/month, N=12), casual players (4-8h/month, N=10), and experienced players (>15h/month, N=20). All participants were tested in three tasks which tap central functions of visual attention and short-term memory: a test based on the Theory of Visual Attention (TVA), an enumeration test and finally the Attentional Network Test (ANT). The results show that action video gaming does not seem to impact the capacity of visual short-term memory. However, playing action video games does seem to improve the encoding speed of visual information into visual short-term memory and the improvement does seem to depend on the time devoted to gaming. This suggests that intense action video gaming improves basic attentional functioning and that this improvement generalizes into other activities. The implications of these findings for cognitive rehabilitation training are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model.

    PubMed

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander

    2015-04-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.

  18. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model

    PubMed Central

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher

    2015-01-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement (“jump”) consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. PMID:25609106

  19. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
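
    A minimal example in this spirit, written against the modern vpython package API (which differs slightly from the original Visual module import), animates a falling ball from a purely computational loop:

      # Purely computational loop; the Visual module renders the scene in real time.
      from vpython import sphere, vector, color, rate

      ball = sphere(pos=vector(0, 5, 0), radius=0.5, color=color.red, make_trail=True)
      velocity = vector(2, 0, 0)          # m/s
      g = vector(0, -9.8, 0)              # m/s^2
      dt = 0.01

      while ball.pos.y > 0:
          rate(100)                       # cap the loop at 100 iterations per second
          velocity = velocity + g * dt    # update velocity from acceleration
          ball.pos = ball.pos + velocity * dt   # update position; the display follows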

  20. Developing a Data Visualization System for the Bank of America Chicago Marathon (Chicago, Illinois USA).

    PubMed

    Hanken, Taylor; Young, Sam; Smilowitz, Karen; Chiampas, George; Waskowski, David

    2016-10-01

    As one of the largest marathons worldwide, the Bank of America Chicago Marathon (BACCM; Chicago, Illinois USA) accumulates high volumes of data. Race organizers and engaged agencies need the ability to access specific data in real time. This report details a data visualization system designed for the Chicago Marathon and establishes key principles for event management data visualization. The data visualization system allows for efficient data communication among the organizing agencies of Chicago endurance events. Agencies can observe the progress of the race throughout the day and obtain needed information, such as the number and location of runners on the course and current weather conditions. Implementation of the system can reduce time-consuming, face-to-face interactions between involved agencies by having key data streams in one location, streamlining communications with the purpose of improving race logistics, as well as medical preparedness and response. Hanken T, Young S, Smilowitz K, Chiampas G, Waskowski D. Developing a data visualization system for the Bank of America Chicago Marathon (Chicago, Illinois USA). Prehosp Disaster Med. 2016;31(5):572-577.

  1. Effect of Cognitive Demand on Functional Visual Field Performance in Senior Drivers with Glaucoma

    PubMed Central

    Gangeddula, Viswa; Ranchet, Maud; Akinwuntan, Abiodun E.; Bollinger, Kathryn; Devos, Hannes

    2017-01-01

    Purpose: To investigate the effect of cognitive demand on functional visual field performance in drivers with glaucoma. Method: This study included 20 drivers with open-angle glaucoma and 13 age- and sex-matched controls. Visual field performance was evaluated under different degrees of cognitive demand: a static visual field condition (C1), dynamic visual field condition (C2), and dynamic visual field condition with active driving (C3) using an interactive, desktop driving simulator. The number of correct responses (accuracy) and response times on the visual field task were compared between groups and between conditions using Kruskal–Wallis tests. General linear models were employed to compare cognitive workload, recorded in real-time through pupillometry, between groups and conditions. Results: Adding cognitive demand (C2 and C3) to the static visual field test (C1) adversely affected accuracy and response times, in both groups (p < 0.05). However, drivers with glaucoma performed worse than did control drivers when the static condition changed to a dynamic condition [C2 vs. C1 accuracy; glaucoma: median difference (Q1–Q3) 3 (2–6.50) vs. controls: 2 (0.50–2.50); p = 0.05] and to a dynamic condition with active driving [C3 vs. C1 accuracy; glaucoma: 2 (2–6) vs. controls: 1 (0.50–2); p = 0.02]. Overall, drivers with glaucoma exhibited greater cognitive workload than controls (p = 0.02). Conclusion: Cognitive demand disproportionately affects functional visual field performance in drivers with glaucoma. Our results may inform the development of a performance-based visual field test for drivers with glaucoma. PMID:28912712

  2. Distortions of Subjective Time Perception Within and Across Senses

    PubMed Central

    van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan

    2008-01-01

    Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248

  3. The theoretical cognitive process of visualization for science education.

    PubMed

    Mnguni, Lindelani E

    2014-01-01

    The use of visual models such as pictures, diagrams and animations in science education is increasing. This is because of the complex nature associated with the concepts in the field. Students, especially entrant students, often report misconceptions and learning difficulties associated with various concepts especially those that exist at a microscopic level, such as DNA, the gene and meiosis as well as those that exist in relatively large time scales such as evolution. However the role of visual literacy in the construction of knowledge in science education has not been investigated much. This article explores the theoretical process of visualization answering the question "how can visual literacy be understood based on the theoretical cognitive process of visualization in order to inform the understanding, teaching and studying of visual literacy in science education?" Based on various theories on cognitive processes during learning for science and general education the author argues that the theoretical process of visualization consists of three stages, namely, Internalization of Visual Models, Conceptualization of Visual Models and Externalization of Visual Models. The application of this theoretical cognitive process of visualization and the stages of visualization in science education are discussed.

  4. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.

    PubMed

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.

  5. Computers Are for Kids: Designing Software Programs to Avoid Problems of Learning.

    ERIC Educational Resources Information Center

    Grimes, Lynn

    1981-01-01

    Procedures for programming computers to deal with handicapped students' problems in selective attention, visual discrimination, reaction time differences, short-term memory, transfer and generalization, recognition of mistakes, and social skills are discussed. (CL)

  6. 37 CFR 201.25 - Visual Arts Registry.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Title 37 (Patents, Trademarks, and Copyrights), ... AND PROCEDURES, GENERAL PROVISIONS, § 201.25 Visual Arts Registry. (a) General. This section prescribes the procedures relating to the submission of Visual Arts Registry Statements by visual artists and...

  7. 37 CFR 201.25 - Visual Arts Registry.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 37 (Patents, Trademarks, and Copyrights), ... AND PROCEDURES, GENERAL PROVISIONS, § 201.25 Visual Arts Registry. (a) General. This section prescribes the procedures relating to the submission of Visual Arts Registry Statements by visual artists and...

  8. 37 CFR 201.25 - Visual Arts Registry.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 37 (Patents, Trademarks, and Copyrights), ... AND PROCEDURES, GENERAL PROVISIONS, § 201.25 Visual Arts Registry. (a) General. This section prescribes the procedures relating to the submission of Visual Arts Registry Statements by visual artists and...

  9. 37 CFR 201.25 - Visual Arts Registry.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 37 (Patents, Trademarks, and Copyrights), ... AND PROCEDURES, GENERAL PROVISIONS, § 201.25 Visual Arts Registry. (a) General. This section prescribes the procedures relating to the submission of Visual Arts Registry Statements by visual artists and...

  10. Audiovisual speech perception development at varying levels of perceptual processing

    PubMed Central

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318

  11. Audiovisual speech perception development at varying levels of perceptual processing.

    PubMed

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.

  12. Search time critically depends on irrelevant subset size in visual search.

    PubMed

    Benjamins, Jeroen S; Hooge, Ignace T C; van Elst, Jacco C; Wertheim, Alexander H; Verstraten, Frans A J

    2009-02-01

    In order for our visual system to deal with the massive amount of sensory input, some of this input is discarded, while other parts are processed [Wolfe, J. M. (1994). Guided search 2.0: a revised model of visual search. Psychonomic Bulletin and Review, 1, 202-238]. From the visual search literature it is unclear how well one set of items that differs in only one feature from the target (a 1F set) can be selected, while another set of items that differs in two features from the target (a 2F set) is ignored. We systematically varied the percentage of 2F non-targets to determine the contribution of these non-targets to search behaviour. Increasing the percentage of 2F non-targets, which have to be ignored, was expected to result in increasingly faster search, since it decreases the size of the 1F set that has to be searched. Observers searched large displays for a target in the 1F set with a variable percentage of 2F non-targets. Interestingly, when the search displays contained 5% 2F non-targets, search times were longer than in the other conditions. This effect of 2F non-targets on performance was independent of set size. An inspection of the saccades revealed that saccade target selection did not contribute to the longer search times in displays with 5% 2F non-targets. The occurrence of longer search times in displays containing 5% 2F non-targets might instead be attributed to covert processes related to visual analysis of the fixated part of the display. Apparently, visual search performance critically depends on the percentage of irrelevant 2F non-targets.

  13. Dynamic and predictive links between touch and vision.

    PubMed

    Gray, Rob; Tan, Hong Z

    2002-07-01

    We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.

  14. High-frequency spectral ultrasound imaging (SUSI) visualizes early post-traumatic heterotopic ossification (HO) in a mouse model.

    PubMed

    Ranganathan, Kavitha; Hong, Xiaowei; Cholok, David; Habbouche, Joe; Priest, Caitlin; Breuler, Christopher; Chung, Michael; Li, John; Kaura, Arminder; Hsieh, Hsiao Hsin Sung; Butts, Jonathan; Ucer, Serra; Schwartz, Ean; Buchman, Steven R; Stegemann, Jan P; Deng, Cheri X; Levi, Benjamin

    2018-04-01

    Early treatment of heterotopic ossification (HO) is currently limited by delayed diagnosis due to limited visualization at early time points. In this study, we validate the use of spectral ultrasound imaging (SUSI) in an animal model to detect HO as early as one week after burn tenotomy. Concurrent SUSI, micro CT, and histology at 1, 2, 4, and 9 weeks post-injury were used to follow the progression of HO after an Achilles tenotomy and 30% total body surface area burn (n=3-5 limbs per time point). To compare the use of SUSI in different types of injury models, mice (n=5 per group) underwent either burn/tenotomy or skin incision injury and were imaged using a 55 MHz probe on a VisualSonics VEVO 770 system at one week post-injury to evaluate the ability of SUSI to distinguish between edema and HO. Average acoustic concentration (AAC) and average scatterer diameter (ASD) were calculated for each ultrasound image frame. Micro CT was used to calculate the total volume of HO. Histology was used to confirm bone formation. Using SUSI, HO was visualized as early as 1 week after injury. With micro CT, HO was first visualized 4 weeks after injury. The average acoustic concentration of HO was 33% more than that of the control limb (n=5). Spectroscopic foci of HO present at 1 week that persisted throughout all time points correlated with the HO present at 9 weeks on micro CT imaging. SUSI visualizes HO as early as one week after injury in an animal model. SUSI represents a new imaging modality with promise for early diagnosis of HO. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. 37 CFR 201.25 - Visual Arts Registry.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Title 37 (Patents, Trademarks, and Copyrights), ... OFFICE AND PROCEDURES, GENERAL PROVISIONS, § 201.25 Visual Arts Registry. (a) General. This section prescribes the procedures relating to the submission of Visual Arts Registry Statements by visual artists and...

  16. Stimulus- and goal-driven control of eye movements: action videogame players are faster but not better.

    PubMed

    Heimler, Benedetta; Pavani, Francesco; Donk, Mieke; van Zoest, Wieske

    2014-11-01

    Action videogame players (AVGPs) have been shown to outperform nongamers (NVGPs) in covert visual attention tasks. These advantages have been attributed to improved top-down control in this population. The time course of visual selection, which permits researchers to highlight when top-down strategies start to control performance, has rarely been investigated in AVGPs. Here, we addressed specifically this issue through an oculomotor additional-singleton paradigm. Participants were instructed to make a saccadic eye movement to a unique orientation singleton. The target was presented among homogeneous nontargets and one additional orientation singleton that was more, equally, or less salient than the target. Saliency was manipulated in the color dimension. Our results showed similar patterns of performance for both AVGPs and NVGPs: Fast-initiated saccades were saliency-driven, whereas later-initiated saccades were more goal-driven. However, although AVGPs were faster than NVGPs, they were also less accurate. Importantly, a multinomial model applied to the data revealed comparable underlying saliency-driven and goal-driven functions for the two groups. Taken together, the observed differences in performance are compatible with the presence of a lower decision bound for releasing saccades in AVGPs than in NVGPs, in the context of comparable temporal interplay between the underlying attentional mechanisms. In sum, the present findings show that in both AVGPs and NVGPs, the implementation of top-down control in visual selection takes time to come about, and they argue against the idea of a general enhancement of top-down control in AVGPs.

  17. Python for large-scale electrophysiology.

    PubMed

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation ("dimstim"); one for electrophysiological waveform visualization and spike sorting ("spyke"); and one for spike train and stimulus analysis ("neuropy"). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.
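
    The flavor of spike-train analysis such Python tools support can be illustrated with a short, self-contained sketch: the snippet below builds a peri-stimulus time histogram (PSTH) from spike times aligned to stimulus onsets. All function and variable names are invented for the example; none are taken from the dimstim, spyke, or neuropy packages.

```python
# Minimal sketch (not from the dimstim/spyke/neuropy packages): build a
# peri-stimulus time histogram (PSTH) from spike times aligned to stimulus onsets.
import numpy as np

def psth(spike_times, stim_onsets, window=(-0.1, 0.5), bin_width=0.01):
    """Return bin edges and mean firing rate (spikes/s) around the stimulus onsets."""
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for onset in stim_onsets:
        aligned = spike_times - onset                   # spike times relative to onset
        counts += np.histogram(aligned, bins=edges)[0]  # accumulate counts per bin
    rate = counts / (len(stim_onsets) * bin_width)      # convert to spikes per second
    return edges, rate

# Toy data: background spikes plus an elevated rate shortly after each onset.
rng = np.random.default_rng(0)
stim_onsets = np.arange(1.0, 11.0, 1.0)                 # 10 stimuli, 1 s apart
background = np.sort(rng.uniform(0.0, 12.0, 300))
evoked = np.concatenate([t + rng.uniform(0.0, 0.2, 20) for t in stim_onsets])
spike_times = np.sort(np.concatenate([background, evoked]))

edges, rate = psth(spike_times, stim_onsets)
print(rate.round(1))
```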

  18. Relating interesting quantitative time series patterns with text events and text features

    NASA Astrophysics Data System (ADS)

    Wanner, Franz; Schreck, Tobias; Jentner, Wolfgang; Sharalieva, Lyubka; Keim, Daniel A.

    2013-12-01

    In many application areas, the key to successful data analysis is the integrated analysis of heterogeneous data. One example is the financial domain, where time-dependent and highly frequent quantitative data (e.g., trading volume and price information) and textual data (e.g., economic and political news reports) need to be considered jointly. Data analysis tools need to support an integrated analysis, which allows studying the relationships between textual news documents and quantitative properties of the stock market price series. In this paper, we describe a workflow and tool that allows a flexible formation of hypotheses about text features and their combinations, which reflect quantitative phenomena observed in stock data. To support such an analysis, we combine the analysis steps of frequent quantitative and text-oriented data using an existing a-priori method. First, based on heuristics we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a-priori method supports the discovery of such sequential temporal patterns. Then, various text features like the degree of sentence nesting, noun phrase complexity, the vocabulary richness, etc. are extracted from the news to obtain meta patterns. Meta patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time-, cluster- and sequence visualization and analysis functionality. We provide two case studies, showing the effectiveness of our combined quantitative and textual analysis work flow. The workflow can also be generalized to other application domains such as data analysis of smart grids, cyber physical systems or the security of critical infrastructure, where the data consists of a combination of quantitative and textual time series data.
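
    As a rough illustration of the frequent-pattern step in such a workflow - not the authors' tool - the sketch below counts which combinations of text features co-occur with flagged time-series intervals above a support threshold, using a brute-force variant of a-priori-style itemset counting. The feature names and threshold are assumptions made for the example.

```python
# Illustrative brute-force frequent itemset counting over text features attached
# to flagged time-series intervals (feature names and data are invented).
from itertools import combinations

intervals = [  # each flagged interval carries the text features of its co-occurring news
    {"high_nesting", "rich_vocabulary", "long_sentences"},
    {"high_nesting"},
    {"rich_vocabulary", "long_sentences"},
    {"high_nesting", "rich_vocabulary", "long_sentences"},
]
min_support = 0.6  # keep feature combinations present in at least 60% of the intervals

def frequent_itemsets(transactions, min_support, max_size=3):
    n = len(transactions)
    items = sorted({f for t in transactions for f in t})
    result = {}
    for k in range(1, max_size + 1):
        for combo in combinations(items, k):
            support = sum(set(combo) <= t for t in transactions) / n
            if support >= min_support:
                result[combo] = support
    return result

for combo, support in frequent_itemsets(intervals, min_support).items():
    print(combo, round(support, 2))
```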

  19. Cycle-specific female preferences for visual and non-visual cues in the horse (Equus caballus)

    PubMed Central

    Burger, Dominik; Meuwly, Charles; Thomas, Selina; Sieme, Harald; Oberthür, Michael; Wedekind, Claus; Meinecke-Tillmann, Sabine

    2018-01-01

    Although female preferences are well studied in many mammals, the possible effects of the oestrous cycle are not yet sufficiently understood. Here we investigate female preferences for visual and non-visual male traits relative to the periodic cycling of sexual proceptivity (oestrus) and inactivity (dioestrus) in the polygynous horse (Equus caballus). We individually exposed mares to stallions in four experimental situations: (i) mares in oestrus, with visual contact to stallions allowed, (ii) mares in oestrus, with blinds (wooden partitions preventing visual contact but allowing for acoustic and olfactory communication), (iii) mares in dioestrus, no blinds, and (iv) mares in dioestrus, with blinds. Contact times of the mares with each stallion, defined as the cumulative amount of time a mare was in the vicinity of an individual stallion and actively seeking contact, were used to rank stallions according to each mare's preferences. We found that preferences based on visual traits differed significantly from preferences based on non-visual traits in dioestrous mares. The mares then showed a preference for older and larger males, but only if visual cues were available. In contrast, oestrous mares showed consistent preferences with or without blinds, i.e. their preferences were mainly based on non-visual traits and could not be predicted by male age or size. Stallions who were generally preferred displayed a high libido that may have positively influenced female interest or may have been a consequence of it. We conclude that the oestrous cycle has a significant influence on female preferences for visual and non-visual male traits in the horse. PMID:29466358

  20. Visual grouping under isoluminant condition: impact of mental fatigue

    NASA Astrophysics Data System (ADS)

    Pladere, Tatjana; Bete, Diana; Skilters, Jurgis; Krumina, Gunta

    2016-09-01

    Instead of selecting arbitrary elements, our visual perception prefers only certain groupings of information. There is ample evidence that visual attention and perception are substantially impaired in the presence of mental fatigue. The question is how visual grouping, which can be considered a bottom-up controlled neuronal gain mechanism, is influenced. The main purpose of our study is to determine the influence of mental fatigue on the visual grouping of specific information - the color and configuration of stimuli - in a psychophysical experiment. Individuals provided subjective data by filling in a questionnaire about their health and general feeling. Objective evidence was obtained in a specially designed visual search task where achromatic and chromatic isoluminant stimuli were used in order to avoid the so-called pop-out effect due to differences in light intensity. Each individual was instructed to identify the symbols with apertures in the same direction in four tasks. The color component differed across the visual search tasks according to the goals of the study. The results reveal that visual grouping is completed faster when visual stimuli have the same color and aperture direction. The shortest reaction time is in the evening. Moreover, the reaction time results suggest that two grouping processes compete for selective attention in the visual system when similarity in color conflicts with similarity in stimulus configuration. The described effect increases significantly in the presence of mental fatigue, but mental fatigue does not strongly influence the accuracy of task completion.

  1. Interval timing in children: effects of auditory and visual pacing stimuli and relationships with reading and attention variables.

    PubMed

    Birkett, Emma E; Talcott, Joel B

    2012-01-01

    Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.

  2. The contributions of visual and central attention to visual working memory.

    PubMed

    Souza, Alessandra S; Oberauer, Klaus

    2017-10-01

    We investigated the role of two kinds of attention, visual and central attention, for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention, visual or central, was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.

  3. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either "element motion" or "group motion." For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were, on most trials (80%), either flanked by one leading and one lagging visual Ternus frame (VAAV) or had two visual Ternus frames inserted between them (AVVA). Participants were required to report which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair, but with temporal configurations similar to those in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support the idea that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  4. Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion.

    PubMed

    Zhou, Feng; De la Torre, Fernando; Hodgins, Jessica K

    2013-03-01

    Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.
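
    HACA itself combines kernel k-means with a generalized dynamic time alignment kernel and is optimized by coordinate descent and dynamic programming. The simplified sketch below is not that algorithm; it only illustrates the underlying idea of grouping time-series segments by a dynamic-time-warping distance to medoid segments, with all parameters chosen arbitrarily for the example.

```python
# Simplified illustration of temporal clustering (not the HACA algorithm itself):
# fixed-length segments of a 1-D time series are grouped by dynamic time warping
# (DTW) distance to randomly chosen medoid segments.
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 400)
# Toy signal that alternates between a fast sine and a square wave per period.
series = np.where((t // (2 * np.pi)) % 2 == 0, np.sin(3 * t), np.sign(np.sin(t)))
segments = series.reshape(-1, 50)                 # cut into fixed-length segments

k = 2
medoids = segments[rng.choice(len(segments), k, replace=False)]
labels = [int(np.argmin([dtw(seg, m) for m in medoids])) for seg in segments]
print(labels)                                     # cluster label for each segment
```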

  5. Visual impairment attributable to uncorrected refractive error and other causes in the Ghanaian youth: The University of Cape Coast Survey.

    PubMed

    Abokyi, Samuel; Ilechie, Alex; Nsiah, Peter; Darko-Takyi, Charles; Abu, Emmanuel Kwasi; Osei-Akoto, Yaw Jnr; Youfegan-Baanam, Mathurin

    2016-01-01

    To determine the prevalence of visual impairment attributable to refractive error and other causes in a youthful Ghanaian population. A prospective survey of all consecutive visits by first-year tertiary students to the Optometry clinic between August 2013 and April 2014. Of the 4378 first-year students aged 16-39 years enumerated, 3437 (78.5%) underwent the eye examination. The examination protocol included presenting visual acuity (PVA), ocular motility, and slit-lamp examination of the external eye, anterior segment and media, and non-dilated fundus examination. Pinhole acuity and fundus examination were performed when the PVA was ≤6/12 in one or both eyes to determine the principal cause of the vision loss. The mean age of participants was 21.86 years (95% CI: 21.72-21.99). The prevalences of bilateral visual impairment (BVI; PVA in the better eye ≤6/12) and unilateral visual impairment (UVI; PVA in the worse eye ≤6/12) were 3.08% (95% CI: 2.56-3.72) and 0.79% (95% CI: 0.54-1.14), respectively. Among 106 participants with BVI, refractive error (96.2%) and corneal opacity (3.8%) were the causes. Of the 27 participants with UVI, refractive error (44.4%), maculopathy (18.5%) and retinal disease (14.8%) were the major causes. There was an unequal distribution of BVI across the different age groups, with those above 20 years having a lesser burden. Eye screening and the provision of affordable spectacle correction to the youth could be a timely way to eliminate visual impairment. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.

  6. Visualization Component of Vehicle Health Decision Support System

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Turmon, Michael; Stough, Timothy; Siegel, Herbert; Walter, Patrick; Kurt, Cindy

    2008-01-01

    The visualization front-end of a Decision Support System (DSS) includes an analysis engine linked to vehicle telemetry, and a database of learned models for known behaviors. Because the display is graphical rather than text-based, the summarization it provides has a greater information density on one screen for evaluation by a flight controller. This tool provides a system-level visualization of the state of a vehicle, and drill-down capability for more details and interfaces to separate analysis algorithms and sensor data streams. The system-level view is a 3D rendering of the vehicle, with sensors represented as icons, tied to appropriate positions within the vehicle body and colored to indicate sensor state (e.g., normal, warning, or anomalous). The sensor data is received via an Information Sharing Protocol (ISP) client that connects to an external server for real-time telemetry. Users can interactively pan, zoom, and rotate this 3D view, as well as select sensors for a detail plot of the associated time series data. Subsets of the plotted data can be selected and sent to an external analysis engine to either search for a similar time series in an historical database, or to detect anomalous events. The system overview and plotting capabilities are completely general in that they can be applied to any vehicle instrumented with a collection of sensors. This visualization component can interface with the ISP for data streams used by NASA's Mission Control Center at Johnson Space Center. In addition, it can connect to, and display results from, separate analysis engine components that identify anomalies or that search for past instances of similar behavior. This software supports NASA's Software, Intelligent Systems, and Modeling element in the Exploration Systems Research and Technology Program by augmenting the capability of human flight controllers to make correct decisions, thus increasing safety and reliability. It was designed specifically as a tool for NASA's flight controllers to monitor the International Space Station and a future Crew Exploration Vehicle.

  7. Streptococcus endophthalmitis outbreak after intravitreal injection of bevacizumab: one-year outcomes and investigative results.

    PubMed

    Goldberg, Roger A; Flynn, Harry W; Miller, Darlene; Gonzalez, Serafin; Isom, Ryan F

    2013-07-01

    To report the 1-year clinical outcomes of an outbreak of Streptococcus endophthalmitis after intravitreal injection of bevacizumab, including visual acuity outcomes, microbiological testing, and compound pharmacy investigations by the Food and Drug Administration (FDA). Retrospective consecutive case series. Twelve eyes of 12 patients who developed endophthalmitis after receiving intravitreal bevacizumab prepared by a single compounding pharmacy. Medical records of patients were reviewed; phenotypic and DNA analyses were performed on microbes cultured from patients and from unused syringes. An inspection report by the FDA based on site visits to the pharmacy that prepared the bevacizumab syringes was summarized. Visual acuity, interventions received, time to intervention, microbiological consistency, and FDA inspection findings. Between July 5 and 8, 2011, 12 patients developed endophthalmitis after intravitreal bevacizumab from syringes prepared by a single compounding pharmacy. All patients received initial vitreous tap and injection, and 8 patients (67%) subsequently underwent pars plana vitrectomy (PPV). After 12 months follow-up, outcomes have been poor. Seven patients (58%) required evisceration or enucleation, and only 1 patient regained pre-injection visual acuity. Molecular testing using real-time polymerase chain reaction, partial sequencing of the groEL gene, and multilocus sequencing of 7 housekeeping genes confirmed the presence of a common strain of Streptococcus mitis/oralis in vitreous specimens and 7 unused syringes prepared by the compounding pharmacy at the same time. An FDA investigation of the compounding pharmacy noted deviations from standard sterile technique, inconsistent documentation, and inadequate testing of equipment required for safe preparation of medications. In this outbreak of endophthalmitis, outcomes have been generally poor, and PPV did not improve visual results at 1-year follow-up. Molecular testing confirmed a common strain of S. mitis/oralis. Contamination seems to have occurred at the compounding pharmacy, where numerous problems in sterile technique were noted by public health investigators. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  8. The Pivotal Role of the Right Parietal Lobe in Temporal Attention.

    PubMed

    Agosta, Sara; Magnago, Denise; Tyler, Sarah; Grossman, Emily; Galante, Emanuela; Ferraro, Francesco; Mazzini, Nunzia; Miceli, Gabriele; Battelli, Lorella

    2017-05-01

    The visual system is extremely efficient at detecting events across time even at very fast presentation rates; however, discriminating the identity of those events is much slower and requires attention over time, a mechanism with a much coarser resolution [Cavanagh, P., Battelli, L., & Holcombe, A. O. Dynamic attention. In A. C. Nobre & S. Kastner (Eds.), The Oxford handbook of attention (pp. 652-675). Oxford: Oxford University Press, 2013]. Patients with right parietal lesions that include the TPJ are severely impaired in discriminating events across time in both visual fields [Battelli, L., Cavanagh, P., & Thornton, I. M. Perception of biological motion in parietal patients. Neuropsychologia, 41, 1808-1816, 2003]. One way to test this ability is to use a simultaneity judgment task, whereby participants are asked to indicate whether two events occurred simultaneously or not. We psychophysically varied the flicker rate of four disks, and on most of the trials, one disk (in either the left or right visual field) flickered out of phase relative to the others. We asked participants to report whether two disks presented on the left or on the right were simultaneous or not. We tested a total of 23 right and left parietal lesion patients in Experiment 1, and only right parietal patients showed impairment in both visual fields while their low-level visual functions were normal. Importantly, to causally link the right TPJ to relative timing processing, we ran a TMS experiment on healthy participants. Participants underwent three stimulation sessions and performed the same simultaneity judgment task before and after 20 min of low-frequency inhibitory TMS over right TPJ, left TPJ, or early visual area as a control. rTMS over the right TPJ caused a bilateral impairment in the simultaneity judgment task, whereas rTMS over left TPJ or over early visual area did not affect performance. Altogether, our results directly link the right TPJ to the processing of relative time.

  9. PRODIGEN: visualizing the probability landscape of stochastic gene regulatory networks in state and time space.

    PubMed

    Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta

    2017-02-15

    Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one dimensional projection, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.
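
    One of the encodings described - a heatmap of a time-varying probability distribution with an overlaid summary curve - can be sketched in a few lines; the snippet below is an illustrative mock-up with synthetic data, not the PRODIGEN code.

```python
# Minimal sketch (not PRODIGEN itself): heatmap of a time-varying probability
# distribution over a 1-D state projection, with the mean state overlaid as a curve.
import numpy as np
import matplotlib.pyplot as plt

states = np.arange(50)                       # projected state space
times = np.arange(100)                       # simulation time steps
# Toy distribution: a Gaussian peak whose mean drifts over time.
means = 10 + 0.3 * times
P = np.exp(-0.5 * ((states[None, :] - means[:, None]) / 4.0) ** 2)
P /= P.sum(axis=1, keepdims=True)            # normalise each time step to a distribution

plt.imshow(P.T, origin="lower", aspect="auto", cmap="viridis",
           extent=[times[0], times[-1], states[0], states[-1]])
plt.plot(times, (P * states[None, :]).sum(axis=1), color="white", lw=2,
         label="mean state")
plt.xlabel("time step"); plt.ylabel("state"); plt.legend()
plt.savefig("probability_landscape.png")
```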

  10. Instant Gratification: Striking a Balance Between Rich Interactive Visualization and Ease of Use for Casual Web Surfers

    NASA Astrophysics Data System (ADS)

    Russell, R. M.; Johnson, R. M.; Gardiner, E. S.; Bergman, J. J.; Genyuk, J.; Henderson, S.

    2004-12-01

    Interactive visualizations can be powerful tools for helping students, teachers, and the general public comprehend significant features in rich datasets and complex systems. Successful use of such visualizations requires viewers to have, or to acquire, adequate expertise in use of the relevant visualization tools. In many cases, the learning curve associated with competent use of such tools is too steep for casual users, such as members of the lay public browsing science outreach web sites or K-12 students and teachers trying to integrate such tools into their learning about geosciences. "Windows to the Universe" (http://www.windows.ucar.edu) is a large (roughly 6,000 web pages), well-established (first posted online in 1995), and popular (over 5 million visitor sessions and 40 million pages viewed per year) science education web site that covers a very broad range of Earth science and space science topics. The primary audience of the site consists of K-12 students and teachers and the general public. We have developed several interactive visualizations for use on the site in conjunction with text and still image reference materials. One major emphasis in the design of these interactives has been to ensure that casual users can quickly learn how to use the interactive features without becoming frustrated and departing before they were able to appreciate the visualizations displayed. We will demonstrate several of these "user-friendly" interactive visualizations and comment on the design philosophy we have employed in developing them.

  11. Contrast normalization contributes to a biologically-plausible model of receptive-field development in primary visual cortex (V1)

    PubMed Central

    Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.

    2012-01-01

    Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
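
    A generic BCM-style weight update with a simple divisive normalization of the model responses can be written compactly; the sketch below is a schematic illustration with assumed parameter values and random inputs, not the implementation or normalization scheme used in the study.

```python
# Schematic sketch of a BCM-style learning rule with divisive contrast
# normalization of the model responses (illustrative parameters and random
# inputs; not the published implementation).
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 64, 16
W = rng.normal(0, 0.1, (n_neurons, n_inputs))       # feed-forward weights
theta = np.ones(n_neurons)                          # sliding modification thresholds
eta, tau_theta, sigma = 1e-3, 100.0, 1.0

for step in range(5000):
    x = rng.normal(0, 1, n_inputs)                  # stand-in for an image patch
    raw = W @ x
    y = raw / (sigma + np.linalg.norm(raw))         # divisive contrast normalization
    W += eta * np.outer(y * (y - theta), x)         # BCM: potentiate above theta, depress below
    theta += (y ** 2 - theta) / tau_theta           # sliding threshold tracks E[y^2]

print(np.round(theta, 3))
```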

  12. Adaptive Optics Analysis of Visual Benefit with Higher-order Aberrations Correction of Human Eye - Poster Paper

    NASA Astrophysics Data System (ADS)

    Xue, Lixia; Dai, Yun; Rao, Xuejun; Wang, Cheng; Hu, Yiyun; Liu, Qian; Jiang, Wenhan

    2008-01-01

    Correction of higher-order aberrations can improve the visual performance of the human eye to some extent. To evaluate how much visual benefit can be obtained from higher-order aberration correction, we developed an adaptive optics vision simulator (AOVS). Dynamic, real-time optimized modal compensation was used to implement various customized higher-order ocular aberration correction strategies. The experimental results indicate that higher-order aberration correction can improve the visual performance of the human eye compared with lower-order aberration correction alone, but the degree of improvement and the appropriate higher-order correction strategy differ between individuals. Some subjects gained substantial visual benefit when higher-order aberrations were corrected, whereas others gained little benefit even when all higher-order aberrations were corrected. Therefore, rather than a general lower-order aberration correction strategy, a customized higher-order aberration correction strategy is needed to obtain optimal visual improvement for each individual. The AOVS provides an effective tool for higher-order ocular aberration optometry and customized aberration correction.

  13. Psychoanatomical substrates of Bálint's syndrome

    PubMed Central

    Rizzo, M; Vecera, S

    2002-01-01

    Objectives: From a series of glimpses, we perceive a seamless and richly detailed visual world. Cerebral damage, however, can destroy this illusion. In the case of Bálint's syndrome, the visual world is perceived erratically, as a series of single objects. The goal of this review is to explore a range of psychological and anatomical explanations for this striking visual disorder and to propose new directions for interpreting the findings in Bálint's syndrome and related cerebral disorders of visual processing. Methods: Bálint's syndrome is reviewed in the light of current concepts and methodologies of vision research. Results: The syndrome affects visual perception (causing simultanagnosia/visual disorientation) and visual control of eye and hand movement (causing ocular apraxia and optic ataxia). Although it has been generally construed as a biparietal syndrome causing an inability to see more than one object at a time, other lesions and mechanisms are also possible. Key syndrome components are dissociable and comprise a range of disturbances that overlap the hemineglect syndrome. Inouye's observations in similar cases, beginning in 1900, antedated Bálint's initial report. Because Bálint's syndrome is not common and is difficult to assess with standard clinical tools, the literature is dominated by case reports and confounded by case selection bias, non-uniform application of operational definitions, inadequate study of basic vision, poor lesion localisation, and failure to distinguish between deficits in the acute and chronic phases of recovery. Conclusions: Studies of Bálint's syndrome have provided unique evidence on neural substrates for attention, perception, and visuomotor control. Future studies should address possible underlying psychoanatomical mechanisms at "bottom up" and "top down" levels, and should specifically consider visual working memory and attention (including object based attention) as well as systems for identification of object structure and depth from binocular stereopsis, kinetic depth, motion parallax, eye movement signals, and other cues. PMID:11796765

  14. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
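
    The data-parallel style of neuron update that such GPU implementations exploit can be illustrated with a toy example; the sketch below advances a population of leaky integrate-and-fire neurons in vectorized steps. It is not the conductance-based basal ganglia model described in the record, and all constants are assumptions; swapping NumPy for a GPU array library with the same interface (e.g., CuPy) would run the same per-neuron update on the GPU.

```python
# Toy illustration of data-parallel neuron updates (not the basal ganglia model
# from this record): every neuron's state is advanced in one vectorized step,
# the same per-neuron parallelism a GPU kernel would exploit.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 370, 0.1, 1000          # neurons, time step (ms), number of steps
tau, v_rest, v_th, v_reset = 20.0, -65.0, -50.0, -65.0

v = np.full(n, v_rest)                 # membrane potentials (mV)
spike_counts = np.zeros(n, dtype=int)

for _ in range(steps):
    I = rng.normal(1.5, 0.5, n)        # stand-in for synaptic/conductance input
    v += dt * (-(v - v_rest) + I * tau) / tau   # leaky integrate-and-fire update
    fired = v >= v_th                  # boolean mask, evaluated for all neurons at once
    spike_counts += fired
    v[fired] = v_reset                 # reset spiking neurons in parallel

print("mean rate (Hz):", spike_counts.mean() / (steps * dt / 1000.0))
```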

  15. The influence of agility training on physiological and cognitive performance.

    PubMed

    Lennemann, Lynette M; Sidrow, Kathryn M; Johnson, Erica M; Harrison, Catherine R; Vojta, Christopher N; Walker, Thomas B

    2013-12-01

    Agility training (AT) has recently been instituted in several military communities in hopes of improving combat performance and general fitness. The purpose of this study was to determine how substituting AT for traditional military physical training (PT) influences physical and cognitive performance. Forty-one subjects undergoing military technical training were divided randomly into 2 groups for 6 weeks of training. One group participated in standard military PT consisting of calisthenics and running. A second group duplicated the amount of exercise of the first group but used AT as their primary mode of training. Before and after training, subjects completed a physical and cognitive battery of tests including V̇O2max, reaction time, Illinois Agility Test, body composition, visual vigilance, dichotic listening, and working memory tests. There were significant improvements within the AT group in V̇O2max, Illinois Agility Test, visual vigilance, and continuous memory. There was a significant increase in time-to-exhaustion for the traditional group. We conclude that AT is as effective as, or more effective than, PT in enhancing physical fitness. Further, it is potentially more effective than PT in enhancing specific measures of physical and cognitive performance, such as physical agility, memory, and vigilance. Consequently, we suggest that AT be incorporated into existing military PT programs as a way to improve war-fighter performance. Further, it seems likely that the benefits of AT observed here occur in various other populations.

  16. Power spectrum model of visual masking: simulations and empirical data.

    PubMed

    Serrano-Pedraza, Ignacio; Sierra-Vázquez, Vicente; Derrington, Andrew M

    2013-06-01

    In the study of the spatial characteristics of the visual channels, the power spectrum model of visual masking is one of the most widely used. When the task is to detect a signal masked by visual noise, this classical model assumes that the signal and the noise are previously processed by a bank of linear channels and that the power of the signal at threshold is proportional to the power of the noise passing through the visual channel that mediates detection. The model also assumes that this visual channel will have the highest ratio of signal power to noise power at its output. According to this, there are masking conditions where the highest signal-to-noise ratio (SNR) occurs in a channel centered in a spatial frequency different from the spatial frequency of the signal (off-frequency looking). Under these conditions the channel mediating detection could vary with the type of noise used in the masking experiment and this could affect the estimation of the shape and the bandwidth of the visual channels. It is generally believed that notched noise, white noise and double bandpass noise prevent off-frequency looking, and high-pass, low-pass and bandpass noises can promote it independently of the channel's shape. In this study, by means of a procedure that finds the channel that maximizes the SNR at its output, we performed numerical simulations using the power spectrum model to study the characteristics of masking caused by six types of one-dimensional noise (white, high-pass, low-pass, bandpass, notched, and double bandpass) for two types of channel's shape (symmetric and asymmetric). Our simulations confirm that (1) high-pass, low-pass, and bandpass noises do not prevent the off-frequency looking, (2) white noise satisfactorily prevents the off-frequency looking independently of the shape and bandwidth of the visual channel, and interestingly we proved for the first time that (3) notched and double bandpass noises prevent off-frequency looking only when the noise cutoffs around the spatial frequency of the signal match the shape of the visual channel (symmetric or asymmetric) involved in the detection. In order to test the explanatory power of the model with empirical data, we performed six visual masking experiments. We show that this model, with only two free parameters, fits the empirical masking data with high precision. Finally, we provide equations of the power spectrum model for six masking noises used in the simulations and in the experiments.
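
    The channel-selection step of such a power-spectrum model can be illustrated with a small numerical sketch: for a bank of Gaussian channels on a log-frequency axis, compute the ratio of signal power to noise power at each channel's output and pick the maximum. The spectra, channel shapes, and parameter values below are assumptions for the example, not those of the published model; with low-pass noise plus a weak broadband floor, the SNR-maximizing channel tends to sit above the signal frequency, illustrating off-frequency looking.

```python
# Illustrative sketch of the channel-selection step in a power-spectrum masking
# model: find the channel with the highest signal-to-noise power ratio at its
# output. All parameter values are assumptions for the example.
import numpy as np

f = np.linspace(0.25, 32, 2000)                      # spatial frequency axis (c/deg)
signal_freq = 4.0
signal_spectrum = np.exp(-0.5 * ((f - signal_freq) / 0.2) ** 2)   # narrow-band signal
noise_spectrum = np.where(f <= 4.0, 1.0, 0.05)       # low-pass noise + weak broadband floor

def channel_gain(center, bandwidth_oct=1.0):
    """Gaussian channel tuning on a log-frequency axis (FWHM given in octaves)."""
    sigma = bandwidth_oct / 2.355
    return np.exp(-0.5 * ((np.log2(f) - np.log2(center)) / sigma) ** 2)

centers = 2.0 ** np.arange(-1.0, 5.01, 0.1)          # candidate channel centers
snr = []
for c in centers:
    g2 = channel_gain(c) ** 2                        # power gain of the channel
    snr.append((signal_spectrum * g2).sum() / (noise_spectrum * g2).sum())

best = centers[int(np.argmax(snr))]
print(f"signal at {signal_freq} c/deg, best channel centered at {best:.2f} c/deg")
```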

  17. STDP in lateral connections creates category-based perceptual cycles for invariance learning with multiple stimuli.

    PubMed

    Evans, Benjamin D; Stringer, Simon M

    2015-04-01

    Learning to recognise objects and faces is an important and challenging problem tackled by the primate ventral visual system. One major difficulty lies in recognising an object despite profound differences in the retinal images it projects, due to changes in view, scale, position and other identity-preserving transformations. Several models of the ventral visual system have been successful in coping with these issues, but have typically been privileged by exposure to only one object at a time. In natural scenes, however, the challenges of object recognition are typically further compounded by the presence of several objects which should be perceived as distinct entities. In the present work, we explore one possible mechanism by which the visual system may overcome these two difficulties simultaneously, through segmenting unseen (artificial) stimuli using information about their category encoded in plastic lateral connections. We demonstrate that these experience-guided lateral interactions robustly organise input representations into perceptual cycles, allowing feed-forward connections trained with spike-timing-dependent plasticity to form independent, translation-invariant output representations. We present these simulations as a functional explanation for the role of plasticity in the lateral connectivity of visual cortex.
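
    The feed-forward learning rule referred to here, spike-timing-dependent plasticity, is commonly summarized by a pair-based update: potentiation when a presynaptic spike precedes a postsynaptic one, depression otherwise. The sketch below shows that generic textbook rule with assumed constants; it is not the network model from the record.

```python
# Generic pair-based STDP update (illustrative constants; not the network model
# from this record): potentiate when a presynaptic spike precedes a postsynaptic
# spike, depress when it follows.
import numpy as np

A_plus, A_minus = 0.01, 0.012        # learning-rate amplitudes
tau_plus, tau_minus = 20.0, 20.0     # time constants (ms)

def stdp_dw(delta_t):
    """Weight change for a single pre/post spike pair; delta_t = t_post - t_pre (ms)."""
    return np.where(delta_t >= 0,
                    A_plus * np.exp(-delta_t / tau_plus),     # pre before post -> LTP
                    -A_minus * np.exp(delta_t / tau_minus))   # post before pre -> LTD

for dt in (-40, -10, 0, 10, 40):
    print(f"delta_t = {dt:4d} ms -> dw = {float(stdp_dw(dt)):+.4f}")
```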

  18. The impact of modality and working memory capacity on achievement in a multimedia environment

    NASA Astrophysics Data System (ADS)

    Stromfors, Charlotte M.

    This study explored the impact of working memory capacity and student learning in a dual modality, multimedia environment titled Visualizing Topography. This computer-based instructional program focused on the basic skills in reading and interpreting topographic maps. Two versions of the program presented the same instructional content but varied the modality of verbal information: the audio-visual condition coordinated topographic maps and narration; the visual-visual condition provided the same topographic maps with readable text. An analysis of covariance procedure was conducted to evaluate the effects due to the two conditions in relation to working memory capacity, controlling for individual differences in spatial visualization and prior knowledge. The scores on the Figural Intersection Test were used to separate subjects into three levels in terms of their measured working memory capacity: low, medium, and high. Subjects accessed Visualizing Topography by way of the Internet and proceeded independently through the program. The program architecture was linear in format. Subjects had a minimum amount of flexibility within each of five segments, but not between segments. One hundred and fifty-one subjects were randomly assigned to either the audio-visual or the visual-visual condition. The average time spent in the program was thirty-one minutes. The results of the ANCOVA revealed a small to moderate modality effect favoring an audio-visual condition. The results also showed that subjects with low and medium working capacity benefited more from the audio-visual condition than the visual-visual condition, while subjects with a high working memory capacity did not benefit from either condition. Although splitting the data reduced group sizes, ANCOVA results by gender suggested that the audio-visual condition favored females with low working memory capacities. The results have implications for designers of educational software, the teachers who select software, and the students themselves. Splitting information into two, non-redundant sources, one audio and one visual, may effectively extend working memory capacity. This is especially significant for the student population encountering difficult science concepts that require the formation and manipulation of mental representations. It is recommended that multimedia environments be designed or selected with attention to modality conditions that facilitate student learning.

  19. Multitime correlation functions in nonclassical stochastic processes

    NASA Astrophysics Data System (ADS)

    Krumm, F.; Sperling, J.; Vogel, W.

    2016-06-01

    A general method is introduced for verifying multitime quantum correlations through the characteristic function of the time-dependent P functional that generalizes the Glauber-Sudarshan P function. Quantum correlation criteria are derived which identify quantum effects for an arbitrary number of points in time. The Magnus expansion is used to visualize the impact of the required time ordering, which becomes crucial in situations when the interaction problem is explicitly time dependent. We show that the latter affects the multi-time-characteristic function and, therefore, the temporal evolution of the nonclassicality. As an example, we apply our technique to an optical parametric process with a frequency mismatch. The resulting two-time-characteristic function yields full insight into the two-time quantum correlation properties of such a system.
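
    For orientation, the single-time objects that the multitime construction generalizes can be written down explicitly; the following are the textbook Glauber-Sudarshan representation and its normally ordered characteristic function, not the multitime functional introduced in the paper.

```latex
% Single-time Glauber-Sudarshan representation and its normally ordered
% characteristic function (textbook forms; the record generalizes these to
% an arbitrary number of points in time).
\hat{\rho} = \int P(\alpha)\, |\alpha\rangle\langle\alpha|\, d^{2}\alpha , \qquad
\Phi(\beta) = \mathrm{Tr}\!\left[\hat{\rho}\, e^{\beta \hat{a}^{\dagger}} e^{-\beta^{*} \hat{a}}\right]
            = \int P(\alpha)\, e^{\beta \alpha^{*} - \beta^{*} \alpha}\, d^{2}\alpha .
```

    If P(α) were a classical probability density, |Φ(β)| ≤ 1 would hold for every β, so observing |Φ(β)| > 1 at some β certifies a nonclassical state; the criteria in this record extend that logic to characteristic functions defined over several points in time.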

  20. Time course influences transfer of visual perceptual learning across spatial location.

    PubMed

    Larcombe, S J; Kennard, C; Bridge, H

    2017-06-01

    Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Early and Late Inhibitions Elicited by a Peripheral Visual Cue on Manual Response to a Visual Target: Are They Based on Cartesian Coordinates?

    ERIC Educational Resources Information Center

    Gawryszewski, Luiz G.; Carreiro, Luiz Renato R.; Magalhaes, Fabio V.

    2005-01-01

    A non-informative cue (C) elicits an inhibition of manual reaction time (MRT) to a visual target (T). We report an experiment to examine if the spatial distribution of this inhibitory effect follows Polar or Cartesian coordinate systems. C appeared at one out of 8 isoeccentric (7°) positions, the C-T angular distances (in polar…

  2. Temporal Expectations Guide Dynamic Prioritization in Visual Working Memory through Attenuated α Oscillations.

    PubMed

    van Ede, Freek; Niklaus, Marcel; Nobre, Anna C

    2017-01-11

    Although working memory is generally considered a highly dynamic mnemonic store, popular laboratory tasks used to understand its psychological and neural mechanisms (such as change detection and continuous reproduction) often remain relatively "static," involving the retention of a set number of items throughout a shared delay interval. In the current study, we investigated visual working memory in a more dynamic setting, and assessed the following: (1) whether internally guided temporal expectations can dynamically and reversibly prioritize individual mnemonic items at specific times at which they are deemed most relevant; and (2) the neural substrates that support such dynamic prioritization. Participants encoded two differently colored oriented bars into visual working memory to retrieve the orientation of one bar with a precision judgment when subsequently probed. To test for the flexible temporal control to access and retrieve remembered items, we manipulated the probability for each of the two bars to be probed over time, and recorded EEG in healthy human volunteers. Temporal expectations had a profound influence on working memory performance, leading to faster access times as well as more accurate orientation reproductions for items that were probed at expected times. Furthermore, this dynamic prioritization was associated with the temporally specific attenuation of contralateral α (8-14 Hz) oscillations that, moreover, predicted working memory access times on a trial-by-trial basis. We conclude that attentional prioritization in working memory can be dynamically steered by internally guided temporal expectations, and is supported by the attenuation of α oscillations in task-relevant sensory brain areas. In dynamic, everyday-like, environments, flexible goal-directed behavior requires that mental representations that are kept in an active (working memory) store are dynamic, too. We investigated working memory in a more dynamic setting than is conventional, and demonstrate that expectations about when mnemonic items are most relevant can dynamically and reversibly prioritize these items in time. Moreover, we uncover a neural substrate of such dynamic prioritization in contralateral visual brain areas and show that this substrate predicts working memory retrieval times on a trial-by-trial basis. This places the experimental study of working memory, and its neuronal underpinnings, in a more dynamic and ecologically valid context, and provides new insights into the neural implementation of attentional prioritization within working memory. Copyright © 2017 van Ede et al.

  3. Tensoral for post-processing users and simulation authors

    NASA Technical Reports Server (NTRS)

    Dresselhaus, Eliot

    1993-01-01

    The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.

  4. Visual acuity and refractive errors in a suburban Danish population: Inter99 Eye Study.

    PubMed

    Kessel, Line; Hougaard, Jesper Leth; Mortensen, Claus; Jørgensen, Torben; Lund-Andersen, Henrik; Larsen, Michael

    2004-02-01

    The present study was performed as part of an epidemiological study, the Inter99 Eye Study. The aim of the study was to describe refractive errors and visual acuity (VA) in a suburban Danish population. The Inter99 Eye Study comprised 970 subjects aged 30-60 years and included a random control group as well as groups at high risk for ischaemic heart disease and diabetes mellitus. The present study presents VAs and refractive data from the control group (n = 502). All subjects completed a detailed questionnaire and underwent a standardized general physical and ophthalmic examination including determination of best corrected VA and subjective refractioning. Visual acuity

  5. A matter of time: improvement of visual temporal processing during training-induced restoration of light detection performance

    PubMed Central

    Poggel, Dorothe A.; Treutwein, Bernhard; Sabel, Bernhard A.; Strasburger, Hans

    2015-01-01

    The issue of how basic sensory and temporal processing are related is still unresolved. We studied temporal processing, as assessed by simple visual reaction times (RT) and double-pulse resolution (DPR), in patients with partial vision loss after visual pathway lesions and investigated whether vision restoration training (VRT), a training program designed to improve light detection performance, would also affect temporal processing. Perimetric and campimetric visual field tests as well as maps of DPR thresholds and RT were acquired before and after a 3-month training period with VRT. Patient performance was compared to that of age-matched healthy subjects. Intact visual field size increased during training. Averaged across the entire visual field, DPR remained constant while RT improved slightly. However, in transition zones between the blind and intact areas (areas of residual vision) where patients had shown between 20 and 80% of stimulus detection probability in pre-training visual field tests, both DPR and RT improved markedly. The magnitude of improvement depended on the defect depth (or degree of intactness) of the respective region at baseline. Inter-individual training outcome variability was very high, with some patients showing little change and others showing performance approaching that of healthy controls. Training-induced improvement of light detection in patients with visual field loss thus generalized to dynamic visual functions. The findings suggest that similar neural mechanisms may underlie the impairment and subsequent training-induced functional recovery of both light detection and temporal processing. PMID:25717307

  6. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  7. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  8. Python for Large-Scale Electrophysiology

    PubMed Central

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation (“dimstim”); one for electrophysiological waveform visualization and spike sorting (“spyke”); and one for spike train and stimulus analysis (“neuropy”). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience. PMID:19198646

  9. A haptics-assisted cranio-maxillofacial surgery planning system for restoring skeletal anatomy in complex trauma cases.

    PubMed

    Olsson, Pontus; Nysjö, Fredrik; Hirsch, Jan-Michaél; Carlbom, Ingrid B

    2013-11-01

    Cranio-maxillofacial (CMF) surgery to restore normal skeletal anatomy in patients with serious trauma to the face can be both complex and time-consuming, but it is generally accepted that careful pre-operative planning leads to a better outcome with a higher degree of function and reduced morbidity, in addition to reduced time in the operating room. However, today's surgery planning systems are primitive, relying mostly on the user's ability to plan complex tasks with a two-dimensional graphical interface. We describe a system for planning the restoration of skeletal anatomy in facial trauma patients using a virtual model derived from patient-specific CT data. The system combines stereo visualization with six degrees-of-freedom, high-fidelity haptic feedback that enables analysis, planning, and preoperative testing of alternative solutions for restoring bone fragments to their proper positions. The stereo display provides accurate visual spatial perception, and the haptics system provides intuitive haptic feedback when bone fragments are in contact as well as six degrees-of-freedom attraction forces for precise bone fragment alignment. A senior surgeon without prior experience of the system received 45 min of system training. Following the training session, he completed a virtual reconstruction of a complex mandibular fracture in 22 min, with an adequately reduced result. Preliminary testing with one surgeon indicates that our surgery planning system, which combines stereo visualization with sophisticated haptics, has the potential to become a powerful tool for CMF surgery planning. With little training, it allows a surgeon to complete a complex plan in a short amount of time.

  10. Visualizing multiattribute Web transactions using a freeze technique

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Cotting, Daniel; Dayal, Umeshwar; Machiraju, Vijay; Garg, Pankaj

    2003-05-01

    Web transactions are multidimensional and have a number of attributes: client, URL, response times, and numbers of messages. One of the key questions is how to simultaneously lay out in a graph the multiple relationships, such as the relationships between the web client response times and URLs in a web access application. In this paper, we describe a freeze technique to enhance a physics-based visualization system for web transactions. The idea is to freeze one set of objects before laying out the next set of objects during the construction of the graph. As a result, we substantially reduce the force computation time. This technique consists of three steps: automated classification, a freeze operation, and a graph layout. These three steps are iterated until the final graph is generated. This iterated-freeze technique has been prototyped in several e-service applications at Hewlett Packard Laboratories. It has been used to visually analyze large volumes of service and sales transactions at online web sites.
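
    As a rough illustration of the freeze idea described above, here is a minimal Python sketch (the force model, the class-by-class placement, and the example node names are simplifying assumptions for illustration, not the Hewlett Packard prototype):

        import numpy as np

        def layout_with_freeze(classes, iters=200, step=0.05, seed=0):
            # classes: list of lists of node ids. Each class is laid out while the
            # previously placed classes stay frozen, so forces are only computed
            # for the nodes that are still allowed to move.
            rng = np.random.default_rng(seed)
            pos = {}                                  # node id -> 2-D position
            for group in classes:
                for n in group:                       # new nodes start at random spots
                    pos[n] = rng.uniform(-1.0, 1.0, size=2)
                placed = list(pos)
                for _ in range(iters):
                    for n in group:                   # only the current class moves
                        force = -0.1 * pos[n]         # weak pull toward the origin
                        for m in placed:
                            if m == n:
                                continue
                            d = pos[n] - pos[m]
                            dist = float(np.linalg.norm(d)) + 1e-9
                            force += d / dist ** 2    # pairwise repulsion
                        pos[n] += step * force
                # nodes in this class are now frozen for all later classes
            return pos

        # e.g. clients, then URLs, then response-time buckets, placed in that order
        print(layout_with_freeze([["client_a", "client_b"], ["url_1"], ["rt_fast", "rt_slow"]]))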

  11. Causes of visual disability among Central Africans with diabetes mellitus.

    PubMed

    Mvitu Muaka, M; Longo-Mbenza, B

    2012-06-01

    Diabetic retinopathy (DR) remains common and is one of the major causes of blindness in developed and Western societies. The same situation is seen in emerging economic areas (5,6). In sub-Saharan Africa (SSA), however, the issues of visual disability due to diabetes mellitus (DM) are overshadowed by the prevalence of common nutritional deficiency diseases and eye infections. This clinic-based study was conducted to determine whether diabetic retinopathy is independently related to visual disability in black patients with diabetes mellitus (DM) from Kinshasa, Congo. A total of 299 urban patients with DM and low income were assessed, including 108 cases of visual disability matched for time of admission and DM type to 191 controls. Demographic, clinical, and ophthalmic data were assessed using univariate and multivariate analyses. Age ≥60 years, female sex, presence of diabetic retinopathy (DR), proliferative DR, shorter DM duration, glaucoma, macular oedema, and diabetic nephropathy were the univariate risk factors of visual disability. Using a logistic regression model, visual disability was significantly associated with female sex and diabetic retinopathy. The risk of visual disability is 4 times higher in patients with diabetic retinopathy and 2 times higher in females with DM. Therefore, to prevent a further increase of visual disability, the Congolese Ministry of Health should prioritize eye care in patients with DM.

  12. Visual detection of particulates in x-ray images of processed meat products

    NASA Astrophysics Data System (ADS)

    Schatzki, Thomas F.; Young, Richard; Haff, Ron P.; Eye, J.; Wright, G.

    1996-08-01

    A study was conducted to test the efficacy of detecting particulate contaminants in processed meat samples by visual observation of line-scanned x-ray images. Six hundred field-collected processed-product samples were scanned at 230 cm2/s using 0.5 x 0.5-mm resolution and 50 kV, 13 mA excitation. The x-ray images were image corrected, digitally stored, and inspected off-line, using interactive image enhancement. Forty percent of the samples were spiked with added contaminants to establish the visual recognition of contaminants as a function of sample thickness (1 to 10 cm), texture of the x-ray image (smooth/textured), spike composition (wood/bone/glass), size (0.1 to 0.4 cm), and shape (splinter/round). The results were analyzed using a maximum likelihood logistic regression method. In packages less than 6 cm thick, 0.2-cm-thick bone chips were easily recognized, 0.1-cm glass splinters were recognized with some difficulty, while 0.4-cm-thick wood was generally missed. Operational feasibility in a time-constrained setting was confirmed. One half percent of the samples arriving from the field contained bone slivers > 1 cm long, one half percent contained metallic material, while 4% contained particulates exceeding 0.3 cm in size. All of the latter appeared to be bone fragments.

  13. The architecture of intuition: Fluency and affect determine intuitive judgments of semantic and visual coherence and judgments of grammaticality in artificial grammar learning.

    PubMed

    Topolinski, Sascha; Strack, Fritz

    2009-02-01

    People can intuitively detect whether a word triad has a common remote associate (coherent) or does not have one (incoherent) before and independently of actually retrieving the common associate. The authors argue that semantic coherence increases the processing fluency for coherent triads and that this increased fluency triggers a brief and subtle positive affect, which is the experiential basis of these intuitions. In a series of 11 experiments with 3 different fluency manipulations (figure-ground contrast, repeated exposure, and subliminal visual priming) and 3 different affect inductions (short-timed facial feedback, subliminal facial priming, and affect-laden word triads), high fluency and positive affect independently and additively increased the probability that triads would be judged as coherent, irrespective of actual coherence. The authors could equalize and even reverse coherence judgments (i.e., incoherent triads were judged to be coherent more frequently than were coherent triads). When explicitly instructed, participants were unable to correct their judgments for the influence of affect, although they were aware of the manipulation. The impact of fluency and affect was also generalized to intuitions of visual coherence and intuitions of grammaticality in an artificial grammar learning paradigm. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  14. Gaze control for an active camera system by modeling human pursuit eye movements

    NASA Astrophysics Data System (ADS)

    Toelg, Sebastian

    1992-11-01

    The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that evaluates the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., those moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention on regions where the object is expected. The system proved to behave in a stable manner under real-time conditions in complex natural environments and manages general object motion. In addition, it exhibits several interesting abilities well known from psychophysics, such as catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.

  15. Visual function in patients with cone-rod dystrophy (CRD) associated with mutations in the ABCA4(ABCR) gene.

    PubMed

    Birch, D G; Peters, A Y; Locke, K L; Spencer, R; Megarity, C F; Travis, G H

    2001-12-01

    Mutations in the ABCA4(ABCR) gene cause autosomal recessive Stargardt disease (STGD). ABCR mutations were identified in patients with cone-rod dystrophy (CRD) and retinitis pigmentosa (RP) by direct sequencing of all 50 exons in 40 patients. Of 10 patients with RP, one contained two ABCR mutations suggesting a compound heterozygote. This patient had a characteristic fundus appearance with attenuated vessels, pale disks and bone-spicule pigmentation. Rod electroretinograms (ERGs) were non-detectable, cone ERGs were greatly reduced in amplitude and delayed in implicit time, and visual fields were constricted to 10 degrees diameter. Eleven of 30 (37%) patients with CRD had mutations in ABCR. In general, these patients showed reduced but detectable rod ERG responses, reduced and delayed cone responses, and poor visual acuity. Rod photoresponses to high intensity flashes were of reduced maximum amplitude but showed normal values for the gain of phototransduction. Most CRD patients with mutations in ABCR showed delayed recovery of sensitivity (dark adaptation) following exposure to bright light. Pupils were also significantly smaller in these patients compared to controls at 30 min following light exposure, consistent with a persistent 'equivalent light' background due to the accumulation of a tentatively identified 'noisy' photoproduct. Copyright 2001 Academic Press.

  16. Longitudinal and cross-sectional analyses of visual field progression in participants of the Ocular Hypertension Treatment Study.

    PubMed

    Artes, Paul H; Chauhan, Balwantray C; Keltner, John L; Cello, Kim E; Johnson, Chris A; Anderson, Douglas R; Gordon, Mae O; Kass, Michael A

    2010-12-01

    To assess agreement between longitudinal and cross-sectional analyses for determining visual field progression in data from the Ocular Hypertension Treatment Study. Visual field data from 3088 eyes of 1570 participants (median follow-up, 7 years) were analyzed. Longitudinal analyses were performed using change probability with total and pattern deviation, and cross-sectional analyses were performed using the glaucoma hemifield test, corrected pattern standard deviation, and mean deviation. The rates of mean deviation and general height change were compared to estimate the degree of diffuse loss in emerging glaucoma. Agreement on progression in longitudinal and cross-sectional analyses ranged from 50% to 61% and remained nearly constant across a wide range of criteria. In contrast, agreement on absence of progression ranged from 97.0% to 99.7%, being highest for the stricter criteria. Analyses of pattern deviation were more conservative than analyses of total deviation, with a 3 to 5 times lesser incidence of progression. Most participants developing field loss had both diffuse and focal changes. Despite considerable overall agreement, 40% to 50% of eyes identified as having progressed with either longitudinal or cross-sectional analyses were identified with only one of the analyses. Because diffuse change is part of early glaucomatous damage, pattern deviation analyses may underestimate progression in patients with ocular hypertension.

  17. Youth with Visual Impairments: Experiences in General Physical Education

    ERIC Educational Resources Information Center

    Lieberman, Lauren J.; Robinson, Barbara L.; Rollheiser, Heidi

    2006-01-01

    The rapid increase in the number of students with visual impairments currently being educated in inclusive general physical education makes it important that physical education instructors know how best to serve them. Assessment of the experiences of students with visual impairments during general physical education classes, knowledge of students'…

  18. Spatial sequences, but not verbal sequences, are vulnerable to general interference during retention in working memory.

    PubMed

    Morey, Candice C; Miron, Monica D

    2016-12-01

    Among models of working memory, there is not yet a consensus about how to describe functions specific to storing verbal or visual-spatial memories. We presented aural-verbal and visual-spatial lists simultaneously and sometimes cued one type of information after presentation, comparing accuracy in conditions with and without informative retro-cues. This design isolates interference due specifically to maintenance, which appears most clearly in the uncued trials, from interference due to encoding, which occurs in all dual-task trials. When recall accuracy was comparable between tasks, we found that spatial memory was worse in uncued than in retro-cued trials, whereas verbal memory was not. Our findings bolster proposals that maintenance of spatial serial order, like maintenance of visual materials more broadly, relies on general rather than specialized resources, while maintenance of verbal sequences may rely on domain-specific resources. We argue that this asymmetry should be explicitly incorporated into models of working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. E-Readers and Visual Fatigue

    PubMed Central

    Benedetto, Simone; Drai-Zerbib, Véronique; Pedrotti, Marco; Tissier, Geoffrey; Baccino, Thierry

    2013-01-01

    The mass digitization of books is changing the way information is created, disseminated and displayed. Electronic book readers (e-readers) generally refer to two main display technologies: the electronic ink (E-ink) and the liquid crystal display (LCD). Both technologies have advantages and disadvantages, but the question whether one or the other triggers less visual fatigue is still open. The aim of the present research was to study the effects of the display technology on visual fatigue. To this end, participants performed a longitudinal study in which two last generation e-readers (LCD, E-ink) and paper book were tested in three different prolonged reading sessions separated by - on average - ten days. Results from both objective (Blinks per second) and subjective (Visual Fatigue Scale) measures suggested that reading on the LCD (Kindle Fire HD) triggers higher visual fatigue with respect to both the E-ink (Kindle Paperwhite) and the paper book. The absence of differences between E-ink and paper suggests that, concerning visual fatigue, the E-ink is indeed very similar to the paper. PMID:24386252

  20. Interactive Visualization of Large-Scale Hydrological Data using Emerging Technologies in Web Systems and Parallel Programming

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2013-12-01

    As geoscientists are confronted with increasingly massive datasets, from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data and modify parameters to create custom views of the data to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component in building comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools developed in light of these challenges.

  1. Visual development in primates: Neural mechanisms and critical periods

    PubMed Central

    Kiorpes, Lynne

    2015-01-01

    Despite many decades of research into the development of visual cortex, it remains unclear what neural processes set limitations on the development of visual function and define its vulnerability to abnormal visual experience. This selected review examines the development of visual function and its neural correlates, and highlights the fact that in most cases receptive field properties of infant neurons are substantially more mature than infant visual function. One exception is temporal resolution, which can be accounted for by resolution of neurons at the level of the LGN. In terms of spatial vision, properties of single neurons alone are not sufficient to account for visual development. Different visual functions develop over different time courses. Their onset may be limited by the existence of neural response properties that support a given perceptual ability, but the subsequent time course of maturation to adult levels remains unexplained. Several examples are offered suggesting that taking account of weak signaling by infant neurons, correlated firing, and pooled responses of populations of neurons brings us closer to an understanding of the relationship between neural and behavioral development. PMID:25649764

  2. Aging and feature search: the effect of search area.

    PubMed

    Burton-Danner, K; Owsley, C; Jackson, G R

    2001-01-01

    The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.

  3. Entering an era of dynamic structural biology….

    PubMed

    Orville, Allen M

    2018-05-31

    A recent paper in BMC Biology presents a general method for mix-and-inject serial crystallography, to facilitate the visualization of enzyme intermediates via time-resolved serial femtosecond crystallography (tr-SFX). They apply their method to resolve in near atomic detail the cleavage and inactivation of the antibiotic ceftriaxone by a β-lactamase enzyme from Mycobacterium tuberculosis. Their work demonstrates the general applicability of time-resolved crystallography, from which dynamic structures, at atomic resolution, can be obtained.See research article: https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-018-0524-5 .

  4. Sparse Contextual Activation for Efficient Visual Re-Ranking.

    PubMed

    Bai, Song; Bai, Xiang

    2016-03-01

    In this paper, we propose an extremely efficient algorithm for visual re-ranking. By considering the original pairwise distance in the contextual space, we develop a feature vector called sparse contextual activation (SCA) that encodes the local distribution of an image. Hence, re-ranking task can be simply accomplished by vector comparison under the generalized Jaccard metric, which has its theoretical meaning in the fuzzy set theory. In order to improve the time efficiency of re-ranking procedure, inverted index is successfully introduced to speed up the computation of generalized Jaccard metric. As a result, the average time cost of re-ranking for a certain query can be controlled within 1 ms. Furthermore, inspired by query expansion, we also develop an additional method called local consistency enhancement on the proposed SCA to improve the retrieval performance in an unsupervised manner. On the other hand, the retrieval performance using a single feature may not be satisfactory enough, which inspires us to fuse multiple complementary features for accurate retrieval. Based on SCA, a robust feature fusion algorithm is exploited that also preserves the characteristic of high time efficiency. We assess our proposed method in various visual re-ranking tasks. Experimental results on Princeton shape benchmark (3D object), WM-SRHEC07 (3D competition), YAEL data set B (face), MPEG-7 data set (shape), and Ukbench data set (image) manifest the effectiveness and efficiency of SCA.
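
    A minimal Python sketch of the generalized Jaccard comparison and the inverted-index shortcut mentioned above (the sparse-dictionary representation and the toy gallery are illustrative assumptions, not the authors' implementation):

        def generalized_jaccard(x, y):
            # x, y: sparse non-negative activation vectors as {feature_id: weight}
            keys = set(x) | set(y)
            num = sum(min(x.get(k, 0.0), y.get(k, 0.0)) for k in keys)
            den = sum(max(x.get(k, 0.0), y.get(k, 0.0)) for k in keys)
            return num / den if den > 0 else 0.0

        def rerank(query_vec, gallery):
            # gallery: {image_id: sparse vector}. The inverted index restricts the
            # comparison to images sharing at least one active feature with the query.
            inverted = {}
            for img, vec in gallery.items():
                for k in vec:
                    inverted.setdefault(k, set()).add(img)
            candidates = set().union(*(inverted.get(k, set()) for k in query_vec))
            scores = {img: generalized_jaccard(query_vec, gallery[img]) for img in candidates}
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        # toy example: re-rank three gallery images against a query activation
        gallery = {"a": {1: 0.5, 2: 0.2}, "b": {2: 0.9, 3: 0.1}, "c": {4: 1.0}}
        print(rerank({1: 0.4, 2: 0.3}, gallery))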

  5. Simulator-induced spatial disorientation: effects of age, sleep deprivation, and type of conflict.

    PubMed

    Previc, Fred H; Ercoline, William R; Evans, Richard H; Dillon, Nathan; Lopez, Nadia; Daluz, Christina M; Workman, Andrew

    2007-05-01

    Spatial disorientation (SD) mishaps are more frequent at night and with greater time on task, and sleep deprivation is known to decrease cognitive and overall flight performance. However, the ability to perceive and to be influenced by physiologically appropriate simulated SD conflicts has not previously been studied in an automated simulator flight profile. A set of 10 flight profiles was flown by 10 U.S. Air Force (USAF) pilots over a period of 28 h in a specially designed flight simulator for spatial disorientation research and training. Of the 10 flights, 4 each had a total of 7 SD conflicts inserted, 5 simulating motion illusions and 2 involving visual illusions. The percentage of conflict reports was measured along with the effects of four conflicts on flight performance. The results showed that, with one exception, all motion conflicts were reported over 60% of the time, whereas the two visual illusions were reported on average only 25% of the time, although they both significantly affected flight performance. Pilots older than 35 yr of age were more likely to report conflicts than were those under 30 yr of age (63% vs. 38%), whereas fatigue had little effect overall on either recognized or unrecognized SD. The overall effects of these conflicts on perception and performance were generally not altered by sleep deprivation, despite clear indications of fatigue in our pilots.

  6. Accidental human laser retinal injuries from military laser systems

    NASA Astrophysics Data System (ADS)

    Stuck, Bruce E.; Zwick, Harry; Molchany, Jerome W.; Lund, David J.; Gagliano, Donald A.

    1996-04-01

    The time course of the ophthalmoscopic and functional consequences of eight human laser accident cases from military laser systems is described. All patients reported subjective vision loss with ophthalmoscopic evidence of retinal alteration ranging from vitreous hemorrhage to retinal burn. Five of the cases involved single or multiple exposures to Q-switched neodymium radiation at close range, whereas the other three incidents occurred at longer ranges. Most exposures were within 5 degrees of the foveola, yet none directly in the foveola. High contrast visual acuity improved with time except in the cases with progressive retinal fibrosis between lesion sites or retinal hole formation encroaching on the fovea. In one patient the visual acuity recovered from 20/60 at one week to 20/25 in four months with minimal central visual field loss. Most cases showed suppression of high and low spatial frequency contrast sensitivity. Visual field measurements were enlarged relative to ophthalmoscopic lesion size observations. Deep retinal scar formation and retinal traction were evident in two of the three cases with vitreous hemorrhage. In one patient, nerve fiber layer damage to the papillo-macular bundle was clearly evident. Visual performance measured with a pursuit tracking task revealed significant performance loss relative to normal tracking observers even in cases where acuity returned to near normal levels. These functional and performance deficits may reflect secondary effects of parafoveal laser injury.

  7. Introductory Earth science education by near real time animated visualization of seismic wave propagation across Transportable Array of USArray

    NASA Astrophysics Data System (ADS)

    Attanayake, J.; Ghosh, A.; Amosu, A.

    2010-12-01

    Students of this generation are markedly different from their predecessors because they grow up and learn in a world of visual technology populated by touch screens and smart boards. Recent studies have found that the attention span of university students taught with traditional lecture methods is roughly fifteen minutes, and that the number of students paying attention drops significantly over the course of a lecture. On the other hand, when carefully segmented and learner-paced, animated visualizations can enhance the learning experience. Therefore, instructors are faced with the difficult task of designing more complex teaching environments to improve learner productivity. We have developed an animated visualization of earthquake wave propagation across a generic transect of the Transportable Array of the USArray from a magnitude 6.9 event that occurred in the Gulf of California on August 3rd, 2009. Although the prototype tool is built in MATLAB, one of the most popular programming environments in the seismology community, the movies can be run as a standalone stream with any built-in media player that supports the .avi file format. We infer continuous ground motion along the transect through a projection and interpolation mechanism based on data from stations within 100 km of the transect. In the movies we identify the arrival of surface waves that have high amplitudes. However, over time, although typical Rayleigh-type ground motion can be observed, the motion at any given point becomes complex owing to interference of different wave types and different seismic properties of the subsurface. This clearly is different from simple representations of seismic wave propagation in most introductory textbooks. Further, we find a noisy station that shows unusually high amplitude. We refrain from deleting this station in order to demonstrate that in a real-world experiment there will generally be complexities arising from unexpected behavior of instruments and/or the system under investigation. Explaining such behavior and exploring ways to minimize biases arising from it is an important lesson to learn in introductory science classes. This program can generate visualizations of ground motion from events in the Gulf of California in near real time and, with little further development, from events elsewhere.
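
    A rough sketch of the projection-and-interpolation step described above, written in Python rather than the authors' MATLAB (the inverse-distance weighting, the planar kilometre coordinates, and the example station data are assumptions for illustration only):

        import numpy as np

        def motion_along_transect(station_xy, amplitudes, t_start, t_end, n=200, max_dist_km=100.0):
            # station_xy: (S, 2) station coordinates in km; amplitudes: (S,) ground-motion
            # amplitudes at one time step; t_start, t_end: transect end points in km.
            t_start, t_end = np.asarray(t_start, float), np.asarray(t_end, float)
            samples = t_start + np.outer(np.linspace(0.0, 1.0, n), t_end - t_start)
            motion = np.full(n, np.nan)
            for i, p in enumerate(samples):
                d = np.linalg.norm(station_xy - p, axis=1)
                near = d < max_dist_km                 # only stations within 100 km contribute
                if near.any():
                    w = 1.0 / (d[near] + 1e-6)         # inverse-distance weights (assumed scheme)
                    motion[i] = np.sum(w * amplitudes[near]) / np.sum(w)
            return motion

        # three made-up stations and one snapshot of amplitudes
        stations = np.array([[0.0, 10.0], [50.0, -20.0], [120.0, 5.0]])
        amps = np.array([0.3, 1.1, 0.6])
        print(motion_along_transect(stations, amps, t_start=(0.0, 0.0), t_end=(150.0, 0.0))[:5])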

  8. Place avoidance learning and memory in a jumping spider.

    PubMed

    Peckmezian, Tina; Taylor, Phillip W

    2017-03-01

    Using a conditioned passive place avoidance paradigm, we investigated the relative importance of three experimental parameters on learning and memory in a salticid, Servaea incana. Spiders encountered an aversive electric shock stimulus paired with one side of a two-sided arena. Our three parameters were the ecological relevance of the visual stimulus, the time interval between trials and the time interval before test. We paired electric shock with either a black or white visual stimulus, as prior studies in our laboratory have demonstrated that S. incana prefer dark 'safe' regions to light ones. We additionally evaluated the influence of two temporal features (time interval between trials and time interval before test) on learning and memory. Spiders exposed to the shock stimulus learned to associate shock with the visual background cue, but the extent to which they did so was dependent on which visual stimulus was present and the time interval between trials. Spiders trained with a long interval between trials (24 h) maintained performance throughout training, whereas spiders trained with a short interval (10 min) maintained performance only when the safe side was black. When the safe side was white, performance worsened steadily over time. There was no difference between spiders tested after a short (10 min) or long (24 h) interval before test. These results suggest that the ecological relevance of the stimuli used and the duration of the interval between trials can influence learning and memory in jumping spiders.

  9. Cortico-basal ganglia networks subserving goal-directed behavior mediated by conditional visuo-goal association

    PubMed Central

    Hoshi, Eiji

    2013-01-01

    Action is often executed according to information provided by a visual signal. As this type of behavior integrates two distinct neural representations, perception and action, it has been thought that identification of the neural mechanisms underlying this process will yield deeper insights into the principles underpinning goal-directed behavior. Based on a framework derived from conditional visuomotor association, prior studies have identified neural mechanisms in the dorsal premotor cortex (PMd), dorsolateral prefrontal cortex (dlPFC), ventrolateral prefrontal cortex (vlPFC), and basal ganglia (BG). However, applications resting solely on this conceptualization encounter problems related to generalization and flexibility, essential processes in executive function, because the association mode involves a direct one-to-one mapping of each visual signal onto a particular action. To overcome this problem, we extend this conceptualization and postulate a more general framework, conditional visuo-goal association. According to this new framework, the visual signal identifies an abstract behavioral goal, and an action is subsequently selected and executed to meet this goal. Neuronal activity recorded from the four key areas of the brains of monkeys performing a task involving conditional visuo-goal association revealed three major mechanisms underlying this process. First, visual-object signals are represented primarily in the vlPFC and BG. Second, all four areas are involved in initially determining the goals based on the visual signals, with the PMd and dlPFC playing major roles in maintaining the salience of the goals. Third, the cortical areas play major roles in specifying action, whereas the role of the BG in this process is restrictive. These new lines of evidence reveal that the four areas involved in conditional visuomotor association contribute to goal-directed behavior mediated by conditional visuo-goal association in an area-dependent manner. PMID:24155692

  10. Visual Analytics of integrated Data Systems for Space Weather Purposes

    NASA Astrophysics Data System (ADS)

    Rosa, Reinaldo; Veronese, Thalita; Giovani, Paulo

    Analysis of information from multiple data sources obtained through high-resolution instrumental measurements has become a fundamental task in all scientific areas. The development of expert methods able to treat such multi-source data systems, with both large variability and measurement extension, is key for studying complex scientific phenomena, especially those related to systemic analysis in space and environmental sciences. In this talk, we present a time series generalization introducing the concept of the generalized numerical lattice, which represents a discrete sequence of temporal measures for a given variable. In this representation, each generalized numerical lattice carries post-analytical information about the data. We define a generalized numerical lattice as a set of three parameters representing the following data properties: dimensionality, size, and a post-analytical measure (e.g., autocorrelation, Hurst exponent) [1]. From this generalization, any multi-source database can be reduced to a closed set of classified time series in spatiotemporal generalized dimensions. As a case study, we show a preliminary application to space science data, highlighting the possibility of a real-time expert analysis system. In this particular application, we have selected and analyzed, using detrended fluctuation analysis (DFA), several decimetric solar bursts associated with X-class flares. The association with geomagnetic activity is also reported. The DFA method is performed within the framework of an automatic radio-burst monitoring system. Our results characterize the evolution of the variability pattern by computing the DFA scaling exponent over short windows of the time series preceding the extreme event [2]. For the first time, the application of systematic fluctuation analysis for space weather purposes is presented. The visual analytics prototype is implemented in the Compute Unified Device Architecture (CUDA) using Nvidia K20 graphics processing units (GPUs) to reduce the integrated analysis runtime. [1] Veronese et al. doi: 10.6062/jcis.2009.01.02.0021, 2010. [2] Veronese et al. doi:http://dx.doi.org/10.1016/j.jastp.2010.09.030, 2011.
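
    For readers unfamiliar with DFA, a minimal NumPy sketch of the scaling-exponent computation follows (first-order detrending; the window sizes and the white-noise test series are illustrative choices, not the solar-burst data or the CUDA implementation):

        import numpy as np

        def dfa_exponent(x, window_sizes=(16, 32, 64, 128, 256)):
            # Detrended fluctuation analysis: integrate the mean-removed series,
            # remove a linear trend in non-overlapping windows of size n, and fit
            # the slope of log F(n) versus log n to get the scaling exponent alpha.
            y = np.cumsum(np.asarray(x, float) - np.mean(x))      # integrated profile
            fluctuations = []
            for n in window_sizes:
                n_windows = len(y) // n
                f2 = []
                for w in range(n_windows):
                    seg = y[w * n:(w + 1) * n]
                    t = np.arange(n)
                    coef = np.polyfit(t, seg, 1)                  # local linear trend
                    f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                fluctuations.append(np.sqrt(np.mean(f2)))
            alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
            return alpha

        # white noise should give alpha close to 0.5
        rng = np.random.default_rng(1)
        print(dfa_exponent(rng.normal(size=4096)))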

  11. Auditory and Visual Cues for Topic Maintenance with Persons Who Exhibit Dementia of Alzheimer's Type.

    PubMed

    Teten, Amy F; Dagenais, Paul A; Friehe, Mary J

    2015-01-01

    This study compared the effectiveness of auditory and visual redirections in facilitating topic coherence for persons with Dementia of Alzheimer's Type (DAT). Five persons with moderate stage DAT engaged in conversation with the first author. Three topics related to activities of daily living, recreational activities, food, and grooming, were broached. Each topic was presented three times to each participant: once as a baseline condition, once with auditory redirection to topic, and once with visual redirection to topic. Transcripts of the interactions were scored for overall coherence. Condition was a significant factor in that the DAT participants exhibited better topic maintenance under visual and auditory conditions as opposed to baseline. In general, the performance of the participants was not affected by the topic, except for significantly higher overall coherence ratings for the visually redirected interactions dealing with the topic of food.

  12. Auditory and Visual Cues for Topic Maintenance with Persons Who Exhibit Dementia of Alzheimer's Type

    PubMed Central

    Teten, Amy F.; Dagenais, Paul A.; Friehe, Mary J.

    2015-01-01

    This study compared the effectiveness of auditory and visual redirections in facilitating topic coherence for persons with Dementia of Alzheimer's Type (DAT). Five persons with moderate stage DAT engaged in conversation with the first author. Three topics related to activities of daily living, recreational activities, food, and grooming, were broached. Each topic was presented three times to each participant: once as a baseline condition, once with auditory redirection to topic, and once with visual redirection to topic. Transcripts of the interactions were scored for overall coherence. Condition was a significant factor in that the DAT participants exhibited better topic maintenance under visual and auditory conditions as opposed to baseline. In general, the performance of the participants was not affected by the topic, except for significantly higher overall coherence ratings for the visually redirected interactions dealing with the topic of food. PMID:26171273

  13. Automatic classification of visual evoked potentials based on wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz

    2017-04-01

    Diagnosis of the part of the visual system that is responsible for conducting compound action potentials is generally based on visual evoked potentials generated by stimulating the eye with an external light source. The condition of the patient's visual pathway is assessed by a set of parameters that describe the characteristic extremes, called waves, of the time-domain response. The decision process is complex; the diagnosis therefore depends significantly on the experience of the physician. The authors developed a procedure, based on wavelet decomposition and linear discriminant analysis, that provides automatic classification of visual evoked potentials. The algorithm assigns each individual case to a normal or pathological class. The proposed classifier has 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96, which, from the medical point of view, is a very good result.
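
    A minimal sketch of the wavelet-feature plus linear-discriminant pipeline described above, using PyWavelets and scikit-learn (the wavelet family, decomposition level, energy features, and synthetic traces are assumptions, not the authors' exact settings or data):

        import numpy as np
        import pywt
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def wavelet_features(vep, wavelet="db4", level=5):
            # Decompose one visual-evoked-potential trace and summarize each
            # sub-band by its energy, giving a short feature vector per recording.
            coeffs = pywt.wavedec(vep, wavelet, level=level)
            return np.array([np.sum(c ** 2) for c in coeffs])

        def train_classifier(recordings, labels):
            # recordings: list of 1-D VEP traces; labels: 0 = normal, 1 = pathological
            X = np.vstack([wavelet_features(r) for r in recordings])
            clf = LinearDiscriminantAnalysis()
            clf.fit(X, labels)
            return clf

        # usage with synthetic traces (illustrative only)
        rng = np.random.default_rng(0)
        normal = [rng.normal(size=512) for _ in range(20)]
        path = [rng.normal(size=512) + np.sin(np.linspace(0, 6, 512)) for _ in range(20)]
        clf = train_classifier(normal + path, [0] * 20 + [1] * 20)
        print(clf.predict([wavelet_features(path[0])]))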

  14. Instructor and student pilots' subjective evaluation of a general aviation simulator with a terrain visual system

    NASA Technical Reports Server (NTRS)

    Kiteley, G. W.; Harris, R. L., Sr.

    1978-01-01

    Ten student pilots were given a 1-hour training session in the NASA Langley Research Center's General Aviation Simulator by a certified flight instructor, and a follow-up flight evaluation was performed by each student's own flight instructor, who had also flown the simulator. The students and instructors generally felt that the simulator session had a positive effect on the students. They recommended that a simulator with a visual scene and a motion base would be useful in performing such maneuvers as landing approaches, level flight, climbs, dives, turns, instrument work, and radio navigation, noting that the simulator would be an efficient means of introducing the student to new maneuvers before doing them in flight. The students and instructors estimated that about 8 hours of simulator time could be profitably devoted to private pilot training.

  15. 78 FR 65180 - Airworthiness Directives; MD Helicopters, Inc., Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... reducing the retirement life of each tail rotor blade (blade), performing a one-time visual inspection of... required reporting information to the FAA within 24 hours following the one-time inspection. Since we... pitting and the shot peen surface's condition in addition to cracks and corrosion, and adds certain part...

  16. Effects of frequency shifts and visual gender information on vowel category judgments

    NASA Astrophysics Data System (ADS)

    Glidden, Catherine; Assmann, Peter F.

    2003-10-01

    Visual morphing techniques were used together with a high-quality vocoder to study the audiovisual contribution of talker gender to the identification of frequency-shifted vowels. A nine-step continuum ranging from "bit" to "bet" was constructed from natural recorded syllables spoken by an adult female talker. Upward and downward frequency shifts in spectral envelope (scale factors of 0.85 and 1.0) were applied in combination with shifts in fundamental frequency, F0 (scale factors of 0.5 and 1.0). Downward frequency shifts generally resulted in malelike voices whereas upward shifts were perceived as femalelike. Two separate nine-step visual continua from "bit" to "bet" were also constructed, one from a male face and the other a female face, each producing the end-point words. Each step along the two visual continua was paired with the corresponding step on the acoustic continuum, creating natural audiovisual utterances. Category boundary shifts were found for both acoustic cues (F0 and formant frequency shifts) and visual cues (visual gender). The visual gender effect was larger when acoustic and visual information were matched appropriately. These results suggest that visual information provided by the speech signal plays an important supplemental role in talker normalization.

  17. Early visual responses predict conscious face perception within and between subjects during binocular rivalry

    PubMed Central

    Sandberg, Kristian; Bahrami, Bahador; Kanai, Ryota; Barnes, Gareth Robert; Overgaard, Morten; Rees, Geraint

    2014-01-01

    Previous studies indicate that conscious face perception may be related to neural activity in a large time window around 170-800ms after stimulus presentation, yet in the majority of these studies changes in conscious experience are confounded with changes in physical stimulation. Using multivariate classification on MEG data recorded when participants reported changes in conscious perception evoked by binocular rivalry between a face and a grating, we showed that only MEG signals in the 120-320ms time range, peaking at the M170 around 180ms and the P2m at around 260ms, reliably predicted conscious experience. Conscious perception could not only be decoded significantly better than chance from the sensors that showed the largest average difference, as previous studies suggest, but also from patterns of activity across groups of occipital sensors that individually were unable to predict perception better than chance. Additionally, source space analyses showed that sources in the early and late visual system predicted conscious perception more accurately than frontal and parietal sites, although conscious perception could also be decoded there. Finally, the patterns of neural activity associated with conscious face perception generalized from one participant to another around the times of maximum prediction accuracy. Our work thus demonstrates that the neural correlates of particular conscious contents (here, faces) are highly consistent in time and space within individuals and that these correlates are shared to some extent between individuals. PMID:23281780

  18. Support for fast comprehension of ICU data: visualization using metaphor graphics.

    PubMed

    Horn, W; Popow, C; Unterasinger, L

    2001-01-01

    The time-oriented analysis of electronic patient records on (neonatal) intensive care units is a tedious and time-consuming task. Graphic data visualization should make it easier for physicians to assess the overall situation of a patient and to recognize essential changes over time. Metaphor graphics are used to sketch the most relevant parameters for characterizing a patient's situation. By repeating the graphic object in 24 frames, the situation of the ICU patient is presented in one display, usually summarizing the last 24 h. VIE-VISU is a data visualization system which uses multiples to present the change in the patient's status over time in graphic form. Each multiple is a highly structured metaphor graphic object. Each object visualizes important ICU parameters from circulation, ventilation, and fluid balance. The design using multiples promotes a focus on stability and change. A stable patient is recognizable at first sight, continuous improvement or a worsening condition is easy to analyze, and drastic changes in the patient's situation get the viewer's attention immediately.
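
    As a generic illustration of the "multiples" layout (a matplotlib sketch with hypothetical hourly values and a much simpler glyph than the VIE-VISU metaphor graphic):

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(2)
        hours = 24
        heart_rate = 130 + np.cumsum(rng.normal(0, 2, hours))        # hypothetical values
        fio2 = np.clip(0.40 + np.cumsum(rng.normal(0, 0.01, hours)), 0.21, 1.0)
        fluid_balance = np.cumsum(rng.normal(0, 10, hours))

        fig, axes = plt.subplots(3, 8, figsize=(12, 5))              # 24 small multiples, one per hour
        for h, ax in enumerate(axes.flat):
            # one glyph per hour: bar height = heart rate, bar colour = FiO2,
            # marker height = cumulative fluid balance (offset to stay on scale)
            ax.bar([0], [heart_rate[h]], color=plt.cm.Reds(fio2[h]))
            ax.plot([0], [100 + fluid_balance[h]], marker="o", color="k")
            ax.set_ylim(0, 200)
            ax.set_xticks([])
            ax.set_yticks([])
            ax.set_title(f"{h:02d} h", fontsize=7)
        plt.tight_layout()
        plt.show()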

  19. Trapezius muscle activity increases during near work activity regardless of accommodation/vergence demand level.

    PubMed

    Richter, H O; Zetterberg, C; Forsman, M

    2015-07-01

    To investigate whether trapezius muscle activity increases over time during visually demanding near work. The vision task consisted of sustained focusing on a contrast-varying black and white Gabor grating. Sixty-six participants with a median age of 38 (range 19-47) fixated the grating from a distance of 65 cm (1.5 D) during four counterbalanced 7-min periods: binocularly through -3.5 D lenses, and monocularly through -3.5 D, 0 D and +3.5 D. Accommodation, heart rate variability and trapezius muscle activity were recorded in parallel. Generalized estimating equation (GEE) analyses showed that trapezius muscle activity increased significantly over time in all four lens conditions. A concurrent effect of accommodation response on trapezius muscle activity was observed with the minus lenses irrespective of whether incongruence between accommodation and convergence was present or not. Trapezius muscle activity increased significantly over time during the near work task. The increase in muscle activity over time may be caused by an increased need for mental effort and visual attention to maintain performance during the visual tasks and to counteract mental fatigue.
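
    A minimal sketch of the type of analysis named above (a GEE model for repeated measures within subjects) is given below using statsmodels; the column names, simulated values, and the simple time-only model are assumptions for illustration, not the authors' model.

      # Minimal GEE sketch for repeated-measures muscle-activity data.
      # Columns ('subject', 'minute', 'emg') and the simulated values are hypothetical.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n_subjects, n_minutes = 66, 7
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subjects), n_minutes),
          "minute": np.tile(np.arange(1, n_minutes + 1), n_subjects),
      })
      df["emg"] = 2.0 + 0.1 * df["minute"] + rng.normal(0, 0.5, len(df))   # simulated %MVC

      model = sm.GEE.from_formula("emg ~ minute", groups="subject", data=df,
                                  cov_struct=sm.cov_struct.Exchangeable(),
                                  family=sm.families.Gaussian())
      result = model.fit()
      print(result.summary())   # a positive, significant 'minute' term = increase over time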

  20. The Effect of Temporal Perception on Weight Perception

    PubMed Central

    Kambara, Hiroyuki; Shin, Duk; Kawase, Toshihiro; Yoshimura, Natsue; Akahane, Katsuhito; Sato, Makoto; Koike, Yasuharu

    2013-01-01

    A successful catch of a falling ball requires an accurate estimation of the timing for when the ball hits the hand. In a previous experiment in which participants performed a ball-catching task in a virtual reality environment, we accidentally found that the weight of a falling ball was perceived differently when the timing of ball load force to the hand was shifted from the timing expected from visual information. Although it is well known that spatial information of an object, such as size, can easily deceive our perception of its heaviness, the relationship between temporal information and perceived heaviness is still not clear. In this study, we investigated the effect of temporal factors on weight perception. We conducted ball-catching experiments in a virtual environment where the timing of load force exertion was shifted away from the visual contact timing (i.e., the time when the ball hit the hand in the display). We found that the ball was perceived heavier when force was applied earlier than visual contact and lighter when force was applied after visual contact. We also conducted additional experiments in which participants were conditioned to one of two constant time offsets prior to testing weight perception. After performing ball-catching trials with 60 ms advanced or delayed load force exertion, participants' subjective judgment on the simultaneity of visual contact and force exertion changed, reflecting a shift in perception of time offset. In addition, the timing of catching motion initiation relative to visual contact changed, reflecting a shift in estimation of force timing. We also found that participants began to perceive the ball as lighter after conditioning to the 60 ms advanced offset and heavier after the 60 ms delayed offset. These results suggest that perceived heaviness depends not on the actual time offset between force exertion and visual contact but on the subjectively perceived time offset between them and/or estimation error in force timing. PMID:23450805

  1. The Effect of Chronic Alprazolam Intake on Memory, Attention, and Psychomotor Performance in Healthy Human Male Volunteers.

    PubMed

    Chowdhury, Zahid Sadek; Morshed, Mohammed Monzur; Shahriar, Mohammad; Bhuiyan, Mohiuddin Ahmed; Islam, Sardar Mohd Ashraful; Bin Sayeed, Muhammad Shahdaat

    2016-01-01

    Alprazolam is used as an anxiolytic drug for generalized anxiety disorder and it has been reported to produce sedation and anterograde amnesia. In the current study, we randomly divided 26 healthy male volunteers into two groups: one group taking alprazolam 0.5 mg and the other taking placebo daily for two weeks. We utilized the Cambridge Neuropsychological Test Automated Battery (CANTAB) software to assess the chronic effect of alprazolam. We selected Paired Associates Learning (PAL) and Delayed Matching to Sample (DMS) tests for memory, Rapid Visual Information Processing (RVP) for attention, and Choice Reaction Time (CRT) for psychomotor performance twice: before starting the treatment and after the completion of the treatment. We found statistically significant impairment of visual memory in one parameter of PAL and three parameters of DMS in the alprazolam group. The PAL mean trials to success and the DMS total correct matching in the 0-second delay, 4-second delay, and all-delay conditions were impaired in the alprazolam group. RVP total hits were improved in the alprazolam group after two weeks of treatment. Such differences were not observed in the placebo group. In our study, we found that chronic administration of alprazolam affects memory, while attention and psychomotor performance remained unaffected.

  2. The Effect of Chronic Alprazolam Intake on Memory, Attention, and Psychomotor Performance in Healthy Human Male Volunteers

    PubMed Central

    Chowdhury, Zahid Sadek; Morshed, Mohammed Monzur; Shahriar, Mohammad; Bhuiyan, Mohiuddin Ahmed; Islam, Sardar Mohd. Ashraful

    2016-01-01

    Alprazolam is used as an anxiolytic drug for generalized anxiety disorder and it has been reported to produce sedation and anterograde amnesia. In the current study, we randomly divided 26 healthy male volunteers into two groups: one group taking alprazolam 0.5 mg and the other taking placebo daily for two weeks. We utilized the Cambridge Neuropsychological Test Automated Battery (CANTAB) software to assess the chronic effect of alprazolam. We selected Paired Associates Learning (PAL) and Delayed Matching to Sample (DMS) tests for memory, Rapid Visual Information Processing (RVP) for attention, and Choice Reaction Time (CRT) for psychomotor performance twice: before starting the treatment and after the completion of the treatment. We found statistically significant impairment of visual memory in one parameter of PAL and three parameters of DMS in the alprazolam group. The PAL mean trials to success and the DMS total correct matching in the 0-second delay, 4-second delay, and all-delay conditions were impaired in the alprazolam group. RVP total hits were improved in the alprazolam group after two weeks of treatment. Such differences were not observed in the placebo group. In our study, we found that chronic administration of alprazolam affects memory, while attention and psychomotor performance remained unaffected. PMID:27462136

  3. Effects of environmental changes in a stair climbing intervention: generalization to stair descent.

    PubMed

    Webb, Oliver J; Eves, Frank F

    2007-01-01

    Visual improvements have been shown to encourage stair use in worksites independently of written prompts. This study examined whether visual modifications alone can influence behavior in a shopping mall. Climbing one flight of stairs, however, will not confer health benefits. Therefore, this study also assessed whether exposure to the intervention encouraged subsequent stair use. Interrupted time-series design. Escalators flanked by a staircase on either side. Ascending and descending pedestrians (N = 81,948). Following baseline monitoring, a colorful design was introduced on the stair risers of one staircase (the target staircase). A health promotion message was superimposed later on top. The intervention was visible only to ascending pedestrians. Thus, any rise in descending stair use would indicate increased intention to use stairs, which endured after initial exposure to the intervention. Observers inconspicuously coded pedestrians' means of ascent/descent and demographic characteristics. The design alone had no meaningful impact. Addition of the message, however, increased stair climbing at the target and nontarget staircases by 190% and 52%, respectively. The message also produced a modest increase in stair descent at the target (25%) and nontarget (9%) staircases. In public venues, a message component is critical to the success of interventions. In addition, it appears that exposure to an intervention can encourage pedestrians to use stairs on a subsequent occasion.

  4. Validating the random search model for two targets of different difficulty.

    PubMed

    Chan, Alan H S; Yu, Ruifeng

    2010-02-01

    A random visual search model was fitted to 1,788 search times obtained from a nonidentical double-target search task. 30 Hong Kong Chinese (13 men, 17 women) ages 18 to 33 years (M = 23, SD = 6.8) took part in the experiment voluntarily. The overall adequacy and prediction accuracy of the model for various search time parameters (mean and median search times and response times) for both individual and pooled data show that search strategy may reasonably be inferred from search time distributions. The results also suggested the general applicability of the random search model for describing the search behavior of a large number of participants performing the type of search used here, as well as the practical feasibility of its application for determining a stopping policy when optimizing an inspection system design. Although the data generally conformed to the model, the search for the more difficult target was faster than expected. The more difficult target was usually detected after the easier target, and it is suggested that some degree of memory-guided searching may have been used for the second target. Some abnormally long search times were observed, and it is possible that these might have been due to the characteristics of visual lobes, nonoptimum interfixation distances and inappropriate overlapping of lobes, as has been previously reported.
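
    For reference, the standard memoryless form of the random search model, presumably the form fitted here, predicts approximately exponentially distributed search times; the abstract does not report the fitted parameters, so only the generic form is shown.

      % Generic random (memoryless) search model: with no memory for previously
      % fixated locations, the search time T for a given target is approximately
      % exponential, with \bar{t} the mean search time for that target.
      \[
        P(T \le t) \;=\; 1 - e^{-t/\bar{t}}
      \]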

  5. Visual Stimuli Induce Waves of Electrical Activity in Turtle Cortex

    NASA Astrophysics Data System (ADS)

    Prechtl, J. C.; Cohen, L. B.; Pesaran, B.; Mitra, P. P.; Kleinfeld, D.

    1997-07-01

    The computations involved in the processing of a visual scene invariably involve the interactions among neurons throughout all of visual cortex. One hypothesis is that the timing of neuronal activity, as well as the amplitude of activity, provides a means to encode features of objects. The experimental data from studies on cat [Gray, C. M., Konig, P., Engel, A. K. & Singer, W. (1989) Nature (London) 338, 334-337] support a view in which only synchronous (no phase lags) activity carries information about the visual scene. In contrast, theoretical studies suggest, on the one hand, the utility of multiple phases within a population of neurons as a means to encode independent visual features and, on the other hand, the likely existence of timing differences solely on the basis of network dynamics. Here we use widefield imaging in conjunction with voltage-sensitive dyes to record electrical activity from the virtually intact, unanesthetized turtle brain. Our data consist of single-trial measurements. We analyze our data in the frequency domain to isolate coherent events that lie in different frequency bands. Low frequency oscillations (<5 Hz) are seen in both ongoing activity and activity induced by visual stimuli. These oscillations propagate parallel to the afferent input. Higher frequency activity, with spectral peaks near 10 and 20 Hz, is seen solely in response to stimulation. This activity consists of plane waves and spiral-like waves, as well as more complex patterns. The plane waves have an average phase gradient of ≈π/2 radians/mm and propagate orthogonally to the low frequency waves. Our results show that large-scale differences in neuronal timing are present and persistent during visual processing.

  6. Sex differences in visual attention to sexually explicit videos: a preliminary study.

    PubMed

    Tsujimura, Akira; Miyagawa, Yasushi; Takada, Shingo; Matsuoka, Yasuhiro; Takao, Tetsuya; Hirai, Toshiaki; Matsushita, Masateru; Nonomura, Norio; Okuyama, Akihiko

    2009-04-01

    Although men appear to be more interested in sexual stimuli than women, this difference is not completely understood. Eye-tracking technology has been used to investigate visual attention to still sexual images; however, it has not been applied to moving sexual images. To investigate whether sex difference exists in visual attention to sexual videos. Eleven male and 11 female healthy volunteers were studied by our new methodology. The subjects viewed two sexual videos (one depicting sexual intercourse and one not) in which several regions were designated for eye-gaze analysis in each frame. Visual attention was measured across each designated region according to gaze duration. Sex differences, the region attracting the most attention, and visually favored sex were evaluated. In the nonintercourse clip, gaze time for the face and body of the actress was significantly shorter among women than among men. Gaze time for the face and body of the actor and nonhuman regions was significantly longer for women than men. The region attracting the most attention was the face of the actress for both men and women. Men viewed the opposite sex for a significantly longer period than did women, and women viewed their own sex for a significantly longer period than did men. However, gaze times for the clip showing intercourse were not significantly different between sexes. A sex difference existed in visual attention to a sexual video without heterosexual intercourse; men viewed the opposite sex for longer periods than did women, and women viewed the same sex for longer periods than did men. There was no statistically significant sex difference in viewing patterns in a sexual video showing heterosexual intercourse, and we speculate that men and women may have similar visual attention patterns if the sexual stimuli are sufficiently explicit.

  7. Stereoscopic augmented reality for laparoscopic surgery.

    PubMed

    Kang, Xin; Azizian, Mahdi; Wilson, Emmanuel; Wu, Kyle; Martin, Aaron D; Kane, Timothy D; Peters, Craig A; Cleary, Kevin; Shekhar, Raj

    2014-07-01

    Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers those in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited unobservable latency with acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise to improve the precision and expand the capacity of minimally invasive laparoscopic surgeries.

  8. Visual stimuli induce waves of electrical activity in turtle cortex

    PubMed Central

    Prechtl, J. C.; Cohen, L. B.; Pesaran, B.; Mitra, P. P.; Kleinfeld, D.

    1997-01-01

    The computations involved in the processing of a visual scene invariably involve the interactions among neurons throughout all of visual cortex. One hypothesis is that the timing of neuronal activity, as well as the amplitude of activity, provides a means to encode features of objects. The experimental data from studies on cat [Gray, C. M., Konig, P., Engel, A. K. & Singer, W. (1989) Nature (London) 338, 334–337] support a view in which only synchronous (no phase lags) activity carries information about the visual scene. In contrast, theoretical studies suggest, on the one hand, the utility of multiple phases within a population of neurons as a means to encode independent visual features and, on the other hand, the likely existence of timing differences solely on the basis of network dynamics. Here we use widefield imaging in conjunction with voltage-sensitive dyes to record electrical activity from the virtually intact, unanesthetized turtle brain. Our data consist of single-trial measurements. We analyze our data in the frequency domain to isolate coherent events that lie in different frequency bands. Low frequency oscillations (<5 Hz) are seen in both ongoing activity and activity induced by visual stimuli. These oscillations propagate parallel to the afferent input. Higher frequency activity, with spectral peaks near 10 and 20 Hz, is seen solely in response to stimulation. This activity consists of plane waves and spiral-like waves, as well as more complex patterns. The plane waves have an average phase gradient of ≈π/2 radians/mm and propagate orthogonally to the low frequency waves. Our results show that large-scale differences in neuronal timing are present and persistent during visual processing. PMID:9207142

  9. A conditioned visual orientation requires the ellipsoid body in Drosophila

    PubMed Central

    Guo, Chao; Du, Yifei; Yuan, Deliang; Li, Meixia; Gong, Haiyun; Gong, Zhefeng

    2015-01-01

    Orientation, the spatial organization of animal behavior, is an essential faculty of animals. Bacteria and lower animals such as insects exhibit taxis, innate orientation behavior, directly toward or away from a directional cue. Organisms can also orient themselves at a specific angle relative to the cues. In this study, using Drosophila as a model system, we established a visual orientation conditioning paradigm based on a flight simulator in which a stationary flying fly could control the rotation of a visual object. By coupling aversive heat shocks to a fly's orientation toward one side of the visual object, we found that the fly could be conditioned to orientate toward the left or right side of the frontal visual object and retain this conditioned visual orientation. The lower and upper visual fields have different roles in conditioned visual orientation. Transfer experiments showed that conditioned visual orientation could generalize between visual targets of different sizes, compactness, or vertical positions, but not of contour orientation. Rut (Type I adenylyl cyclase) and Dnc (phosphodiesterase) were dispensable for visual orientation conditioning. Normal activity and scb signaling in R3/R4d neurons of the ellipsoid body were required for visual orientation conditioning. Our studies established a visual orientation conditioning paradigm and examined the behavioral properties and neural circuitry of visual orientation, an important component of the insect's spatial navigation. PMID:25512578

  10. Validation of the Preverbal Visual Assessment (PreViAs) questionnaire.

    PubMed

    García-Ormaechea, Inés; González, Inmaculada; Duplá, María; Andres, Eva; Pueyo, Victoria

    2014-10-01

    Visual cognitive integrative functions need to be evaluated by a behavioral assessment, which requires an experienced evaluator. The Preverbal Visual Assessment (PreViAs) questionnaire was designed to evaluate these functions, both in the general pediatric population and in children at high risk of visual cognitive problems, through primary caregivers' answers. We aimed to validate the PreViAs questionnaire by comparing caregiver reports with results from a comprehensive clinical protocol. A total of 220 infants (<2 years old) were divided into two groups according to visual development, as determined by the clinical protocol. Their primary caregivers completed the PreViAs questionnaire, which consists of 30 questions related to one or more visual domains: visual attention, visual communication, visual-motor coordination, and visual processing. Questionnaire answers were compared with results of behavioral assessments performed by three pediatric ophthalmologists. Results of the clinical protocol classified 128 infants as having normal visual maturation, and 92 as having abnormal visual maturation. The specificity of the PreViAs questionnaire was >80%, and sensitivity was 64%-79%. More than 80% of the infants were correctly classified, and test-retest reliability exceeded 0.9 for all domains. The PreViAs questionnaire is useful to detect abnormal visual maturation in infants from birth to 24 months of age. It improves the anamnesis process in infants at risk of visual dysfunctions. Copyright © 2014. Published by Elsevier Ireland Ltd.

  11. Auditory, visual, and bimodal data link displays and how they support pilot performance.

    PubMed

    Steelman, Kelly S; Talleur, Donald; Carbonari, Ronald; Yamani, Yusuke; Nunes, Ashley; McCarley, Jason S

    2013-06-01

    The design of data link messaging systems to ensure optimal pilot performance requires empirical guidance. The current study examined the effects of display format (auditory, visual, or bimodal) and visual display position (adjacent to instrument panel or mounted on console) on pilot performance. Subjects performed five 20-min simulated single-pilot flights. During each flight, subjects received messages from a simulated air traffic controller. Messages were delivered visually, auditorily, or bimodally. Subjects were asked to read back each message aloud and then perform the instructed maneuver. Visual and bimodal displays engendered lower subjective workload and better altitude tracking than auditory displays. Readback times were shorter with the two unimodal visual formats than with any of the other three formats. Advantages for the unimodal visual format ranged in size from 2.8 s to 3.8 s relative to the bimodal upper left and auditory formats, respectively. Auditory displays allowed slightly more head-up time (3 to 3.5 seconds per minute) than either visual or bimodal displays. Position of the visual display had only modest effects on any measure. Combined with the results from previous studies by Helleberg and Wickens and by Lancaster and Casali, the current data favor visual and bimodal displays over auditory displays; unimodal auditory displays were favored by only one measure, head-up time, and only very modestly. Data evinced no statistically significant effects of visual display position on performance, suggesting that, contrary to expectations, the placement of a visual data link display may be of relatively little consequence to performance.

  12. Maximizing Impact: Pairing interactive web visualizations with traditional print media

    NASA Astrophysics Data System (ADS)

    Read, E. K.; Appling, A.; Carr, L.; De Cicco, L.; Read, J. S.; Walker, J. I.; Winslow, L. A.

    2016-12-01

    Our Nation's rapidly growing store of environmental data makes new demands on researchers: to take on increasingly broad-scale, societally relevant analyses and to rapidly communicate findings to the public. Interactive web-based data visualizations now commonly supplement or comprise journalism, and science journalism has followed suit. To maximize the impact of US Geological Survey (USGS) science, the USGS Office of Water Information Data Science team builds tools and products that combine traditional static research products (e.g., print journal articles) with web-based, interactive data visualizations that target non-scientific audiences. We developed a lightweight, open-source framework for web visualizations to reduce time to production. The framework provides templates for a data visualization workflow and the packaging of text, interactive figures, and images into an appealing web interface with standardized look and feel, usage tracking, and responsiveness. By partnering with subject matter experts to focus on timely, societally relevant issues, we use these tools to produce appealing visual stories targeting specific audiences, including managers, the general public, and scientists, on diverse topics including drought, microplastic pollution, and fisheries response to climate change. We will describe the collaborative and technical methodologies used, present examples of how the approach has worked, and discuss challenges and opportunities for the future.

  13. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung [Richland, WA; Jurrus, Elizabeth R [Kennewick, WA; Cowley, Wendy E [Benton City, WA; Foote, Harlan P [Richland, WA; Thomas, James J [Richland, WA

    2011-12-06

    One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).
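
    The claimed steps above (extract event identifiers, map each identifier/time pair to a display location, group the locations into event sequences, and emit a visualization) can be illustrated with a toy sketch; the data layout, sequence labels, and grouping rule below are hypothetical and do not reproduce the patented implementation.

      # Toy illustration of the claimed steps; input records and labels are hypothetical.
      from collections import defaultdict

      events = [  # (sequence_id, time, identifier)
          ("330a", 0, "login"), ("330a", 1, "query"), ("330a", 2, "logout"),
          ("330b", 0, "login"), ("330b", 1, "error"),
      ]

      identifiers = sorted({ident for _, _, ident in events})
      row_of = {ident: i for i, ident in enumerate(identifiers)}   # one display row per identifier

      sequences = defaultdict(list)
      for seq_id, t, ident in events:
          # Display location: x = time, y = row of the event identifier.
          sequences[seq_id].append((t, row_of[ident]))

      for seq_id, locations in sequences.items():
          print(seq_id, sorted(locations))   # each set of locations traces one event sequence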

  14. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung [Richland, WA; Jurrus, Elizabeth R [Kennewick, WA; Cowley, Wendy E [Benton City, WA; Foote, Harlan P [Richland, WA; Thomas, James J [Richland, WA

    2009-05-26

    One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  15. A Visual Analytics Paradigm Enabling Trillion-Edge Graph Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.; Haglin, David J.; Gillen, David S.

    We present a visual analytics paradigm and a system prototype for exploring web-scale graphs. A web-scale graph is described as a graph with ~one trillion edges and ~50 billion vertices. While there is an aggressive R&D effort in processing and exploring web-scale graphs among internet vendors such as Facebook and Google, visualizing a graph of that scale still remains an underexplored R&D area. The paper describes a nontraditional peek-and-filter strategy that facilitates the exploration of a graph database of unprecedented size for visualization and analytics. We demonstrate that our system prototype can 1) preprocess a graph with ~25 billion edges in less than two hours and 2) support database query and visualization on the processed graph database afterward. Based on our computational performance results, we argue that we most likely will achieve the one trillion edge mark (a computational performance improvement of 40 times) for graph visual analytics in the near future.
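
    The peek-and-filter strategy is only named, not specified, in the abstract; the sketch below is one plausible reading (a cheap single-pass summary "peek" followed by a summary-driven "filter" that restricts the edge set before any drawing), with toy data and hypothetical function names.

      # Conceptual peek-and-filter sketch; not the prototype's storage engine or query interface.
      from collections import Counter

      edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)]   # toy stand-in for billions of edges

      def peek(edge_iter):
          """Single pass over the edges producing a small summary (a degree histogram)."""
          degree = Counter()
          for u, v in edge_iter:
              degree[u] += 1
              degree[v] += 1
          return degree

      def filter_edges(edge_iter, degree, min_degree=2):
          """Keep only edges whose endpoints pass the peek-derived criterion."""
          return [(u, v) for u, v in edge_iter
                  if degree[u] >= min_degree and degree[v] >= min_degree]

      summary = peek(edges)
      subgraph = filter_edges(edges, summary)
      print(summary, subgraph)   # the filtered subgraph is what would actually be visualized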

  16. Fixating at far distance shortens reaction time to peripheral visual stimuli at specific locations.

    PubMed

    Kokubu, Masahiro; Ando, Soichi; Oda, Shingo

    2018-01-18

    The purpose of the present study was to examine whether the fixation distance in real three-dimensional space affects manual reaction time to peripheral visual stimuli. Light-emitting diodes were used for presenting a fixation point and four peripheral visual stimuli. The visual stimuli were located at a distance of 45cm and at 25° in the left, right, upper, and lower directions from the sagittal axis including the fixation point. Near (30cm), Middle (45cm), Far (90cm), and Very Far (300cm) fixation distance conditions were used. When one of the four visual stimuli was randomly illuminated, the participants released a button as quickly as possible. Results showed that overall peripheral reaction time decreased as the fixation distance increased. The significant interaction between fixation distance and stimulus location indicated that the effect of fixation distance on reaction time was observed at the left, right, and upper locations but not at the lower location. These results suggest that fixating at far distance would contribute to faster reaction and that the effect is specific to locations in the peripheral visual field. The present findings are discussed in terms of viewer-centered representation, the focus of attention in depth, and visual field asymmetry related to neurological and psychological aspects. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Primary Visual Cortex as a Saliency Map: A Parameter-Free Prediction and Its Test by Behavioral Data

    PubMed Central

    Zhaoping, Li; Zhe, Li

    2015-01-01

    It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton’s location can be measured by the shortness of the reaction time in a visual search for singletons. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functions like visual decoding and endogenous attention. PMID:26441341
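
    One way to make the described parameter-free prediction concrete, consistent with the abstract's wording though not quoted from the paper, is a race model in which the color (C), orientation (O), and motion (M) features independently trigger the response; the cumulative reaction-time distribution for the triple-feature singleton then follows from the single-feature distributions with no free parameters:

      \[
        F_{COM}(t) \;=\; 1 - \bigl[1 - F_C(t)\bigr]\,\bigl[1 - F_O(t)\bigr]\,\bigl[1 - F_M(t)\bigr]
      \]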

  18. Interaction of hypertension and age in visual selective attention performance.

    PubMed

    Madden, D J; Blumenthal, J A

    1998-01-01

    Previous research suggests that some aspects of cognitive performance decline as a joint function of age and hypertension. In this experiment, 51 unmedicated individuals with mild essential hypertension and 48 normotensive individuals, 18-78 years of age, performed a visual search task. The estimated time required to identify a display character and shift attention between display positions increased with age. This attention shift time did not differ significantly between hypertensive and normotensive participants, but regression analyses indicated some mediation of the age effect by blood pressure. For individuals less than 60 years of age, the error rate was greater for hypertensive than for normotensive participants. Although the present design could detect effects of only moderate to large size, the results suggest that effects of hypertension may be more evident in a relatively general measure of performance (mean error rate) than in the speed of shifting visual attention.

  19. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the users choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.

  20. A Comprehensive Optimization Strategy for Real-time Spatial Feature Sharing and Visual Analytics in Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Li, W.; Shao, H.

    2017-12-01

    For geospatial cyberinfrastructure enabled web services, the ability of rapidly transmitting and sharing spatial data over the Internet plays a critical role to meet the demands of real-time change detection, response and decision-making. Especially for the vector datasets which serve as irreplaceable and concrete material in data-driven geospatial applications, their rich geometry and property information facilitates the development of interactive, efficient and intelligent data analysis and visualization applications. However, the big-data issues of vector datasets have hindered their wide adoption in web services. In this research, we propose a comprehensive optimization strategy to enhance the performance of vector data transmitting and processing. This strategy combines: 1) pre- and on-the-fly generalization, which automatically determines proper simplification level through the introduction of appropriate distance tolerance (ADT) to meet various visualization requirements, and at the same time speed up simplification efficiency; 2) a progressive attribute transmission method to reduce data size and therefore the service response time; 3) compressed data transmission and dynamic adoption of a compression method to maximize the service efficiency under different computing and network environments. A cyberinfrastructure web portal was developed for implementing the proposed technologies. After applying our optimization strategies, substantial performance enhancement is achieved. We expect this work to widen the use of web service providing vector data to support real-time spatial feature sharing, visual analytics and decision-making.
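
    The first element of the strategy, tolerance-driven generalization, can be illustrated with the standard Douglas-Peucker algorithm as a stand-in; the paper's own "appropriate distance tolerance" (ADT) selection logic is not reproduced here, and the coordinates below are toy values.

      # Tolerance-driven polyline simplification (Douglas-Peucker), as an illustration of
      # how a distance tolerance trades geometric detail for transmission size.
      import math

      def point_line_distance(p, a, b):
          """Perpendicular distance from point p to the line through segment endpoints a-b."""
          (px, py), (ax, ay), (bx, by) = p, a, b
          if (ax, ay) == (bx, by):
              return math.hypot(px - ax, py - ay)
          num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
          return num / math.hypot(bx - ax, by - ay)

      def simplify(points, tolerance):
          """Recursive Douglas-Peucker simplification with a distance tolerance."""
          if len(points) < 3:
              return points
          dmax, index = 0.0, 0
          for i in range(1, len(points) - 1):
              d = point_line_distance(points[i], points[0], points[-1])
              if d > dmax:
                  dmax, index = d, i
          if dmax <= tolerance:
              return [points[0], points[-1]]
          left = simplify(points[:index + 1], tolerance)
          right = simplify(points[index:], tolerance)
          return left[:-1] + right

      coastline = [(0, 0), (1, 0.9), (2, -0.1), (3, 1.2), (4, 0)]
      print(simplify(coastline, tolerance=0.5))   # a coarser tolerance leaves fewer vertices to transmit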

  1. A computational theory of visual receptive fields.

    PubMed

    Lindeberg, Tony

    2013-12-01

    A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a both theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative agreement are obtained for (i) spatial on-center/off-surround and off-center/on-surround receptive fields in the fovea and the LGN, (ii) simple cells with spatial directional preference in V1, (iii) spatio-chromatic double-opponent neurons in V1, (iv) space-time separable spatio-temporal receptive fields in the LGN and V1, and (v) non-separable space-time tilted receptive fields in V1, all within the same unified theory. In addition, the paper presents a more general framework for relating and interpreting these receptive fields conceptually and possibly predicting new receptive field profiles as well as for pre-wiring covariance under scaling, affine, and Galilean transformations into the representations of visual stimuli. This paper describes the basic structure of the necessity results concerning receptive field profiles regarding the mathematical foundation of the theory and outlines how the proposed theory could be used in further studies and modelling of biological vision. It is also shown how receptive field responses can be interpreted physically, as the superposition of relative variations of surface structure and illumination variations, given a logarithmic brightness scale, and how receptive field measurements will be invariant under multiplicative illumination variations and exposure control mechanisms.
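
    For concreteness, the purely spatial part of the receptive-field families referred to above is built from the Gaussian kernel and its partial derivatives (a standard scale-space formulation; the spatio-temporal and spatio-chromatic families in the paper add further factors not shown here):

      \[
        g(x, y;\, s) \;=\; \frac{1}{2\pi s}\, e^{-(x^{2}+y^{2})/(2s)}, \qquad
        T_{x^{m}y^{n}}(x, y;\, s) \;=\; \partial_{x^{m}y^{n}}\, g(x, y;\, s)
      \]

    First-order derivatives give oriented, simple-cell-like profiles, while the Laplacian of the Gaussian gives a center-surround (on-center/off-surround) profile.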

  2. X-33 Flight Visualization

    NASA Technical Reports Server (NTRS)

    Laue, Jay H.

    1998-01-01

    The X-33 flight visualization effort has resulted in the integration of high-resolution terrain data with vehicle position and attitude data for planned flights of the X-33 vehicle from its launch site at Edwards AFB, California, to landings at Michael Army Air Field, Utah, and Maelstrom AFB, Montana. Video and Web Site representations of these flight visualizations were produced. In addition, a totally new module was developed to control viewpoints in real-time using a joystick input. Efforts have been initiated, and are presently being continued, for real-time flight coverage visualizations using the data streams from the X-33 vehicle flights. The flight visualizations that have resulted thus far give convincing support to the expectation that the flights of the X-33 will be exciting and significant space flight milestones... flights of this nation's one-half scale predecessor to its first single-stage-to-orbit, fully-reusable launch vehicle system.

  3. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    PubMed Central

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513

  4. DVA as a Diagnostic Test for Vestibulo-Ocular Reflex Function

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Appelbaum, Meghan

    2010-01-01

    The vestibulo-ocular reflex (VOR) stabilizes vision on earth-fixed targets by eliciting eye movements in response to changes in head position. How well the eyes perform this task can be functionally measured by the dynamic visual acuity (DVA) test. We designed a passive, horizontal DVA test to specifically study acuity and reaction time when looking at different target locations. Visual acuity was compared among 12 subjects using a standard Landolt C wall chart, a computerized static (no rotation) acuity test, and a dynamic acuity test while oscillating at 0.8 Hz (+/-60 deg/s). In addition, five trials with yaw oscillation randomly presented a visual target in one of nine different locations, with the size and presentation duration of the visual target varying across trials. The results showed a significant difference between the static and dynamic threshold acuities as well as a significant difference between the visual targets presented in the horizontal plane versus those in the vertical plane when comparing accuracy of vision and reaction time of the response. Visual acuity increased in proportion to the size of the visual target and with presentation durations between 150 and 300 msec. We conclude that dynamic visual acuity varies with target location, with acuity optimized for targets in the plane of rotation. This DVA test could be used as a functional diagnostic test for visual-vestibular and neuro-cognitive impairments by assessing both accuracy and reaction time to acquire visual targets.

  5. Cellular computational generalized neuron network for frequency situational intelligence in a multi-machine power system.

    PubMed

    Wei, Yawei; Venayagamoorthy, Ganesh Kumar

    2017-09-01

    To prevent a large interconnected power system from a cascading failure, brownout or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, the communication and computational limitations of the currently used supervisory control and data acquisition (SCADA) system mean that it can only deliver delayed information. The deployment of synchrophasor measurement devices, by contrast, makes it possible to capture and visualize, in near-real-time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing unit of the CCN framework makes it particularly flexible for customization to a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), for purposes of providing multi-timescale frequency predictions ranging from 16.67 ms to 2 s. These two developed CCGNN and CCMLPN systems were then implemented on two different scales of power systems, one of which included a large photovoltaic plant. A real-time power system simulator and weather station within the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, were then used to derive typical FSI results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Making perceptual learning practical to improve visual functions.

    PubMed

    Polat, Uri

    2009-10-01

    Task-specific improvement in performance after training is well established. The finding that learning is stimulus-specific and does not transfer well between different stimuli, between stimulus locations in the visual field, or between the two eyes has been used to support the notion that neurons or assemblies of neurons are modified at the earliest stage of cortical processing. However, the debate regarding the proposed mechanism underlying perceptual learning is ongoing. Nevertheless, generalization of a trained task to other functions is an important key, both for understanding the neural mechanisms and for the practical value of the training. This manuscript describes a structured perceptual learning method that was previously used for amblyopia and myopia, together with a novel technique and results applied to presbyopia. In general, subjects were trained for contrast detection of Gabor targets under lateral masking conditions. Training improved contrast sensitivity and diminished lateral suppression when it existed (amblyopia). The improvement was transferred to unrelated functions such as visual acuity. The new results for presbyopia show substantial improvement of spatial and temporal contrast sensitivity, leading to improved processing speed of target detection as well as reaction time. Consequently, subjects benefited by being able to eliminate the need for reading glasses. Thus, here we show that the specificity of improvement in the trained task can be generalized by repetitive practice of target detection, covering a sufficient range of spatial frequencies and orientations, leading to an improvement in unrelated visual functions. Thus, perceptual learning can be a practical method to improve visual functions in people with impaired or blurred vision.
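
    The Gabor targets used in this kind of contrast-detection training are well defined mathematically: a sinusoidal grating windowed by a Gaussian envelope. A minimal generator is sketched below; the specific sizes, spatial frequencies, and flanker placements used in the cited studies are not reproduced here.

      # Minimal Gabor-patch generator (grating in cycles per image width, arbitrary units).
      import numpy as np

      def gabor(size=256, cycles=8.0, sigma_frac=0.15, orientation_deg=0.0, phase=0.0):
          """Gabor patch: a sinusoidal grating multiplied by a Gaussian envelope."""
          half = size / 2.0
          y, x = np.mgrid[-half:half, -half:half]
          theta = np.deg2rad(orientation_deg)
          xr = x * np.cos(theta) + y * np.sin(theta)                 # rotate the grating axis
          grating = np.cos(2.0 * np.pi * cycles * xr / size + phase)
          envelope = np.exp(-(x**2 + y**2) / (2.0 * (sigma_frac * size) ** 2))
          return grating * envelope

      target = gabor()
      # In a lateral-masking display, collinear flanker Gabors would be placed a few
      # wavelengths above and below the target at the same orientation.
      flanker = gabor()
      print(target.shape, float(target.max()))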

  7. Volunteer Recording Program Manual.

    ERIC Educational Resources Information Center

    Arizona Braille and Talking Book Library, Phoenix.

    This manual for volunteers begins with a brief introduction to Arizona's Library for the Blind and Physically Handicapped, which is one of 56 libraries appointed by the Librarian of Congress to provide public library service to persons with visual or physical impairments. Introductory materials include explanations of the general policies and…

  8. Enhancement of Cognitive Processing by Multiple Sclerosis Patients Using Liquid Cooling Technology: A Case Study

    NASA Technical Reports Server (NTRS)

    Montgomery, Leslie D.; Ku, Yu-Tsuan E.; Montgomery, Richard W.; Kliss, Mark (Technical Monitor)

    1997-01-01

    Recent neuropsychological studies demonstrate that cognitive dysfunction is a common symptom in patients with multiple sclerosis. In many cases, the presence of cognitive impairment affects the patient's daily activities to a greater extent than would be found due to their physical disability alone. Cognitive dysfunction can have a significant impact on the quality of life of both the patient and that of their primary caregiver. Two cognitively impaired male MS patients were given a visual discrimination task before and after a one-hour cooling period. The subjects were presented a series of either red or blue circles or triangles. One of these combinations, or one fourth of the stimuli, was designated as the "target" presentation. EEG was recorded from 20 scalp electrodes using a Tracor Northern 7500 EEG/ERP system. Oral and ear temperatures were obtained and recorded manually every five minutes during the one-hour cooling period. The EEG ERP signatures from each series of stimuli were analyzed in the energy density domain to determine the locus of neural activity at each EEG sampling time. The first subject's ear temperature did not decrease during the cooling period. It was actually elevated approximately 0.05 C by the end of the cooling period compared to his mean control-period value. In turn, Subject One's discrimination performance and cortical energy remained essentially the same after body cooling. In contrast, Subject Two's ear temperature decreased approx. 0.8 C during his cooling period. Subject Two's ERROR score decreased from 12 during the precooling control period to 2 after cooling. His ENERGY value increased approximately 300%, from a precooling value of approximately 200 to a postcooling value of nearly 600. These findings might be interpreted by the following three-part hypothesis: (1) the general cognitive impairment of MS patients may be a result of low or unfocused metabolic energy conversion in the cortex; (2) such differences show up most strongly in reduced energy in the occipital region during the initial processing of the precooling period visual stimulus, which may indicate impaired early visual processing; and (3) increased postcooling activation in the left angular gyrus may result in enhanced higher-level reasoning related to processing visual task information. By this hypothesis, the superior performance of Subject Two following body cooling may be a result of increased neural activation in his early visual recognition and processing centers.

  9. Towards a New Generation of Time-Series Visualization Tools in the ESA Heliophysics Science Archives

    NASA Astrophysics Data System (ADS)

    Perez, H.; Martinez, B.; Cook, J. P.; Herment, D.; Fernandez, M.; De Teodoro, P.; Arnaud, M.; Middleton, H. R.; Osuna, P.; Arviset, C.

    2017-12-01

    During the last decades, a varied set of Heliophysics missions has allowed the scientific community to gain a better knowledge of the solar atmosphere and activity. The remote sensing images of missions such as SOHO have paved the way for Helio-based spatial data visualization software such as JHelioViewer/Helioviewer. On the other hand, the huge amount of in-situ measurements provided by other missions such as Cluster provides a wide base for plot visualization software whose potential is still far from fully exploited. The Heliophysics Science Archives within the ESAC Science Data Center (ESDC) already provide a first generation of tools for time-series visualization focusing on each mission's needs: visualization of quicklook plots, cross-calibration time series, pre-generated/on-demand multi-plot stacks (Cluster), basic plot zoom in/out options (Ulysses) and easy navigation through the plots in time (Ulysses, Cluster, ISS-Solaces). However, the needs evolve: scientists involved in new missions require plotting of multi-variable data, interactive synchronization of heat-map stacks, and axis variable selection, among other improvements. The new Heliophysics archives (such as Solar Orbiter) and the evolution of existing ones (Cluster) intend to address these new challenges. This paper provides an overview of the different approaches for visualizing time-series followed within the ESA Heliophysics Archives and their foreseen evolution.

  10. Improving the performance of the amblyopic visual system

    PubMed Central

    Levi, Dennis M.; Li, Roger W.

    2008-01-01

    Experience-dependent plasticity is closely linked with the development of sensory function; however, there is also growing evidence for plasticity in the adult visual system. This review re-examines the notion of a sensitive period for the treatment of amblyopia in the light of recent experimental and clinical evidence for neural plasticity. One recently proposed method for improving the effectiveness and efficiency of treatment that has received considerable attention is ‘perceptual learning’. Specifically, both children and adults with amblyopia can improve their perceptual performance through extensive practice on a challenging visual task. The results suggest that perceptual learning may be effective in improving a range of visual performance and, importantly, the improvements may transfer to visual acuity. Recent studies have sought to explore the limits and time course of perceptual learning as an adjunct to occlusion and to investigate the neural mechanisms underlying the visual improvement. These findings, along with the results of new clinical trials, suggest that it might be time to reconsider our notions about neural plasticity in amblyopia. PMID:19008199

  11. Stereoscopic applications for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2007-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinckerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  12. Subjective Estimation of Task Time and Task Difficulty of Simple Movement Tasks.

    PubMed

    Chan, Alan H S; Hoffmann, Errol R

    2017-01-01

    It has been demonstrated in previous work that the same neural structures are used for both imagined and real movements. To provide a strong test of the similarity of imagined and actual movement times, 4 simple movement tasks were used to determine the relationship between estimated task time and actual movement time. The tasks were single-component visually controlled movements, 2-component visually controlled, low index of difficulty (ID) moves, and pin-to-hole transfer movements. For each task there was good correspondence between the mean estimated times and actual movement times. In all cases, the same factors determined the actual and estimated movement times: the amplitudes of movement and the IDs of the component movements; however, the contribution of each of these variables differed for the imagined and real tasks. Generally, the standard deviations of the estimated times were linearly related to the estimated time values. Overall, the data provide strong evidence for the same neural structures being used for both imagined and actual movements.
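
    The index of difficulty referred to above is conventionally the Fitts formulation, with A the movement amplitude and W the target width, and movement time modeled as a linear function of ID; the abstract does not report the fitted coefficients, so only the general form is given.

      \[
        ID \;=\; \log_{2}\!\left(\frac{2A}{W}\right), \qquad MT \;=\; a + b \cdot ID
      \]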

  13. Toward a hybrid brain-computer interface based on imagined movement and visual attention

    NASA Astrophysics Data System (ADS)

    Allison, B. Z.; Brunner, C.; Kaiser, V.; Müller-Putz, G. R.; Neuper, C.; Pfurtscheller, G.

    2010-04-01

    Brain-computer interface (BCI) systems do not work for all users. This article introduces a novel combination of tasks that could inspire BCI systems that are more accurate than conventional BCIs, especially for users who cannot attain accuracy adequate for effective communication. Subjects performed tasks typically used in two BCI approaches, namely event-related desynchronization (ERD) and steady state visual evoked potential (SSVEP), both individually and in a 'hybrid' condition that combines both tasks. Electroencephalographic (EEG) data were recorded across three conditions. Subjects imagined moving the left or right hand (ERD), focused on one of the two oscillating visual stimuli (SSVEP), and then simultaneously performed both tasks. Accuracy and subjective measures were assessed. Offline analyses suggested that half of the subjects did not produce brain patterns that could be accurately discriminated in response to at least one of the two tasks. If these subjects produced comparable EEG patterns when trying to use a BCI, these subjects would not be able to communicate effectively because the BCI would make too many errors. Results also showed that switching to a different task used in BCIs could improve accuracy in some of these users. Switching to a hybrid approach eliminated this problem completely, and subjects generally did not consider the hybrid condition more difficult. Results validate this hybrid approach and suggest that subjects who cannot use a BCI should consider switching to a different BCI approach, especially a hybrid BCI. Subjects proficient with both approaches might combine them to increase information throughput by improving accuracy, reducing selection time, and/or increasing the number of possible commands.

  14. A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.

    PubMed

    Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei

    2014-09-19

    Visually-induced near-infrared spectroscopy (NIRS) responses were utilized to design a brain-computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segments had a fixed duration of 3 s, whereas the resting segments were chosen randomly within 15-20 s to make the different flickering sequences mutually independent. Six subjects were recruited, and each was asked to gaze at the four visual stimuli one after another in random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flicker sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target could be discerned from those of non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
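
    The averaging step described above can be illustrated with a minimal sketch (synthetic data; the sampling rate, epoch length, and onset timings below are hypothetical and not taken from the study). Time-locked averaging over one stimulus's onsets reinforces the response evoked by that stimulus while averaging out activity locked to the other, independently timed stimuli.

        import numpy as np

        def averaged_response(signal, onsets, fs, epoch_s=15.0):
            """Average fixed-length epochs of `signal`, time-locked to the given onset times (seconds)."""
            n = int(epoch_s * fs)
            starts = [int(t * fs) for t in onsets]
            epochs = [signal[s:s + n] for s in starts if s + n <= len(signal)]
            return np.mean(epochs, axis=0)

        # Toy data (hypothetical rates and timings): each stimulus flickers briefly and then
        # rests for a random 15-20 s, so the onset trains of different stimuli are independent.
        fs = 10.0                                                  # 10 Hz sampling
        signal = 0.5 * np.random.randn(6000)                       # 10 minutes of baseline noise
        onsets_gazed = np.cumsum(np.random.uniform(15, 20, 28))    # onsets of the gazed stimulus
        onsets_other = np.cumsum(np.random.uniform(15, 20, 28))    # onsets of a non-gazed stimulus
        for o in onsets_gazed:                                     # evoked response follows gazed onsets only
            i = int(o * fs)
            signal[i:i + 30] += np.hanning(30)

        print(averaged_response(signal, onsets_gazed, fs).max())   # clear time-locked peak
        print(averaged_response(signal, onsets_other, fs).max())   # peak largely averaged away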

  15. Timing the impact of literacy on visual processing

    PubMed Central

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas

    2014-01-01

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460

  16. Timing the impact of literacy on visual processing.

    PubMed

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W; Cohen, Laurent; Dehaene, Stanislas

    2014-12-09

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼ 100-150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing.

  17. Estimating quality weights for EQ-5D health states with the time trade-off method in South Korea.

    PubMed

    Jo, Min-Woo; Yun, Sung-Cheol; Lee, Sang-Il

    2008-12-01

    To estimate quality weights of EQ-5D health states with the time trade-off (TTO) method in the general population of South Korea. A total of 500 respondents valued 42 hypothetical EQ-5D health states using the TTO and visual analog scale. The quality weights for all EQ-5D health states were estimated by a random effects model and compared with those from studies in other countries. Overall estimated quality weights for all EQ-5D health states from this study were highly correlated with those from previous studies, but quality weights of individual states were substantially different from those of their corresponding states in other studies. The Korean value set differed from value sets from other countries. Special caution is needed when a value set from one country is applied to another with a different culture.

  18. To what extent can we explain time trade-off values from other information about respondents?

    PubMed

    Dolan, Paul; Roberts, Jennifer

    2002-03-01

    The time trade-off (TTO) is one of the most widely used health state valuation methods and was recently used to develop a set of values for the EQ-5D descriptive system from 3000 members of the UK general population. However, there is currently very little understanding of precisely what determines responses to TTO questions. The data that were used to generate this set of values are ideal for addressing this question since they contain a plethora of information relating to the respondents and their cognition during the TTO exercise. A particularly useful characteristic of this dataset is the existence of visual analogue scale (VAS) valuations on the same states for the same respondents. The results suggest that age, sex and marital status are the most important respondent characteristics determining health state valuations. The VAS valuations were found to add very little to the explanatory power of the models.

  19. Representation of vestibular and visual cues to self-motion in ventral intraparietal (VIP) cortex

    PubMed Central

    Chen, Aihua; Deangelis, Gregory C.; Angelaki, Dora E.

    2011-01-01

    Convergence of vestibular and visual motion information is important for self-motion perception. One cortical area that combines vestibular and optic flow signals is the ventral intraparietal area (VIP). We characterized unisensory and multisensory responses of macaque VIP neurons to translations and rotations in three dimensions. Approximately half of VIP cells show significant directional selectivity in response to optic flow, half show tuning to vestibular stimuli, and one-third show multisensory responses. Visual and vestibular direction preferences of multisensory VIP neurons could be congruent or opposite. When visual and vestibular stimuli were combined, VIP responses could be dominated by either input, unlike medial superior temporal area (MSTd) where optic flow tuning typically dominates or the visual posterior sylvian area (VPS) where vestibular tuning dominates. Optic flow selectivity in VIP was weaker than in MSTd but stronger than in VPS. In contrast, vestibular tuning for translation was strongest in VPS, intermediate in VIP, and weakest in MSTd. To characterize response dynamics, direction-time data were fit with a spatiotemporal model in which temporal responses were modeled as weighted sums of velocity, acceleration, and position components. Vestibular responses in VIP reflected balanced contributions of velocity and acceleration, whereas visual responses were dominated by velocity. Timing of vestibular responses in VIP was significantly faster than in MSTd, whereas timing of optic flow responses did not differ significantly among areas. These findings suggest that VIP may be proximal to MSTd in terms of vestibular processing but hierarchically similar to MSTd in terms of optic flow processing. PMID:21849564

  20. Image enhancement of real-time television to benefit the visually impaired.

    PubMed

    Wolffsohn, James S; Mukhopadhyay, Ditipriya; Rubinstein, Martin

    2007-09-01

    To examine the use of real-time, generic edge detection, image processing techniques to enhance the television viewing of the visually impaired. Prospective, clinical experimental study. One hundred and two sequential visually impaired participants (average age 73.8 +/- 14.8 years; 59% female) in a single center optimized a dynamic television image with respect to edge detection filter (Prewitt, Sobel, or the two combined), color (red, green, blue, or white), and intensity (one to 15 times) of the overlaid edges. They then rated the original television footage compared with a black-and-white image displaying the edges detected and the original television image with the detected edges overlaid in the chosen color and at the intensity selected. Footage of news, an advertisement, and the end-of-program credits were subjectively assessed in a random order. A Prewitt filter was preferred (44%) compared with the Sobel filter (27%) or a combination of the two (28%). Green and white were equally popular for displaying the detected edges (32%), with blue (22%) and red (14%) less so. The average preferred edge intensity was 3.5 +/- 1.7 times. The image-enhanced television was significantly preferred to the original (P < .001), which in turn was preferred to viewing the detected edges alone (P < .001) for each of the footage clips. Preference was not dependent on the condition causing visual impairment. Seventy percent were definitely willing to buy a set-top box that could achieve these effects for a reasonable price. Simple generic edge detection image enhancement options can be performed on television in real-time and significantly enhance the viewing of the visually impaired.
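
    As a rough sketch of the kind of generic edge-detection enhancement described above (not the authors' implementation; the colour and gain values are arbitrary stand-ins for the participant-chosen settings), a Prewitt or Sobel edge map can be computed per frame and overlaid on the original image:

        import numpy as np
        from scipy import ndimage

        def edge_enhance(frame_rgb, filt="prewitt", colour=(0, 255, 0), gain=3.5):
            """Overlay detected edges, in the given colour and intensity, on an RGB video frame."""
            grey = frame_rgb.mean(axis=2)
            op = ndimage.prewitt if filt == "prewitt" else ndimage.sobel
            edges = np.hypot(op(grey, axis=0), op(grey, axis=1))
            edges = edges / (edges.max() + 1e-9)                   # normalise edge magnitude to 0..1
            overlay = np.clip(edges[..., None] * gain, 0, 1) * np.array(colour, float)
            return np.clip(frame_rgb + overlay, 0, 255).astype(np.uint8)

        frame = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)   # stand-in for a TV frame
        enhanced = edge_enhance(frame, filt="prewitt", colour=(0, 255, 0), gain=3.5)

    In a real set-top device this per-frame filtering would presumably run on dedicated hardware so that it keeps up with broadcast frame rates.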

  1. Quantifying the development of user-generated art during 2001–2010

    PubMed Central

    Yazdani, Mehrdad; Chow, Jay; Manovich, Lev

    2017-01-01

    One of the main questions in the humanities is how cultures and artistic expressions change over time. While a number of researchers have used quantitative computational methods to study historical changes in literature, music, and cinema, our paper offers the first quantitative analysis of historical changes in visual art created by users of a social online network. We propose a number of computational methods for the analysis of temporal development of art images. We then apply these methods to a sample of 270,000 artworks created between 2001 and 2010 by users of the largest social network for art—DeviantArt (www.deviantart.com). We investigate changes in subjects, techniques, sizes, proportions and also selected visual characteristics of images. Because these artworks are classified by their creators into two general categories—Traditional Art and Digital Art—we are also able to investigate if the use of digital tools has had a significant effect on the content and form of artworks. Our analysis reveals a number of gradual and systematic changes over a ten-year period in artworks belonging to both categories. PMID:28792494

  2. Quantifying the development of user-generated art during 2001-2010.

    PubMed

    Yazdani, Mehrdad; Chow, Jay; Manovich, Lev

    2017-01-01

    One of the main questions in the humanities is how cultures and artistic expressions change over time. While a number of researchers have used quantitative computational methods to study historical changes in literature, music, and cinema, our paper offers the first quantitative analysis of historical changes in visual art created by users of a social online network. We propose a number of computational methods for the analysis of temporal development of art images. We then apply these methods to a sample of 270,000 artworks created between 2001 and 2010 by users of the largest social network for art-DeviantArt (www.deviantart.com). We investigate changes in subjects, techniques, sizes, proportions and also selected visual characteristics of images. Because these artworks are classified by their creators into two general categories-Traditional Art and Digital Art-we are also able to investigate if the use of digital tools has had a significant effect on the content and form of artworks. Our analysis reveals a number of gradual and systematic changes over a ten-year period in artworks belonging to both categories.

  3. Unintentional Interpersonal Synchronization Represented as a Reciprocal Visuo-Postural Feedback System: A Multivariate Autoregressive Modeling Approach.

    PubMed

    Okazaki, Shuntaro; Hirotani, Masako; Koike, Takahiko; Bosch-Bayard, Jorge; Takahashi, Haruka K; Hashiguchi, Maho; Sadato, Norihiro

    2015-01-01

    People's behaviors synchronize. It is difficult, however, to determine whether synchronized behaviors occur in a mutual direction--two individuals influencing one another--or in one direction--one individual leading the other, and what the underlying mechanism for synchronization is. To answer these questions, we hypothesized a non-leader-follower postural sway synchronization, caused by a reciprocal visuo-postural feedback system operating on pairs of individuals, and tested that hypothesis both experimentally and via simulation. In the behavioral experiment, 22 participant pairs stood face to face either 20 or 70 cm away from each other wearing glasses with or without vision blocking lenses. The existence and direction of visual information exchanged between pairs of participants were systematically manipulated. The time series data for the postural sway of these pairs were recorded and analyzed with cross correlation and causality. Results of cross correlation showed that postural sway of paired participants was synchronized, with a shorter time lag when participant pairs could see one another's head motion than when one of the participants was blindfolded. In addition, there was less of a time lag in the observed synchronization when the distance between participant pairs was smaller. As for the causality analysis, noise contribution ratio (NCR), the measure of influence using a multivariate autoregressive model, was also computed to identify the degree to which one's postural sway is explained by that of the other's and how visual information (sighted vs. blindfolded) interacts with paired participants' postural sway. It was found that for synchronization to take place, it is crucial that paired participants be sighted and exert equal influence on one another by simultaneously exchanging visual information. Furthermore, a simulation for the proposed system with a wider range of visual input showed a pattern of results similar to the behavioral results.
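
    A minimal illustration of the cross-correlation step described above, on synthetic sway signals (the study additionally fitted a multivariate autoregressive model to obtain noise contribution ratios, which is not reproduced here):

        import numpy as np

        def xcorr_lag(x, y, fs):
            """Return the lag (in seconds) at which the cross-correlation of x and y peaks."""
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            corr = np.correlate(x, y, mode="full")
            lag_samples = np.argmax(corr) - (len(y) - 1)
            return lag_samples / fs

        fs = 100.0
        t = np.arange(0, 60, 1 / fs)
        leader = np.sin(2 * np.pi * 0.3 * t) + 0.3 * np.random.randn(t.size)
        follower = np.roll(leader, int(0.25 * fs)) + 0.3 * np.random.randn(t.size)  # trails by ~0.25 s
        print(xcorr_lag(follower, leader, fs))   # prints roughly 0.25, the imposed delay in seconds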

  4. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Shape Recognition in Infancy: Visual Integration of Sequential Information.

    ERIC Educational Resources Information Center

    Rose, Susan A

    1988-01-01

    Investigated infants' integration of visual information across space and time. In four experiments, infants aged 12 months and 6 months viewed objects after watching light trace similar and dissimilar shapes. Infants looked longer at novel shapes, although six-month-olds did not recognize figures taking more than 10 seconds to trace. One-year-old…

  6. Parallel Consolidation of Simple Features into Visual Short-Term Memory

    ERIC Educational Resources Information Center

    Mance, Irida; Becker, Mark W.; Liu, Taosheng

    2012-01-01

    Although considerable research has examined the storage limits of visual short-term memory (VSTM), little is known about the initial formation (i.e., the consolidation) of VSTM representations. A few previous studies have estimated the capacity of consolidation to be one item at a time. Here we used a sequential-simultaneous manipulation to…

  7. Real-time visualization of immune cell clearance of Aspergillus fumigatus spores and hyphae.

    PubMed

    Knox, Benjamin P; Huttenlocher, Anna; Keller, Nancy P

    2017-08-01

    Invasive aspergillosis (IA) is a disease of the immunocompromised host and generally caused by the opportunistic fungal pathogen Aspergillus fumigatus. While both host and fungal factors contribute to disease severity and outcome, there are fundamental features of IA development including fungal morphological transition from infectious conidia to tissue-penetrating hyphae as well as host defenses rooted in mechanisms of innate phagocyte function. Here we address recent advances in the field and use real-time in vivo imaging in the larval zebrafish to visually highlight conserved vertebrate innate immune behaviors including macrophage phagocytosis of conidia and neutrophil responses post-germination. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Near-infrared intraoperative imaging during resection of an anterior mediastinal soft tissue sarcoma.

    PubMed

    Predina, Jarrod D; Newton, Andrew D; Desphande, Charuhas; Singhal, Sunil

    2018-01-01

    Sarcomas are rare malignancies that are generally treated with multimodal therapy protocols incorporating complete local resection, chemotherapy and radiation. Unfortunately, even with this aggressive approach, local recurrences are common. Near-infrared intraoperative imaging is a novel technology that provides real-time visual feedback that can improve identification of disease during resection. The presented study describes utilization of a near-infrared agent (indocyanine green) during resection of an anterior mediastinal sarcoma. Real-time fluorescent feedback provided visual information that helped the surgeon during tumor localization, margin assessment and dissection from mediastinal structures. This rapidly evolving technology may prove useful in patients with primary sarcomas arising from other locations or with other mediastinal neoplasms.

  9. Post-traumatic stress symptoms are associated with better performance on a delayed match-to-position task

    PubMed Central

    2018-01-01

    Many individuals with posttraumatic stress disorder (PTSD) report experiencing frequent intrusive memories of the original traumatic event (e.g., flashbacks). These memories can be triggered by situations or stimuli that reflect aspects of the trauma and may reflect basic processes in learning and memory, such as generalization. It is possible that, through increased generalization, non-threatening stimuli that once evoked normal memories become associated with traumatic memories. Previous research has reported increased generalization in PTSD, but the role of visual discrimination processes has not been examined. To investigate visual discrimination in PTSD, 143 participants (Veterans and civilians) self-assessed for symptom severity were grouped according to the presence of severe PTSD symptoms (PTSS) vs. few/no symptoms (noPTSS). Participants were given a visual match-to-sample pattern separation task that varied trials by spatial separation (Low, Medium, High) and temporal delays (5, 10, 20, 30 s). Unexpectedly, the PTSS group demonstrated better discrimination performance than the noPTSS group at the most difficult spatial trials (Low spatial separation). Further assessment of accuracy and reaction time using diffusion drift modeling indicated that the better performance by the PTSS group on the hardest trials was not explained by slower reaction times, but rather a faster accumulation of evidence during decision making in conjunction with a reduced threshold, indicating a tendency in the PTSS group to decide quickly rather than waiting for additional evidence to support the decision. This result supports the need for future studies examining the precise role of discrimination and generalization in PTSD, and how these cognitive processes might contribute to expression and maintenance of PTSD symptoms. PMID:29736339

  10. Social vision: sustained perceptual enhancement of affective facial cues in social anxiety

    PubMed Central

    McTeague, Lisa M.; Shumen, Joshua R.; Wieser, Matthias J.; Lang, Peter J.; Keil, Andreas

    2010-01-01

    Heightened perception of facial cues is at the core of many theories of social behavior and its disorders. In the present study, we continuously measured electrocortical dynamics in human visual cortex, as evoked by happy, neutral, fearful, and angry faces. Thirty-seven participants endorsing high versus low generalized social anxiety (upper and lower tertiles of 2,104 screened undergraduates) viewed naturalistic faces flickering at 17.5 Hz to evoke steady-state visual evoked potentials (ssVEPs), recorded from 129 scalp electrodes. Electrophysiological data were evaluated in the time-frequency domain after linear source space projection using the minimum norm method. Source estimation indicated an early visual cortical origin of the face-evoked ssVEP, which showed sustained amplitude enhancement for emotional expressions specifically in individuals with pervasive social anxiety. Participants in the low symptom group showed no such sensitivity, and a correlational analysis across the entire sample revealed a strong relationship between self-reported interpersonal anxiety/avoidance and enhanced visual cortical response amplitude for emotional, versus neutral expressions. This pattern was maintained across the 3500 ms viewing epoch, suggesting that temporally sustained, heightened perceptual bias towards affective facial cues is associated with generalized social anxiety. PMID:20832490

  11. Visual areas become less engaged in associative recall following memory stabilization.

    PubMed

    Nieuwenhuis, Ingrid L C; Takashima, Atsuko; Oostenveld, Robert; Fernández, Guillén; Jensen, Ole

    2008-04-15

    Numerous studies have focused on changes in the activity in the hippocampus and higher association areas with consolidation and memory stabilization. Even though perceptual areas are engaged in memory recall, little is known about how memory stabilization is reflected in those areas. Using magnetoencephalography (MEG) we investigated changes in visual areas with memory stabilization. Subjects were trained on associating a face to one of eight locations. The first set of associations ('stabilized') was learned in three sessions distributed over a week. The second set ('labile') was learned in one session just prior to the MEG measurement. In the recall session only the face was presented and subjects had to indicate the correct location using a joystick. The MEG data revealed robust gamma activity during recall, which started in early visual cortex and propagated to higher visual and parietal brain areas. The occipital gamma power was higher for the labile than the stabilized condition (time=0.65-0.9 s). Also the event-related field strength was higher during recall of labile than stabilized associations (time=0.59-1.5 s). We propose that recall of the spatial associations prior to memory stabilization involves a top-down process relying on reconstructing learned representations in visual areas. This process is reflected in gamma band activity consistent with the notion that neuronal synchronization in the gamma band is required for visual representations. More direct synaptic connections are formed with memory stabilization, thus decreasing the dependence on visual areas.

  12. Simulating Earthquakes for Science and Society: New Earthquake Visualizations Ideal for Use in Science Communication

    NASA Astrophysics Data System (ADS)

    de Groot, R. M.; Benthien, M. L.

    2006-12-01

    The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. These types of visualizations are becoming pervasive in the teaching and learning of concepts related to earth science. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). Earthquakes are ideal candidates for visualization products: they cannot be predicted, are completed in a matter of seconds, occur deep in the earth, and the time between events can be on a geologic time scale. For example, the southern part of the San Andreas fault has not seen a major earthquake since about 1690, setting the stage for an earthquake as large as magnitude 7.7 -- the "big one." Since no one has experienced such an earthquake, visualizations can help people understand the scale of such an event. Accordingly, SCEC has developed a revolutionary simulation of this earthquake, with breathtaking visualizations that are now being distributed. According to Gordin and Pea (1995), theoretically visualization should make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.

  13. Ask Me Anything - The Reddit Revolution and other Unconventional Ways to Communicate Science

    NASA Astrophysics Data System (ADS)

    Fiondella, F.; Kahn, B. L.; Noori, A.

    2012-12-01

    Instagram. Pinterest. SoundCloud. Storify. Almost every month there's a new platform through which institutions could potentially promote their work. If used effectively, these less-conventional means of communication can indeed be powerful devices to connect scientists and the general public, especially for small institutions with limited resources. We discuss our experiences on Reddit, a social news site, and Projeqt, a visual storytelling platform. We'll talk about the pros and cons of using them, and provide tips on what to do and what to avoid for those interested in having a go. Nearly 1.5 million people post on Reddit daily. One of the most active sections is "Ask Me Anything", where individuals can share expertise and insights. AMAs are essentially online town-hall style meetings. Movie stars host AMAs, as do politicians, athletes, and increasingly, scientists. In fact, the science subtopic is the 6th most subscribed on the site. Forecaster Tony Barnston, from the International Research Institute for Climate and Society hosted an AMA in June 2012. The session generated >200 comments and questions in 24 hours. Barnston was surprisingly pleased with the experience. "I liked having time to think about my answers," he said, noting this type of engagement could be attractive to scientists who might feel anxious about interacting with the public. Projeqt is a creative visual storytelling platform that allows one to integrate activity from a host of social media such as Twitter, Facebook, Flickr, Instagram, YouTube and more. In very short time, an institution can produce a beautiful visual narrative of its research and activities, combining its own in-house content with creative-commons content easily available on the web. The resulting product is itself shareable and embeddable.
    [Figure captions: Barnston in his office, where he took questions via Reddit (Photo: B. Kahn); photo essay about the critical role that climate forecasting plays in helping to reduce vulnerability in the Sahel (Fiondella).]

  14. Study of target and non-target interplay in spatial attention task.

    PubMed

    Sweeti; Joshi, Deepak; Panigrahi, B K; Anand, Sneh; Santhosh, Jayasree

    2018-02-01

    Selective visual attention is the ability to selectively pay attention to the targets while inhibiting the distractors. This paper aims to study the interplay of targets and non-targets in a spatial attention task in which the subject attends to a target object in one visual hemifield while ignoring a distractor in the other visual hemifield. The paper performs averaged evoked response potential (ERP) analysis and time-frequency analysis. The ERP analysis supports left-hemisphere superiority in late potentials for targets presented in the right visual hemifield. The time-frequency analysis yields two parameters, i.e. event-related spectral perturbation (ERSP) and inter-trial coherence (ITC). These parameters show the same properties for targets presented in either visual hemifield, but differ when the activity corresponding to targets is compared with that of non-targets. In this way, the study helps to visualise the differences between targets presented in the left and right visual hemifields, and also between targets and non-targets in each hemifield. These results could be utilised to monitor subjects' performance in brain-computer interface (BCI) applications and neurorehabilitation.
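
    For readers unfamiliar with the two time-frequency measures named above, the following sketch computes ERSP and ITC at a single frequency using a simple complex Morlet wavelet on synthetic trials; the parameters and baseline choice are illustrative only and not those of the study:

        import numpy as np

        def ersp_itc(epochs, fs, freq, n_cycles=7):
            """Compute ERSP (dB relative to whole-epoch mean power) and ITC at one frequency
            from trial-by-time data via convolution with a complex Morlet wavelet."""
            sigma = n_cycles / (2 * np.pi * freq)
            t = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
            wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
            tf = np.array([np.convolve(tr, wavelet, mode="same") for tr in epochs])  # trials x time, complex
            power = np.abs(tf) ** 2
            ersp = 10 * np.log10(power.mean(axis=0) / power.mean())          # dB change from mean power
            itc = np.abs(np.mean(tf / (np.abs(tf) + 1e-12), axis=0))         # phase consistency, 0..1
            return ersp, itc

        # Toy data: 40 trials of noise with a phase-locked 10 Hz burst around 0.5 s.
        fs, n_trials = 250, 40
        t = np.arange(0, 1.0, 1 / fs)
        burst = np.where((t > 0.4) & (t < 0.6), np.sin(2 * np.pi * 10 * t), 0.0)
        epochs = 0.5 * np.random.randn(n_trials, t.size) + burst
        ersp, itc = ersp_itc(epochs, fs, freq=10)
        print(itc.max())   # ITC approaches 1 near 0.5 s, where the 10 Hz activity is phase-locked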

  15. On the domain-specificity of the visual and non-visual face-selective regions.

    PubMed

    Axelrod, Vadim

    2016-08-01

    What happens in our brains when we see a face? The neural mechanisms of face processing - namely, the face-selective regions - have been extensively explored. Research has traditionally focused on visual cortex face-regions; more recently, the role of face-regions outside the visual cortex (i.e., non-visual-cortex face-regions) has been acknowledged as well. The major quest today is to reveal the functional role of each of these regions in face processing. To make progress in this direction, it is essential to understand the extent to which the face-regions, and particularly the non-visual-cortex face-regions, process only faces (i.e., face-specific, domain-specific processing) or rather are involved in more domain-general cognitive processing. In the current functional MRI study, we systematically examined the activity of the whole face-network during a face-unrelated reading task (i.e., written meaningful sentences with content unrelated to faces/people and non-words). We found that the non-visual-cortex face-regions (i.e., right lateral prefrontal cortex and posterior superior temporal sulcus), but not the visual cortex face-regions, responded significantly more strongly to sentences than to non-words. In general, some degree of sentence selectivity was found in all non-visual-cortex face-regions. The present result highlights the possibility that the processing in the non-visual-cortex face-selective regions might not be exclusively face-specific, but rather largely or even fully domain-general. In this paper, we illustrate how knowledge about domain-general processing in face-regions can help to advance our general understanding of face processing mechanisms. Our results therefore suggest that the problem of face processing should be approached in the broader scope of cognition in general. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. Interactive Particle Visualization

    NASA Astrophysics Data System (ADS)

    Gribble, Christiaan P.

    Particle-based simulation methods are used to model a wide range of complex phenomena and to solve time-dependent problems of various scales. Effective visualizations of the resulting state will communicate subtle changes in the three-dimensional structure, spatial organization, and qualitative trends within a simulation as it evolves. This chapter discusses two approaches to interactive particle visualization that satisfy these goals: one targeting desktop systems equipped with programmable graphics hardware, and the other targeting moderately sized multicore systems using packet-based ray tracing.

  17. 78 FR 27867 - Airworthiness Directives; MD Helicopters Inc. Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-13

    ... the retirement life of each tail rotor blade (blade), performing a one-time visual inspection of each... information to the FAA within 24 hours following the one-time inspection. Since we issued that AD, an accident... shot peen surface's condition in addition to cracks and corrosion, and would add certain part-numbered...

  18. Time-resolved Fast Neutron Radiography of Air-water Two-phase Flows

    NASA Astrophysics Data System (ADS)

    Zboray, Robert; Dangendorf, Volker; Mor, Ilan; Tittelmeier, Kai; Bromberger, Benjamin; Prasser, Horst-Michael

    Neutron imaging, in general, is a useful technique for visualizing low-Z materials (such as water or plastics) obscured by high-Z materials. However, when significant amounts of both materials are present and full-bodied samples have to be examined, cold and thermal neutrons rapidly reach their applicability limit as the samples become opaque. In such cases one can benefit from the high penetrating power of fast neutrons. In this work we demonstrate the feasibility of time-resolved, fast neutron radiography of generic air-water two-phase flows in a 1.5 cm thick flow channel with aluminum walls and rectangular cross section. The experiments have been carried out at the high-intensity, white-beam facility of the Physikalisch-Technische Bundesanstalt, Germany. Exposure times down to 3.33 ms have been achieved at reasonable image quality and acceptable motion artifacts. Different two-phase flow regimes such as bubbly, slug, and churn flows have been examined. Two-phase flow parameters like the volumetric gas fraction, bubble size and bubble velocities have been measured.

  19. Conspicuity of target lights: The influence of color

    NASA Technical Reports Server (NTRS)

    Connors, M. M.

    1975-01-01

    The conspicuity (or attention-getting qualities) of foveally-equated, colored lights was investigated when the lights were seen against a star background. Subjects who were periodically engaged in a distracting cockpit task were required to search a large visual field and report the appearance of a target light as quickly as possible. Targets were red, yellow, white, green, and blue, and appeared either as steady or as flashing lights. Results indicate that red targets were missed more frequently and responded to more slowly than lights of other hues. Yellow targets were acquired more slowly than white, green, or blue targets; responses to white targets were significantly slower than responses to green or blue targets. In general, flashing lights were superior to steady lights, but this was not found for all hues. For red, the 2 Hz flash was superior to all other flash rates and to the steady light, none of which differed significantly from each other. Over all hues, conspicuity was found to peak at 2-3 Hz. Response time was found to be fastest, generally, for targets appearing at between 3° and 8° from the center of the visual field. However, this pattern was not repeated for every hue. Conspicuity response times suggest a complex relationship between hue and position in the visual field that is explained only partially by retinal sensitivity.

  20. Visualizing Earth and Planetary Remote Sensing Data Using JMARS

    NASA Astrophysics Data System (ADS)

    Dickenshied, S.; Christensen, P. R.; Carter, S.; Anwar, S.; Noss, D.

    2014-12-01

    JMARS (Java Mission-planning and Analysis for Remote Sensing) is a free geospatial application developed by the Mars Space Flight Facility at Arizona State University. Originally written as a mission planning tool for the THEMIS instrument on board the Mars Odyssey spacecraft, it was released as an analysis tool to the general public in 2003. Since then it has expanded to be used for mission planning and scientific data analysis by additional NASA missions to Mars, the Moon, and Vesta, and it has come to be used by scientists, researchers and students of all ages from more than 40 countries around the world. The public version of JMARS now also includes remote sensing data for Mercury, Venus, Earth, the Moon, Mars, and a number of the moons of Jupiter and Saturn. Additional datasets for asteroids and other smaller bodies are being added as they become available and time permits. JMARS fuses data from different instruments in a geographical context. One core strength of JMARS is that it provides access to geospatially registered data via a consistent interface. Such data include global images (graphical and numeric), local mosaics, individual instrument images, spectra, and vector-oriented data. By hosting these products, users are able to avoid searching for, downloading, decoding, and projecting data on their own using a disparate set of tools and procedures. The JMARS team processes, indexes, and reorganizes data to make it quickly and easily accessible in a consistent manner. JMARS leverages many open-source technologies and tools to accomplish these data preparation steps. In addition to visualizing multiple datasets in context with one another, JMARS allows a user to find data products from differing missions that intersect the same geographical location, time range, or observational parameters. Any number of georegistered datasets can then be viewed or analyzed simultaneously with one another. A user can easily create a mosaic of graphic data, plot numeric data, or project any arbitrary scene over surface topography. All of these visualization options can be exported for use in presentations, publications, or for further analysis in other tools.

  1. Paternal Autistic Traits Are Predictive of Infants Visual Attention

    ERIC Educational Resources Information Center

    Ronconi, Luca; Facoetti, Andrea; Bulf, Hermann; Franchin, Laura; Bettoni, Roberta; Valenza, Eloisa

    2014-01-01

    Since subthreshold autistic social impairments aggregate in family members, and since attentional dysfunctions appear to be one of the earliest cognitive markers of children with autism, we investigated in the general population the relationship between infants' attentional functioning and the autistic traits measured in their parents.…

  2. Impairing the useful field of view in natural scenes: Tunnel vision versus general interference.

    PubMed

    Ringer, Ryan V; Throneburg, Zachary; Johnson, Aaron P; Kramer, Arthur F; Loschky, Lester C

    2016-01-01

    A fundamental issue in visual attention is the relationship between the useful field of view (UFOV), the region of visual space where information is encoded within a single fixation, and eccentricity. A common assumption is that impairing attentional resources reduces the size of the UFOV (i.e., tunnel vision). However, most research has not accounted for eccentricity-dependent changes in spatial resolution, potentially conflating fixed visual properties with flexible changes in visual attention. Williams (1988, 1989) argued that foveal loads are necessary to reduce the size of the UFOV, producing tunnel vision. Without a foveal load, it is argued that the attentional decrement is constant across the visual field (i.e., general interference). However, other research asserts that auditory working memory (WM) loads produce tunnel vision. To date, foveal versus auditory WM loads have not been compared to determine if they differentially change the size of the UFOV. In two experiments, we tested the effects of a foveal (rotated L vs. T discrimination) task and an auditory WM (N-back) task on an extrafoveal (Gabor) discrimination task. Gabor patches were scaled for size and processing time to produce equal performance across the visual field under single-task conditions, thus removing the confound of eccentricity-dependent differences in visual sensitivity. The results showed that although both foveal and auditory loads reduced Gabor orientation sensitivity, only the foveal load interacted with retinal eccentricity to produce tunnel vision, clearly demonstrating task-specific changes to the form of the UFOV. This has theoretical implications for understanding the UFOV.

  3. Refractive index and its impact on pseudophakic dysphotopsia.

    PubMed

    Radmall, Bryce R; Floyd, Anne; Oakey, Zack; Olson, Randall J

    2015-01-01

    It has been shown that the biggest dissatisfier for uncomplicated cataract surgery patients is pseudophakic dysphotopsia (PD). While edge design of an intraocular lens (IOL) impacts this problem, refractive index is still controversial as to its impact. This retrospective cohort study was designed to determine the role of increasing refractive index in PD. This study was conducted at the John A. Moran Eye Center, University of Utah, USA. A retrospective chart review identified patients who received one of two hydrophobic acrylic single piece IOLs (AcrySof WF SP [SN60WF] or Tecnis SP [ZCB00]), which differed mainly by refractive index (1.55 versus 1.47). Eighty-seven patients who had received implantation of a one-piece hydrophobic acrylic IOL were enrolled. Patients were included if the surgery had been uncomplicated and took place at least a year before study participation. All eligible patients had 20/20 best corrected vision, without any disease known to impact visual quality. In addition to conducting a record review, the enrolled patients were surveyed for PD, using a modified National Eye Institute Visual Function questionnaire, as well as for overall satisfaction with visual quality. Statistical analysis demonstrated no difference between the two cohorts regarding PD, general visual function, and overall visual satisfaction. The study suggests that with the two IOLs assessed, increasing the refractive index does not increase incidence of PD or decrease overall visual satisfaction.

  4. Literature review of visual representation of the results of benefit-risk assessments of medicinal products.

    PubMed

    Hallgreen, Christine E; Mt-Isa, Shahrul; Lieftucht, Alfons; Phillips, Lawrence D; Hughes, Diana; Talbot, Susan; Asiimwe, Alex; Downey, Gerald; Genov, Georgy; Hermann, Richard; Noel, Rebecca; Peters, Ruth; Micaleff, Alain; Tzoulaki, Ioanna; Ashby, Deborah

    2016-03-01

    The PROTECT Benefit-Risk group is dedicated to research in methods for continuous benefit-risk monitoring of medicines, including the presentation of the results, with a particular emphasis on graphical methods. A comprehensive review was performed to identify visuals used for medical risk and benefit-risk communication. The identified visual displays were grouped into visual types, and each visual type was appraised based on five criteria: intended audience, intended message, knowledge required to understand the visual, unintentional messages that may be derived from the visual and missing information that may be needed to understand the visual. Sixty-six examples of visual formats were identified from the literature and classified into 14 visual types. We found that there is not one single visual format that is consistently superior to others for the communication of benefit-risk information. In addition, we found that most of the drawbacks found in the visual formats could be considered general to visual communication, although some appear more relevant to specific formats and should be considered when creating visuals for different audiences depending on the exact message to be communicated. We have arrived at recommendations for the use of visual displays for benefit-risk communication. The recommendation refers to the creation of visuals. We outline four criteria to determine audience-visual compatibility and consider these to be a key task in creating any visual. Next we propose specific visual formats of interest, to be explored further for their ability to address nine different types of benefit-risk analysis information. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Eyes-on training and radiological expertise: an examination of expertise development and its effects on visual working memory.

    PubMed

    Beck, Melissa R; Martin, Benjamin A; Smitherman, Emily; Gaschen, Lorrie

    2013-08-01

    Our aim was to examine the specificity of the effects of acquiring expertise on visual working memory (VWM) and the degree to which higher levels of experience within the domain of expertise are associated with more efficient use of VWM. Previous research is inconsistent on whether expertise effects are specific to the area of expertise or generalize to other tasks that also involve the same cognitive processes. It is also unclear whether more training and/or experience will lead to continued improvement on domain-relevant tasks or whether a plateau could be reached. In Experiment 1, veterinary medicine students completed a one-shot visual change detection task. In Experiment 2, veterinarians completed a flicker change detection task. Both experiments involved stimuli specific to the domain of radiology and general stimuli. In Experiment 1, veterinary medicine students who had completed an "eyes-on" radiological training demonstrated a domain-specific effect in which performance was better on the domain-specific stimuli than on the domain-general stimuli. In Experiment 2, veterinarians again showed a domain-specific effect, but performance was unrelated to the amount of experience veterinarians had accumulated. The effect of experience is domain specific and occurs during the first few years of training, after which a plateau is reached. VWM training in one domain may not lead to improved performance on other VWM tasks. In acquiring expertise, eyes-on training is important initially, but continued experience may not be associated with further improvements in the efficiency of VWM.

  6. Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule.

    PubMed

    Beyeler, Michael; Dutt, Nikil D; Krichmar, Jeffrey L

    2013-12-01

    Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain. Copyright © 2013 Elsevier Ltd. All rights reserved.
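
    The STDP-like learning rule referred to above generally follows the pairwise exponential form sketched below (generic textbook STDP applied to hypothetical spike times; the model's additional synaptic dynamics and its accumulator-based decision stage are not reproduced here):

        import numpy as np

        def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
            """Pairwise exponential STDP: potentiate when a presynaptic spike precedes a postsynaptic
            spike, depress otherwise. Times in ms; returns the summed weight change over all pairs."""
            dw = 0.0
            for tp in t_pre:
                for tq in t_post:
                    dt = tq - tp                        # post minus pre
                    if dt > 0:
                        dw += a_plus * np.exp(-dt / tau_plus)
                    elif dt < 0:
                        dw -= a_minus * np.exp(dt / tau_minus)
            return dw

        pre = np.array([10.0, 50.0, 90.0])              # hypothetical presynaptic spike times (ms)
        post = np.array([15.0, 48.0, 95.0])             # hypothetical postsynaptic spike times (ms)
        print(stdp_dw(pre, post))                       # net change reflects the mix of pre->post and post->pre pairs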

  7. Understanding How to Build Long-Lived Learning Collaborators

    DTIC Science & Technology

    2016-03-16

    discrimination in learning, and dynamic encoding strategies to improve visual encoding for learning via analogical generalization. We showed that spatial concepts can be learned via analogical generalization, using a 20,000 sketch corpus to examine the tradeoffs involved in visual representation and analogical generalization.

  8. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  9. Causal structures in inflation

    NASA Astrophysics Data System (ADS)

    Ellis, George F. R.; Uzan, Jean-Philippe

    2015-12-01

    This article reviews the properties and limitations associated with the existence of particle, visual, and event horizons in cosmology in general and in inflationary universes in particular, carefully distinguishing them from 'Hubble horizons'. It explores to what extent one might be able to probe conditions beyond the visual horizon (which is close in size to the present Hubble radius), thereby showing that visual horizons place major limits on what are observationally testable aspects of a multiverse, if such exists. Indeed these limits largely prevent us from observationally proving a multiverse either does or does not exist. We emphasize that event horizons play no role at all in observational cosmology, even in the multiverse context, despite some claims to the contrary in the literature.

  10. When do I quit? The search termination problem in visual search.

    PubMed

    Wolfe, Jeremy M

    2012-01-01

    In visual search tasks, observers look for targets in displays or scenes containing distracting, non-target items. Most of the research on this topic has concerned the finding of those targets. Search termination is a less thoroughly studied topic. When is it time to abandon the current search? The answer is fairly straightforward when the one and only target has been found (There are my keys.). The problem is more vexed if nothing has been found (When is it time to stop looking for a weapon at the airport checkpoint?) or when the number of targets is unknown (Have we found all the tumors?). This chapter reviews the development of ideas about quitting time in visual search and offers an outline of our current theory.

  11. Attended but unseen: visual attention is not sufficient for visual awareness.

    PubMed

    Kentridge, R W; Nijboer, T C W; Heywood, C A

    2008-02-12

    Does any one psychological process give rise to visual awareness? One candidate is selective attention-when we attend to something it seems we always see it. But if attention can selectively enhance our response to an unseen stimulus then attention cannot be a sufficient precondition for awareness. Kentridge, Heywood & Weiskrantz [Kentridge, R. W., Heywood, C. A., & Weiskrantz, L. (1999). Attention without awareness in blindsight. Proceedings of the Royal Society of London, Series B, 266, 1805-1811; Kentridge, R. W., Heywood, C. A., & Weiskrantz, L. (2004). Spatial attention speeds discrimination without awareness in blindsight. Neuropsychologia, 42, 831-835.] demonstrated just such a dissociation in the blindsight subject GY. Here, we test whether the dissociation generalizes to the normal population. We presented observers with pairs of coloured discs, each masked by the subsequent presentation of a coloured annulus. The discs acted as primes, speeding discrimination of the colour of the annulus when they matched in colour and slowing it when they differed. We show that the location of attention modulated the size of this priming effect. However, the primes were rendered invisible by metacontrast-masking and remained unseen despite being attended. Visual attention could therefore facilitate processing of an invisible target and cannot, therefore, be a sufficient precondition for visual awareness.

  12. Large Terrain Modeling and Visualization for Planets

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher

    2011-01-01

    Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulations is the ability to handle large multi-resolution terrain data sets within models as well as for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage for dealing with these large data sets. The visualization technique uses a real-time GPU-based continuous level-of-detail technique that delivers performance of multiple frames per second even for planetary-scale terrain models.
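
    A minimal sketch of the distance-driven level-of-detail selection that such real-time terrain viewers typically perform (a quadtree refined against a projected-error budget; the tile sizes, error values, and thresholds below are made up for illustration, and this is not the paper's GPU implementation):

        from dataclasses import dataclass

        @dataclass
        class Tile:
            x: float                 # world-space corner of the tile
            y: float
            size: float              # edge length of the (square) tile
            level: int               # quadtree depth
            geometric_error: float   # world-space error of this tile's simplified mesh

        def select_tiles(tile, cam_xy, max_level=6, error_budget=0.01):
            """Refine a quadtree tile until its distance-projected error is within budget."""
            cx, cy = tile.x + tile.size / 2, tile.y + tile.size / 2
            dist = max(((cx - cam_xy[0]) ** 2 + (cy - cam_xy[1]) ** 2) ** 0.5, 1e-3)
            projected_error = tile.geometric_error / dist          # crude screen-space-error proxy
            if projected_error <= error_budget or tile.level == max_level:
                return [tile]
            half, selected = tile.size / 2, []
            for dx in (0, 1):
                for dy in (0, 1):
                    child = Tile(tile.x + dx * half, tile.y + dy * half, half,
                                 tile.level + 1, tile.geometric_error / 2)
                    selected += select_tiles(child, cam_xy, max_level, error_budget)
            return selected

        root = Tile(0.0, 0.0, 4096.0, 0, geometric_error=512.0)
        tiles = select_tiles(root, cam_xy=(100.0, 150.0))
        print(len(tiles))   # many small tiles near the camera, few large tiles far away

    A continuous-LOD scheme on the GPU refines and blends between levels per vertex rather than per tile, but the underlying error-versus-distance test is the same basic idea.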

  13. [Effect of spatial location on the generality of block-wise conflict adaptation between different types of scripts].

    PubMed

    Watanabe, Yurina; Yoshizaki, Kazuhito

    2014-10-01

    This study aimed to investigate the generality of conflict adaptation associated with block-wise conflict frequency between two types of stimulus scripts (Kanji and Hiragana). To this end, we examined whether the modulation of the compatibility effect with one type of script depending on block-wise conflict frequency (75% versus 25%) generalized to the other type of script whose block-wise conflict frequency was kept constant (50%), using the Spatial Stroop task. In Experiment 1, 16 participants were required to identify the target orientation (up or down) presented in the upper or lower visual field. The results showed that block-wise conflict adaptation with one type of stimulus script generalized to the other. The procedure in Experiment 2 was the same as that in Experiment 1, except that the presentation location differed between the two types of stimulus scripts. We did not find a generalization from one script to the other. These results suggest that presentation location is a critical factor contributing to the generality of block-wise conflict adaptation.

  14. [Effects of infrasound on visual electrophysiology in mice].

    PubMed

    Shi, Li; Zhang, Zuo-ming; Chen, Jing-zao; Liu, Jing

    2003-04-01

    To investigate the possible effects of infrasound on visual functions. One hundred and fifty mature male Kunming mice were divided into 5 groups, of which one was the control and the other four were exposed to infrasound of 8 Hz, 90 dB; 8 Hz, 130 dB; 16 Hz, 90 dB; and 16 Hz, 130 dB, respectively, for 2 h/d. The exposure times were 0, 1, 4, 7, 14, and 21 d; each group was accordingly divided into 6 sub-groups. Electroretinogram (ERG), oscillatory potentials (OPs), and visual evoked potential (VEP) were recorded after exposure. The visual electrophysiological indices after the 8 Hz, 90 dB and 16 Hz, 90 dB exposures were similar except for small differences at some time points (P<0.05). Most of the indices in the 8 Hz, 130 dB group changed after 7 d of exposure, and the longer the exposure, the more obvious the changes (P<0.01). The indices in the 16 Hz, 130 dB group changed obviously after 1 d and reversed with increasing exposure time (P<0.01). The effects of infrasound on visual function are related to its frequency and intensity. Infrasound of different frequencies causes different levels of retinal resonance, which leads to different degrees of cellular lesion and produces different electrical potentials.

  15. Prenatal sensory experience affects hatching behavior in domestic chicks (Gallus gallus) and Japanese quail chicks (Coturnix coturnix japonica).

    PubMed

    Sleigh, Merry J; Casey, Michael B

    2014-07-01

    Species-typical developmental outcomes result from organismic and environmental constraints and experiences shared by members of a species. We examined the effects of enhanced prenatal sensory experience on hatching behaviors by exposing domestic chicks (n = 95) and Japanese quail (n = 125) to one of four prenatal conditions: enhanced visual stimulation, enhanced auditory stimulation, enhanced auditory and visual stimulation, or no enhanced sensory experience (control condition). In general, across species, control embryos had slower hatching behaviors than all other embryos. Embryos in the auditory condition had faster hatching behaviors than embryos in the visual and control conditions. Auditory-visual condition embryos showed similarities to embryos exposed to either auditory or visual stimulation. These results suggest that prenatal sensory experience can influence hatching behavior of precocial birds, with the type of stimulation being a critical variable. These results also provide further evidence that species-typical outcomes are the result of species-typical prenatal experiences. © 2013 Wiley Periodicals, Inc.

  16. Camouflage, communication and thermoregulation: lessons from colour changing organisms.

    PubMed

    Stuart-Fox, Devi; Moussalli, Adnan

    2009-02-27

    Organisms capable of rapid physiological colour change have become model taxa in the study of camouflage because they are able to respond dynamically to the changes in their visual environment. Here, we briefly review the ways in which studies of colour changing organisms have contributed to our understanding of camouflage and highlight some unique opportunities they present. First, from a proximate perspective, comparison of visual cues triggering camouflage responses and the visual perception mechanisms involved can provide insight into general visual processing rules. Second, colour changing animals can potentially tailor their camouflage response not only to different backgrounds but also to multiple predators with different visual capabilities. We present new data showing that such facultative crypsis may be widespread in at least one group, the dwarf chameleons. From an ultimate perspective, we argue that colour changing organisms are ideally suited to experimental and comparative studies of evolutionary interactions between the three primary functions of animal colour patterns: camouflage; communication; and thermoregulation.

  17. Camouflage, communication and thermoregulation: lessons from colour changing organisms

    PubMed Central

    Stuart-Fox, Devi; Moussalli, Adnan

    2008-01-01

    Organisms capable of rapid physiological colour change have become model taxa in the study of camouflage because they are able to respond dynamically to the changes in their visual environment. Here, we briefly review the ways in which studies of colour changing organisms have contributed to our understanding of camouflage and highlight some unique opportunities they present. First, from a proximate perspective, comparison of visual cues triggering camouflage responses and the visual perception mechanisms involved can provide insight into general visual processing rules. Second, colour changing animals can potentially tailor their camouflage response not only to different backgrounds but also to multiple predators with different visual capabilities. We present new data showing that such facultative crypsis may be widespread in at least one group, the dwarf chameleons. From an ultimate perspective, we argue that colour changing organisms are ideally suited to experimental and comparative studies of evolutionary interactions between the three primary functions of animal colour patterns: camouflage; communication; and thermoregulation. PMID:19000973

  18. The effects of two types of sleep deprivation on visual working memory capacity and filtering efficiency.

    PubMed

    Drummond, Sean P A; Anderson, Dane E; Straus, Laura D; Vogel, Edward K; Perez, Veronica B

    2012-01-01

    Sleep deprivation has adverse consequences for a variety of cognitive functions. The exact effects of sleep deprivation, though, are dependent upon the cognitive process examined. Within working memory, for example, some component processes are more vulnerable to sleep deprivation than others. Additionally, the differential impacts on cognition of different types of sleep deprivation have not been well studied. The aim of this study was to examine the effects of one night of total sleep deprivation and 4 nights of partial sleep deprivation (4 hours in bed/night) on two components of visual working memory: capacity and filtering efficiency. Forty-four healthy young adults were randomly assigned to one of the two sleep deprivation conditions. All participants were studied: 1) in a well-rested condition (following 6 nights of 9 hours in bed/night); and 2) following sleep deprivation, in a counter-balanced order. Visual working memory testing consisted of two related tasks. The first measured visual working memory capacity and the second measured the ability to ignore distractor stimuli in a visual scene (filtering efficiency). Results showed neither type of sleep deprivation reduced visual working memory capacity. Partial sleep deprivation also generally did not change filtering efficiency. Total sleep deprivation, on the other hand, did impair performance in the filtering task. These results suggest components of visual working memory are differentially vulnerable to the effects of sleep deprivation, and different types of sleep deprivation impact visual working memory to different degrees. Such findings have implications for operational settings where individuals may need to perform with inadequate sleep and whose jobs involve receiving an array of visual information and discriminating the relevant from the irrelevant prior to making decisions or taking actions (e.g., baggage screeners, air traffic controllers, military personnel, health care providers).

  19. Unintentional Interpersonal Synchronization Represented as a Reciprocal Visuo-Postural Feedback System: A Multivariate Autoregressive Modeling Approach

    PubMed Central

    Okazaki, Shuntaro; Hirotani, Masako; Koike, Takahiko; Bosch-Bayard, Jorge; Takahashi, Haruka K.; Hashiguchi, Maho; Sadato, Norihiro

    2015-01-01

    People’s behaviors synchronize. It is difficult, however, to determine whether synchronized behaviors occur in a mutual direction—two individuals influencing one another—or in one direction—one individual leading the other, and what the underlying mechanism for synchronization is. To answer these questions, we hypothesized a non-leader-follower postural sway synchronization, caused by a reciprocal visuo-postural feedback system operating on pairs of individuals, and tested that hypothesis both experimentally and via simulation. In the behavioral experiment, 22 participant pairs stood face to face either 20 or 70 cm away from each other wearing glasses with or without vision blocking lenses. The existence and direction of visual information exchanged between pairs of participants were systematically manipulated. The time series data for the postural sway of these pairs were recorded and analyzed with cross correlation and causality. Results of cross correlation showed that postural sway of paired participants was synchronized, with a shorter time lag when participant pairs could see one another’s head motion than when one of the participants was blindfolded. In addition, there was less of a time lag in the observed synchronization when the distance between participant pairs was smaller. As for the causality analysis, noise contribution ratio (NCR), the measure of influence using a multivariate autoregressive model, was also computed to identify the degree to which one’s postural sway is explained by that of the other’s and how visual information (sighted vs. blindfolded) interacts with paired participants’ postural sway. It was found that for synchronization to take place, it is crucial that paired participants be sighted and exert equal influence on one another by simultaneously exchanging visual information. Furthermore, a simulation for the proposed system with a wider range of visual input showed a pattern of results similar to the behavioral results. PMID:26398768

  20. Reduced Change Blindness Suggests Enhanced Attention to Detail in Individuals with Autism

    ERIC Educational Resources Information Center

    Smith, Hayley; Milne, Elizabeth

    2009-01-01

    Background: The phenomenon of change blindness illustrates that a limited number of items within the visual scene are attended to at any one time. It has been suggested that individuals with autism focus attention on less contextually relevant aspects of the visual scene, show superior perceptual discrimination and notice details which are often…

  1. High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search

    ERIC Educational Resources Information Center

    Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.

    2010-01-01

    Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…

  2. Revisioning Premodern Fine Art as Popular Visual Culture

    ERIC Educational Resources Information Center

    Duncum, Paul

    2014-01-01

    Employing the concept of a rhetoric of emotions, European Premodern fine art is revisioned as popular culture. From ancient times, the rhetoric of emotion was one of the principle concepts informing the theory and practice of all forms of European cultural production, including the visual arts, until it was gradually displaced during the 1700s and…

  3. Using Marketing Visuals for Product Talk in Business English Classes

    ERIC Educational Resources Information Center

    Adamson, John

    2005-01-01

    One requirement often stressed by the author's Japanese business English students in sales and marketing positions is the need to talk about the product, or make presentations, in terms of its market growth and market share over time with the use of a visual representation. These requests have linguistic and conceptual elements that demand a lot…

  4. Hydrograph matching method for measuring model performance

    NASA Astrophysics Data System (ADS)

    Ewen, John

    2011-09-01

    Despite all the progress made over the years on developing automatic methods for analysing hydrographs and measuring the performance of rainfall-runoff models, automatic methods cannot yet match the power and flexibility of the human eye and brain. Very simple approaches are therefore being developed that mimic the way hydrologists inspect and interpret hydrographs, including the way that patterns are recognised, links are made by eye, and hydrological responses and errors are studied and remembered. In this paper, a dynamic programming algorithm originally designed for use in data mining is customised for use with hydrographs. It generates sets of "rays" that are analogous to the visual links made by the hydrologist's eye when linking features or times in one hydrograph to the corresponding features or times in another hydrograph. One outcome from this work is a new family of performance measures called "visual" performance measures. These can measure differences in amplitude and timing, including the timing errors between simulated and observed hydrographs in model calibration. To demonstrate this, two visual performance measures, one based on the Nash-Sutcliffe Efficiency and the other on the mean absolute error, are used in a total of 34 split-sample calibration-validation tests for two rainfall-runoff models applied to the Hodder catchment, northwest England. The customised algorithm, called the Hydrograph Matching Algorithm, is very simple to apply; it is given in a few lines of pseudocode.
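
    For readers unfamiliar with the two conventional measures that the "visual" performance measures build on, here is a minimal sketch of the Nash-Sutcliffe Efficiency and the mean absolute error; it is not the Hydrograph Matching Algorithm itself, and the example hydrographs are made up.

```python
# Minimal sketch of the two conventional performance measures referenced above:
# Nash-Sutcliffe Efficiency (NSE) and mean absolute error (MAE).
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def mean_absolute_error(observed, simulated):
    return float(np.mean(np.abs(np.asarray(observed, float) - np.asarray(simulated, float))))

obs = [1.0, 3.5, 9.0, 6.0, 3.0, 1.5]   # observed hydrograph (e.g. m^3/s), made up
sim = [1.2, 2.8, 8.0, 6.5, 3.4, 1.4]   # simulated hydrograph, made up
print(nash_sutcliffe(obs, sim), mean_absolute_error(obs, sim))
```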

  5. Classification of fMRI independent components using IC-fingerprints and support vector machine classifiers.

    PubMed

    De Martino, Federico; Gentile, Francesco; Esposito, Fabrizio; Balsi, Marco; Di Salle, Francesco; Goebel, Rainer; Formisano, Elia

    2007-01-01

    We present a general method for the classification of independent components (ICs) extracted from functional MRI (fMRI) data sets. The method consists of two steps. In the first step, each fMRI-IC is associated with an IC-fingerprint, i.e., a representation of the component in a multidimensional space of parameters. These parameters are post hoc estimates of global properties of the ICs and are largely independent of a specific experimental design and stimulus timing. In the second step a machine learning algorithm automatically separates the IC-fingerprints into six general classes after preliminary training performed on a small subset of expert-labeled components. We illustrate this approach in a multisubject fMRI study employing visual structure-from-motion stimuli encoding faces and control random shapes. We show that: (1) IC-fingerprints are a valuable tool for the inspection, characterization and selection of fMRI-ICs and (2) automatic classifications of fMRI-ICs in new subjects present a high correspondence with those obtained by expert visual inspection of the components. Importantly, our classification procedure highlights several neurophysiologically interesting processes. The most intriguing of these is reflected, with high intra- and inter-subject reproducibility, in one IC exhibiting a transiently task-related activation in the 'face' region of the primary sensorimotor cortex. This suggests that in addition to or as part of the mirror system, somatotopic regions of the sensorimotor cortex are involved in disambiguating the perception of a moving body part. Finally, we show that the same classification algorithm can be successfully applied, without re-training, to fMRI collected using acquisition parameters, stimulation modality and timing considerably different from those used for training.
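
    A hedged sketch of the second step as described (a generic scikit-learn SVM on fingerprint vectors; the feature set, class labels, and data below are placeholders, not the authors' pipeline):

```python
# Sketch: train a support vector machine on a small set of expert-labelled
# IC-fingerprints and classify the components of a new subject. Data are random
# placeholders for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Each row is an IC-fingerprint: post hoc global properties of one component
# (e.g. degree of clustering, skewness, spatial entropy, one-lag autocorrelation, ...).
X_train = rng.normal(size=(60, 6))
y_train = rng.integers(0, 6, size=60)   # six general classes, expert labels
X_new = rng.normal(size=(20, 6))        # fingerprints from a new subject

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print(clf.predict(X_new))
```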

  6. Visual one-shot learning as an 'anti-camouflage device': a novel morphing paradigm.

    PubMed

    Ishikawa, Tetsuo; Mogi, Ken

    2011-09-01

    Once people perceive what is in a hidden figure such as Dallenbach's cow or the Dalmatian, they seldom seem to return to the previous state in which they were ignorant of the answer. This special type of learning process can be accomplished in a short time, with the effect of learning lasting for a long time (visual one-shot learning). Although it is an intriguing cognitive phenomenon, the lack of control over the difficulty of the presented stimuli has been a problem in research. Here we propose a novel paradigm to create new hidden figures systematically by using a morphing technique. Through gradual changes from a blurred and binarized two-tone image to a blurred grayscale image of the original photograph including objects in a natural scene, spontaneous one-shot learning can occur at a certain stage of morphing when a sufficient amount of information is restored to the degraded image. A negative correlation between confidence levels and reaction times is observed, giving support to the fluency theory of one-shot learning. The correlation between confidence ratings and correct recognition rates indicates that participants had an accurate introspective ability (metacognition). The learning effect could be tested later by verifying whether or not the target object was recognized more quickly in the second exposure. The present method opens a way for a systematic production of "good" hidden figures, which can be used to demystify the nature of visual one-shot learning.
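
    The morphing manipulation can be pictured as a simple linear blend between the degraded two-tone image and the blurred grayscale original; the sketch below is only an assumption about the general idea, not the authors' stimulus-generation code.

```python
# Illustrative sketch: build a morph sequence that gradually restores a blurred
# grayscale photograph from its blurred, binarized two-tone version.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
photo = rng.random((128, 128))                 # stand-in for a grayscale photograph
blurred = gaussian_filter(photo, sigma=3.0)
two_tone = gaussian_filter((blurred > blurred.mean()).astype(float), sigma=3.0)

def morph_frames(start, end, n_steps=10):
    """Frames interpolating from the degraded image to the information-richer one."""
    return [(1 - t) * start + t * end for t in np.linspace(0.0, 1.0, n_steps)]

frames = morph_frames(two_tone, blurred)
print(len(frames), frames[0].shape)
```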

  7. 29 CFR 1910.101 - Compressed gases (general requirements).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (general requirements). (a) Inspection of compressed gas cylinders. Each employer shall determine that... by visual inspection. Visual and other inspections shall be conducted as prescribed in the Hazardous... those regulations are not applicable, visual and other inspections shall be conducted in accordance with...

  8. 29 CFR 1910.101 - Compressed gases (general requirements).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (general requirements). (a) Inspection of compressed gas cylinders. Each employer shall determine that... by visual inspection. Visual and other inspections shall be conducted as prescribed in the Hazardous... those regulations are not applicable, visual and other inspections shall be conducted in accordance with...

  9. 29 CFR 1910.101 - Compressed gases (general requirements).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (general requirements). (a) Inspection of compressed gas cylinders. Each employer shall determine that... by visual inspection. Visual and other inspections shall be conducted as prescribed in the Hazardous... those regulations are not applicable, visual and other inspections shall be conducted in accordance with...

  10. 29 CFR 1910.101 - Compressed gases (general requirements).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (general requirements). (a) Inspection of compressed gas cylinders. Each employer shall determine that... by visual inspection. Visual and other inspections shall be conducted as prescribed in the Hazardous... those regulations are not applicable, visual and other inspections shall be conducted in accordance with...

  11. 29 CFR 1910.101 - Compressed gases (general requirements).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (general requirements). (a) Inspection of compressed gas cylinders. Each employer shall determine that... by visual inspection. Visual and other inspections shall be conducted as prescribed in the Hazardous... those regulations are not applicable, visual and other inspections shall be conducted in accordance with...

  12. A new method for laminar boundary layer transition visualization in flight: Color changes in liquid crystal coatings

    NASA Technical Reports Server (NTRS)

    Holmes, B. J.; Gall, P. D.; Croom, C. C.; Manuel, G. S.; Kelliher, W. C.

    1986-01-01

    The visualization of laminar to turbulent boundary layer transition plays an important role in flight and wind-tunnel aerodynamic testing of aircraft wing and body surfaces. Visualization can help provide a more complete understanding of both transition location and transition modes; without visualization, the transition process can be very difficult to understand. In the past, the most valuable transition visualization methods for flight applications included sublimating chemicals and oil flows. Each method has advantages and limitations. In particular, sublimating chemicals are impractical to use in subsonic applications much above 20,000 feet because of the greatly reduced rates of sublimation at lower temperatures (less than -4 degrees Fahrenheit). Both oil flow and sublimating chemicals have the disadvantage of providing only one good data point per flight. Thus, for many important flight conditions, transition visualization has not been readily available. This paper discusses a new method for visualizing transition in flight by the use of liquid crystals. The new method overcomes the limitations of past techniques, and provides transition visualization capability throughout almost the entire altitude and speed ranges of virtually all subsonic aircraft flight envelopes. The method also has wide applicability for supersonic transition visualization in flight and for general use in wind tunnel research over wide subsonic and supersonic speed ranges.

  13. Iterating between Tools to Create and Edit Visualizations.

    PubMed

    Bigelow, Alex; Drucker, Steven; Fisher, Danyel; Meyer, Miriah

    2017-01-01

    A common workflow for visualization designers begins with a generative tool, like D3 or Processing, to create the initial visualization; and proceeds to a drawing tool, like Adobe Illustrator or Inkscape, for editing and cleaning. Unfortunately, this is typically a one-way process: once a visualization is exported from the generative tool into a drawing tool, it is difficult to make further, data-driven changes. In this paper, we propose a bridge model to allow designers to bring their work back from the drawing tool to re-edit in the generative tool. Our key insight is to recast this iteration challenge as a merge problem - similar to when two people are editing a document and changes between them need to be reconciled. We also present a specific instantiation of this model, a tool called Hanpuku, which bridges between D3 scripts and Illustrator. We show several examples of visualizations that are iteratively created using Hanpuku in order to illustrate the flexibility of the approach. We further describe several hypothetical tools that bridge between other visualization tools to emphasize the generality of the model.
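
    A toy sketch of the merge framing (an assumed simplification, not Hanpuku's actual bridge): reconcile an element's attributes from the generative tool's new output with the designer's manual edits, relative to a common base version.

```python
# Toy three-way merge of one visual element's attribute dictionary: manual edits
# made in the drawing tool survive unless the generative tool re-bound that
# attribute to new data. All element names and attributes are hypothetical.
def merge_element(base, generated, edited):
    """Three-way merge of one element's attributes."""
    merged = dict(generated)                  # start from the new data-driven output
    for attr, edited_value in edited.items():
        if edited_value != base.get(attr):    # the designer changed it by hand...
            merged[attr] = edited_value       # ...so the manual edit wins
    return merged

base      = {"cx": 10, "cy": 20, "r": 4, "fill": "#1f77b4"}
generated = {"cx": 12, "cy": 25, "r": 4, "fill": "#1f77b4"}   # new data-driven layout
edited    = {"cx": 10, "cy": 20, "r": 4, "fill": "#d62728"}   # hand-recolored in the drawing tool
print(merge_element(base, generated, edited))  # keeps the new position, keeps the manual color
```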

  14. Probing feedforward and feedback contributions to awareness with visual masking and transcranial magnetic stimulation.

    PubMed

    Tapia, Evelina; Beck, Diane M

    2014-01-01

    A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness.

  15. Reliability of a computer-based system for measuring visual performance skills.

    PubMed

    Erickson, Graham B; Citek, Karl; Cove, Michelle; Wilczek, Jennifer; Linster, Carolyn; Bjarnason, Brendon; Langemo, Nathan

    2011-09-01

    Athletes have demonstrated better visual abilities than nonathletes. A vision assessment for an athlete should include methods to evaluate the quality of visual performance skills in the most appropriate, accurate, and repeatable manner. This study determines the reliability of the visual performance measures assessed with a computer-based system, known as the Nike Sensory Station. One hundred twenty-five subjects (56 men, 69 women), age 18 to 30, completed Phase I of the study. Subjects attended 2 sessions, separated by at least 1 week, in which identical protocols were followed. Subjects completed the following assessments: Visual Clarity, Contrast Sensitivity, Depth Perception, Near-Far Quickness, Target Capture, Perception Span, Eye-Hand Coordination, Go/No Go, and Reaction Time. An additional 36 subjects (20 men, 16 women), age 22 to 35, completed Phase II of the study involving modifications to the equipment, instructions, and protocols from Phase I. Results show no significant change in performance over time on assessments of Visual Clarity, Contrast Sensitivity, Depth Perception, Target Capture, Perception Span, and Reaction Time. Performance did improve over time for Near-Far Quickness, Eye-Hand Coordination, and Go/No Go. The results of this study show that many of the Nike Sensory Station assessments show repeatability and no learning effect over time. The measures that did improve across sessions show an expected learning effect caused by the motor response characteristics being measured. Copyright © 2011 American Optometric Association. Published by Elsevier Inc. All rights reserved.

  16. Comparative case study between D3 and highcharts on lustre data visualization

    NASA Astrophysics Data System (ADS)

    ElTayeby, Omar; John, Dwayne; Patel, Pragnesh; Simmerman, Scott

    2013-12-01

    One of the challenging tasks in visual analytics is to target clustered time-series data sets, since it is important for data analysts to discover patterns changing over time while keeping their focus on particular subsets. In order to leverage humans' ability to quickly perceive these patterns visually, multivariate features should be implemented according to the attributes available. A comparative case study of two JavaScript libraries was therefore conducted to demonstrate the differences in their capabilities. A web-based application to monitor the Lustre file system for the systems administrators and the operation teams has been developed using D3 and Highcharts. Lustre file systems are responsible for managing Remote Procedure Calls (RPCs), which include input/output (I/O) requests between clients and Object Storage Targets (OSTs). The objective of this application is to provide time-series visuals of these calls and storage patterns of users on Kraken, a University of Tennessee High Performance Computing (HPC) resource at Oak Ridge National Laboratory (ORNL).

  17. Temporal ventriloquism: crossmodal interaction on the time dimension. 1. Evidence from auditory-visual temporal order judgment.

    PubMed

    Bertelson, Paul; Aschersleben, Gisa

    2003-10-01

    In the well-known visual bias of auditory location (alias the ventriloquist effect), auditory and visual events presented in separate locations appear closer together, provided the presentations are synchronized. Here, we consider the possibility of the converse phenomenon: crossmodal attraction on the time dimension conditional on spatial proximity. Participants judged the order of occurrence of sound bursts and light flashes, respectively, separated in time by varying stimulus onset asynchronies (SOAs) and delivered either in the same or in different locations. Presentation was organized using randomly mixed psychophysical staircases, by which the SOA was reduced progressively until a point of uncertainty was reached. This point was reached at longer SOAs with the sounds in the same frontal location as the flashes than in different places, showing that apparent temporal separation is effectively longer in the first condition. Together with a similar one obtained recently in a case of tactile-visual discrepancy, this result supports a view in which timing and spatial layout of the inputs play to some extent inter-changeable roles in the pairing operation at the base of crossmodal interaction.

  18. Deconstruction of spatial integrity in visual stimulus detected by modulation of synchronized activity in cat visual cortex.

    PubMed

    Zhou, Zhiyi; Bernard, Melanie R; Bonds, A B

    2008-04-02

    Spatiotemporal relationships among contour segments can influence synchronization of neural responses in the primary visual cortex. We performed a systematic study to dissociate the impact of spatial and temporal factors in the signaling of contour integration via synchrony. In addition, we characterized the temporal evolution of this process to clarify potential underlying mechanisms. With a 10 x 10 microelectrode array, we recorded the simultaneous activity of multiple cells in the cat primary visual cortex while stimulating with drifting sine-wave gratings. We preserved temporal integrity and systematically degraded spatial integrity of the sine-wave gratings by adding spatial noise. Neural synchronization was analyzed in the time and frequency domains by conducting cross-correlation and coherence analyses. The general association between neural spike trains depends strongly on spatial integrity, with coherence in the gamma band (35-70 Hz) showing greater sensitivity to the change of spatial structure than other frequency bands. Analysis of the temporal dynamics of synchronization in both time and frequency domains suggests that spike timing synchronization is triggered nearly instantaneously by coherent structure in the stimuli, whereas frequency-specific oscillatory components develop more slowly, presumably through network interactions. Our results suggest that, whereas temporal integrity is required for the generation of synchrony, spatial integrity is critical in triggering subsequent gamma band synchronization.
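
    The two analyses named above, cross-correlation and coherence, can be illustrated with a short sketch on simulated binned spike trains (the bin size, rates, and parameters are assumptions, not the study's recording pipeline):

```python
# Sketch: bin two simultaneously recorded spike trains, compute their
# cross-correlogram, and compute frequency-resolved coherence, reading off the
# gamma band (35-70 Hz). Spike trains here are simulated placeholders.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                                       # 1 ms bins
rng = np.random.default_rng(4)
common = rng.random(4000) < 0.03                  # shared drive creates correlation
train_a = (common | (rng.random(4000) < 0.02)).astype(float)
train_b = (common | (rng.random(4000) < 0.02)).astype(float)

# Cross-correlogram over +/- 50 ms lags.
lags = np.arange(-50, 51)
xcorr = [np.corrcoef(train_a[50:-50], np.roll(train_b, lag)[50:-50])[0, 1]
         for lag in lags]

# Spectral coherence, averaged over the gamma band (35-70 Hz).
f, Cxy = coherence(train_a, train_b, fs=fs, nperseg=512)
gamma = Cxy[(f >= 35) & (f <= 70)].mean()
print(f"peak correlation {max(xcorr):.2f}, mean gamma coherence {gamma:.2f}")
```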

  19. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
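
    The user-defined linear sequential-loop filtering can be pictured as a simple ordered chain of frame filters; the filters and names below are placeholders for illustration, not the actual μAVS2 filter set.

```python
# Sketch of a user-defined sequential filter chain: each captured frame passes
# through an ordered list of filters before being sent on. Filter choices are
# illustrative placeholders.
import numpy as np

def downsample(frame, factor=8):
    return frame[::factor, ::factor]

def contrast_stretch(frame):
    lo, hi = frame.min(), frame.max()
    return (frame - lo) / (hi - lo + 1e-9)

def threshold(frame, level=0.5):
    return (frame > level).astype(float)

def process_frame(frame, pipeline):
    for filt in pipeline:               # filters run in the user-defined order
        frame = filt(frame)
    return frame

pipeline = [downsample, contrast_stretch, threshold]        # user-defined order
frame = np.random.default_rng(2).random((480, 640))         # stand-in camera frame
print(process_frame(frame, pipeline).shape)
```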

  20. Eye guidance during real-world scene search: The role color plays in central and peripheral vision.

    PubMed

    Nuthmann, Antje; Malcolm, George L

    2016-01-01

    The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitated search as a function of their location on the visual field. The current study investigated how features across the visual field--particularly color--facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in full color, with color in the periphery and gray in central vision or gray in the periphery and color in central vision, or in grayscale. Color conditions were crossed with a search cue manipulation, with the target cued either with a word label or an exact picture. Search times increased as color information in the scene decreased. A gaze-data based decomposition of search time revealed color-mediated effects on specific subprocesses of search. Color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors based on the location within the visual field.

  1. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses.

    PubMed

    Fink, Wolfgang; You, Cindy X; Tarbell, Mark A

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (microAVS(2)) for real-time image processing. Truly standalone, microAVS(2) is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on microAVS(2) operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. microAVS(2) imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, microAVS(2) affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, microAVS(2) can easily be reconfigured for other prosthetic systems. Testing of microAVS(2) with actual retinal implant carriers is envisioned in the near future.

  2. Zif268 mRNA Expression Patterns Reveal a Distinct Impact of Early Pattern Vision Deprivation on the Development of Primary Visual Cortical Areas in the Cat.

    PubMed

    Laskowska-Macios, Karolina; Zapasnik, Monika; Hu, Tjing-Tjing; Kossut, Malgorzata; Arckens, Lutgarde; Burnat, Kalina

    2015-10-01

    Pattern vision deprivation (BD) can induce permanent deficits in global motion perception. The impact of timing and duration of BD on the maturation of the central and peripheral visual field representations in cat primary visual areas 17 and 18 remains unknown. We compared early BD, from eye opening for 2, 4, or 6 months, with late onset BD, after 2 months of normal vision, using the expression pattern of the visually driven activity reporter gene zif268 as readout. Decreasing zif268 mRNA levels between months 2 and 4 characterized the normal maturation of the (supra)granular layers of the central and peripheral visual field representations in areas 17 and 18. In general, all BD conditions had higher than normal zif268 levels. In area 17, early BD induced a delayed decrease, beginning later in peripheral than in central area 17. In contrast, the decrease occurred between months 2 and 4 throughout area 18. Lack of pattern vision stimulation during the first 4 months of life therefore has a different impact on the development of areas 17 and 18. A high zif268 expression level at a time when normal vision is restored seems to predict the capacity of a visual area to compensate for BD. © The Author 2014. Published by Oxford University Press.

  3. Development of subliminal persuasion system to improve the upper limb posture in laparoscopic training: a preliminary study.

    PubMed

    Zhang, Di; Sessa, Salvatore; Kong, Weisheng; Cosentino, Sarah; Magistro, Daniele; Ishii, Hiroyuki; Zecca, Massimiliano; Takanishi, Atsuo

    2015-11-01

    Current training for laparoscopy focuses only on the enhancement of manual skill and does not give advice on improving trainees' posture. However, a poor posture can result in increased static muscle loading, faster fatigue, and impaired psychomotor task performance. In this paper, the authors propose a method, named subliminal persuasion, which gives the trainee real-time advice for correcting the upper limb posture during laparoscopic training like the expert but leads to a lower increment in the workload. A 9-axis inertial measurement unit was used to compute the upper limb posture, and a Detection Reaction Time device was developed and used to measure the workload. A monitor displayed not only images from laparoscope, but also a visual stimulus, a transparent red cross superimposed to the laparoscopic images, when the trainee had incorrect upper limb posture. One group was exposed, when their posture was not correct during training, to a short (about 33 ms) subliminal visual stimulus. The control group instead was exposed to longer (about 660 ms) supraliminal visual stimuli. We found that subliminal visual stimulation is a valid method to improve trainees' upper limb posture during laparoscopic training. Moreover, the additional workload required for subconscious processing of subliminal visual stimuli is less than the one required for supraliminal visual stimuli, which is processed instead at the conscious level. We propose subliminal persuasion as a method to give subconscious real-time stimuli to improve upper limb posture during laparoscopic training. Its effectiveness and efficiency were confirmed against supraliminal stimuli transmitted at the conscious level: Subliminal persuasion improved upper limb posture of trainees, with a smaller increase on the overall workload.

  4. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    PubMed

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
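
    A minimal sketch of the underlying idea, data depth extended to curves (a simple band depth; this is not the paper's full curve-boxplot construction), ranking an ensemble of 1D curves and picking the deepest as the median:

```python
# Sketch: simple band depth for an ensemble of 1D curves. A curve is deep if it
# lies inside the band of many pairs of other curves; the deepest curve serves
# as the "median" of the ensemble.
import numpy as np

def band_depth(curves):
    """curves: array of shape (n_curves, n_samples). Returns one depth per curve."""
    n = len(curves)
    depth = np.zeros(n)
    for i in range(n):
        count = 0
        for j in range(n):
            for k in range(j + 1, n):
                if i in (j, k):
                    continue
                lower = np.minimum(curves[j], curves[k])
                upper = np.maximum(curves[j], curves[k])
                if np.all((curves[i] >= lower) & (curves[i] <= upper)):
                    count += 1
        depth[i] = count
    return depth / (0.5 * (n - 1) * (n - 2))    # normalize by number of pairs

t = np.linspace(0, 1, 50)
ensemble = np.array([np.sin(2 * np.pi * t)
                     + 0.1 * np.random.default_rng(s).normal(size=50)
                     for s in range(15)])
depths = band_depth(ensemble)
print("median curve index:", int(np.argmax(depths)))
```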

  5. Age slowing down in detection and visual discrimination under varying presentation times.

    PubMed

    Moret-Tatay, Carmen; Lemus-Zúñiga, Lenin-Guillermo; Tortosa, Diana Abad; Gamermann, Daniel; Vázquez-Martínez, Andrea; Navarro-Pardo, Esperanza; Conejero, J Alberto

    2017-08-01

    Reaction time has been described as a measure of perception, decision making, and other cognitive processes. The aim of this work is to examine age-related changes in executive functions in terms of demand load under varying presentation times. Two tasks, signal detection and visual discrimination, were performed by young and older university students. Furthermore, the response time distributions were characterized by an ex-Gaussian fit. The results indicated that the older participants were slower than the younger ones in both signal detection and discrimination. Moreover, the differences between the two processes were larger for the older participants, who also showed a higher distribution average except at the shortest and longest presentation times. The results suggest a general age-related slowing in both tasks across presentation times, again with the exception of the shortest and longest presentation times. If these parameters are understood to reflect executive functions, these findings are consistent with the common view that age-related cognitive deficits involve a decline in this function. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
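
    Assuming SciPy's exponnorm parameterization, an ex-Gaussian fit of a reaction-time sample and the recovery of the usual mu/sigma/tau parameters can be sketched as follows (this shows the general technique, not the authors' exact analysis):

```python
# Sketch: fit an ex-Gaussian (Gaussian + exponential tail) to simulated reaction
# times using scipy's exponnorm distribution, then convert back to mu/sigma/tau.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Simulated RTs in ms: Gaussian (mu=450, sigma=40) plus an exponential tail (tau=120).
rts = rng.normal(450, 40, size=500) + rng.exponential(120, size=500)

K, loc, scale = stats.exponnorm.fit(rts)   # exponnorm shape K = tau / sigma
mu, sigma, tau = loc, scale, K * scale
print(f"mu={mu:.1f} ms, sigma={sigma:.1f} ms, tau={tau:.1f} ms")
```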

  6. The Limits of Shape Recognition following Late Emergence from Blindness.

    PubMed

    McKyton, Ayelet; Ben-Zion, Itay; Doron, Ravid; Zohary, Ehud

    2015-09-21

    Visual object recognition develops during the first years of life. But what if one is deprived of vision during early post-natal development? Shape information is extracted using both low-level cues (e.g., intensity- or color-based contours) and more complex algorithms that are largely based on inference assumptions (e.g., illumination is from above, objects are often partially occluded). Previous studies, testing visual acuity using a 2D shape-identification task (Lea symbols), indicate that contour-based shape recognition can improve with visual experience, even after years of visual deprivation from birth. We hypothesized that this may generalize to other low-level cues (shape, size, and color), but not to mid-level functions (e.g., 3D shape from shading) that might require prior visual knowledge. To that end, we studied a unique group of subjects in Ethiopia that suffered from an early manifestation of dense bilateral cataracts and were surgically treated only years later. Our results suggest that the newly sighted rapidly acquire the ability to recognize an odd element within an array, on the basis of color, size, or shape differences. However, they are generally unable to find the odd shape on the basis of illusory contours, shading, or occlusion relationships. Little recovery of these mid-level functions is seen within 1 year post-operation. We find that visual performance using low-level cues is relatively robust to prolonged deprivation from birth. However, the use of pictorial depth cues to infer 3D structure from the 2D retinal image is highly susceptible to early and prolonged visual deprivation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.

    PubMed

    Byers, Anna; Serences, John T

    2014-09-01

    Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.

  8. Alteration of the microsaccadic velocity-amplitude main sequence relationship after visual transients: implications for models of saccade control

    PubMed Central

    Chen, Chih-Yang; Tian, Xiaoguang; Idrees, Saad; Münch, Thomas A.

    2017-01-01

    Microsaccades occur during gaze fixation to correct for miniscule foveal motor errors. The mechanisms governing such fine oculomotor control are still not fully understood. In this study, we explored microsaccade control by analyzing the impacts of transient visual stimuli on these movements’ kinematics. We found that such kinematics can be altered in systematic ways depending on the timing and spatial geometry of visual transients relative to the movement goals. In two male rhesus macaques, we presented peripheral or foveal visual transients during an otherwise stable period of fixation. Such transients resulted in well-known reductions in microsaccade frequency, and our goal was to investigate whether microsaccade kinematics would additionally be altered. We found that both microsaccade timing and amplitude were modulated by the visual transients, and in predictable manners by these transients’ timing and geometry. Interestingly, modulations in the peak velocity of the same movements were not proportional to the observed amplitude modulations, suggesting a violation of the well-known “main sequence” relationship between microsaccade amplitude and peak velocity. We hypothesize that visual stimulation during movement preparation affects not only the saccadic “Go” system driving eye movements but also a “Pause” system inhibiting them. If the Pause system happens to be already turned off despite the new visual input, movement kinematics can be altered by the readout of additional visually evoked spikes in the Go system coding for the flash location. Our results demonstrate precise control over individual microscopic saccades and provide testable hypotheses for mechanisms of saccade control in general. NEW & NOTEWORTHY Microsaccadic eye movements play an important role in several aspects of visual perception and cognition. However, the mechanisms for microsaccade control are still not fully understood. We found that microsaccade kinematics can be altered in a systematic manner by visual transients, revealing a previously unappreciated and exquisite level of control by the oculomotor system of even the smallest saccades. Our results suggest precise temporal interaction between visual, motor, and inhibitory signals in microsaccade control. PMID:28202573

  9. GPU-Based Interactive Exploration and Online Probability Maps Calculation for Visualizing Assimilated Ocean Ensembles Data

    NASA Astrophysics Data System (ADS)

    Hoteit, I.; Hollt, T.; Hadwiger, M.; Knio, O. M.; Gopalakrishnan, G.; Zhan, P.

    2016-02-01

    Ocean reanalyses and forecasts are nowadays generated by combining ensemble simulations with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general, a single possible path is not of interest, but only the probability that any point in space might be reached by a particle at some point in time. We present an approach using probability-weighted piecewise particle trajectories to allow for interactive probability mapping. This is achieved by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next cycle. As a result we lose the ability to track individual particles, but can create probability maps for any desired seed at interactive rates. The technique is integrated in an interactive visualization system that enables the visual analysis of the particle traces side by side with other forecast variables, such as the sea surface height, and their corresponding behavior over time. By harnessing the power of modern graphics processing units (GPUs) for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real-time, view specific parameter settings or simulation models and move between different spatial or temporal regions without delay. In addition our system provides advanced visualizations to highlight the uncertainty, or show the complete distribution of the simulations at user-defined positions over the complete time series of the domain.
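
    A much-simplified sketch of the binning idea (a 1D domain with assumed displacement fields; not the authors' GPU implementation): after each cycle, probability mass landing in the same bin is merged, so the cost per cycle stays bounded.

```python
# Sketch: probability-weighted piecewise tracing on a 1D binned domain. Each
# assimilation cycle advects the per-bin probability mass with every ensemble
# member's displacement field and merges mass that lands in the same bin.
import numpy as np

def advance_probability_map(prob, member_displacements, n_bins):
    """prob: probability mass per spatial bin. member_displacements: one row per
    ensemble member giving the displacement (in bins) out of every source bin."""
    new_prob = np.zeros(n_bins)
    n_members = len(member_displacements)
    for member in member_displacements:            # each member equally weighted
        for src_bin, p in enumerate(prob):
            if p == 0.0:
                continue
            dst_bin = int(np.clip(src_bin + member[src_bin], 0, n_bins - 1))
            new_prob[dst_bin] += p / n_members     # merge mass landing in the same bin
    return new_prob

n_bins = 100
prob = np.zeros(n_bins)
prob[10] = 1.0                                     # seed a particle at bin 10
rng = np.random.default_rng(5)
for cycle in range(5):                             # five assimilation cycles
    displacements = rng.integers(1, 5, size=(20, n_bins))   # 20-member ensemble, made up
    prob = advance_probability_map(prob, displacements, n_bins)
print(int(prob.argmax()), round(float(prob.sum()), 6))      # total mass stays 1
```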

  10. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimization methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
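
    As a toy illustration of a metric that maps an image difference to a probability of detection (all constants and the pooling rule are assumptions, not the Ames discrimination model):

```python
# Toy sketch: pool a contrast-weighted difference between images A and B and map
# it to the probability that an observer reports them as different. Constants and
# the psychometric function are illustrative assumptions.
import numpy as np

def discrimination_probability(img_a, img_b, sensitivity=8.0, beta=2.0):
    # Simple luminance-contrast difference, Minkowski-pooled over pixels.
    diff = (img_a - img_b) / (img_a.mean() + 1e-9)
    pooled = np.power(np.mean(np.abs(diff) ** 4), 1.0 / 4.0)
    # Weibull-style psychometric function from pooled difference to P("different"),
    # ranging from 0.5 (guessing) to 1.0.
    return 1.0 - 0.5 * np.exp(-(sensitivity * pooled) ** beta)

rng = np.random.default_rng(6)
A = rng.random((64, 64))
B = A + 0.02 * rng.standard_normal((64, 64))   # B = A plus a faint intrusion
print(round(discrimination_probability(A, B), 3))
```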

  11. Explanatory and illustrative visualization of special and general relativity.

    PubMed

    Weiskopf, Daniel; Borchers, Marc; Ertl, Thomas; Falk, Martin; Fechtig, Oliver; Frank, Regine; Grave, Frank; King, Andreas; Kraus, Ute; Müller, Thomas; Nollert, Hans-Peter; Rica Mendez, Isabel; Ruder, Hanns; Schafhitzel, Tobias; Schär, Sonja; Zahn, Corvin; Zatloukal, Michael

    2006-01-01

    This paper describes methods for explanatory and illustrative visualizations used to communicate aspects of Einstein's theories of special and general relativity, their geometric structure, and of the related fields of cosmology and astrophysics. Our illustrations target a general audience of laypersons interested in relativity. We discuss visualization strategies, motivated by physics education and the didactics of mathematics, and describe what kind of visualization methods have proven to be useful for different types of media, such as still images in popular science magazines, film contributions to TV shows, oral presentations, or interactive museum installations. Our primary approach is to adopt an egocentric point of view: The recipients of a visualization participate in a visually enriched thought experiment that allows them to experience or explore a relativistic scenario. In addition, we often combine egocentric visualizations with more abstract illustrations based on an outside view in order to provide several presentations of the same phenomenon. Although our visualization tools often build upon existing methods and implementations, the underlying techniques have been improved by several novel technical contributions like image-based special relativistic rendering on GPUs, special relativistic 4D ray tracing for accelerating scene objects, an extension of general relativistic ray tracing to manifolds described by multiple charts, GPU-based interactive visualization of gravitational light deflection, as well as planetary terrain rendering. The usefulness and effectiveness of our visualizations are demonstrated by reporting on experiences with, and feedback from, recipients of visualizations and collaborators.

  12. A comparison of ordinary fuzzy and intuitionistic fuzzy approaches in visualizing the image of flat electroencephalography

    NASA Astrophysics Data System (ADS)

    Zenian, Suzelawati; Ahmad, Tahir; Idris, Amidora

    2017-09-01

    Medical imaging is a subfield of image processing that deals with medical images. It is crucial for visualizing body parts in a non-invasive way using appropriate image processing techniques. Generally, image processing is used to enhance the visual appearance of images for further interpretation. However, the pixel values of an image may not be precise, as uncertainty arises within the gray values of an image due to several factors. In this paper, the input and output images of Flat Electroencephalography (fEEG) of an epileptic patient at varied times are presented. Furthermore, ordinary fuzzy and intuitionistic fuzzy approaches are applied to the input images and the results of the two approaches are compared.
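    A generic sketch of the two enhancement ideas on a grayscale array, not the authors' fEEG pipeline: ordinary fuzzy enhancement uses a membership function plus the classical intensification operator, while the intuitionistic variant adds a non-membership value and a hesitation degree. The Sugeno-type negation, the midpoint defuzzification, and the parameter lam are illustrative assumptions.

```python
import numpy as np

def ordinary_fuzzy_enhance(img):
    """Fuzzify grey levels, apply the classical intensification (INT)
    operator, and map back to the original grey-level range."""
    g = img.astype(float)
    gmin, gmax = g.min(), g.max()
    mu = (g - gmin) / (gmax - gmin + 1e-12)           # membership in [0, 1]
    mu_int = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)
    return mu_int * (gmax - gmin) + gmin

def intuitionistic_fuzzy_enhance(img, lam=1.5):
    """Same idea, but each pixel also carries a non-membership value
    (Sugeno-type negation) and a hesitation degree; the defuzzified
    membership is taken at the midpoint of the hesitation interval."""
    g = img.astype(float)
    gmin, gmax = g.min(), g.max()
    mu = (g - gmin) / (gmax - gmin + 1e-12)
    nu = (1 - mu) / (1 + lam * mu)                    # non-membership
    pi = 1 - mu - nu                                  # hesitation degree
    mu_star = np.clip(mu + 0.5 * pi, 0, 1)            # midpoint defuzzification
    return mu_star * (gmax - gmin) + gmin
```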

  13. Massively parallel neural circuits for stereoscopic color vision: encoding, decoding and identification.

    PubMed

    Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin

    2015-03-01

    Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.
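    To make the time-encoding idea concrete, here is a scalar sketch of an ideal integrate-and-fire time encoding machine for a one-dimensional signal; it is not the multi-channel Color Video TEM/TDM of the paper, and the bias, threshold, and integration constant are illustrative values only.

```python
import numpy as np

def iaf_time_encode(u, dt, bias=1.2, kappa=1.0, delta=0.05):
    """Encode a bounded signal u(t) into spike times with an ideal
    integrate-and-fire neuron: integrate (bias + u)/kappa until the
    threshold delta is reached, emit a spike, and reset."""
    spikes, y = [], 0.0
    for k, uk in enumerate(u):
        y += (bias + uk) / kappa * dt
        if y >= delta:
            spikes.append(k * dt)
            y -= delta
    return np.array(spikes)

# Example: encode a slow sinusoid sampled at 1 kHz
t = np.arange(0, 0.2, 1e-3)
u = 0.5 * np.sin(2 * np.pi * 10 * t)
print(len(iaf_time_encode(u, 1e-3)), "spikes")
```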

  14. The effects of alphabet and expertise on letter perception

    PubMed Central

    Wiley, Robert W.; Wilson, Colin; Rapp, Brenda

    2016-01-01

    Long-standing questions in human perception concern the nature of the visual features that underlie letter recognition and the extent to which the visual processing of letters is affected by differences in alphabets and levels of viewer expertise. We examined these issues in a novel approach using a same-different judgment task on pairs of letters from the Arabic alphabet with two participant groups—one with no prior exposure to Arabic and one with reading proficiency. Hierarchical clustering and linear mixed-effects modeling of reaction times and accuracy provide evidence that both the specific characteristics of the alphabet and observers’ previous experience with it affect how letters are perceived and visually processed. The findings of this research further our understanding of the multiple factors that affect letter perception and support the view of a visual system that dynamically adjusts its weighting of visual features as expert readers come to more efficiently and effectively discriminate the letters of the specific alphabet they are viewing. PMID:26913778

  15. Threat captures attention but does not affect learning of contextual regularities.

    PubMed

    Yamaguchi, Motonori; Harwood, Sarah L

    2017-04-01

    Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.

  16. Comparison of visual survey and seining methods for estimating abundance of an endangered, benthic stream fish

    USGS Publications Warehouse

    Jordan, F.; Jelks, H.L.; Bortone, S.A.; Dorazio, R.M.

    2008-01-01

    We compared visual survey and seining methods for estimating abundance of endangered Okaloosa darters, Etheostoma okaloosae, in 12 replicate stream reaches during August 2001. For each 20-m stream reach, two divers systematically located and marked the position of darters and then a second crew of three to five people came through with a small-mesh seine and exhaustively sampled the same area. Visual surveys required little extra time to complete. Visual counts (24.2 ± 12.0; mean ± one SD) considerably exceeded seine captures (7.4 ± 4.8), and counts from the two methods were uncorrelated. Visual surveys, but not seines, detected the presence of Okaloosa darters at one site with low population densities. In 2003, we performed a depletion removal study in 10 replicate stream reaches to assess the accuracy of the visual survey method. Visual surveys detected 59% of Okaloosa darters present, and visual counts and removal estimates were positively correlated. Taken together, our comparisons indicate that visual surveys more accurately and precisely estimate abundance of Okaloosa darters than seining and more reliably detect presence at low population densities. We recommend evaluation of visual survey methods when designing programs to monitor abundance of benthic fishes in clear streams, especially for threatened and endangered species that may be sensitive to handling and habitat disturbance. © 2007 Springer Science+Business Media, Inc.

  17. Automated Visual Inspection Of Integrated Circuits

    NASA Astrophysics Data System (ADS)

    Noppen, G.; Oosterlinck, Andre J.

    1989-07-01

    One of the major application fields of image processing techniques is visual inspection. For a number of reasons, the automated visual inspection of integrated circuits (ICs) has drawn a lot of attention: their very strict design makes them very suitable for automated inspection; there is already considerable experience with the comparable printed circuit board (PCB) and mask inspection; the mechanical handling of wafers and dice is already an established technology; military and medical ICs should be 100% fail-proof; and IC inspection gives a high and almost immediate payback. In this paper we try to give an outline of the problems involved in IC inspection, and of the algorithms and methods used to overcome these problems. We do not go into detail, but aim to give a general understanding. Our attention goes to the following topics: an overview of the inspection process, with an emphasis on the second visual inspection; the problems encountered in IC inspection, as opposed to the comparable PCB and mask inspection; the image acquisition devices that can be used to obtain 'inspectable' images; a general overview of the algorithms that can be used; and a short description of the algorithms developed at the ESAT-MI2 division of the Katholieke Universiteit Leuven.

  18. Visually defining and querying consistent multi-granular clinical temporal abstractions.

    PubMed

    Combi, Carlo; Oliboni, Barbara

    2012-02-01

    The main goal of this work is to propose a framework for the visual specification and query of consistent multi-granular clinical temporal abstractions. We focus on the issue of querying patient clinical information by visually defining and composing temporal abstractions, i.e., high-level patterns derived from time-stamped raw data. In particular, we focus on the visual specification of consistent temporal abstractions with different granularities and on the visual composition of different temporal abstractions for querying clinical databases. Temporal abstractions on clinical data provide a concise and high-level description of temporal raw data, and a suitable way to support decision making. Granularities define partitions on the time line and allow one to represent time and, thus, temporal clinical information at different levels of detail, according to the requirements coming from the represented clinical domain. The visual representation of temporal information has been studied for several years in clinical domains. Proposed visualization techniques must be easy and quick to understand, and could benefit from visual metaphors that do not lead to ambiguous interpretations. Recently, physical metaphors such as strips, springs, weights, and wires have been proposed and evaluated on clinical users for the specification of temporal clinical abstractions. Visual approaches to Boolean queries have been considered in recent years, confirming that visual support for the specification of complex Boolean queries is both an important and difficult research topic. We propose and describe a visual language for the definition of temporal abstractions based on a set of intuitive metaphors (striped wall, plastered wall, brick wall), allowing the clinician to use different granularities. A new algorithm, underlying the visual language, allows the physician to specify only consistent abstractions, i.e., abstractions not containing contradictory conditions on the component abstractions. Moreover, we propose a visual query language where different temporal abstractions can be composed to build complex queries: temporal abstractions are visually connected through the usual logical connectives AND, OR, and NOT. The proposed visual language allows one to simply define temporal abstractions by using intuitive metaphors, and to specify temporal intervals related to abstractions by using different temporal granularities. The physician can interact with the designed and implemented tool by point-and-click selections, and can visually compose queries involving several temporal abstractions. The evaluation of the proposed granularity-related metaphors consisted of two parts: (i) solving 30 interpretation exercises by choosing the correct interpretation of a given screenshot representing a possible scenario, and (ii) solving a complex exercise by visually specifying through the interface a scenario described only in natural language. The exercises were done by 13 subjects. The percentages of correct answers to the interpretation exercises differed slightly across the considered metaphors (54.4% striped wall, 73.3% plastered wall, 61% brick wall, and 61% no wall), but post hoc statistical analysis on means confirmed that the differences were not statistically significant. The results of the user satisfaction questionnaire related to the evaluation of the proposed granularity-related metaphors confirmed that there was no preference for any one of them.
The evaluation of the proposed logical notation consisted of two parts: (i) solving five interpretation exercises, each providing a screenshot representing a possible scenario and three different possible interpretations, of which only one was correct, and (ii) solving five exercises by visually defining through the interface a scenario described only in natural language. Exercises had increasing difficulty. The evaluation involved a total of 31 subjects. Results from this evaluation phase confirmed the soundness of the proposed solution even in comparison with a well-known proposal based on a tabular query form (the only significant difference was that our proposal requires more time for the training phase: 21 min versus 14 min). In this work we have considered the issue of visually composing and querying temporal clinical patient data. In this context we have proposed a visual framework for the specification of consistent temporal abstractions with different granularities and for the visual composition of different temporal abstractions to build (possibly) complex queries on clinical databases. A new algorithm has been proposed to check the consistency of the specified granular abstraction. From the evaluation of the proposed metaphors and interfaces and from the comparison of the visual query language with a well-known visual method for Boolean queries, the soundness of the overall system has been confirmed; moreover, pros and cons and possible improvements emerged from the comparison of different visual metaphors and solutions. Copyright © 2011 Elsevier B.V. All rights reserved.
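    As a loose illustration of composing abstractions with AND, OR, and NOT and rejecting contradictory combinations, the following Python sketch works on hypothetical temperature readings; it is not the metaphor-based interface or the consistency algorithm described in the record above.

```python
from datetime import datetime, timedelta

# Toy time-stamped raw data: (timestamp, body temperature in degrees C)
readings = [
    (datetime(2024, 1, 1, 8) + timedelta(hours=6 * i), temp)
    for i, temp in enumerate([36.8, 38.4, 38.6, 37.1, 39.0, 36.9])
]

def abstraction(predicate):
    """Indices of readings satisfying a condition -- a crude stand-in for
    a temporal abstraction derived from time-stamped raw data."""
    return {i for i, (_, value) in enumerate(readings) if predicate(value)}

fever = abstraction(lambda v: v >= 38.0)
normal = abstraction(lambda v: v < 37.5)

# Boolean composition via set operations: AND, OR, NOT
all_idx = set(range(len(readings)))
query = fever | (all_idx - normal)        # fever OR NOT normal
print(sorted(query))

# A simple consistency check: an abstraction built as the conjunction of
# contradictory conditions (fever AND normal) can never match anything.
print("inconsistent" if not (fever & normal) else "consistent")
```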

  19. Synchronization of spontaneous eyeblinks while viewing video stories

    PubMed Central

    Nakano, Tamami; Yamamoto, Yoshiharu; Kitajo, Keiichi; Takahashi, Toshimitsu; Kitazawa, Shigeru

    2009-01-01

    Blinks are generally suppressed during a task that requires visual attention and tend to occur immediately before or after the task when the timing of its onset and offset are explicitly given. During the viewing of video stories, blinks are expected to occur at explicit breaks such as scene changes. However, given that the scene length is unpredictable, there should also be appropriate timing for blinking within a scene to prevent temporal loss of critical visual information. Here, we show that spontaneous blinks were highly synchronized between and within subjects when they viewed the same short video stories, but were not explicitly tied to the scene breaks. Synchronized blinks occurred during scenes that required less attention such as at the conclusion of an action, during the absence of the main character, during a long shot and during repeated presentations of a similar scene. In contrast, blink synchronization was not observed when subjects viewed a background video or when they listened to a story read aloud. The results suggest that humans share a mechanism for controlling the timing of blinks that searches for an implicit timing that is appropriate to minimize the chance of losing critical information while viewing a stream of visual events. PMID:19640888

  20. Habitual wearers of colored lenses adapt more rapidly to the color changes the lenses produce.

    PubMed

    Engel, Stephen A; Wilkins, Arnold J; Mand, Shivraj; Helwig, Nathaniel E; Allen, Peter M

    2016-08-01

    The visual system continuously adapts to the environment, allowing it to perform optimally in a changing visual world. One large change occurs every time one takes off or puts on a pair of spectacles. It would be advantageous for the visual system to learn to adapt particularly rapidly to such large, commonly occurring events, but whether it can do so remains unknown. Here, we tested whether people who routinely wear spectacles with colored lenses increase how rapidly they adapt to the color shifts their lenses produce. Adaptation to a global color shift causes the appearance of a test color to change. We measured changes in the color that appeared "unique yellow", that is neither reddish nor greenish, as subjects donned and removed their spectacles. Nine habitual wearers and nine age-matched control subjects judged the color of a small monochromatic test light presented with a large, uniform, whitish surround every 5s. Red lenses shifted unique yellow to more reddish colors (longer wavelengths), and greenish lenses shifted it to more greenish colors (shorter wavelengths), consistent with adaptation "normalizing" the appearance of the world. In controls, the time course of this adaptation contained a large, rapid component and a smaller gradual one, in agreement with prior results. Critically, in habitual wearers the rapid component was significantly larger, and the gradual component significantly smaller than in controls. The total amount of adaptation was also larger in habitual wearers than in controls. These data suggest strongly that the visual system adapts with increasing rapidity and strength as environments are encountered repeatedly over time. An additional unexpected finding was that baseline unique yellow shifted in a direction opposite to that produced by the habitually worn lenses. Overall, our results represent one of the first formal reports that adjusting to putting on or taking off spectacles becomes easier over time, and may have important implications for clinical management. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  1. Cortical Neural Synchronization Underlies Primary Visual Consciousness of Qualia: Evidence from Event-Related Potentials

    PubMed Central

    Babiloni, Claudio; Marzano, Nicola; Soricelli, Andrea; Cordone, Susanna; Millán-Calenti, José Carlos; Del Percio, Claudio; Buján, Ana

    2016-01-01

    This article reviews three experiments on event-related potentials (ERPs) testing the hypothesis that primary visual consciousness (stimulus self-report) is related to enhanced cortical neural synchronization as a function of stimulus features. ERP peak latency and sources were compared between “seen” trials and “not seen” trials, respectively related and unrelated to the primary visual consciousness. Three salient features of visual stimuli were considered (visuospatial, emotional face expression, and written words). Results showed the typical visual ERP components in both “seen” and “not seen” trials. There was no statistical difference in the ERP peak latencies between the “seen” and “not seen” trials, suggesting a similar timing of the cortical neural synchronization regardless the primary visual consciousness. In contrast, ERP sources showed differences between “seen” and “not seen” trials. For the visuospatial stimuli, the primary consciousness was related to higher activity in dorsal occipital and parietal sources at about 400 ms post-stimulus. For the emotional face expressions, there was greater activity in parietal and frontal sources at about 180 ms post-stimulus. For the written letters, there was higher activity in occipital, parietal and temporal sources at about 230 ms post-stimulus. These results hint that primary visual consciousness is associated with an enhanced cortical neural synchronization having entirely different spatiotemporal characteristics as a function of the features of the visual stimuli and possibly, the relative qualia (i.e., visuospatial, face expression, and words). In this framework, the dorsal visual stream may be synchronized in association with the primary consciousness of visuospatial and emotional face contents. Analogously, both dorsal and ventral visual streams may be synchronized in association with the primary consciousness of linguistic contents. In this line of reasoning, the ensemble of the cortical neural networks underpinning the single visual features would constitute a sort of multi-dimensional palette of colors, shapes, regions of the visual field, movements, emotional face expressions, and words. The synchronization of one or more of these cortical neural networks, each with its peculiar timing, would produce the primary consciousness of one or more of the visual features of the scene. PMID:27445750

  2. A Novel Approach to the Dissection of the Human Knee

    ERIC Educational Resources Information Center

    Clemente, F. Richard; Fabrizio, Philip A.; Shumaker, Michael

    2009-01-01

    The knee is one of the most frequently injured joints of the human body with injuries affecting the general population and the athletic population of many age groups. Dissection procedures for the knee joint typically do not allow unobstructed visualization of the anterior cruciate or posterior cruciate ligaments without sacrificing the collateral…

  3. Infant Information Processing in Relation to Six-Year Cognitive Outcomes.

    ERIC Educational Resources Information Center

    Rose, Susan A.; And Others

    1992-01-01

    Infants' visual recognition memory (VRM) at seven months was associated with their general intelligence, language proficiency, reading and quantitative skills, and perceptual organization at six years. Infants' VRM, object permanence, and cross-modal transfer of perceptions at one year were related to their IQ and several outcomes at six years.…

  4. 14 CFR 129.17 - Aircraft communication and navigation equipment for operations under IFR or over the top.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Aircraft communication and navigation....S.-REGISTERED AIRCRAFT ENGAGED IN COMMON CARRIAGE General § 129.17 Aircraft communication and... accuracy required for ATC; (ii) One marker beacon receiver providing visual and aural signals; and (iii...

  5. Genetic and Environmental Basis in Phenotype Correlation Between Physical Function and Cognition in Aging Chinese Twins.

    PubMed

    Xu, Chunsheng; Zhang, Dongfeng; Tian, Xiaocao; Wu, Yili; Pang, Zengchang; Li, Shuxia; Tan, Qihua

    2017-02-01

    Although the correlation between cognition and physical function has been well studied in the general population, the genetic and environmental nature of the correlation has rarely been investigated. We conducted a classical twin analysis on cognitive and physical function, including forced expiratory volume in one second (FEV1), forced vital capacity (FVC), handgrip strength, five-times-sit-to-stand test (FTSST), near visual acuity, and number of teeth lost in 379 complete twin pairs. Bivariate twin models were fitted to estimate the genetic and environmental correlation between physical and cognitive function. Bivariate analysis showed mildly positive genetic correlations between cognition and FEV1, r_G = 0.23 [95% CI: 0.03, 0.62], as well as FVC, r_G = 0.35 [95% CI: 0.06, 1.00]. We found that FTSST and cognition presented a very high common environmental correlation, r_C = -1.00 [95% CI: -1.00, -0.57], and a low but significant unique environmental correlation, r_E = -0.11 [95% CI: -0.22, -0.01], all in the negative direction. Meanwhile, near visual acuity and cognition also showed a unique environmental correlation, r_E = 0.16 [95% CI: 0.03, 0.27]. We found no significant genetic correlation for cognition with handgrip strength, FTSST, near visual acuity, and number of teeth lost. Cognitive function was genetically related to pulmonary function. The FTSST and cognition shared almost the same common environmental factors but only part of the unique environmental factors, both with negative correlation. In contrast, near visual acuity and cognition may positively share part of the unique environmental factors.

  6. Longitudinal and Cross-Sectional Analyses of Visual Field Progression in Participants of the Ocular Hypertension Treatment Study (OHTS)

    PubMed Central

    Chauhan, Balwantray C; Keltner, John L; Cello, Kim E; Johnson, Chris A; Anderson, Douglas R; Gordon, Mae O; Kass, Michael A

    2014-01-01

    Purpose Visual field progression can be determined by evaluating the visual field by serial examinations (longitudinal analysis), or by a change in classification derived from comparison to age-matched normal data in single examinations (cross-sectional analysis). We determined the agreement between these two approaches in data from the Ocular Hypertension Treatment Study (OHTS). Methods Visual field data from 3088 eyes of 1570 OHTS participants (median follow-up 7 yrs, 15 tests with static automated perimetry) were analysed. Longitudinal analyses were performed with change probability with total and pattern deviation, and cross-sectional analysis with Glaucoma Hemifield Test, Corrected Pattern Standard Deviation, and Mean Deviation. The rates of Mean Deviation and General Height change were compared to estimate the degree of diffuse loss in emerging glaucoma. Results The agreement on progression in longitudinal and cross-sectional analyses ranged from 50% to 61% and remained nearly constant across a wide range of criteria. In contrast, the agreement on absence of progression ranged from 97% to 99.7%, being highest for the stricter criteria. Analyses of pattern deviation were more conservative than total deviation, with a 3 to 5 times lesser incidence of progression. Most participants developing field loss had both diffuse and focal change. Conclusions Despite considerable overall agreement, between 40 to 50% of eyes identified as having progressed with either longitudinal or cross-sectional analyses were identified with only one of the analyses. Because diffuse change is part of early glaucomatous damage, pattern deviation analyses may underestimate progression in patients with ocular hypertension. PMID:21149774

  7. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE- an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  8. Increasing motivation changes subjective reports of listening effort and choice of coping strategy.

    PubMed

    Picou, Erin M; Ricketts, Todd A

    2014-06-01

    The purpose of this project was to examine the effect of changing motivation on subjective ratings of listening effort and on the likelihood that a listener chooses either a controlling or an avoidance coping strategy. Two experiments were conducted, one with auditory-only (AO) and one with auditory-visual (AV) stimuli, both using the same speech recognition in noise materials. Four signal-to-noise ratios (SNRs) were used, two in each experiment. The two SNRs targeted 80% and 50% correct performance. Motivation was manipulated by either having participants listen carefully to the speech (low motivation), or listen carefully to the speech and then answer quiz questions about the speech (high motivation). Sixteen participants with normal hearing participated in each experiment. Eight randomly selected participants participated in both. Using AO and AV stimuli, motivation generally increased subjective ratings of listening effort and tiredness. In addition, using auditory-visual stimuli, motivation generally increased listeners' willingness to do something to improve the situation, and decreased their willingness to avoid the situation. These results suggest a listener's mental state may influence listening effort and choice of coping strategy.

  9. Visualization and Analysis of Climate Simulation Performance Data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Adamidis, Panagiotis; Behrens, Jörg

    2015-04-01

    Visualization is the key process of transforming abstract (scientific) data into a graphical representation, to aid in the understanding of the information hidden within the data. Climate simulation data sets are typically quite large, time-varying, and consist of many different variables sampled on an underlying grid. A large variety of climate models - and sub-models - exist to simulate various aspects of the climate system. Generally, one is mainly interested in the physical variables produced by the simulation runs, but model developers are also interested in performance data measured along with these simulations. Climate simulation models are carefully developed complex software systems, designed to run in parallel on large HPC systems. An important goal thereby is to utilize the entire hardware as efficiently as possible, that is, to distribute the workload as evenly as possible among the individual components. This is a very challenging task, and detailed performance data, such as timings and cache misses, have to be used to locate and understand performance problems in order to optimize the model implementation. Furthermore, the correlation of performance data to the processes of the application and the sub-domains of the decomposed underlying grid is vital when addressing communication and load imbalance issues. High-resolution climate simulations are carried out on tens to hundreds of thousands of cores, thus yielding a vast amount of profiling data, which cannot be analyzed without appropriate visualization techniques. This PICO presentation displays and discusses the ICON simulation model, which is jointly developed by the Max Planck Institute for Meteorology and the German Weather Service and in partnership with DKRZ. The visualization and analysis of the model's performance data allows us to optimize and fine-tune the model, as well as to understand its execution on the HPC system. We show and discuss our workflow, as well as present new ideas and solutions that greatly aided our understanding. The software employed is based on Avizo Green, ParaView and SimVis, as well as our own software extensions.

  10. Functional network connectivity underlying food processing: disturbed salience and visual processing in overweight and obese adults.

    PubMed

    Kullmann, Stephanie; Pape, Anna-Antonia; Heni, Martin; Ketterer, Caroline; Schick, Fritz; Häring, Hans-Ulrich; Fritsche, Andreas; Preissl, Hubert; Veit, Ralf

    2013-05-01

    In order to adequately explore the neurobiological basis of eating behavior of humans and their changes with body weight, interactions between brain areas or networks need to be investigated. In the current functional magnetic resonance imaging study, we examined the modulating effects of stimulus category (food vs. nonfood), caloric content of food, and body weight on the time course and functional connectivity of 5 brain networks by means of independent component analysis in healthy lean and overweight/obese adults. These functional networks included motor sensory, default-mode, extrastriate visual, temporal visual association, and salience networks. We found an extensive modulation elicited by food stimuli in the 2 visual and salience networks, with a dissociable pattern in the time course and functional connectivity between lean and overweight/obese subjects. Specifically, only in lean subjects, the temporal visual association network was modulated by the stimulus category and the salience network by caloric content, whereas overweight and obese subjects showed a generalized augmented response in the salience network. Furthermore, overweight/obese subjects showed changes in functional connectivity in networks important for object recognition, motivational salience, and executive control. These alterations could potentially lead to top-down deficiencies driving the overconsumption of food in the obese population.

  11. Predicting Visual Distraction Using Driving Performance Data

    PubMed Central

    Kircher, Katja; Ahlstrom, Christer

    2010-01-01

    Behavioral variables are often used as performance indicators (PIs) of visual or internal distraction induced by secondary tasks. The objective of this study is to investigate whether visual distraction can be predicted by driving performance PIs in a naturalistic setting. Visual distraction is here defined by a gaze based real-time distraction detection algorithm called AttenD. Seven drivers used an instrumented vehicle for one month each in a small scale field operational test. For each of the visual distraction events detected by AttenD, seven PIs such as steering wheel reversal rate and throttle hold were calculated. Corresponding data were also calculated for time periods during which the drivers were classified as attentive. For each PI, means between distracted and attentive states were calculated using t-tests for different time-window sizes (2 – 40 s), and the window width with the smallest resulting p-value was selected as optimal. Based on the optimized PIs, logistic regression was used to predict whether the drivers were attentive or distracted. The logistic regression resulted in predictions which were 76 % correct (sensitivity = 77 % and specificity = 76 %). The conclusion is that there is a relationship between behavioral variables and visual distraction, but the relationship is not strong enough to accurately predict visual driver distraction. Instead, behavioral PIs are probably best suited as complementary to eye tracking based algorithms in order to make them more accurate and robust. PMID:21050615
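    A minimal sketch of the classification step described above, predicting attentive versus distracted states from per-window performance indicators with logistic regression; the synthetic features and labels are placeholders for the study's PIs and gaze-based labels, and the window-width optimization via t-tests is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Hypothetical per-window performance indicators (PIs): columns could stand
# for steering wheel reversal rate, throttle hold, etc.; labels are
# 1 = distracted (from a gaze-based detector such as AttenD), 0 = attentive.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 7))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=400) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity:", round(tp / (tp + fn), 2))
print("specificity:", round(tn / (tn + fp), 2))
```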

  12. Real-time computer-based visual feedback improves visual acuity in downbeat nystagmus - a pilot study.

    PubMed

    Teufel, Julian; Bardins, S; Spiegel, Rainer; Kremmyda, O; Schneider, E; Strupp, M; Kalla, R

    2016-01-04

    Patients with downbeat nystagmus syndrome suffer from oscillopsia, which leads to an unstable visual perception and therefore impaired visual acuity. The aim of this study was to use real-time computer-based visual feedback to compensate for the destabilizing slow phase eye movements. The patients were sitting in front of a computer screen with the head fixed on a chin rest. The eye movements were recorded by an eye tracking system (EyeSeeCam®). We tested the visual acuity with a fixed Landolt C (static) and during real-time feedback driven condition (dynamic) in gaze straight ahead and (20°) sideward gaze. In the dynamic condition, the Landolt C moved according to the slow phase eye velocity of the downbeat nystagmus. The Shapiro-Wilk test was used to test for normal distribution and one-way ANOVA for comparison. Ten patients with downbeat nystagmus were included in the study. Median age was 76 years and the median duration of symptoms was 6.3 years (SD +/- 3.1y). The mean slow phase velocity was moderate during gaze straight ahead (1.44°/s, SD +/- 1.18°/s) and increased significantly in sideward gaze (mean left 3.36°/s; right 3.58°/s). In gaze straight ahead, we found no difference between the static and feedback driven condition. In sideward gaze, visual acuity improved in five out of ten subjects during the feedback-driven condition (p = 0.043). This study provides proof of concept that non-invasive real-time computer-based visual feedback compensates for the SPV in DBN. Therefore, real-time visual feedback may be a promising aid for patients suffering from oscillopsia and impaired text reading on screen. Recent technological advances in the area of virtual reality displays might soon render this approach feasible in fully mobile settings.
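    The core of the feedback idea is simply to shift the optotype with the measured slow-phase eye velocity so that it stays roughly stable on the retina. The sketch below is illustrative only and is not the EyeSeeCam-based implementation; sampling rate and velocities are made up.

```python
def compensated_position(x0, slow_phase_velocities, dt):
    """Move the optotype with the measured slow-phase eye velocity (deg/s),
    sampled every dt seconds, so it roughly follows the eye drift."""
    x = x0
    positions = []
    for v in slow_phase_velocities:
        x += v * dt                     # shift target by the eye drift
        positions.append(x)
    return positions

# Example: 3.4 deg/s drift compensated at 60 Hz for one second
print(round(compensated_position(0.0, [3.4] * 60, 1 / 60)[-1], 2))  # ~3.4 deg
```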

  13. Inhibition of Return and Object-based Attentional Selection

    PubMed Central

    List, Alexandra; Robertson, Lynn C.

    2008-01-01

    Visual attention research has revealed that attentional allocation can occur in space- and/or object-based coordinates. Using the direct and elegant design of R. Egly, J. Driver and R. Rafal (1994), we examine whether space- and object-based inhibition of return (IOR) emerge under similar time courses. The present experiments were capable of isolating both space- and object-based effects induced by peripheral and back-to-center cues. They generally support the contention that spatially non-predictive cues are effective in producing space-based IOR at a variety of SOAs, and under a variety of stimulus conditions. Whether facilitatory or inhibitory in direction, the object-based effects occurred over a very different time course than did the space-based effects. Reliable object-based IOR was only found under limited conditions and was tied to the time since the most recent cue (peripheral or central). The finding that object-based effects are generally determined by SOA from the most recent cue may help to resolve discrepancies in the IOR literature. These findings also have implications for the search facilitator role IOR is purported to play in the guidance of visual attention. PMID:18085946

  14. Comparison Between Automatic and Visual Scorings of REM Sleep Without Atonia for the Diagnosis of REM Sleep Behavior Disorder in Parkinson Disease.

    PubMed

    Figorilli, Michela; Ferri, Raffaele; Zibetti, Maurizio; Beudin, Patricia; Puligheddu, Monica; Lopiano, Leonardo; Cicolin, Alessandro; Durif, Frank; Marques, Ana; Fantini, Maria Livia

    2017-02-01

    To compare three different methods, two visual and one automatic, for the quantification of rapid eye movement (REM) sleep without atonia (RSWA) in the diagnosis of REM sleep behavior disorder (RBD) in Parkinson's disease (PD) patients. Sixty-two consecutive patients with idiopathic PD underwent video-polysomnographic recording and showed more than 5 minutes of REM sleep. The electromyogram during REM sleep was analyzed by means of two visual methods (Montréal and SINBAR) and one automatic analysis (REM Atonia Index or RAI). RBD was diagnosed according to standard criteria and a series of diagnostic accuracy measures were calculated for each method, as well as the agreement between them. RBD was diagnosed in 59.7% of patients. The accuracy (85.5%), receiver operating characteristic (ROC) area (0.833) and Cohen's K coefficient (0.688) obtained with RAI were similar to those of the visual parameters. Visual tonic parameters, alone or in combination with phasic activity, showed high values of accuracy (93.5-95.2%), ROC area (0.92-0.94), and Cohen's K (0.862-0.933). Similarly, the agreement between the two visual methods was very high, and the agreement between each visual methods and RAI was substantial. Visual phasic measures alone performed worse than all the other measures. The diagnostic accuracy of RSWA obtained with both visual and automatic methods was high and there was a general agreement between methods. RAI may be used as the first line method to detect RSWA in the diagnosis of RBD in PD, together with the visual inspection of video-recorded behaviors, while the visual analysis of RSWA might be used in doubtful cases. © Sleep Research Society 2016. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  15. The Effects of Computer-Aided Antero-Posterior Forehead Movement on Ratings of Facial Attractiveness

    DTIC Science & Technology

    2015-06-01

    were then digitally manipulated at the soft tissue glabella to simulate forward movement by 2, 4, and 6mm and backward by 2mm. Twenty general dentists ...and twenty laypersons then scored the attractiveness of the photographs using a 0-100mm visual analogue scale. RESULTS: Dentists consistently...selected the original photographs without manipulation as one of the most attractive ones. Compared with laypersons, dentists could differentiate the

  16. Moving stimuli are less effectively masked using traditional continuous flash suppression (CFS) compared to a moving Mondrian mask (MMM): a test case for feature-selective suppression and retinotopic adaptation.

    PubMed

    Moors, Pieter; Wagemans, Johan; de-Wit, Lee

    2014-01-01

    Continuous flash suppression (CFS) is a powerful interocular suppression technique, which is often described as an effective means to reliably suppress stimuli from visual awareness. Suppression through CFS has been assumed to depend upon a reduction in (retinotopically specific) neural adaptation caused by the continual updating of the contents of the visual input to one eye. In this study, we started from the observation that suppressing a moving stimulus through CFS appeared to be more effective when using a mask that was actually more prone to retinotopically specific neural adaptation, but in which the properties of the mask were more similar to those of the to-be-suppressed stimulus. In two experiments, we find that using a moving Mondrian mask (i.e., one that includes motion) is more effective in suppressing a moving stimulus than a regular CFS mask. The observed pattern of results cannot be explained by a simple simulation that computes the degree of retinotopically specific neural adaptation over time, suggesting that this kind of neural adaptation does not play a large role in predicting the differences between conditions in this context. We also find some evidence consistent with the idea that the most effective CFS mask is the one that matches the properties (speed) of the suppressed stimulus. These results question the general importance of retinotopically specific neural adaptation in CFS, and potentially help to explain an implicit trend in the literature to adapt one's CFS mask to match one's to-be-suppressed stimuli. Finally, the results should help to guide the methodological development of future research where continuous suppression of moving stimuli is desired.

  17. Hot Electrons Regain Coherence in Semiconducting Nanowires

    NASA Astrophysics Data System (ADS)

    Reiner, Jonathan; Nayak, Abhay Kumar; Avraham, Nurit; Norris, Andrew; Yan, Binghai; Fulga, Ion Cosma; Kang, Jung-Hyun; Karzig, Toesten; Shtrikman, Hadas; Beidenkopf, Haim

    2017-04-01

    The higher the energy of a particle is above equilibrium, the faster it relaxes because of the growing phase space of available electronic states it can interact with. In the relaxation process, phase coherence is lost, thus limiting high-energy quantum control and manipulation. In one-dimensional systems, high relaxation rates are expected to destabilize electronic quasiparticles. Here, we show that the decoherence induced by relaxation of hot electrons in one-dimensional semiconducting nanowires evolves nonmonotonically with energy such that above a certain threshold hot electrons regain stability with increasing energy. We directly observe this phenomenon by visualizing, for the first time, the interference patterns of the quasi-one-dimensional electrons using scanning tunneling microscopy. We visualize the phase coherence length of the one-dimensional electrons, as well as their phase coherence time, captured by crystallographic Fabry-Pérot resonators. A remarkable agreement with a theoretical model reveals that the nonmonotonic behavior is driven by the unique manner in which one-dimensional hot electrons interact with the cold electrons occupying the Fermi sea. This newly discovered relaxation profile suggests a high-energy regime for operating quantum applications that necessitate extended coherence or long thermalization times, and may stabilize electronic quasiparticles in one dimension.

  18. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task.

    PubMed

    Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary

    2013-01-16

    Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.

  19. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task

    PubMed Central

    Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary

    2013-01-01

    Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time. PMID:23325347

  20. Toward unsupervised outbreak detection through visual perception of new patterns

    PubMed Central

    Lévy, Pierre P; Valleron, Alain-Jacques

    2009-01-01

    Background Statistical algorithms are routinely used to detect outbreaks of well-defined syndromes, such as influenza-like illness. These methods cannot be applied to the detection of emerging diseases for which no preexisting information is available. This paper presents a method aimed at facilitating the detection of outbreaks, when there is no a priori knowledge of the clinical presentation of cases. Methods The method uses a visual representation of the symptoms and diseases coded during a patient consultation according to the International Classification of Primary Care 2nd version (ICPC-2). The surveillance data are transformed into color-coded cells, ranging from white to red, reflecting the increasing frequency of observed signs. They are placed in a graphic reference frame mimicking body anatomy. Simple visual observation of color-change patterns over time, concerning a single code or a combination of codes, enables detection in the setting of interest. Results The method is demonstrated through retrospective analyses of two data sets: description of the patients referred to the hospital by their general practitioners (GPs) participating in the French Sentinel Network and description of patients directly consulting at a hospital emergency department (HED). Informative image color-change alert patterns emerged in both cases: the health consequences of the August 2003 heat wave were visualized with GPs' data (but passed unnoticed with conventional surveillance systems), and the flu epidemics, which are routinely detected by standard statistical techniques, were recognized visually with HED data. Conclusion Using human visual pattern-recognition capacities to detect the onset of unexpected health events implies a convenient image representation of epidemiological surveillance and well-trained "epidemiology watchers". Once these two conditions are met, one could imagine that the epidemiology watchers could signal epidemiological alerts, based on "image walls" presenting the local, regional and/or national surveillance patterns, with specialized field epidemiologists assigned to validate the signals detected. PMID:19515246
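    A small sketch of the white-to-red cell colouring described above, mapping an observed sign frequency onto an RGB colour; the linear scaling and the cut-off at the maximum count are illustrative assumptions, not the paper's exact colour scheme.

```python
def frequency_to_rgb(count, max_count):
    """Map an observed sign frequency onto a white-to-red cell colour."""
    if max_count <= 0:
        return (255, 255, 255)
    ratio = min(count / max_count, 1.0)
    # White (255, 255, 255) fades to pure red (255, 0, 0) as frequency grows
    level = int(round(255 * (1 - ratio)))
    return (255, level, level)

print(frequency_to_rgb(0, 40), frequency_to_rgb(10, 40), frequency_to_rgb(40, 40))
```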

  1. Prospective, observational study comparing automated and visual point-of-care urinalysis in general practice

    PubMed Central

    van Delft, Sanne; Goedhart, Annelijn; Spigt, Mark; van Pinxteren, Bart; de Wit, Niek; Hopstaken, Rogier

    2016-01-01

    Objective Point-of-care testing (POCT) urinalysis might reduce errors in (subjective) reading, registration and communication of test results, and might also improve diagnostic outcome and optimise patient management. Evidence is lacking. In the present study, we have studied the analytical performance of automated urinalysis and visual urinalysis compared with a reference standard in routine general practice. Setting The study was performed in six general practitioner (GP) group practices in the Netherlands. Automated urinalysis was compared with visual urinalysis in these practices. Reference testing was performed in a primary care laboratory (Saltro, Utrecht, The Netherlands). Primary and secondary outcome measures Analytical performance of automated and visual urinalysis compared with the reference laboratory method was the primary outcome measure, analysed by calculating sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) and Cohen's κ coefficient for agreement. Secondary outcome measure was the user-friendliness of the POCT analyser. Results Automated urinalysis by experienced and routinely trained practice assistants in general practice performs as good as visual urinalysis for nitrite, leucocytes and erythrocytes. Agreement for nitrite is high for automated and visual urinalysis. κ's are 0.824 and 0.803 (ranked as very good and good, respectively). Agreement with the central laboratory reference standard for automated and visual urinalysis for leucocytes is rather poor (0.256 for POCT and 0.197 for visual, respectively, ranked as fair and poor). κ's for erythrocytes are higher: 0.517 (automated) and 0.416 (visual), both ranked as moderate. The Urisys 1100 analyser was easy to use and considered to be not prone to flaws. Conclusions Automated urinalysis performed as good as traditional visual urinalysis on reading of nitrite, leucocytes and erythrocytes in routine general practice. Implementation of automated urinalysis in general practice is justified as automation is expected to reduce human errors in patient identification and transcribing of results. PMID:27503860
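    For reference, the accuracy measures reported above can all be derived from a 2x2 table of index-test results against the reference standard. The counts in the example below are invented for illustration and are not the study data.

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and Cohen's kappa from a 2x2
    table comparing a point-of-care result against the reference standard."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    p_obs = (tp + tn) / n
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sens, spec, ppv, npv, kappa

# Illustrative counts only
print([round(v, 3) for v in diagnostic_measures(tp=40, fp=5, fn=8, tn=147)])
```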

  2. Prospective, observational study comparing automated and visual point-of-care urinalysis in general practice.

    PubMed

    van Delft, Sanne; Goedhart, Annelijn; Spigt, Mark; van Pinxteren, Bart; de Wit, Niek; Hopstaken, Rogier

    2016-08-08

    Point-of-care testing (POCT) urinalysis might reduce errors in (subjective) reading, registration and communication of test results, and might also improve diagnostic outcome and optimise patient management. Evidence is lacking. In the present study, we have studied the analytical performance of automated urinalysis and visual urinalysis compared with a reference standard in routine general practice. The study was performed in six general practitioner (GP) group practices in the Netherlands. Automated urinalysis was compared with visual urinalysis in these practices. Reference testing was performed in a primary care laboratory (Saltro, Utrecht, The Netherlands). Analytical performance of automated and visual urinalysis compared with the reference laboratory method was the primary outcome measure, analysed by calculating sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) and Cohen's κ coefficient for agreement. Secondary outcome measure was the user-friendliness of the POCT analyser. Automated urinalysis by experienced and routinely trained practice assistants in general practice performs as good as visual urinalysis for nitrite, leucocytes and erythrocytes. Agreement for nitrite is high for automated and visual urinalysis. κ's are 0.824 and 0.803 (ranked as very good and good, respectively). Agreement with the central laboratory reference standard for automated and visual urinalysis for leucocytes is rather poor (0.256 for POCT and 0.197 for visual, respectively, ranked as fair and poor). κ's for erythrocytes are higher: 0.517 (automated) and 0.416 (visual), both ranked as moderate. The Urisys 1100 analyser was easy to use and considered to be not prone to flaws. Automated urinalysis performed as good as traditional visual urinalysis on reading of nitrite, leucocytes and erythrocytes in routine general practice. Implementation of automated urinalysis in general practice is justified as automation is expected to reduce human errors in patient identification and transcribing of results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  3. Learning GIS and exploring geolocated data with the all-in-one Geolokit toolbox for Google Earth

    NASA Astrophysics Data System (ADS)

    Watlet, A.; Triantafyllou, A.; Bastin, C.

    2016-12-01

    GIS software packages are today's essential tools for gathering and visualizing geological data, applying spatial and temporal analysis and, finally, creating and sharing interactive maps for further investigations in geosciences. Such skills are especially important for students to learn as they go through field trips, sample collections or field experiments. However, there is generally too little time to teach in detail all the aspects of visualizing geolocated geoscientific data. For these purposes, we developed Geolokit: a lightweight freeware dedicated to geodata visualization and written in Python, a high-level, cross-platform programming language. The Geolokit software is accessible through a graphical user interface designed to run in parallel with Google Earth, benefitting from its numerous interactive capabilities. It is designed as a very user-friendly toolbox that allows 'geo-users' to import their raw data (e.g. GPS, sample locations, structural data, field pictures, maps), to use fast data analysis tools and to visualize these in the Google Earth environment using KML code, with no requirement for third-party software except Google Earth itself. Geolokit comes with a large number of geoscience labels, symbols, colours and placemarks and can display several types of geolocated data, including: multi-point datasets; automatically computed contours of multi-point datasets via several interpolation methods; discrete planar and linear structural geology data in 2D or 3D, supporting a large range of structural input formats; clustered stereonets and rose diagrams; 2D cross-sections as vertical sections; georeferenced maps and grids with user-defined coordinates; and field pictures using either geo-tracking metadata from a camera's built-in GPS module or the same-day track of an external GPS. In the end, Geolokit is helpful for quickly visualizing and exploring data without losing too much time in the numerous capabilities of GIS software suites. We are looking for students and teachers to discover all the functionalities of Geolokit. As this project is under development and planned to be open source, we welcome discussions regarding particular needs or ideas, and contributions to the Geolokit project.
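    To show the general idea of exporting geolocated samples to Google Earth via KML from Python, here is a minimal sketch that writes plain KML placemarks; the field names, coordinates and file name are made up, and this is not Geolokit's actual output format.

```python
# Write a handful of geolocated sample points to a KML file that Google
# Earth can open directly.
samples = [
    ("Sample-01", 4.35, 50.41, "granite"),
    ("Sample-02", 4.38, 50.43, "basalt"),
]

placemarks = "\n".join(
    f"  <Placemark>\n"
    f"    <name>{name}</name>\n"
    f"    <description>{lithology}</description>\n"
    f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
    f"  </Placemark>"
    for name, lon, lat, lithology in samples
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    f"<Document>\n{placemarks}\n</Document>\n</kml>\n"
)

with open("samples.kml", "w", encoding="utf-8") as fh:
    fh.write(kml)
```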

  4. Algebraic Reasoning in Solving Mathematical Problem Based on Learning Style

    NASA Astrophysics Data System (ADS)

    Indraswari, N. F.; Budayasa, I. K.; Ekawati, R.

    2018-01-01

    This study aimed to describe the algebraic reasoning of secondary school pupils with different learning styles when solving mathematical problems. The study began with a questionnaire to determine learning styles, followed by a mathematical ability test, in order to select three 8th-grade subjects, one each with a visual, auditory and kinesthetic learning style and with similar mathematical abilities. It then continued with given algebraic problems and interviews. The data were validated using time triangulation. The results showed that, for the pattern-seeking indicator, subjects identified what was known and what was asked based on their observations. The visual and kinesthetic learners represented the known information in a chart, whereas the auditory learner used a table. In addition, they found the elements that make up the pattern and established a relationship between two quantities. For the pattern-recognition indicator, they created conjectures on the relationship between two quantities and proved them. For the generalization indicator, they determined the general rule of the pattern found in each element of the pattern using algebraic symbols and created a mathematical model. The visual and kinesthetic learners expressed the general rule as equations in algebraic symbols, which were used to solve problems, whereas the auditory learner expressed it in a sentence.

  5. Looking to Learn: The Effects of Visual Guidance on Observational Learning of the Golf Swing.

    PubMed

    D'Innocenzo, Giorgia; Gonzalez, Claudia C; Williams, A Mark; Bishop, Daniel T

    2016-01-01

    Skilled performers exhibit more efficient gaze patterns than less-skilled counterparts do and they look more frequently at task-relevant regions than at superfluous ones. We examine whether we may guide novices' gaze towards relevant regions during action observation in order to facilitate their learning of a complex motor skill. In a Pre-test-Post-test examination of changes in their execution of the full golf swing, 21 novices viewed one of three videos at intervention: i) a skilled golfer performing 10 swings (Free Viewing, FV); ii) the same video with transient colour cues superimposed to highlight key features of the setup (Visual Guidance; VG); or iii) a History of Golf video (Control). Participants in the visual guidance group spent significantly more time looking at cued areas than did the other two groups, a phenomenon that persisted after the cues had been removed. Moreover, the visual guidance group improved their swing execution at Post-test and on a Retention test one week later. Our results suggest that visual guidance to cued areas during observational learning of complex motor skills may accelerate acquisition of the skill.

  6. Looking to Learn: The Effects of Visual Guidance on Observational Learning of the Golf Swing

    PubMed Central

    Gonzalez, Claudia C.; Williams, A. Mark

    2016-01-01

    Skilled performers exhibit more efficient gaze patterns than less-skilled counterparts do and they look more frequently at task-relevant regions than at superfluous ones. We examine whether we may guide novices’ gaze towards relevant regions during action observation in order to facilitate their learning of a complex motor skill. In a Pre-test-Post-test examination of changes in their execution of the full golf swing, 21 novices viewed one of three videos at intervention: i) a skilled golfer performing 10 swings (Free Viewing, FV); ii) the same video with transient colour cues superimposed to highlight key features of the setup (Visual Guidance; VG); or iii) a History of Golf video (Control). Participants in the visual guidance group spent significantly more time looking at cued areas than did the other two groups, a phenomenon that persisted after the cues had been removed. Moreover, the visual guidance group improved their swing execution at Post-test and on a Retention test one week later. Our results suggest that visual guidance to cued areas during observational learning of complex motor skills may accelerate acquisition of the skill. PMID:27224057

  7. WELDSMART: A vision-based expert system for quality control

    NASA Technical Reports Server (NTRS)

    Andersen, Kristinn; Barnett, Robert Joel; Springfield, James F.; Cook, George E.

    1992-01-01

    This work was aimed at exploring means for utilizing computer technology in quality inspection and evaluation. Inspection of metallic welds was selected as the main application for this development and primary emphasis was placed on visual inspection, as opposed to other inspection methods, such as radiographic techniques. Emphasis was placed on methodologies with the potential for use in real-time quality control systems. Because quality evaluation is somewhat subjective, despite various efforts to classify discontinuities and standardize inspection methods, the task of using a computer for both inspection and evaluation was not trivial. The work started out with a review of the various inspection techniques that are used for quality control in welding. Among other observations from this review was the finding that most weld defects result in abnormalities that may be seen by visual inspection. This supports the approach of emphasizing visual inspection for this work. Quality control consists of two phases: (1) identification of weld discontinuities (some of which may be severe enough to be classified as defects), and (2) assessment or evaluation of the weld based on the observed discontinuities. Usually the latter phase results in a pass/fail judgement for the inspected piece. It is the conclusion of this work that the first of the above tasks, identification of discontinuities, is the most challenging one. It calls for sophisticated image processing and image analysis techniques, and frequently ad hoc methods have to be developed to identify specific features in the weld image. The difficulty of this task is generally not due to limited computing power. In most cases it was found that a modest personal computer or workstation could carry out most computations in a reasonably short time period. Rather, the algorithms and methods available for identifying weld discontinuities were in some cases limited. The fact that specific techniques were finally developed and successfully demonstrated to work illustrates that the general approach taken here appears to be promising for commercial development of computerized quality inspection systems. Inspection based on these techniques may be used to supplement or substitute for more elaborate inspection methods, such as x-ray inspection.
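
    The WELDSMART image-analysis routines themselves are not given in this record. Purely as an illustration of the kind of discontinuity-identification step the abstract describes, the following sketch thresholds a grayscale weld image and reports connected anomalous regions using NumPy and SciPy; the threshold, minimum blob size, synthetic test image and library choice are all assumptions, not the system's actual method.

      # Illustrative sketch only -- not the WELDSMART implementation.
      import numpy as np
      from scipy import ndimage

      def find_discontinuities(image, n_sigma=3.0, min_pixels=20):
          """Flag pixels that deviate strongly from the overall weld-bead brightness.

          image      : 2D float array (grayscale weld image)
          n_sigma    : assumed threshold in standard deviations from the mean
          min_pixels : assumed minimum blob size to count as a discontinuity
          """
          mean, std = image.mean(), image.std()
          anomalous = np.abs(image - mean) > n_sigma * std    # simple global threshold
          labels, n_blobs = ndimage.label(anomalous)           # connected components
          # Keep only blobs large enough to be meaningful
          sizes = ndimage.sum(anomalous, labels, range(1, n_blobs + 1))
          keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
          centroids = ndimage.center_of_mass(anomalous, labels, keep) if keep else []
          return list(zip(keep, centroids))

      # Example with synthetic data: a uniform bead with one dark "porosity" spot
      img = np.full((100, 200), 0.8) + 0.01 * np.random.randn(100, 200)
      img[40:48, 90:100] = 0.2
      print(find_discontinuities(img))   # one labeled region near row 44, column 94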

  8. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete.

    PubMed

    Jarick, Michelle; Stewart, Mark T; Smilek, Daniel; Dixon, Michael J

    2013-01-01

    Time-space synaesthetes "see" time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred "auditory" viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the "preferred" auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L showed enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective; however, the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed in relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009).

  9. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete

    PubMed Central

    Jarick, Michelle; Stewart, Mark T.; Smilek, Daniel; Dixon, Michael J.

    2013-01-01

    Time-space synaesthetes “see” time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred “auditory” viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the “preferred” auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L showed enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective; however, the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed in relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009). PMID:24137140

  10. A new measure for the assessment of visual awareness in individuals with tunnel vision.

    PubMed

    AlSaqr, Ali M; Dickinson, Chris M

    2017-01-01

    Individuals with a restricted peripheral visual field or tunnel vision (TV) have problems moving about and avoiding obstacles. Some individuals adapt better than others and some use assistive optical aids, so measurement of the visual field is not sufficient to describe their performance. In the present study, we developed a new clinical test called the 'Assessment of Visual Awareness (AVA)', which can be used to measure detection of peripheral targets. The participants were 20 patients with TV due to retinitis pigmentosa (PTV) and 50 normally sighted participants with simulated tunnel vision (STV) using goggles. In the AVA test, detection times were measured as subjects searched for 24 individually presented, one-degree targets randomly positioned in a 60-degree noise background. Head and eye movements were allowed and the presentation time was unlimited. The test validity was investigated by correlating the detection times with the 'percentage of preferred walking speed' (PPWS) and the 'number of collisions' on an indoor mobility course. In both PTV and STV, detection times had a significant negative correlation with the field of view and a significant positive relation with target location. In STV, detection time was significantly negatively correlated with the PPWS and significantly positively correlated with the collision score on the indoor mobility course. In PTV, the relationship was not statistically significant. No significant difference in STV performance was found when the test was repeated one to two weeks later. The proposed AVA test was sensitive to the field of view and target location. The test is unique in design, quick, simple to deliver and both repeatable and valid. It could be a valuable tool to test different rehabilitation strategies in patients with TV. © 2016 Optometry Australia.

  11. Resource-sharing between internal maintenance and external selection modulates attentional capture by working memory content.

    PubMed

    Kiyonaga, Anastasia; Egner, Tobias

    2014-01-01

    It is unclear why and under what circumstances working memory (WM) and attention interact. Here, we apply the logic of the time-based resource-sharing (TBRS) model of WM (e.g., Barrouillet et al., 2004) to explore the mixed findings of a separate, but related, literature that studies the guidance of visual attention by WM contents. Specifically, we hypothesize that the linkage between WM representations and visual attention is governed by a time-shared cognitive resource that alternately refreshes internal (WM) and selects external (visual attention) information. If this were the case, WM content should guide visual attention (involuntarily), but only when there is time for it to be refreshed in an internal focus of attention. To provide an initial test for this hypothesis, we examined whether the amount of unoccupied time during a WM delay could impact the magnitude of attentional capture by WM contents. Participants were presented with a series of visual search trials while they maintained a WM cue for a delayed-recognition test. WM cues could coincide with the search target, a distracter, or neither. We varied both the number of searches to be performed, and the amount of available time to perform them. Slowing of visual search by a WM-matching distracter, and facilitation by a matching target, were curtailed when the delay was filled with fast-paced (refreshing-preventing) search trials, as was subsequent memory probe accuracy. WM content may, therefore, only capture visual attention when it can be refreshed, suggesting that internal (WM) and external attention demands reciprocally impact one another because they share a limited resource. The TBRS rationale can thus be applied in a novel context to explain why WM contents capture attention, and under what conditions that effect should be observed.

  12. Resource-sharing between internal maintenance and external selection modulates attentional capture by working memory content

    PubMed Central

    Kiyonaga, Anastasia; Egner, Tobias

    2014-01-01

    It is unclear why and under what circumstances working memory (WM) and attention interact. Here, we apply the logic of the time-based resource-sharing (TBRS) model of WM (e.g., Barrouillet et al., 2004) to explore the mixed findings of a separate, but related, literature that studies the guidance of visual attention by WM contents. Specifically, we hypothesize that the linkage between WM representations and visual attention is governed by a time-shared cognitive resource that alternately refreshes internal (WM) and selects external (visual attention) information. If this were the case, WM content should guide visual attention (involuntarily), but only when there is time for it to be refreshed in an internal focus of attention. To provide an initial test for this hypothesis, we examined whether the amount of unoccupied time during a WM delay could impact the magnitude of attentional capture by WM contents. Participants were presented with a series of visual search trials while they maintained a WM cue for a delayed-recognition test. WM cues could coincide with the search target, a distracter, or neither. We varied both the number of searches to be performed, and the amount of available time to perform them. Slowing of visual search by a WM matching distracter—and facilitation by a matching target—were curtailed when the delay was filled with fast-paced (refreshing-preventing) search trials, as was subsequent memory probe accuracy. WM content may, therefore, only capture visual attention when it can be refreshed, suggesting that internal (WM) and external attention demands reciprocally impact one another because they share a limited resource. The TBRS rationale can thus be applied in a novel context to explain why WM contents capture attention, and under what conditions that effect should be observed. PMID:25221499

  13. Using augmented reality to teach and learn biochemistry.

    PubMed

    Vega Garzón, Juan Carlos; Magrini, Marcio Luiz; Galembeck, Eduardo

    2017-09-01

    Understanding metabolism and metabolic pathways constitutes one of the central aims for students of biological sciences. Learning metabolic pathways should be focused on the understanding of general concepts and core principles. New technologies such as Augmented Reality (AR) have shown potential to improve the assimilation of abstract biochemistry concepts because students can manipulate 3D molecules in real time. Here we describe an application named Augmented Reality Metabolic Pathways (ARMET), which allows students to visualize the 3D molecular structure of substrates and products and thus perceive the changes in each molecule. The structural modification of molecules shows students the flow and exchange of compounds and energy through metabolism. © 2017 by The International Union of Biochemistry and Molecular Biology, 45(5):417-420, 2017. © 2017 The International Union of Biochemistry and Molecular Biology.

  14. Visualizing Spatially Varying Distribution Data

    NASA Technical Reports Server (NTRS)

    Kao, David; Luo, Alison; Dungan, Jennifer L.; Pang, Alex; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    The box plot is a compact representation that encodes the minimum, maximum, mean, median, and quartile information of a distribution. In practice, a single box plot is drawn for each variable of interest. With the advent of more accessible computing power, we are now facing the problem of visualizing data where there is a distribution at each 2D spatial location. Simply extending the box plot technique to distributions over a 2D domain is not straightforward. One challenge is reducing the visual clutter if a box plot is drawn at each grid location in the 2D domain. This paper presents and discusses two general approaches, using parametric statistics and shape descriptors, to presenting 2D distribution data sets. Both approaches provide additional insights compared to the traditional box plot technique.
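
    The paper's two approaches are not detailed in this record. As a minimal sketch of the underlying data structure it discusses, namely a distribution at every 2D grid location summarized by the statistics a box plot encodes, the following Python example computes per-cell five-number summaries and maps two of them to images with matplotlib; the array shape, synthetic data and choice of summary images are assumptions, not the paper's technique.

      # Minimal sketch (not the paper's method): per-cell distribution summaries.
      import numpy as np
      import matplotlib.pyplot as plt

      # Hypothetical ensemble: 50 realizations of a scalar field on a 40 x 60 grid
      ny, nx, n_samples = 40, 60, 50
      rng = np.random.default_rng(0)
      data = rng.normal(loc=0.0, scale=1.0, size=(ny, nx, n_samples))

      # Five-number summary at every grid cell (the information a box plot encodes)
      minimum, q1, median, q3, maximum = np.percentile(data, [0, 25, 50, 75, 100], axis=2)

      fig, axes = plt.subplots(1, 2, figsize=(10, 4))
      im0 = axes[0].imshow(median, origin="lower")
      axes[0].set_title("Median per grid cell")
      fig.colorbar(im0, ax=axes[0])
      im1 = axes[1].imshow(q3 - q1, origin="lower")
      axes[1].set_title("Interquartile range per grid cell")
      fig.colorbar(im1, ax=axes[1])
      plt.tight_layout()
      plt.show()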

  15. Visualization of Pulsar Search Data

    NASA Astrophysics Data System (ADS)

    Foster, R. S.; Wolszczan, A.

    1993-05-01

    The search for periodic signals from rotating neutron stars, or pulsars, has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process the data and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
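
    The search algorithm itself is not reproduced in this record. As a generic illustration of a Fourier-domain periodicity search of the kind described, the following sketch computes the power spectrum of a time series with NumPy and flags strong spectral peaks as candidate periods; the sampling rate, detection threshold and synthetic test signal are assumptions, not the paper's pipeline.

      # Generic illustration of a periodicity search; not the algorithm from the paper.
      import numpy as np

      def candidate_periods(time_series, dt, n_sigma=6.0):
          """Return (period_seconds, power) for spectral peaks above a simple threshold."""
          power = np.abs(np.fft.rfft(time_series - time_series.mean())) ** 2
          freqs = np.fft.rfftfreq(time_series.size, d=dt)
          threshold = power[1:].mean() + n_sigma * power[1:].std()   # ignore the DC bin
          peaks = np.nonzero((power > threshold) & (freqs > 0))[0]
          return [(1.0 / freqs[i], power[i]) for i in peaks]

      # Synthetic example: a weak 5 ms pulsation buried in noise, sampled at 10 kHz
      dt = 1e-4
      t = np.arange(0, 10.0, dt)
      signal = 0.2 * np.sin(2 * np.pi * 200.0 * t) + np.random.randn(t.size)
      for period, power in candidate_periods(signal, dt):
          print(f"candidate period {period * 1e3:.3f} ms, power {power:.1f}")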

  16. Projection-type see-through holographic three-dimensional display

    NASA Astrophysics Data System (ADS)

    Wakunami, Koki; Hsieh, Po-Yuan; Oi, Ryutaro; Senoh, Takanori; Sasaki, Hisayuki; Ichihashi, Yasuyuki; Okui, Makoto; Huang, Yi-Pai; Yamamoto, Kenji

    2016-10-01

    Owing to the limited spatio-temporal resolution of display devices, dynamic holographic three-dimensional displays suffer from a critical trade-off between the display size and the visual angle. Here we show a projection-type holographic three-dimensional display, in which a digitally designed holographic optical element and a digital holographic projection technique are combined to increase both factors at the same time. In the experiment, the holographic image, enlarged to twice the size of the original display device and projected onto the screen of the digitally designed holographic optical element, was concentrated at the target observation area so as to increase the visual angle to six times that of a general holographic display. Because the display size and the visual angle can be designed independently, the proposed system will accelerate the adoption of holographic three-dimensional displays in industrial applications, such as digital signage, in-car head-up displays, smart-glasses and head-mounted displays.

  17. Real-time simulation of large-scale neural architectures for visual features computation based on GPU.

    PubMed

    Chessa, Manuela; Bianchi, Valentina; Zampetti, Massimo; Sabatini, Silvio P; Solari, Fabio

    2012-01-01

    The intrinsic parallelism of visual neural architectures based on distributed hierarchical layers is well suited to implementation on the multi-core architectures of modern graphics cards. We propose design strategies that allow us to optimally take advantage of this parallelism in order to efficiently map the hierarchy of layers and the canonical neural computations onto the GPU. Specifically, the advantages of a cortical map-like representation of the data are exploited. Moreover, a GPU implementation of a novel neural architecture for the computation of binocular disparity from stereo image pairs, based on populations of binocular energy neurons, is presented. The implemented neural model achieves good performance in terms of the reliability of the disparity estimates and near real-time execution speed, thus demonstrating the effectiveness of the devised design strategies. The proposed approach is valid in general, since the neural building blocks we implemented are a common basis for the modeling of visual neural functionalities.
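
    The GPU implementation is not shown in this record. As a plain CPU illustration of the binocular energy model the abstract refers to, the following NumPy sketch computes the response of a single energy unit to a pair of 1D image rows, with a phase shift applied to the right eye's filter response to encode the unit's preferred disparity; the Gabor parameters, image size and disparity values are assumptions, and no GPU optimization is attempted.

      # Minimal CPU illustration of the binocular energy model (not the paper's GPU code).
      import numpy as np

      def complex_gabor(x, sigma=4.0, freq=0.1):
          """1D complex Gabor: Gaussian envelope times a complex exponential carrier."""
          return np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)

      def binocular_energy(left_row, right_row, dphase, freq=0.1):
          """Response of one energy unit whose preferred disparity is set by 'dphase'."""
          x = np.arange(-15, 16, dtype=float)
          kernel = complex_gabor(x, freq=freq)
          q_left = np.convolve(left_row, kernel, mode="same")    # left-eye quadrature pair
          q_right = np.convolve(right_row, kernel, mode="same")  # right-eye quadrature pair
          # Phase-shift model: rotate the right eye's response before binocular summation
          return np.abs(q_left + np.exp(1j * dphase) * q_right) ** 2

      # Example: the right image row is the left row shifted by 3 pixels (disparity = 3)
      rng = np.random.default_rng(1)
      left = rng.standard_normal(400)
      right = np.roll(left, 3)
      freq = 0.1
      tuned = binocular_energy(left, right, dphase=2 * np.pi * freq * 3).mean()
      untuned = binocular_energy(left, right, dphase=0.0).mean()
      print(f"energy of unit tuned to the true disparity: {tuned:.1f}")
      print(f"energy of unit tuned to zero disparity:     {untuned:.1f}")

    On average, the unit whose phase disparity matches the true pixel shift responds more strongly, which is the property a population of such units exploits to estimate disparity.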

  18. Usefulness of real-time three-dimensional ultrasonography in percutaneous nephrostomy: an animal study.

    PubMed

    Hongzhang, Hong; Xiaojuan, Qin; Shengwei, Zhang; Feixiang, Xiang; Yujie, Xu; Haibing, Xiao; Gallina, Kazobinka; Wen, Ju; Fuqing, Zeng; Xiaoping, Zhang; Mingyue, Ding; Huageng, Liang; Xuming, Zhang

    2018-05-17

    To evaluate the effect of real-time three-dimensional (3D) ultrasonography (US) in guiding percutaneous nephrostomy (PCN). A hydronephrosis model was devised in which the ureters of 16 beagles were obstructed. The beagles were divided equally into groups 1 and 2. In group 1, PCN was performed under real-time 3D US guidance, while in group 2 PCN was guided using two-dimensional (2D) US. Visualization of the needle tract, puncture time and number of puncture attempts were recorded for the two groups. In group 1, the score for visualization of the needle tract, puncture time and number of puncture attempts were 3, 7.3 ± 3.1 s and one, respectively. In group 2, the respective results were 1.4 ± 0.5, 21.4 ± 5.8 s and 2.1 ± 0.6 attempts. Visualization of the needle tract in group 1 was superior to that in group 2, and both puncture time and number of puncture attempts were lower in group 1 than in group 2. Real-time 3D US-guided PCN is superior to 2D US-guided PCN in terms of visualization of the needle tract and the targeted pelvicalyceal system, leading to quicker puncture. Real-time 3D US-guided puncture of the kidney holds great promise for clinical implementation in PCN. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.

  19. Neural representation of form-contingent color filling-in in the early visual cortex.

    PubMed

    Hong, Sang Wook; Tong, Frank

    2017-11-01

    Perceptual filling-in exemplifies the constructive nature of visual processing. Color, a prominent surface property of visual objects, can appear to spread to neighboring areas that lack any color. We investigated cortical responses to a color filling-in illusion that effectively dissociates perceived color from the retinal input (van Lier, Vergeer, & Anstis, 2009). Observers adapted to a star-shaped stimulus with alternating red- and cyan-colored points to elicit a complementary afterimage. By presenting an achromatic outline that enclosed one of the two afterimage colors, perceptual filling-in of that color was induced in the unadapted central region. Visual cortical activity was monitored with fMRI, and analyzed using multivariate pattern analysis. Activity patterns in early visual areas (V1-V4) reliably distinguished between the two color-induced filled-in conditions, but only higher extrastriate visual areas showed the predicted correspondence with color perception. Activity patterns allowed for reliable generalization between filled-in colors and physical presentations of perceptually matched colors in areas V3 and V4, but not in earlier visual areas. These findings suggest that the perception of filled-in surface color likely requires more extensive processing by extrastriate visual areas, in order for the neural representation of surface color to become aligned with perceptually matched real colors.

  20. An introduction to Space Weather Integrated Modeling

    NASA Astrophysics Data System (ADS)

    Zhong, D.; Feng, X.

    2012-12-01

    The need for a software toolkit that integrates space weather models and data is one of many challenges we face when applying the models to space weather forecasting. To meet this challenge, we have developed Space Weather Integrated Modeling (SWIM), which is capable of analysing and visualizing the results of a diverse set of space weather models. SWIM has a modular design and is written in Python, using NumPy, matplotlib, and the Visualization ToolKit (VTK). SWIM provides a data management module that reads a variety of spacecraft data products and the specific data format of the Solar-Interplanetary Conservation Element/Solution Element MHD model (SIP-CESE MHD model) for the study of solar-terrestrial phenomena. Data analysis, visualization and graphical user interface modules are also provided in a user-friendly way, so that the integrated models can be run and 2-D and 3-D data sets visualized interactively. With these tools we can rapidly analyse model results, locally or remotely, for example by extracting data at specific locations in time-sequence data sets, plotting interplanetary magnetic field lines, multi-slicing the solar wind speed, volume rendering the solar wind density, animating time-sequence data sets, and comparing model results with observational data. To speed up the analysis, an in-situ visualization interface supports visualizing the data 'on the fly'. We also accelerated some critical, time-consuming analysis and visualization methods with the aid of GPUs and multi-core CPUs. We have used this tool to visualize the data of the SIP-CESE MHD model in real time, and have integrated the Database Model of shock arrival, the Shock Propagation Model, the Dst forecasting model and the SIP-CESE MHD model developed by the SIGMA Weather Group at the State Key Laboratory of Space Weather/CAS.
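
    SWIM's own modules are not reproduced in this record. As a rough illustration of the kind of slice extraction and plotting the abstract describes, the following sketch builds a synthetic 3D solar-wind-speed array with NumPy, extracts a 2D meridional slice and displays it with matplotlib; the grid shape, coordinates and field values are assumptions, not SWIM data formats.

      # Rough illustration of slice extraction and plotting of a 3D field (not SWIM code).
      import numpy as np
      import matplotlib.pyplot as plt

      # Synthetic solar-wind speed on a (nr, ntheta, nphi) spherical grid, in km/s
      nr, ntheta, nphi = 64, 90, 180
      r = np.linspace(1.0, 20.0, nr)                      # heliocentric distance, solar radii
      speed = 300.0 + 400.0 * (1.0 - np.exp(-r / 5.0))    # simple radial acceleration profile
      speed = np.broadcast_to(speed[:, None, None], (nr, ntheta, nphi)).copy()
      speed += 50.0 * np.random.default_rng(0).standard_normal(speed.shape)  # add structure

      # Extract a meridional slice at a fixed longitude and display it
      phi_index = 0
      slice_2d = speed[:, :, phi_index]
      plt.imshow(slice_2d.T, origin="lower", aspect="auto",
                 extent=[r[0], r[-1], 0, 180])
      plt.xlabel("r (solar radii)")
      plt.ylabel("colatitude (degrees)")
      plt.title("Solar wind speed, meridional slice (km/s)")
      plt.colorbar()
      plt.show()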
